Gregorian Calendar

“It is pleasant for an old man to be able to go to bed on September 2, and not have to get up until September 14,” wrote Benjamin Franklin in 1752. The reason for the shift was the adoption of the Gregorian calendar, devised in 1582.

Background

The Roman ruler Julius Caesar introduced the forerunner of the modern calendar. Before then, different cultures and regions used their own calendars, oftentimes based on the phases of the moon. As a result, the same date drifted into different parts of the year over time.

This problem still exists in some ancient calendars. For example, the Muslim calendar shifts about 11 days earlier every year, so holidays cycle through different seasons over time. The Jewish calendar has a similar problem but adds a leap month to adjust.

As the Roman Empire expanded, its administrators recognized that the various calendars caused confusion for international planning. Rome needed one consistent calendar in which each date fell at the same point in every year.

Roman astronomers computed that if they divided the year into 365 days and added an extra day every fourth year, the dates would remain consistent. They named this the Julian Calendar, after Julius Caesar, and it remained the standard for almost 1,600 years.

Gregorian Calendar

Over time, Catholics realized the days were slowly shifting, noting that Easter was falling ever further from the spring equinox.

Catholic astronomers reworked the Roman computations and realized the Julian Calendar added too many days, slowly pushing dates forward through the seasons. They devised a revised calendar that skips the leap year when the year is divisible by 100, unless the year is also divisible by 400. The new rule produces a near-perfect match, keeping the dates of the year constant.
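
The revised rule is compact enough to express in a few lines of code. A minimal sketch in Python (the function name is ours, for illustration):

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: every fourth year is a leap year, except
    century years, unless the year is also divisible by 400."""
    if year % 400 == 0:
        return True    # e.g. 1600 and 2000 were leap years
    if year % 100 == 0:
        return False   # e.g. 1700, 1800, and 1900 were not
    return year % 4 == 0

# The Julian calendar applied only the final rule; the two extra
# checks above remove three leap days every 400 years.
```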

However, the Catholic Church wanted not only to start the new, consistent calendar but also to undo the drift accumulated under the old one. It proposed rolling the calendar ahead 10 days, then fixing it forever.

Catholic countries adopted the change immediately. However, Protestant countries suspected some type of Catholic conspiracy and resisted at first. Eventually, they relented: Protestant Germany switched in 1700 and England (including its American colonies) in 1752.

The Greek and Russian Orthodox churches still have not switched. As a result, they celebrate Easter, and in Russia’s case Christmas as well, on a different day than the Gregorian calendar indicates.

Broadcasting

David Sarnoff

David Sarnoff is the father of broadcasting. A Jewish immigrant, he became his family’s breadwinner at age 15, working as a Morse code operator and rising through the ranks to become a supervisor. Eventually, he moved into radio, transmitting messages over long distances.

Early radio technology was for point-to-point communications, like a long-distance walkie-talkie: AT&T used it for long-distance telephone calls, and companies used it to communicate with ships. Sarnoff instead saw radio as a one-to-many technology, beaming entertainment and news directly into homes. The idea was a breakthrough.

GE acquired Sarnoff’s employer, American Marconi, and renamed it the Radio Corporation of America (RCA). Sarnoff proposed that RCA focus on broadcasting. Executives ignored him until his broadcast of a boxing match in 1921 proved wildly popular and drove radio sales. Only then did they understand that content would sell radios.

Early Radio

There were early, sporadic radio broadcasters, but most were banned during WWI on national security grounds. After the war, the ban lifted in 1919 and broadcast stations began to spring up all around the US. The US issued commercial broadcast licenses throughout the 1920s.

One of the first uses of radio voice broadcasts was education. Tufts College professors broadcast lectures in 1922. Other colleges followed.

Commercialization began in earnest when RCA spawned the first real network, the National Broadcasting Company (NBC), which began broadcasting in 1926, using telephone lines to connect multiple stations. The Columbia Broadcasting System (CBS), soon taken over by William Paley, followed the next year. In 1943, antitrust regulators forced NBC to sell the “Blue Network,” a second network it owned; the spun-off company renamed itself the American Broadcasting Company (ABC).

These three networks dominated radio and television broadcasting for about 50 years until cable television became popular.

Radio Goes Global

Radio manufacturers in the United Kingdom also recognized the need for content to drive radio sales. There were radio stations, but they were sporadic, low-quality affairs. To encourage high-quality content, the manufacturers formed the British Broadcasting Company (BBC).

Around this time, radio broadcasts popped up in major cities around the world. Radio Paris launched in 1922. German radio went on the air in 1923 but was seized by Nazi propaganda minister Joseph Goebbels a decade later. Goebbels created modern electronic propaganda, and his core methods are still in use today. Germany broadcast propaganda to neighboring countries, which responded by broadcasting their own anti-fascist messages to Germans.

In the US, broadcast networks were primarily advertising-supported: radio manufacturers benefitted from content paid for by businesses advertising goods and services. In contrast, the BBC was funded by the radio manufacturers themselves, who hoped content would drive radio sales; advertising was seen as a nuisance and eventually dropped. The first head of the BBC, Lord Reith, declared that radio broadcasting is a public service, not a commercial product. Most countries started with the European public-service model but, to some extent, transitioned to the US commercial model. Conversely, the US government funded and launched a television network, the Public Broadcasting Service, in 1970.

Television

Eventually, RCA moved into television (see the television entry), and NBC, CBS, and ABC became national US television networks. A smaller network, DuMont, tried unsuccessfully to compete. It was shuttered as a network in 1956, though the surviving stations eventually formed the nucleus of a new broadcast network, the Fox Broadcasting Company, in 1986.

Computer Game

Background

Early computers used punch cards to load programs and data. The software was a stack of cards, each card one line of a program, with the input data on top of the stack. The entire stack was fed into a card reader, which read the cards, processed the data, then printed the results.

This process left the computer idle much of the time, since it did little to nothing while reading the stack of cards.

Computers in the 1950s and 1960s were extremely expensive. However, even with the lag time and the cost, using a computer was vastly faster, and thus less expensive, than doing computations by hand. Companies and governments therefore did not especially mind the waiting: the reduction in cost was still enormous.

However, this model did not work well for universities, where many students shared a computer and had to wait in line to run their programs. In response, researchers created a new type of operating system, the timesharing operating system, allowing multiple people to use a computer at the same time.

These timesharing systems also enabled input and output through terminals rather than punch cards and printouts. The computer could read keyboard input from multiple people, dividing its attention between them, while also running programs.

Most importantly, timesharing made computers interactive. A person could do something, the computer could respond, and the user could act on the response. This was a dramatically different use for computers, which until then had functioned more like powerful calculators.
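
The scheduling idea at the heart of timesharing, giving each user a short slice of processor time in turn, can be sketched as a toy simulation. This is an illustration of the concept, not any particular operating system’s scheduler; the job names and time quantum are invented:

```python
from collections import deque

def round_robin(jobs, quantum=2):
    """Toy timesharing scheduler: each job runs for a short time
    slice, then goes to the back of the line, so every user sees
    the machine respond promptly."""
    queue = deque(jobs.items())              # (name, work remaining)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)                # run for one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not done: requeue
    return timeline

# Three users share one machine; nobody waits for another to finish.
print(round_robin({"alice": 5, "bob": 3, "carol": 4}))
# ['alice', 'bob', 'carol', 'alice', 'bob', 'carol', 'alice']
```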

Spacewar!

In 1962, using an interactive DEC computer with a circular monitor, Steve Russell created the first modern interactive computer videogame, Spacewar! Two players flew ships around a central star, trying to blast one another. The ships obeyed the laws of physics, and the star acted as a gravity well that destroyed any ship that fell in. Earlier experimental interactive games had been dull: tic-tac-toe and mazes.

Computing legend Alan Kay, the inventor of object-oriented programming and the laptop among other things, remarked: “the game of Spacewar blossoms spontaneously wherever there is a graphics display connected to a computer.”

Eventually, Nolan Bushnell and Ted Dabney created a coin-operated knockoff, Computer Space. That game did well and the two went on to create the first computer gaming company, Atari.

Machine Translation

Background

In 1933, Soviet scientist Peter Troyanskii presented “the machine for the selection and printing of words when translating from one language to another” to the Academy of Sciences of the USSR. Soviet apparatchiks during the Stalin era declared the invention “useless” but allowed Troyanskii to continue his work. He died of natural causes in 1950 – a noteworthy accomplishment for a professor of the Stalin era – but never finished his translation machine.

Early IBM Machine Translation

In the US during the Cold War, Americans had a different problem: there were few Russian speakers. Whereas the Anglophone countries pushed out countless materials for learning English, the Soviet Union produced far less. Furthermore, spoken Russian differed from the more formalized written Russian. As the saying goes, even Tolstoy didn’t speak like Tolstoy.

In response, the US decided the burgeoning computer field might be helpful. On January 7, 1954, at IBM headquarters in New York, an IBM 701 automatically translated 60 Russian sentences into English.

“A girl who didn’t understand a word of the language of the Soviets punched out the Russian messages on IBM cards. The “brain” dashed off its English translations on an automatic printer at the breakneck speed of two and a half lines per second.

“‘Mi pyeryedayem mislyi posryedstvom ryechyi,’ the girl punched. And the 701 responded: ‘We transmit thoughts by means of speech.’

“‘Vyelyichyina ugla opryedyelyayetsya otnoshyenyiyem dlyini dugi k radyiusu,’ the punch rattled. The ‘brain’ came back: ‘Magnitude of angle is determined by the relation of length of arc to radius.'”

IBM Press Release

Georgetown’s Leon Dostert led the team that created the program.

Blyat

Even IBM noted that the computer could not think for itself, limiting the program’s usefulness for vague sentences. Apparently, nobody at Georgetown or IBM had ever heard real Russians speak, or they would have known that “vague” is an understatement for a language with dozens of ways to say the same word. Furthermore, the need to transliterate the Russian into Latin letters, rather than typing in Cyrillic, no doubt introduced further room for error.

In 1966, the Automatic Language Processing Advisory Committee, a group of seven scientists, released a more somber report. They found that machine translation is “expensive, inaccurate, and unpromising.” The message was clear: the best way to translate to and from Russian, or any other language, is to learn the language.

Progress continued, usually yielding abysmal results. Computers would substitute dictionary words in one language for comparable words in another, with results oftentimes more amusing than informative.

Towards Less Terrible Translations

One breakthrough came from Japan in 1984, a country that favored machine translation because few Japanese people learned English. Researcher Makoto Nagao came up with the idea of searching for and substituting phrases rather than individual words. This yielded far better, though still generally terrible, results.
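
A toy sketch of Nagao’s example-based idea, greedy longest-phrase matching against a table of known translations, might look like this in Python (the phrase table is invented for illustration, not taken from Nagao’s system):

```python
# Greedy longest-phrase matching, the core of example-based translation.
# This toy phrase table is invented for illustration.
PHRASES = {
    ("thank", "you", "very", "much"): "merci beaucoup",
    ("thank", "you"): "merci",
    ("good", "morning"): "bonjour",
    ("good",): "bon",
    ("morning",): "matin",
}

def translate(sentence: str) -> str:
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        # Prefer the longest known phrase starting at position i.
        for length in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + length])
            if chunk in PHRASES:
                out.append(PHRASES[chunk])
                i += length
                break
        else:
            out.append(words[i])  # unknown word: pass it through
            i += 1
    return " ".join(out)

print(translate("thank you very much"))  # merci beaucoup
print(translate("good morning"))         # bonjour, not "bon matin"
```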

Eventually, in the early 1990s, IBM built on Nagao’s method, analyzing large volumes of accurate human translations and building an enormous database of word-correspondence frequencies. The resulting “statistical translation” was significantly less terrible.

As the World Wide Web shrank the world, the need for automated translation grew, and the vast majority of systems used some type of statistical translation. They continually improved to the point where Google Translate could pretty much help decipher, say, a bill.

Modern Translating

Finally, in 2016, neural networks and machine learning (artificial intelligence) started to produce vastly superior machine translations. All of a sudden, translations were actually readable. As of 2019, the best online translation engine, German-based DeepL, is entirely AI-powered.

Speech Recognition

Speech recognition is the ability of a computer to recognize the spoken word.

“Alexa: read me something interesting from Innowiki.”

“Duh human, everything on Innowiki is interesting or it wouldn’t be there.”

Today, inexpensive pocket-sized phones connect to centralized servers and understand the spoken word in countless languages. Not so long ago, that was science fiction.

Background

Star Trek in 1966, the HAL 9000 of 2001: A Space Odyssey in 1968, Westworld in 1973, and Star Wars in 1977 all assumed computers would understand the spoken word. What they missed is that people would become so fast with other input devices, especially keyboards, that speaking would come to be seen as an inefficient input method.

The first real speech recognition actually predates these fictional depictions. In 1952, three Bell Labs scientists created a system, “Audrey,” which recognized a voice speaking digits. A decade later, IBM researchers launched “Shoebox,” which recognized 16 English words.

In 1971, DARPA intervened with the “Speech Understanding Research” (SUR) program, aimed at a system that could understand 1,000 English words. Researchers at Carnegie Mellon created “Harpy,” which understood a vocabulary comparable to that of a three-year-old child.

Research continued. In the 1980s, the “Hidden Markov Model” (HMM) proved a major breakthrough: computer scientists realized computers need not understand what a person was saying but, rather, just listen to the sounds and look for patterns. By the 1990s, faster and less expensive CPUs brought speech recognition to the masses with software like Dragon Dictate. BellSouth created the voice portal phone-tree system which, unfortunately, frustrates and annoys people to this day.
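
The HMM approach reduces recognition to a search problem: given a sequence of acoustic observations, find the hidden sequence of phonemes most likely to have produced them. A toy sketch using the Viterbi algorithm (the states, symbols, and probabilities are all invented for illustration):

```python
# Toy Viterbi decoding over a hidden Markov model: pick the phoneme
# sequence most likely to have produced the observed sounds.
states = ["h", "eh", "l", "ow"]
start_p = {"h": 0.7, "eh": 0.1, "l": 0.1, "ow": 0.1}
trans_p = {  # chance of moving from one phoneme to the next
    "h":  {"h": 0.2, "eh": 0.6, "l": 0.1, "ow": 0.1},
    "eh": {"h": 0.1, "eh": 0.2, "l": 0.6, "ow": 0.1},
    "l":  {"h": 0.1, "eh": 0.1, "l": 0.3, "ow": 0.5},
    "ow": {"h": 0.1, "eh": 0.1, "l": 0.1, "ow": 0.7},
}
emit_p = {  # chance each phoneme produces a given acoustic symbol
    "h":  {"s1": 0.8, "s2": 0.1, "s3": 0.1},
    "eh": {"s1": 0.1, "s2": 0.8, "s3": 0.1},
    "l":  {"s1": 0.1, "s2": 0.3, "s3": 0.6},
    "ow": {"s1": 0.1, "s2": 0.1, "s3": 0.8},
}

def viterbi(observations):
    # best[s] = (probability of the best path ending in s, that path)
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s])
            for s in states}
    for symbol in observations[1:]:
        best = {s: max((p * trans_p[prev][s] * emit_p[s][symbol],
                        path + [s])
                       for prev, (p, path) in best.items())
                for s in states}
    return max(best.values())

prob, path = viterbi(["s1", "s2", "s3", "s3"])
print(path)  # ['h', 'eh', 'l', 'ow']
```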

DARPA stepped back in during the 2000s, sponsoring multi-language speech recognition systems.

Rapid Advancement

However, a major breakthrough came from the private sector. Google released a service called “Google 411” allowing people to dial Google and look up telephone numbers for free. A caller would speak to a computer, which guessed what was said; an operator would then answer, check the computer’s accuracy, and deliver the phone number. The real purpose of the system was to train Google’s computers on a myriad of voices, including difficult-to-decipher names. Eventually, this evolved into the Google voice recognition software still in use today.

Speech recognition continues to advance in countless languages. Especially for English, the systems are nearing perfection. They are fast, accurate, and require relatively little computer processing power.

In 2019, anybody can speak to a computer, though unless their hands are busy doing something else, most people prefer not to.

Social Network

When they’re not rigging elections, sowing discord, or amplifying hate, social networks are a fun, simple, and convenient way to stay in touch. However, they suffer serious privacy problems under current implementations.

Electronic social networks, in various forms, are older than Facebook co-founder Mark Zuckerberg.

The first online bulletin board enabling people to chat and hang out virtually was created by David Wooley and Doug Brown in 1973 on the PLATO system. Usenet, a similar bulletin-board system distributed over early computer networks, dates to 1979. Online communities later sprang up on private bulletin-board systems; America Online, CompuServe, and The Well all had some form of social networking.

The first notable modern implementation was Friendster, founded in 2002. At one point it had 115 million active users, and it eventually sold for $39.5 million. However, it botched a strategic pivot to a gaming site and died in June 2015. MySpace blasted onto the web in 2003, eclipsing Friendster. Rupert Murdoch’s News Corp purchased it for $580 million in July 2005 but, thanks to infamous internal political fights, ran it into the ground and sold it for $35 million in June 2011.

As of 2019, there are countless social networks. Unquestionably, the current reigning champ of social networking is Facebook. Founded by Mark Zuckerberg and Eduardo Saverin in 2004, Facebook boasts over two billion active users and is on track to recognize about $69 billion in 2019 revenue. Facebook also owns social media darling Instagram, which is especially popular with young people, and the communication tool WhatsApp, for which it paid $21.8 billion, or $55 per user.

Peer-To-Peer File Sharing (Napster)

File sharing allows one computer to connect anonymously with others to send and receive files. Most shared files were single-track MP3s of copyrighted music.

Background

The original theory was that, because mixtapes were legal, noncommercial “sharing” of any music must also be legal. The legality of mixtapes, collections of songs copied from other tapes, stems from a US law, the Audio Home Recording Act of 1992. As long as the tapes were noncommercial, Americans could share them with friends.

However, creating a mixtape took time, and each was customized for its recipient: young lovers often created tapes for one another. Furthermore, mixtapes contained a collection of songs. In contrast, MP3 “file sharing” took little time, and the songs, almost always single tracks, transferred between strangers.

Napster

The Napster software relied on a centralized server but connected computers directly together to actually transfer the files. That is, the files never existed on Napster’s servers. They stayed on the sender’s computer, which the Napster software, in effect, turned into a server for the purpose of transmitting files.
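
That hybrid design, a central index with direct peer-to-peer transfers, can be sketched in a few lines. This is a conceptual illustration, not Napster’s actual protocol; all names are ours:

```python
# Conceptual sketch of a centralized-index, peer-to-peer design.
class IndexServer:
    """Central server: stores WHO shares each file, never the file."""
    def __init__(self):
        self.index = {}  # filename -> list of peers sharing it

    def register(self, peer, filenames):
        for name in filenames:
            self.index.setdefault(name, []).append(peer)

    def search(self, filename):
        return self.index.get(filename, [])

class Peer:
    """End-user machine: acts as both client and mini file server."""
    def __init__(self, name, files):
        self.name, self.files = name, files

    def serve(self, filename):
        return self.files[filename]         # bytes leave this machine

    def download(self, server, filename):
        sources = server.search(filename)   # 1. ask the central index
        return sources[0].serve(filename)   # 2. fetch directly from a peer

server = IndexServer()
alice = Peer("alice", {"song.mp3": b"...audio bytes..."})
server.register(alice, alice.files)         # alice announces her files

bob = Peer("bob", {})
data = bob.download(server, "song.mp3")     # never touches the server
```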

Then-teenager Shawn “Napster” Fanning wrote the Napster software. His goal was to avoid hosting MP3 music files on a central web server, since the music industry quickly shut down websites that did.

Fanning and co-founder Sean Parker came up with the peer-to-peer file “sharing” scheme. Napster put an easy-to-use interface on the system, one that looked like music shopping and let people without technical skills upload and download MP3 music files. It became wildly popular.

To Every Action…

The music industry freaked out.

On Dec. 7, 1999, the empire struck back. Countless music industry participants sued Napster the company, its founders, its investors, and even many users of the system. The lead lawsuit was captioned Metallica v. Napster, Inc. Altogether, the Recording Industry Association of America (RIAA) filed over 30,000 lawsuits against Napster users, and even one of the music publishers, Bertelsmann, was sued for lending Napster money.

Napster eventually shut down, and its assets were sold. However, the “sharing” technology continued to evolve.

Eventually, Kazaa, another music “sharing” technology, took Napster’s place. Rather than relying on a central index, Kazaa connected ordinary computers to one another, leaving no central server or company to shutter. The RIAA continued to fight the myriad file “sharing” services.

The launch of Napster also marked, not coincidentally, the peak of music industry revenue. The industry refused to accept that its former business model, selling entire CDs to listeners who wanted a single song, was no longer viable. Furthermore, the lawsuits alienated an already feisty audience.

[Chart: Music Industry Revenues Peak with Launch of Napster]

Starting about 2016, music industry revenues bottomed out and began increasing again, thanks in large part to streaming. Additionally, many bands reoriented their commercialization plans away from recorded-music sales and towards live concerts, which cannot be pirated.

Parker went on to become an early employee of Facebook. As of 2019, Fanning remains unrepentant. The makers of Kazaa eventually created Skype.

Digital Video Recorder (DVR)

Digital Video Recorders (DVRs) record digitally, to disk or flash memory, rather than analog to tape. This allows end-users to quickly fast-forward, rewind, and jump to any section of a recording rather than slowly searching through it.

TiVo and ReplayTV both launched DVRs at the 1999 Consumer Electronics Show. As they had with videotape, broadcasters and content owners reviled the new technology.

At first, ReplayTV offered more features, including one-click commercial skipping and the ability to “share” recordings with other players. Litigation shut those features down.

However, commercial-skipping has since become a common use of DVRs, and content makers reworked their monetization strategies to rely less on mandatory commercial breaks. Presently, many content producers sell shows to commercial-free networks, including HBO, Netflix, and Amazon. Others place advertising on-screen or embed products as props in shows.

TiVo was initially more successful despite high prices and a reputation for abysmal customer service. Ultimately, neither DVR maker did especially well: ReplayTV shut down in 2011, and TV-guide company Rovi purchased TiVo in 2016. “Tivo, which still exists, just got bought for $1.1 billion,” read a headline at the time.

Cable companies introduced DVR functionality in the cable set-top boxes they own, install, and rent. However, the rise of video streaming is quickly eliminating the need for a cable box.

Wi-Fi

In 1941, Hollywood actress Hedy Lamarr devised a system and submitted a patent for radio signals that changed frequencies.

Background

Hedwig Eva Maria Kiesler (Hedy Lamarr) was born in Vienna. She is most famous as the first woman to appear nude in a mainstream film; in the same movie, she was also the first woman to fake an orgasm. If that weren’t enough, she wrote the patent for her spread-spectrum technology with composer George Antheil.

In an age when Tesla was still alive and Edison had only recently died, nobody took the Hollywood bombshell and her composer seriously. Nevertheless, their invention eventually proved as important as anything the Wizard of Menlo Park (Edison) or The Man Who Invented the 20th Century (Tesla) ever released.
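
The essence of frequency hopping is that sender and receiver derive the same pseudo-random channel sequence from a shared secret, so a jammer or eavesdropper parked on any one frequency catches only fragments. A toy sketch (the channel list, seed, and message are invented for illustration):

```python
import random

# Toy frequency-hopping sketch: both ends derive the same "random"
# channel sequence from a shared seed, so they stay in sync.
CHANNELS = [903.0, 905.2, 907.4, 909.6, 911.8]  # MHz

def hop_sequence(shared_seed, length):
    rng = random.Random(shared_seed)   # deterministic for a given seed
    return [rng.choice(CHANNELS) for _ in range(length)]

message = "ATTACK AT DAWN"
sender = hop_sequence(42, len(message))
receiver = hop_sequence(42, len(message))
assert sender == receiver              # both ends hop identically

for char, freq in zip(message, sender):
    print(f"transmit {char!r} on {freq} MHz")
```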

Eventually, in 1985, the US Federal Communications Commission opened bandwidth for unlicensed use. Cordless phones became a common use case, and the bathroom was never the same.

More significantly, in 1991, NCR invented a wireless data standard named WaveLAN for use in retail. WaveLAN extended Ethernet, the wired standard invented by Robert Metcalfe at Xerox PARC, over radio waves.

Wireless Ethernet, Wi-Fi

Eventually, the Institute of Electrical and Electronics Engineers (IEEE), the standards committee for everything electronic, recognized the need to beam data over radio waves, not wires.

Vic Hayes, chair of the IEEE 802.11 working group, led the work on the 802.11 standard released in 1997. Specifically, 802.11 is the wireless extension of Metcalfe’s wired Ethernet.

In 1999, a consortium of equipment makers formed what became the Wi-Fi Alliance and branded wireless Ethernet (802.11) as Wi-Fi, trademarking the name.

Today, Wi-Fi is everywhere, from individual homes to businesses. Walk into a coffee shop in Manhattan and they’ll offer Wi-Fi; walk into a coffee shop in Hanoi and they’re also likely to offer Wi-Fi. Consumers accept paying for bottled water, but there is a worldwide expectation of free wireless internet access.

Web Search Engine

Noteworthy early search engines include Archie, from 1990, that searched filenames, and Gopher, from 1991, that organized files.

Early Search Engines

In March 1994, Stanford students David Filo and Jerry Yang created “Jerry and David’s Guide to the World Wide Web.” Their website contained lists, arranged by category, of the burgeoning World Wide Web. Sites were added by hand, with short snippets written by site creators. Initially, there was no charge to list a site. In January 1995, they renamed their website Yahoo.

In December 1995, to showcase the power of Digital Equipment Corporation (DEC) hardware, engineers designed a computer program to read and index (that is, search) the entire World Wide Web. Originally meant as a hardware demo, their website, AltaVista, became popular. AltaVista was the earliest full-text search engine.

AltaVista merely matched the words a user searched for against verbiage on websites; it was primitive technology that did not prioritize the significance or quality of websites. Yahoo was hand-curated, so it did a better job, but the curation process did not scale well and, eventually, Yahoo started charging a fee for inclusion. Neither site did an especially good job searching. A third search engine, Excite, founded in 1994, rounded out the top search engines of the era. There were other smaller but still popular web search engines, including Lycos (1994), LookSmart (1995), and Ask Jeeves (1996).

Google

In 1996, Stanford students Larry Page and Sergey Brin worked on a computer program to determine context. They decided to read the entire web, as AltaVista did, but also to rank the importance of websites. Initially, their primary criterion for importance was the number of links from other websites, weighted by the rank of those linking sites. This metric, called “PageRank” (a pun on Larry Page’s last name and the fact that it ranks pages), yielded vastly better search results than either Yahoo or AltaVista. In late August 1996, Larry Page noted that Google had downloaded and indexed 207GB of content, storing it in a 28GB database.
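
The idea behind PageRank can be sketched in a few lines: each page’s rank flows out along its links, so a link from a highly ranked page is worth more than one from an obscure page. A minimal illustration (the link graph is invented; Google’s production algorithm is far more elaborate):

```python
# Minimal PageRank sketch; the link graph is invented for illustration.
links = {   # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)   # rank flows along links
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# "c" ranks highest: the most pages, themselves well ranked, link to it.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```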

In September 1997 Page and Brin moved towards commercializing their search engine, registering the domain name google.com, a play on the word googol (a one with a hundred zeros after it).

Wishing to return to their academic lives, Page and Brin tried to sell their young company. They offered it to the owners of AltaVista and Excite for $1 million. Both passed. They lowered the offer to Excite to $750,000. The company still passed. Page and Brin were all but forced to build out their budding search engine, eventually selling plain-text ads tied to the search request.

In March 2005, IAC/InterActiveCorp purchased Excite, which still had significant traffic, for $1.9 billion; the site was shuttered in August 2013 and, as of 2019, has no significant search traffic. Google’s parent, Alphabet, is worth just over $800 billion. Other search engines exist, most notably Microsoft’s Bing, but none has nearly as many users as Google.