Musket

While Gutenberg’s forge was helping to bring about the Renaissance, forges were more commonly used to create weapons for people to kill one another. One of the most noteworthy is the musket.

Background

Early muskets were more like small cannons than latter-day rifles. Sometimes two people were needed to operate the earliest weapons due to their weight. Armies responded by building even heavier armor.

However, by the mid-1600s, the technology began to miniaturize. The flintlock enabled dramatically smaller and less expensive muskets that a single soldier could carry and operate. Armor made little difference. Avoidance, instead of armor, became the most effective means of defense.

By the 1700s, the musket was in common use for both fighting and hunting. Miniaturization continued, and the pistol, a miniature musket, eventually evolved into a weapon in its own right.

Muskets differ from rifles in that they’re less accurate but easier to load and fire, and they require less cleaning. Since even the most accurate rifle of the era could barely hit the side of a barn, the extra cost wasn’t worth the hassle.

Use & Impact

The US and French revolutions, plus countless other wars, featured muskets as a primary weapon. Many soldiers fought in the US Civil War in the 1860s with old flintlocks, despite the availability of better technology by then.

Musket manufacturing led to the creation of standardized parts, the American System of Manufacturing. French arms maker Honoré Le Blanc originally developed standardized parts, but the French government rejected his methods due to the perceived effect of automation on jobs. Then-ambassador Thomas Jefferson brought the system to American Eli Whitney, and the method drove US innovation for the next century.

The musket brought profound changes to military and civilian culture, simplifying hunting and largely eliminating the usefulness of armor. Furthermore, it created the notion that one person with a gun can make a difference, an idea later popularized by Sam Colt.

Machine Translation

Background

In 1933, Soviet scientist Peter Troyanskii presented “the machine for the selection and printing of words when translating from one language to another” to the Academy of Sciences of the USSR. Soviet apparatchiks of the Stalin era declared the invention “useless” but allowed Troyanskii to continue his work. He died of natural causes in 1950 – a noteworthy accomplishment for a professor during the Stalin era – but never finished his translation machine.

Early IBM Machine Translation

In the US, during the Cold War, Americans had a different problem: there were few Russian speakers. Whereas the Anglophone countries pushed out countless materials for learning English, the Soviet Union produced far less for learning Russian. Furthermore, spoken Russian differed from the more formalized written Russian. As the saying goes, even Tolstoy didn’t speak like Tolstoy.

In response, the US decided the burgeoning computer field might be helpful. On January 7, 1954, at IBM headquarters in New York, an IBM 701 automatically translated 60 Russian sentences into English.

“A girl who didn’t understand a word of the language of the Soviets punched out the Russian messages on IBM cards. The ‘brain’ dashed off its English translations on an automatic printer at the breakneck speed of two and a half lines per second.

“‘Mi pyeryedayem mislyi posryedstvom ryechyi,’ the girl punched. And the 701 responded: ‘We transmit thoughts by means of speech.’

“‘Vyelyichyina ugla opryedyelyayetsya otnoshyenyiyem dlyini dugi k radyiusu,’ the punch rattled. The ‘brain’ came back: ‘Magnitude of angle is determined by the relation of length of arc to radius.'”

IBM Press Release

Georgetown’s Leon Dostert led the team that created the program.

Blyat

Even IBM noted that the computer cannot think for itself, limiting the program’s usefulness for vague sentences. Apparently, nobody at Georgetown or IBM had ever heard real Russians speak, or they’d have known that “vague” is an understatement for a language with dozens of ways to say the same word. Furthermore, the need to transliterate the Russian into Latin letters, rather than typing in Cyrillic, no doubt introduced further room for error.

In 1966, the Automatic Language Processing Advisory Committee, a group of seven scientists, released a more somber report. They found that machine translation is “expensive, inaccurate, and unpromising.” The message was clear: the best way to translate to and from Russian, or any other language, is to learn the language.

Progress continued, usually yielding abysmal results. Computers would substitute dictionary words in one language for comparable words in another, with results oftentimes more amusing than informative.

Towards Less Terrible Translations

One breakthrough came from Japan, which favored machine translation because few Japanese people learned English. In 1984, researcher Makoto Nagao came up with the idea of searching for and substituting phrases rather than individual words. This yielded far better, though still generally terrible, results.

Eventually, in the early 1990s, IBM built on Nagao’s method, analyzing word frequency across an enormous database of accurate manual translations. The resulting “statistical translation” was significantly less terrible.
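The core idea is simple enough to sketch. Below is a minimal, hypothetical illustration of phrase-based substitution with frequency weighting; the phrase table and probabilities are invented for illustration, whereas real systems learned millions of weighted phrase pairs from aligned human translations:

```python
# Toy sketch of phrase-based statistical translation.
# The phrase table and frequencies below are invented; real systems
# learn them from enormous corpora of human translations.
PHRASE_TABLE = {
    ("mi", "pyeryedayem"): [("we transmit", 0.7), ("we convey", 0.3)],
    ("mislyi",): [("thoughts", 0.9), ("ideas", 0.1)],
    ("posryedstvom", "ryechyi"): [("by means of speech", 0.8), ("through speech", 0.2)],
}

def translate(words):
    """Greedily match the longest known phrase; pick its most frequent translation."""
    out, i = [], 0
    while i < len(words):
        for length in range(len(words) - i, 0, -1):  # longest match first
            phrase = tuple(words[i:i + length])
            if phrase in PHRASE_TABLE:
                best = max(PHRASE_TABLE[phrase], key=lambda t: t[1])
                out.append(best[0])
                i += length
                break
        else:
            out.append(words[i])  # unknown word: pass through untranslated
            i += 1
    return " ".join(out)

print(translate("mi pyeryedayem mislyi posryedstvom ryechyi".split()))
# -> "we transmit thoughts by means of speech"
```

Greedy longest-phrase matching is a crude stand-in for the probabilistic models actually used, but it shows why phrases beat word-for-word lookup: “by means of speech” survives as a unit instead of being rebuilt word by word.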

As the World Wide Web shrank the world, the need for automated translations grew, and the vast majority of these were some type of statistical translation. They continually improved to the point where Google Translate could pretty much help decipher, say, a bill.

Modern Translating

Finally, in 2016, neural networks and machine learning (artificial intelligence) started to produce vastly superior machine translations. All of a sudden, translations were actually readable. As of 2019, the best online translation engine, German-based DeepL, is entirely AI-powered.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)

CRISPR is like a word processor for DNA. It allows easy and inexpensive gene editing. Edited genes are passed to future generations, making mutations permanent.
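To make the word-processor analogy concrete, here is a toy sketch. All sequences are invented, and the guide is shortened for readability (real guides run about 20 letters): the Cas9 protein is steered by a guide RNA to a matching stretch of DNA that sits next to a short “NGG” marker (the PAM), cuts there, and the cell’s repair machinery can be co-opted to write in a change. In code terms, it is search-and-replace on a very long string:

```python
import re

# Toy illustration of the CRISPR "find and edit" idea.
# The genome, guide, and repair sequences below are invented.
genome = "ATGGCTTACGCCGATCGAATTCAGGTGGTACCGGATCC"

guide = "GCCGATCGAATTCAGG"   # the target a guide RNA would match
repair = "GCCGATCTAATTCAGG"  # desired edit: one letter changed (G -> T)

# Cas9 only cuts where the target is followed by a PAM: any letter, then "GG".
site = re.search(guide + "[ACGT]GG", genome)
if site:
    edited = genome[:site.start()] + repair + genome[site.start() + len(guide):]
    print("Before:", genome)
    print("After: ", edited)
```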

Doudna and Charpentier

Jennifer Doudna and Emmanuelle Charpentier invented the technology as a team, working first on plants and, later, on animals.

History becomes murkier with the involvement of Feng Zhang. Depending on the origin story, he either modified Doudna and Charpentier’s work or invented a new version that works on humans. In an initial ruling, the US patent office found his work original and awarded him a patent for the use of CRISPR in humans, as opposed to plants and animals. As with similar histories on innowiki, there will no doubt be appeals and lawsuits for many years.

Doudna is a professor at the University of California, Berkeley; Charpentier is a director at the Max Planck Institute in Berlin. Zhang is a professor at MIT.

Zhang is a founding member of the Broad Institute of Harvard and MIT, which, as of 2018, holds the sole right to use CRISPR in humans. The institute has announced that academic researchers may use the technology freely, but commercial uses must be licensed.

There were many precursor innovations to CRISPR, but most accounts point to Charpentier’s 2011 discovery that the technology could guide gene selection as the core of its value.

Designer Babies

In late 2018, Chinese researcher He Jiankui announced he had used CRISPR to genetically alter the DNA of twin girls. He allegedly fabricated ethics approval and claimed he edited the genes to make the girls immune to HIV. In any event, the Chinese government declared the work illegal.

Based partly on He’s claims, scientists now say CRISPR is not as accurate as they initially believed, working “more like an ax than a scalpel” for genetic manipulation. Even so, some form of CRISPR is likely to eventually have an enormous impact.

Global Positioning System (GPS)

GPS uses satellites to compute positioning in 3D space, allowing automatic mapping and advanced navigation.

The Soviet Union launched the Sputnik satellite on October 4, 1957. Sputnik did nothing but send out radio pings audible on radio receivers on earth. Conveniently, the Soviets launched the satellite to fly over their then-archenemy, the United States.

As scientists listened to the pings from Sputnik, they realized it was possible to compute the satellite’s approximate location from the timing of the pings. Specifically, the Doppler effect allowed an accurate computation of latitude, longitude, and altitude.
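The arithmetic behind that insight is straightforward. A minimal sketch, with invented sample readings (Sputnik’s actual beacon was near 20.005 MHz): the shift between transmitted and received frequency gives the satellite’s velocity toward or away from the listener, and the moment that velocity flips sign marks the satellite’s closest approach.

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(f_transmitted, f_received):
    """Classical Doppler: positive result = satellite approaching the receiver."""
    return C * (f_received - f_transmitted) / f_transmitted

# Invented sample readings (Hz) as a satellite passes overhead.
f_tx = 20_005_000.0  # nominal transmit frequency, roughly Sputnik's 20.005 MHz
for f_rx in (20_005_500.0, 20_005_000.0, 20_004_500.0):
    v = radial_velocity(f_tx, f_rx)
    print(f"received {f_rx:,.0f} Hz -> radial velocity {v:+,.0f} m/s")
# The sign flip from + to - brackets the moment of closest approach,
# which is what let listeners estimate the satellite's track.
```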

Subsequently, scientists realized the opposite was also possible: satellites whose positions were precisely known could be used to track one’s position on the ground.

Consequently, work began immediately. However, it was decades before the technology matured. Satellites needed precisely predictable orbits and highly accurate clocks. An ongoing power source was required to deliver the pings. Finally, the system needed encryption to hide it from hostile adversaries.

The very first system, Transit, launched in 1960 for use by the US Navy. There were just five satellites.

The first Global Positioning System, the Navigation System with Timing and Ranging (NAVSTAR), consisted of 24 satellites; development began in 1973 and the first satellites launched in 1978. It was not fully functional, globally, until 1995.

In the early 1980s, civilians came online. However, the civilian signal was partially distorted, lowering accuracy, for national security reasons. Subsequently, in 2000, the US government removed that restriction, making the system about 10x more accurate.

As of 2019, there are six GPS systems. Four are global: GPS (US), GLONASS (Russia), Galileo (EU), and BeiDou (China). Two are regional, QZSS (Japan) and IRNSS/NavIC (India).

GPS receivers are tiny, inexpensive, and ubiquitous.

Internet

Nikola Tesla and J.C.R. Licklider both talked about a worldwide network of computers. Licklider referred to it as an “Intergalactic Network.”

Background

The internet evolved slowly over time. At first, it wasn’t much more than a series of specifications, ideas about how computers might talk to one another. Eventually, towards the late 1960s, these turned into a working system connecting a small number of computers.

The Advanced Research Projects Agency, an arm of the US military that funds far-out projects, funded the early internet, called ARPANET.

Unless you had a Ph.D. in computer science, the early internet was dull and didn’t do much. Quoting one source: “ARPANET showed almost no sign of ‘useful interactions that were taking place on [it].’” However, it did use packet switching to connect computers. Simplifying, packet switching involves breaking information into pieces for transmission between computers.

Eventually, ARPANET grew, but the clunkiness of early packet switching left the network unstable. Data would arrive scrambled, and if a computer was off, messages could disappear into a black hole. Worse, all computers needed to be physically wired to one another; there was no way to use interim computers (hosts) to route packets further along. This severely limited the young network’s ability to grow.

TCP/IP

Two technologies changed that. During the late 1970s and early 1980s, internet pioneers Vinton “Vint” Cerf and Bob Kahn developed the transmission control protocol (TCP) and internet protocol (IP).

Specifically, transmission control protocol (TCP) is a set of rules specifying how to break information into pieces, how to ensure the packets arrive complete, how to reassemble them, and how to request a replacement if a packet arrives scrambled.
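Those rules can be sketched in a few lines. The packet format below is invented for illustration; real TCP headers, handshakes, and retransmission logic are far more involved:

```python
import zlib

# Simplified illustration of TCP's core ideas: number each packet,
# checksum it, reassemble in order, reject anything corrupted.
PAYLOAD_SIZE = 8  # tiny on purpose, so a short message needs several packets

def make_packets(message: bytes):
    """Break a message into numbered, checksummed packets."""
    chunks = [message[i:i + PAYLOAD_SIZE] for i in range(0, len(message), PAYLOAD_SIZE)]
    return [(seq, zlib.crc32(chunk), chunk) for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Verify checksums and restore the original order, or name packets to resend."""
    bad = [seq for seq, checksum, chunk in packets if zlib.crc32(chunk) != checksum]
    if bad:
        raise ValueError(f"packets {bad} arrived scrambled; request retransmission")
    ordered = sorted(packets)  # sequence numbers restore the original order
    return b"".join(chunk for _, _, chunk in ordered)

packets = make_packets(b"We transmit thoughts by means of speech.")
packets.reverse()  # simulate packets arriving out of order
print(reassemble(packets).decode())
```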

Furthermore, internet protocol (IP) is a series of rules for routing the packets between computers. Significantly, when hosts are down, internet protocol automatically re-routes packets around them. This ability, to route information around broken or malfunctioning computers, explains the military’s early interest.
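A minimal sketch of that re-routing idea, using a hypothetical four-host network and a simple breadth-first search as a stand-in for real routing protocols:

```python
from collections import deque

# Hypothetical network: each host lists its directly wired neighbors.
NETWORK = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for a path that avoids down hosts."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in NETWORK[path[-1]]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(route("A", "D"))              # ['A', 'B', 'D']
print(route("A", "D", down={"B"}))  # ['A', 'C', 'D'] - routed around B
```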

With the implementation of TCP/IP, the internet was able to grow exponentially. Eventually, computers anywhere could easily connect to the network.

Growth of ARPANET

Personal Computer, Xerox Alto (the “interim Dynabook”)

The Dynabook was at the heart of Xerox PARC. Eventually realized as the Xerox Alto, it is essentially the first personal computer: easy to use, with a graphical interface, what-you-see-is-what-you-get (WYSIWYG) programs, icons, a mouse, and networking. Everything we take for granted today started as the Dynabook/Alto.

Background

The Dynabook dates to Alan Kay’s doctoral thesis and his first interview with Xerox. It is the underlying principle behind much of the work at Xerox PARC.

Kay envisioned a computer for just one person. His theoretical computer notebook would cost less than $500 “so that we could give it away in schools.” Compactness was important so “a kid could take it wherever he goes to hide.” Programming should be easy: “Simple things should be simple, complex things should be possible.” “A combination of this ‘carry anywhere’ device and a global information utility such as the ARPA network or two-way cable TV will bring the libraries and schools (not to mention stores and billboards) to the home.”

Xerox refused to fund the Dynabook; it was deemed an inappropriate project since Xerox PARC was for offices, not children. Subsequently, Kay ignored them, sneaked away and, with the help of Chuck Thacker and Butler Lampson, built what became the Alto. Kay referred to the Alto as “the interim Dynabook.”

Xerox: Computers Won’t Make Money

When it was finished, in 1973, Kay released it with a graphic of Cookie Monster, from Sesame Street, holding the letter C. Xerox built about 2,000 Altos for company use but never fully commercialized the computer. A Xerox executive told Bob Taylor “the computer will never be as important to society as the copier.” The Dynabook, the personal computer, did not add shareholder value.

As of mid-2019, Xerox is worth $6.5 billion. Microsoft is worth $1.01 trillion. Apple is worth $874 billion.

Of course, Steve Jobs eventually visited Xerox PARC and rolled many of the Alto’s ideas into an Apple computer first called the Lisa and, later, the Macintosh. Soon after, Microsoft released Windows, which looked suspiciously similar.

Public-Key Cryptography (Public key encryption)

Public Key Cryptography (PKC) dramatically lowers the risk of information intercept and impersonation, vastly increasing security. For example, Google publishes a public key that anyone can use to send it encrypted queries. However, that public key cannot decrypt anything; only Google, which holds the matching private key, can decrypt the queries. Besides encrypting and decrypting, public keys can authenticate that a person is who they claim to be.

Patent fights can often obscure inventors. National security concerns intensify this problem. Inventors, in this case, worked in secret.

Significantly, James Ellis, Clifford Cocks, and Malcolm Williamson suggest they invented public-key cryptography around 1972 while working for the British government. However, the British government and the US National Security Agency (NSA) kept the technology classified until 1997.

Eventually, not knowing about the classified innovation, Whitfield Diffie and Martin Hellman “discovered” PKC in 1976. Whether they were working for a telecom, for Sun, or for a government agency at the time isn’t clear.

Ronald Rivest, Adi Shamir, and Leonard Adleman made a successor system, RSA, that commercialized public-key encryption. Public-key encryption is what allows “secure” transmission of data over the Internet, among other things.
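The trick rests on math that is easy to do one way and infeasible to reverse. A toy sketch of the RSA arithmetic with absurdly small numbers (real keys use primes hundreds of digits long, plus padding schemes omitted here): anyone holding the public pair (n, e) can encrypt, but only the holder of the private exponent d can decrypt.

```python
# Toy RSA with tiny numbers; real keys use primes hundreds of digits
# long and careful padding. Requires Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53            # two secret primes
n = p * q                # 3233, the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: the modular inverse of e

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # anyone can do this with public (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it

print(ciphertext, recovered)  # -> 2790 65
```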

LASER

LASERs allow light to be intensely focused. There are many uses, from reading digital media at low power to cutting at higher powers. Countless applications rely on LASER technology.

In 1957, Arthur Schawlow and Charles Townes, of Bell Labs, worked on an infrared LASER, called an “optical MASER.” They patented the invention in 1958.

In 1960, Theodore Maiman of Hughes Research Lab created the first visible-light LASER. It was based on Schawlow and Townes’ (and, arguably, Gould’s) work.

Gordon Gould also claims credit for inventing the LASER, having notarized earlier notes he’d shown to Schawlow. However, Gould could not obtain patents because the US government considered the work classified and Gould a communist sympathizer. He spent 30 years fighting for LASER patents and eventually won; by then, however, he’d sold 80% of the royalties. He still collected several million dollars.

Townes shared the 1964 Nobel Prize in Physics for his work on the LASER. Schawlow shared the 1981 Nobel Prize in Physics for his work on laser spectroscopy.

Satellites

Satellites brought the world closer together, enabling instant communication, relaying information, and fulfilling countless military and civilian uses.

Sergei Korolev designed the first satellite, Sputnik 1. It struck fear and hope around the globe as it orbited Earth, sending out radio pings that anybody with a receiver could hear.

Korolev spent years of the Great Purge in Stalin’s gulags, nearly dying, but won release and went on to lead the Soviet space program. He was also the lead engineer for the rocket that launched Yuri Gagarin, the first person to reach space, on April 12, 1961.

The Soviet Union worried the US would attempt to assassinate him to win the space race so Korolev lived quietly; even Soviet cosmonauts did not know his last name.

The Soviet Union did not release his name until after his death in 1966. To this day, many streets, monuments, and even the primary Russian space city bear his name.

Nuclear Weapons

Caltech professor Robert Oppenheimer led the team of researchers at Los Alamos that invented the atomic bomb, overseeing some of the most noteworthy physicists in the world.

The Manhattan Project, like the code-breaking at Bletchley Park, was intensely secretive. Los Alamos, in New Mexico, was built to house the many scientists, technicians, and soldiers working on the bomb.

Nobel Laureates Enrico Fermi and Niels Bohr worked alongside countless others toward the goal of creating a super-weapon to defeat Hitler’s Nazis. Einstein did not live at Los Alamos but consulted on the project; John von Neumann also did not live there but visited frequently, helping to develop the technology.

On August 6, 1945, the US dropped the first atomic bomb on Hiroshima, Japan. Three days later, on August 9, the US dropped the second, and so far last, nuclear bomb ever used in war on Nagasaki, Japan. Thereafter, Emperor Hirohito announced an unconditional surrender on August 15, 1945, ending WWII.

“We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita; Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, ‘Now I am become Death, the destroyer of worlds.’ I suppose we all thought that, one way or another.”

Oppenheimer, 1965.

On Dec. 21, 1953, the US government suspended Oppenheimer’s security clearance, largely due to his opposition to the hydrogen bomb. He died in 1967, at age 62.

The Gadget, the first nuclear weapon, in the test tower. Photo courtesy of Los Alamos National Laboratory.