There is a major problem with Wikipedia: deletionists. Deletionists, as the name suggests, get off on deleting things. They tear down rather than build up.
Why would they want to do this? There are two primary reasons. First, it is an easy way to increase the number of Wikipedia edits, which increases the visibility and power of an “editor” (using the Wikipedia version of the term rather than the more broadly understood meaning). Write something and a deletionist can delete it, effectively negating an edit. Delete something, on the other hand, and it tends to stay deleted unless a person continually undoes the deletion, a tiring and somewhat futile exercise. The second reason is the raw feeling of power that comes from knocking down the work of others.
Of course, like any obstructionists, deletionists do not admit to their underlying and, frankly, ugly motives. Instead, they claim to be gatekeepers of quality, albeit self-appointed ones. Their claimed benefits, beyond fighting “vandalism” (which they define broadly to mean anything they dislike), are nonsensical and not worth printing. In fact, their knee-jerk tendency to delete information is, in itself, a form of vandalism.
Their self-righteous justification is intense enough that they deleted the Wikipedia page that used to exist about deletionism. To reiterate, Wikipedia deletionists deleted a page about deletionists and instead merged it into a page about both deletionists and “inclusionists,” as if the latter group, who understand the reason behind Wikipedia, are somehow equal.
For Americans, think of Mitch McConnell, the US Senate leader who blocks everything and uses obstruction to get his way. Deletionists are the same. They derive power not from doing the hard work to create but, rather, by deleting the research and inclusions from others.
Much like McConnell uses power derived from a small minority of the American populace to rule over a larger segment, deletionists use their edit counts to become “senior editors,” empowering both themselves and other deletionists. That is, they’ve effectively hacked the Wikipedia rules to their benefit: the more they delete, the higher their edit count and the more authority they have, which they then abuse to delete even more material.
If this sounds dysfunctional, you’re right – it is. More than dysfunctional, it encourages abusive tactics from the very worst players, power-hungry basement dwellers who themselves have nothing to say and cannot project their voices any other way than what is, in essence, a form of vandalism.
Rather than deal with deletionists — who should, in a better thought-through scheme, be deleted themselves — we created innowiki as an alternative. We believe innovation is so important that it deserves at least brief, objective descriptions free from the constraints of deletionists. We will sometimes link to our articles from Wikipedia but find deletionists — who have long threads of deleted material — almost always and immediately then delete the links, citing bogus reasons.
There is a simple solution to deal with deletionists: do not credit deletions as edits. Crediting only accepted inclusions as edits would make deletionism a pointless exercise. They may still get off on the power but they’ll likely tire quickly when it leads nowhere.
Wikipedia is a great idea though, like many things that become big, it suffers growing pains. There remains a myriad of dubious information and, thanks to deletionists, a strong disincentive for knowledgeable people to participate in the community. The effect is an ever-shrinking knowledge base pulled from an ever-shrinking set of sources, the opposite of Wikipedia’s original purpose.
If you’re tired of dealing with deletionists but want to write about innovation, feel free to join us. We work on these brief outlines to publish short, accurate statements devoid of political bias with no favoritism towards deletionist vandals. Interested? Join us.
Blockchain is like the gluten-free diet of a few years back: everybody’s into it but nobody’s quite sure why, and few people really need it, though to those who do it’s important.
I’ll analyze it through the only metric that really matters: value. But before doing that, we need to understand what it actually is.
Here’s a (hopefully) easy-to-understand primer about what blockchain is. Don’t sweat the terminology; it will make sense in a few hundred words.
When I run my name, Michael Olenick, through the SHA-1 hashing function it returns:
Hashing is a mathematical function that takes information and returns a unique string of letters and numbers. The same input will always return the same string. Different data will, for all practical purposes, never return the same string.
No matter how long or short the data input, the length of the string will be the same. The SHA-1 hash (1) of the entire text of War & Peace is:
There’s vastly more input — the text of the entire book — but the hashed string is the same length. If I hashed a digitized movie, a photo, a contract, or anything else it would return a unique string the same length as my name. And every string would be unique unless the data input was exactly the same. War & Peace will always return the value above but nothing else will.
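The fixed-length property is easy to verify yourself. Here is a minimal sketch using Python’s standard hashlib library; the long input is a stand-in, not the actual book text:

```python
import hashlib

def sha1_hex(data: bytes) -> str:
    """Return the SHA-1 digest of `data` as a 40-character hex string."""
    return hashlib.sha1(data).hexdigest()

short = sha1_hex(b"Michael Olenick")
# A few megabytes of repeated text stands in for a whole book.
long_input = sha1_hex(b"All happy families are alike..." * 100_000)

# The same input always returns the same digest...
assert sha1_hex(b"Michael Olenick") == short
# ...and the digest length is fixed no matter how large the input is.
assert len(short) == len(long_input) == 40
```

Whatever you feed in, the digest is always 40 hex characters.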
Getting to blockchain: if I used a prior hash or two plus something else, it would make an entirely new unique hash. For example, if I combine the hash for my name plus the words ” fell asleep reading ” then included the hash for War & Peace I’d get:
If I changed a single letter in War & Peace, or changed anything in my name, or wanted to change ” fell asleep reading ” to ” is intellectually stimulated by ” then the hashed value would change.
Hashing makes it impossible to change anything without the hashed value also changing.
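The combining step can be sketched in Python; the book text here is a placeholder, so a real run's digests will differ from any quoted in a printed version:

```python
import hashlib

def sha1_hex(text: str) -> str:
    """SHA-1 digest of a string, as hex."""
    return hashlib.sha1(text.encode()).hexdigest()

name_hash = sha1_hex("Michael Olenick")
book_hash = sha1_hex("placeholder for the full text of War & Peace")

# Combine prior hashes with new text to produce a new, unique hash.
combined = sha1_hex(name_hash + " fell asleep reading " + book_hash)

# Change anything -- even one character of the book -- and the result differs.
altered_book_hash = sha1_hex("placeholder for the full text of War & Peace!")
tampered = sha1_hex(name_hash + " fell asleep reading " + altered_book_hash)
assert combined != tampered
```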
The heart of blockchain is a running series of hashed values using the technique above, a chain. Every item in the ledger, called a block, includes the hash value of the prior items plus the hash value of the new ledger entry. Combined together, those create a new hashed value.
Chaining these blocks makes it impossible to change anything in the ledger without invalidating all the following hashed values.
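That chain-of-blocks idea fits in a few lines of code. The following is an illustrative sketch, not a production implementation, and the ledger entries are invented examples:

```python
import hashlib

def sha1_hex(text: str) -> str:
    return hashlib.sha1(text.encode()).hexdigest()

def make_chain(entries):
    """Build a list of blocks, each hashing its entry plus the prior block's hash."""
    chain, prev_hash = [], "0" * 40  # genesis: no prior block yet
    for entry in entries:
        block_hash = sha1_hex(prev_hash + entry)
        chain.append({"entry": entry, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chain

def is_valid(chain):
    """Recompute every hash; editing any earlier block invalidates all later ones."""
    prev_hash = "0" * 40
    for block in chain:
        if block["prev"] != prev_hash or block["hash"] != sha1_hex(prev_hash + block["entry"]):
            return False
        prev_hash = block["hash"]
    return True

ledger = make_chain(["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"])
assert is_valid(ledger)

ledger[0]["entry"] = "Alice pays Bob 500"  # try to rewrite history
assert not is_valid(ledger)
```

Rewriting the first entry breaks every hash that follows it, which is exactly why tampering is detectable.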
Unlike a regular ledger, the entries need not be only numbers. People can enter entire contracts into the ledger combined with down payment amounts and digitized signatures. Biometric information like fingerprints can go in. Timestamps can be and are entered. Even DNA sequences of, say, plants or animals (sure, and people) can go into the chain of hashed blocks.
How can we ensure somebody wouldn’t change all the prior blocks to distort the entire chain? Each entry in the chain is essentially a receipt. If somebody tried to change a prior entry then all the hashed values on the subsequent entries — all those receipts issued — would become invalid.
We can also demand the chain, or at least the current value of the chain, be made public. That would make it impossible to change the prior blocks without the published value changing.
Blockchain itself is often tied to an open ledger where anonymous people record transactions and keep the running total. Maintaining this ledger is baked into Bitcoin mining to incentivize people to keep the ledger. That is, by keeping the ledger there’s a chance to make bitcoins out of thin air; a new block, with its reward, is generated about every ten minutes.
Back to blockchain … why would ordinary people care about a system where the accounting entries of a ledger could never be changed? For that matter, why would ordinary people care about accounting at all? Accounting classes aren’t exactly the rage. If they were elective, accounting classes would be filled with the pocket protector crowd and little else.
Blockchain and Bitcoin were released together in 2009 at the height of the financial crisis. There was an implicit fear that, with the chaos during that time, banks would cheat.
Somebody using the pseudonym Satoshi Nakamoto released Bitcoin and its underlying ledger, blockchain, in response. The idea was that since blockchain is not controlled by any single entity, there was nobody who could cook the books.
Of course, the financial system has since healed and trust has been restored. Bitcoin has become a speculative investment. But blockchain — tying together blocks of transactions in chains — really does increase value by raising trust while lowering costs by providing an ongoing audited transaction chain.
All this leads to an obvious question: given that all blockchain does is prevent cheating, and the core hashed ledger is inexpensive to implement, why don’t existing ledgers simply adopt these techniques? Why doesn’t every receipt contain, say, an immutable hashed value of your purchases and also the hashed value confirming payment? I don’t know.
Blockchain and the hashing that underlies it open all sorts of interesting potential applications. Hashing contracts with signatures, terms, and timestamps is an obvious addition. Adding a payment ledger to the mix, a mini-blockchain — itself possibly an entry in a larger blockchain of all entries for a securitized trust – seems obvious and trivial.
There is a lot of hype and, apparently, a lot of money going into blockchain. And some of the hype makes sense; I thought about using it to track the DNA of wood to ensure it was sustainably harvested. It’s easy to imagine stamped email ensuring the sender is who they say they are and, if not — if the email is spam — to quickly delete it. Contracts and land recording are also obvious uses.
But, like most fads, a lot is far-fetched or nonsense. We’re not going to stick every web page on a chain of blocks (interestingly, one of the original purposes of hashing was security for digital rights management — it was and remains universally disliked).
Too much blockchain hype is driven by people who equate interesting technology with value. They’re jumping on a bandwagon because stuff sounds cool, or maybe because there’s a lot of investment capital.
Blockchain is likely here to stay. I can’t really see the purpose of distributed ledgers but, hey, the only thing they’re doing is wasting electricity (albeit quite a bit of it with Bitcoin). Will blockchain change the world? Who knows; predicting the future is dangerous stuff. Might it help reduce fraud and spam? Probably, once the hype dies down and technologists get their act together. Can it increase value while reducing costs? Yes, but it’s important to focus on the applications that do this, not on the technology itself.
(1) Brief but important digression: there are countless hashing algorithms, methods for turning data into a unique string. As computers get faster they get better at guessing what the string might be. In response, the algorithms become stronger. Unless you’re working for the NSA or some equivalent, the details don’t really matter, except that SHA-256 is stronger than SHA-1 but both return a unique string. Most hashing algos are available online, for free, with implementations in the major programming languages. They’re easy to use.
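As an illustration of that footnote, both algorithms ship in Python’s standard hashlib and, from a user’s perspective, differ mainly in digest length and strength:

```python
import hashlib

msg = b"Michael Olenick"
d1 = hashlib.sha1(msg).hexdigest()
d256 = hashlib.sha256(msg).hexdigest()

assert len(d1) == 40    # SHA-1: 160 bits -> 40 hex characters
assert len(d256) == 64  # SHA-256: 256 bits -> 64 hex characters
```

Swapping one algorithm for the other is typically a one-line change.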
Innowiki founding member Michael Olenick is currently an executive fellow at the INSEAD Blue Ocean Strategy Institute, on the Fontainebleau, France campus. Michael has worked closely with Chan Kim and Renée Mauborgne since 2001, before the book Blue Ocean Strategy came out in 2005, when it was articles in Harvard Business Review. Michael learned about Blue Ocean Strategy (then called Value Innovation) at Avery Dennison as a product developer, brought it to GE, and has used it at countless companies since.
Michael advises, consults, researches, and teaches BOS throughout the world. He has implemented business strategy for companies ranging from startups to Fortune 100’s. Michael works with senior executives of countless companies and organizations to develop and/or study strategy. He focuses especially on technology, disruption, nondisruption, and the differences and similarities of B2B and B2C businesses.
Michael’s research has been cited in leading business publications including the New York Times, Wall Street Journal, Washington Post, and Bloomberg. Congress and the New York Federal Reserve have collaborated with him and relied on his research when making policy.
His research is taught by leading business schools including Harvard Business School, Stanford Graduate School of Business, the Wharton School, University of Chicago, and others. Multiple cases are bestsellers at Harvard Business Review/Harvard Business School Publishing, including:
• Driving the Future: How Autonomous Vehicles Will Change Industries and Strategy (Harvard Business School Press)
• Gaga for Wawa: Blue Ocean Retailing (Harvard Business School Press)
• The Marvel Way: Restoring a Blue Ocean (Harvard Business School Press)
• A Blue Ocean Shift from Insolvency to Excellence in Higher Education: Turning around the Universidad Privada Boliviana – A Reflection on My Journey to Blue Ocean (Harvard Business School Press)
• Nintendo Switch: Shifting from Market-Competing to Market-Creating Strategy (Harvard Business School Press)
• An Innovation that has Changed the Lives of Women in India (Harvard Business School Press)
Michael has a doctorate in law. He delivers keynote speaking, workshops, lectures and consulting all over the world.
On September 13, 1970, Milton Friedman published what is arguably one of the most economically destructive articles in history, “The Social Responsibility Of Business Is to Increase Its Profits,” in the New York Times. The article is available, in PDF form, for subscribers on the New York Times website.
Friedman advanced the idea that managers are agents of shareholders and that the only purpose of for-profit businesses is to increase stock price.
Managers have been debating Friedman’s “Shareholder Value Theory” for ages but nobody seems to have spotted the most obvious flaw in the seminal article. Milton’s sermon was directed at GM management, who listened, decimating their brand, market share, and share price.
Specifically, Friedman railed against the notion that corporations have “social responsibilities” which, in this specific case, meant they should build safer, more fuel-efficient, and environmentally friendly cars. One can surmise this notion eventually extended, during a time when planned obsolescence was part of the business model, to quality.
In 1970, Friedman insisted businesspeople not concern themselves with issues beyond increasing shareholder value. “Businessmen who talk this way are unwitting puppets of the intellectual forces that have been undermining the basis of a free society these past decades,” wrote the Nobel laureate. Implicit is the message to GM: keep doing what you’ve been doing: build clunky, crappy cars because that strategy was profitable in the past. The Times, They Weren’t A Changin’ at GM.
In hindsight Milton Friedman was, by any measure, wrong.
No serious student of business, economics, law, history, or engineering could argue that Friedman’s business analysis was correct. GM, along with other US automakers, listened to Friedman, ignored the “reformers” (Friedman’s word), and went on to build a series of truly terrible cars: automobiles that broke, blew up, hurt people, handled terribly, and guzzled gas. These cars were uglier than a Blobfish and polluted “like a 19th century coal-fired factory,” as Wired Magazine eloquently summarized the era.
Besides producing awful automobiles, those “social responsibilities” that Friedman railed against were important to many baby boomers who, in 1970, were on track to become the largest consumer group. That is, to appease faceless shareholders, Milton Friedman advised that GM and other businesses ignore the demands of future customers.
The result was predictable. As my mentor Prof. Robert Ayres recalls, he bought a Honda. I’m younger but remember when my father replaced the family jalopy with a new Corolla. Subarus became cool. Japanese cars were fuel efficient, environmentally friendly, relatively safe, and incredibly reliable. They were built by companies that took exceptional care of their workers who, in turn, cared exceptionally about their businesses and the products they built. Japanese executives must have been aware of Friedman’s theory and actively rejected the advice; they took, and continue to take, those “social responsibilities” seriously.
Did Toyota and Honda ignore shareholder value? With booming sales and sterling reputations, I imagine their shareholders were pleased. How about those GM shareholders that Friedman praised for voting against exploring social responsibilities? People who purchased GM stock in 1965 — the ones Friedman praised for voting down social policy considerations — did see their stock increase in value … in 1993. I’m not sure how that constitutes shareholder value.
Milton Friedman’s Exhibit A on shareholder value — the notion that GM must reject a call for “social responsibility” and ignore buyer demands — resulted in one of the worst business disasters in history, the gutting of General Motors.
Others have pointed out that the rest of Friedman’s theory is bunk.
First are the business executives: those who run actual businesses, something Milton Friedman never did. As detailed in this article from Forbes, Jack Welch called it “the dumbest idea in the world.” Paul Polman, CEO of Unilever, referred to its followers as a “cult.” Alibaba CEO Jack Ma reminds us that “customers are number one; employees are number two and shareholders are number three.” Marc Benioff, founder and CEO of Salesforce, added it is “wrong … the business of business isn’t just about creating profits for shareholders.”
Great business executives care about social issues. Apple CEO Tim Cook famously told an analyst, when questioned about Apple’s use of renewable energy, “I don’t consider the bloody ROI,” adding that Apple does “a lot of things for reasons besides profit motive. We want to leave the world a better place than we found it.” Google’s founding motto was “Don’t be evil.” They eventually dropped that because it set the bar too low. Facebook actively works on connectivity for poor countries. Well-managed businesses practice menschkeit, taking care of their customers, employees, and communities while earning a lot of money for shareholders. Lesser businesses, or those driven by short-term activist shareholders, are parasitic, milking their customers, employees, and the organization itself dry until there is little left for shareholders or anybody else.
Legal experts explain that Friedman’s theory, that managers are agents with a responsibility to increase stock returns, is outright wrong. Lynn Stout, distinguished professor of corporate and business law at Cornell Law School, argues Friedman bungled the law; managers are legally not agents of shareholders. She wrote a book on the subject, The Shareholder Value Myth. Prof. Stout writes “the idea of a single shareholder value is intellectually incoherent. No wonder the shift to shareholder value thinking doesn’t seem to be turning out well — especially for shareholders.”
Shareholder Value Theory nevertheless remains alive and well. Michael Jensen and William Meckling published a 1976 article, “Theory of the Firm,” that repeated the myth that managers are agents of shareholders. Even though GM’s struggles were apparent by 1976, and one would think the question of agency is for lawyers rather than economists, their paper became and remains one of the most widely cited in academic literature.
Part I, “Automation Armageddon: a Legitimate Worry?” reviewed the history of automation, focused on projections of gloom-and-doom.
Part II, “Automation: Robots in Real Life” reviewed how robots are ubiquitous and create jobs.
My first real job was creating a print estimating and production control system for five print plants scattered around the US. Each had at least one web press. These are enormous presses that use large rolls of paper and run at high speeds. They’re typically used for high-speed, high-volume printing.
Each plant also had at least one and often several sheetfed presses. These are presses that print on cut sheets of paper. Sheetfed presses are typically higher quality, used for things like high-end flyers, posters, or the covers of booklets.
Both the web and sheetfed presses had at least four units, one each for cyan, magenta, yellow, and black: CMYK. If that sounds familiar, it’s because your inkjet printer uses the same four print heads; when the first three of those colors are mixed together they can create any other color. They can also create “process black” but it’s cheaper to just use black ink, plus the black ink looks better.
Most presses had extra inline units to add “spot color” – specific colors that popped out because they were not mixed together.
The covers, created by the sheetfed presses, were combined with the innards of the “book” (which wasn’t a book but that’s what it was called), in the “bindery,” the part of a print plant that does just what the name implies.
Of course, the presses must be fit with printing plates created by a prepress group, fit correctly on the press units, and the whole thing tied together to work within a fraction of a millimeter.
I showed up to get a feel for the place and, three steps in, was grabbed by somebody who said, “Get out: no loose clothing is allowed in here, especially ties.” Stepping back, he explained the web press pulls 4-6 rolls of paper, each weighing about 550 pounds (250 kg), at high speeds. If a hand gets stuck inside, the person will lose their hand or arm. If a jacket gets stuck, your shoulder or side is going to be crushed. But if a tie gets caught up you’re “186’d,” the color number for Pantone blood-red ink.
All this came to mind because of a conversation I had discussing the origins of automation technology. My first thought (and that of the person I was speaking to) was mill technologies of the 1700s are the first true automation machines. These are the mills that automated creating thread, yarn, and cloth. He must’ve thought about it then posited the printing press is arguably the first real piece of automation tech. I agreed – and still agree – but something felt wrong.
Sleeping on it, I realized the problem. The automated mill technologies required essentially no skill. People feed the machines raw materials and collect the output, but the appeal of those machines as a business is that anybody could operate them. Six-year-old children worked in those mills.
The printing press, on the other hand, always required skilled labor.
With Gutenberg’s first press, the magic was not the press itself, which was the simplest component. The technology came from the interchangeable letters, the ability to assemble them into a frame, specialized inks and papers, and an understanding of how to combine it all. It was so complex that Gutenberg spent his entire sizable inheritance inventing the whole thing, then went into debt and bankruptcy trying to commercialize it.
The printing press required vocational skills that could not be learned overnight. Even the earliest printers often belonged to guilds which initially certified quality and skill. These later morphed into trade unions but that was not their original purpose.
Conversely, the mill workers required virtually no skill. Life was miserable for mill workers until they were unionized.
High-skill workers traditionally have more bargaining power than lower-skill workers. Of course, even the most skilled laborers are still vulnerable to union-busting, keeping in mind the PATCO air traffic controllers’ strike and Reagan’s union-busting response. However, absent the ability to order in military air traffic controllers – something no private business could do – those controllers should have been union-busting-proof due to the skill needed to do the job.
The worry with the upcoming disruption by artificial intelligence is that the value of these skills, and the bargaining power they bring, will decrease.
And maybe that’s true. Let’s think of a few examples.
Will we need radiologists when an AI can do the same diagnosis faster, more accurately, and at far less cost? What’s the point of pharmacists even today when a computer can sort through every imaginable drug interaction? I’ve been tailgated by reckless semi-trailer truck drivers, probably trying to make a delivery, wishing a more patient computer was behind the wheel.
While we surely care about the potential job loss, it’s hard to argue that lower healthcare and transportation costs at higher quality with increased safety sounds bad.
Getting back to the printing press: in September 1959, Xerox released the Xerox 914 plain-paper copier. It required no skill whatsoever. Place a printed page on the glass, press the button, and an exact replica pops out on plain paper. Sixty years later it doesn’t seem like much, but at the time it was magic. I can’t find any statistics but the 914 didn’t seem to displace any jobs. Surely there were people operating mimeographs and Watt Copying Presses but there’s no record of mass firings. Most likely these people were reassigned to more interesting work (a low bar, since few jobs are less interesting).
Gutenberg’s original press and the myriad that followed didn’t displace jobs for scribes. They seem to have morphed into working as printers or teachers, since illiteracy was widespread before the press. Many were monks and I’d imagine they found some other monk job.
As books became cheaper, more people learned to read. As a market for books grew, people not only wrote more books but the quality increased since authors — who wanted to be paid — added their names. Gutenberg’s press did not eliminate any jobs but did eliminate ignorance and eventually led to the Renaissance.
This doesn’t mean that automation technology does not facilitate the destruction of jobs. The high cost of the machines, especially when first introduced, ensures only the wealthiest can afford them. Patents granted by governments, or natural monopolies, oftentimes cause control of the automation tech to be concentrated in a small group of people who, throughout history, abuse that power. But it’s not the technology per se; it’s those who own and control it.
Richard Arkwright was the original English mill owner and inventor of the modern no-skill factory and company town. Historians seem to agree he was a terrible person. But his automation and business practices are two separate issues. There was more than enough profit for Arkwright to treat his workers better but no regulatory nor financial incentive to do so; plus, he was just a bad person.
I’ll admit that this line of reasoning, that it’s the people and not the equipment, runs perilously parallel to a “guns don’t kill people – people kill people” argument. But I think that’s a false equivalence. In the case of handguns and the AR-15/AK-47 style weapons, killing people is their primary purpose. They have no other utility value beyond killing people. In contrast printing presses, cloth mills, and the countless other types of automation tech we’ve listed have lots of utility value. Turning people into robotic-like wage slaves is an abusive side effect of predatory businesspeople, not the primary purpose of the technology.
“It smells like death,” is how a friend of mine described a nearby chain grocery store. He tends to exaggerate and visiting France admittedly brings about strong feelings of passion. Anyway, the only reason we go there is for things like foil or plastic bags that aren’t available at any of the smaller stores.
Before getting to why that matters – and, yes, it does
matter – first a tasty digression.
I live in a French village. To the French, high-quality food is a vital component of a good life.
My daughter counts eight independent bakeries on the short drive between home and school. Most are owned by a couple of people. Counting high-quality bakeries embedded in grocery stores would add a few more. Going out of our way more than a minute or two would more than double that number.
Despite so many, the bakeries seem to do well. In the half-decade I’ve been here, three new ones opened and none of the old ones closed. They all seem to be busy.
Bakeries are normally owner operated. The busiest might employ a few people but many are mom-and-pop operations with him baking and her selling.
To remain economically viable, they rely on a dance of people and machines:
Flour arrives in sacks with high-quality grains milled by machines.
People measure ingredients, with each bakery using slightly different recipes.
A human-fed robot mixes and kneads the ingredients into the dough.
Some kind of machine churns the lumps of dough into baguettes.
The baker places the formed baguettes onto baking trays then puts them in the oven.
Big ovens maintain a steady temperature while timers keep track of how long various loaves of bread have been baking. Despite the sensors, bakers make the final decision when to pull the loaves out, with some preferring a bien cuit more cooked flavor and others a softer crust.
Finally, a person uses a robot in the form of a cash register to ring up transactions and process payments, either by cash or card.
Nobody — not the owners, workers, or customers — thinks twice about any of this. I doubt most people realize how much automation technology is involved or even that much of the equipment is automation tech.
There would be no improvement in quality from mixing and kneading the dough by hand. There would, however, be an enormous increase in cost.
The baguette forming machines churn out exactly what a person would do by hand, only faster and at a far lower cost.
We take the thermostatically controlled ovens for granted. However, for anybody who has tried to cook over wood, controlling heat via air and fuel, thermostatically controlled ovens are clearly automation technology.
Is the cash register really a robot? James Ritty, who invented it, didn’t think so; he sold the patent for cheap. The person who bought the patent built it into NCR, a seminal company laying the groundwork of the modern computer revolution.
Would these bakeries be financially viable if forced to do all this by hand? Probably not. They’d be forced to produce less output at higher cost; many would likely fail. Bread would cost more leaving less money for other purchases. Fewer jobs, less consumer spending power, and hungry bellies to boot; that doesn’t sound like good public policy.
Getting back to the grocery store my friend thinks smells like death: just a few weeks ago they started using robots in a new and, to many, not especially welcome way.
As any tourist knows, most stores in France are closed on Sunday afternoons, including and especially grocery stores. That’s part of French labor law: grocery stores must close Sunday afternoons.
Except that the chain grocery store near me announced they are opening Sunday afternoon. How? Robots, and sleight-of-hand. Grocers may not work on Sunday afternoons but guards are allowed.
I stopped in to get a feel for how the system works. Instead of grocers, the store uses security guards and self-checkout kiosks.
When you step inside, a guard reminds you there are no grocers. Nobody restocks the shelves but, presumably for half a day, it doesn’t matter. On Sunday afternoons, in place of a bored-looking person wearing a store uniform and overseeing the robo-checkout kiosks sits a bored-looking person wearing a security guard uniform doing the same. There are no human-assisted checkout lanes open but this store seldom has more than one operating anyway.
I have no idea how long the French government will allow this loophole to continue. I thought it might attract yellow vest protestors or at least a cranky store worker – maybe a few locals annoyed at an ancient tradition being buried – but there was nobody complaining. There were hardly any customers, either.
The use of robots to sidestep labor law and replace people, in one of the most labor-friendly countries in the world, produced a big yawn.
Paul Krugman and Matt Stoller argue convincingly that it’s the bosses, not the robots, that crush the spirits and souls of workers. Krugman calls it “automation obsession” and Stoller points out predictions of robo-Armageddon have existed for decades. The well over 100 examples I have of major automation tech ultimately led to more jobs, not fewer.
Jerry Yang envisions some type of forthcoming automation-induced dystopia. Zuck and the tech-bros argue for a forthcoming Star Trek style robo-utopia.
My guess is we’re heading for something in-between: a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling, and where small family-owned bakeries rely on automation tech to do the undifferentiated grunt-work. The robots in my future are more likely to look like cash registers than Terminators.
It’s an admittedly blander vision of the future; neither utopian nor dystopian, at least not one fueled by automation tech. However, it’s a vision supported by the historic adoption of automation technology.
Right now, we have 122 major innovations that involve some type of automation. Click here to see the list. Putting it mildly, many of them were not met with enthusiasm. For example, Frenchman Barthélemy Thimonnier invented the sewing machine only to see his factory burnt down by worried tailors. The “American Manufacturing Method” using standardized parts was invented by Frenchman Honoré Le Blanc but post-revolutionary France had enough problems without alienating gunsmiths; Thomas Jefferson brought it to the US.
The first and most famous automation freakout involves the infamous Luddites.
Nobody is sure if the legendary Ned Ludd, the inspiration for the Luddites, is a real person or more of a Robin Hood legend. Ned was supposed to be a factory worker who smashed up a knitting machine in 1779.
In some versions of the story, the machine is the Stocking Frame mechanical knitter, invented by William Lee in 1589. It could also have been a Spinning Jenny, invented by James Hargreaves in 1764. But my guess is it was supposed to be the Spinning Mule, invented by Samuel Crompton in 1779.
Combined with the improved efficiency of the other two machines, the Spinning Mule transformed textile work from largely skilled to largely unskilled labor.
The original Ludd story is unlikely. These machines weren’t like the latest mobile phone: even the smallest were huge and built from heavy wood, not easily smashed by even the angriest Englishman. Plus, it’s unlikely Ned would’ve been near a Spinning Mule the year it was released.
Whoever Ludd was or wasn’t, his Luddites were the real deal, smashing up Spinning Mules and other automation equipment, literally back in the day and more figuratively lately.
The much-maligned Luddites had a point. Industrialist Richard Arkwright essentially weaponized the Mule and its predecessor equipment to control the lives of ordinary laborers.
Arkwright invented the modern factory. Through the use of automation equipment, he realized high-volume mills could be operated by women and children rather than skilled laborers.
Child labor was common in England at this time and Arkwright’s factories were no exception. Technically, his minimum employee age was six, but exceptions were made. Kids were especially useful for dodging under powerful looms on an upstroke to retrieve something that had fallen behind, waiting there through the downstroke, then bringing it back on the next upstroke.
Mistakes were not uncommon:
“Cotton factories are highly unfavourable, both to the health and morals of those employed in them. They are really nurseries of disease and vice… When I was a surgeon in the infirmary, accidents were very often admitted to the infirmary, through the children’s hands and arms having being caught in the machinery; in many instances the muscles, and the skin is stripped down to the bone, and in some instances a finger or two might be lost. Last summer I visited Lever Street School. The number of children at that time in the school, who were employed in factories, was 106. The number of children who had received injuries from the machinery amounted to very nearly one half. There were forty-seven injured in this way.”
Dr. Michael Ward relating conditions in an Arkwright Mill, March 25, 1819
Ward was testifying in a government investigation prompted by widespread unease. Luddites had spent months “machine breaking” in 1811-1812; the British government responded by breaking Luddites, sending 14,000 troops against its own people.
It would be irresponsible not to mention that Arkwright’s mills were fed by an increasing supply of cheap cotton produced by slaves in the early United States. In the mid-1700s, growing cotton even with slaves was often unprofitable. In 1793, Eli Whitney invented the cotton gin, automating the process of separating cotton from seed. The cotton gin vastly increased the profitability of slavery and resulted in a dramatic increase in the number of slaves, including many kidnapped from Africa.
Returning to England, Arkwright’s mills expanded despite civil unrest. Waterwheels powered most mills, which required them to be located near fast-moving rivers. Arkwright hired local unskilled laborers, vacuuming the women and children of entire villages into his mills. The mills required ever more people so he encouraged families to move. He built the first company town, with homes, stores, and of course many mills.
Eventually, Samuel Slater smuggled Arkwright’s mill technology to the US. In 1834, the “mill girls” of Lowell, Massachusetts organized the first strike. Over the next century, countless women textile workers organized and protested. In 1912, 23,000 men, women, and children organized the Bread & Roses Strike.
In 1933, Frances Perkins became the first woman in the Cabinet, as labor secretary. By then, atrocious labor conditions combined with the Great Depression caused the Textile Workers Strike of 1934, the longest in US history.
Over the years, automation ebbed and flowed but, for the most part, marched forward. Countless innovations contributed, directly or indirectly, to automation. Despite understandable anxiety, automation tended eventually to produce a net increase in jobs. Those automated mills the Luddites broke up led to cheap fabric and the garment industry; clothing design, manufacturing, and retail are much larger industries. Plus, bespoke cloth and tailoring still exist.
Of course, a net increase doesn’t help the jobless with little or no skills.
Automation went on to destroy countless other jobs but, for the most part, they were awful jobs. Harvesting machines vastly decreased the number of people needed to pick corn and, later, tomatoes. Tomato pickers protested the invention of the tomato harvesting machine, predicting the end of manual labor. Of course, that didn’t happen and there were plenty of jobs picking other crops. The stepping switch and later computer tech largely eliminated the need for switchboard operators. While the job paid the bills (barely) it was boring work.
Nobody knows what will happen as advances in artificial intelligence make things like self-driving cars a reality. Like the many other innovations here, there will no doubt be some amount of displacement. But will a substantive number of good-paying jobs disappear? Or will drivers find something similar but different which computers aren’t good at, while tireless robots pilot cars that rarely if ever cause accidents? Can’t radiologists find something more interesting to do than examine the same images repeatedly, with boredom often causing mistakes? Do pharmacists really need eight years of school when computers more accurately watch out for drug interactions?
We can’t answer any of these questions except to say that, if history is a guide, all this tech will cause more jobs, not fewer, and those jobs will be more interesting.
Part II, “Automation: Robots in Real Life” reviewed how robots are ubiquitous and create jobs.
Sure, there are CEOs who committed crimes, CEOs who bankrupted their businesses, and CEOs who looted their businesses. There are crooks, those who hired cronies, people who paid bribes, plenty who demanded sex or servitude, and countless sociopaths.
In fairness to him, Archie did none of these things, which makes his claim to the title of worst CEO of all time all the more remarkable.
McCardell earned an MBA from the University of Michigan and started his career at Ford, focused on finance. At Ford, Archie trained under Robert McNamara, the future US Secretary of Defense. McNamara was a key figure in fabricating the Gulf of Tonkin incident, the supposed North Vietnamese attack used to justify a massive escalation of the Vietnam War. By the time the US effectively surrendered on March 29, 1973, 57,939 Americans and about a quarter-million South Vietnamese had died in the conflict. Vietnam remained communist for about a decade, then eventually transformed to capitalism, rendering the entire war pointless in hindsight. McNamara trained his protégé, McCardell, well.
After Ford, Archie started at Xerox in 1966. They promoted him to president in 1971. For three years, Xerox continued to announce record profits, just as they had for the prior two decades. Xerox had two research centers, one in New York and the then-new Xerox Palo Alto Research Center, Xerox PARC, in California. Significantly, McCardell pushed the Rochester center for profits but largely ignored the quirkier California group. Later, Archie’s executives were ready to cancel a New York-based project led by Gary Starkweather to image a copier drum with lasers: the laser printer. However, Starkweather negotiated a last-minute relocation to Palo Alto and saved his project.
Other interesting projects happening in Palo Alto, a center set up before McCardell’s time, included computer work building on Douglas Engelbart’s work at the Stanford Research Institute (SRI). Engelbart demonstrated video teleconferencing, intuitive interactive interfaces for computers, editable lists on computers, windows, dynamic file linking, and a new input device, the mouse. Subsequently, Xerox PARC hired many of Engelbart’s researchers and supplemented them with others led by the legendary Robert Taylor.
Archie Blows the Future
During McCardell’s reign, Xerox PARC created the modern computer interface, building on and perfecting windows, the mouse, icons, visual files, and intuitive interactive computing. Eventually, they created the idea of the personal computer, internally called the “Dynabook.” Object-oriented programming, the building block of all modern computer systems, came from Xerox PARC. Ethernet networking, which is how virtually all computers connect (WiFi is wireless Ethernet), is from there, and so are spline fonts and What You See Is What You Get on-screen displays and printing. And, of course, it was at PARC that Starkweather perfected his laser printer.
McCardell purposefully threw it all away. The Xerox Alto, developed at Xerox PARC, was the first modern personal computer. The Alto is the Mac before the Mac. “At Xerox, McCardell and [Ford alum head of engineering Jim] O’Neill created a numbers culture where decisions were put through the NPV test. Not surprisingly, the Alto failed,” reads an analysis.
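The “NPV test” in that analysis is standard net present value: discount a project’s future cash flows back to today and reject the project if the total comes out negative. A minimal sketch, using hypothetical numbers (the function name and the cash-flow figures are my own illustration, not anything from Xerox’s books), of how a long-horizon bet like the Alto can fail such a test:

```python
def npv(rate, cashflows):
    """Net present value: discount each cash flow back to year zero.

    cashflows[0] is the upfront (year-0) amount, typically negative.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: $10M invested now, returning $3M/year for 4 years.
project = [-10_000_000, 3_000_000, 3_000_000, 3_000_000, 3_000_000]

# At a 10% discount rate the discounted returns sum to less than $10M,
# so the NPV is negative and a "numbers culture" kills the project --
# even though the undiscounted payback is $12M.
print(npv(0.10, project))
```

The point of the sketch is that a high enough discount rate makes almost any slow-payoff research investment look like a loser, which is one common reading of why the Alto failed the test.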
After he left Xerox, the company eventually commercialized an enterprise laser printer, but the executive team he put in place – and the toxic environment Archie left behind – ignored the most valuable technology since the invention of the internal combustion engine and the car.
Simultaneously, while ignoring all the PARC technology, McCardell also ceded Xerox’s core copier business to the Japanese.
Blowing the third industrial revolution should be enough to secure McCardell’s position as the worst CEO ever. However, most historical records barely mention Archie’s disastrous Reign of Error at Xerox. He was just getting started.
In 1977, Archie took over as CEO of International Harvester. At this time, International Harvester was the third most valuable American business. McCardell’s starting salary was $460,000, making him one of the highest-paid CEOs in the world. He also accepted a $1.5 million signing bonus and a $1.8 million loan at 8 percent (an interest rate which, at that time, was considered low).
Quoting the Washington Post: “The company had been directed primarily by family members since its founding by American inventor Cyrus McCormick in 1831, but the board decided it was getting stodgy and turned to a high-powered executive from the outside.” Management consulting firm Booz Allen Hamilton advanced the perception an outsider was needed and recruited McCardell.
Archie cut spending by $640 million and invested $879 million, over three years, into modernization. The latter figure seems impressive, except it essentially went toward copying International Harvester’s competitors.
Eventually, in the fall of 1979, Archie tired of trying to grow the business or cut costs traditionally and opted for a different approach. He purposefully picked a fight with the United Auto Workers, the trade union virtually all plant workers belonged to. McCardell insisted on pay cuts and increasing the use of non-union labor.
An American Icon, Destroyed
Archie singlehandedly caused a 172-day strike that began November 1, 1979, the longest-ever strike at International Harvester.
By the time the strike ended, International Harvester had lost $479.4 million, then lost an additional $397.3 million in the next fiscal year directly due to fallout. In the end, the union conceded virtually nothing. International Harvester’s suppliers were devastated; the strike bankrupted Chicago’s Wisconsin Steel.
Besides the losses, International Harvester’s inability to deliver caused a loss of customer confidence. Sales slid by almost half. The business took on debt to stay afloat, eventually reaching a staggering $4.5 billion of early-1980s high-interest debt.
McCardell restructured the debt to $4.15 billion, cut $200 million, and demanded union concessions. At the same time, Archie paid out $6 million in executive bonuses. Seeing the dismal condition of the firm, the union agreed to $200 million in wage and benefit cuts.
The union agreed to contract concessions on May 2, 1982. Archie was fired the next day.
The firm’s stock, trading in the mid $40s when McCardell was hired, traded at $2 by the time he left. International Harvester was forced to sell off many business units, including the venerable farm machinery division. Eventually, 6,400 jobs were lost. What remained was renamed Navistar.
“I don’t think we made any one major mistake,” McCardell said in a 1986 UPI interview. “I feel very good about my years at Harvester.” Later, he added, “I think I was underpaid.” In a different interview with the New York Times he said: “I think I rate myself superb.”
Pundits aren’t as enthusiastic. One speculated he might have been carrying out “an industrial sabotage operation.”
Archie didn’t do much after International Harvester. There was a land development project he labeled “a disaster.” He launched a turnaround business but refused to name his clients noting that knowledge of his involvement could “add to their problems.”
McCardell did have one insight that resonates: “I don’t know many CEOs who didn’t reach their positions without some good luck along the way. I had incredibly good luck as a young man. I also had ability, but luck plays a very important part,” he told UPI.
Archie McCardell died July 10, 2008, as the US was heading into the worst financial crisis since the Great Depression.
Archie McCardell Award
We realized Archie’s management talents aren’t unique. Granted, nobody is likely to replicate his success decimating two businesses in entirely separate industries. Archie’s ability to destroy was positively Romanesque in scope, unlikely to be repeated anytime soon.
However, if future executives fail to flame out quite as spectacularly, it won’t be for lack of ambition. In this spirit, we’ve decided to create an award, the Archie McCardell Award, for absolutely horrendous management.
We originally put Archie Award winners here but decided they needed their own category. To view the Archie Award Winners, click here.