How 260 Tons of Thanksgiving Leftovers Gave Birth to an Industry

H/T Smithsonian Magazine.

A look back at the invention of the TV dinner.

The birth of the TV dinner started with a mistake

Had my hyperkinetic mother been inclined to meditate, her mantra would have consisted of two brand names: Birds Eye and Swanson. Mom was a working woman in the early 1950s, when that was far from the norm and, in suburban New Jersey, at least, not encouraged. For the record, my mother worked for my father at his real estate office in Westfield. Dad was a handsome man admired by women, and I have long suspected that part of her job was to keep an eye on him. But whatever her motives, she put in her days at the office and then came home to cook for the family, a necessary but unloved chore. So when Birds Eye presented her with frozen peas, she took it as a personal favor and did her best to serve the handy little cryogenic miracles at least five times a week. And when C.A. Swanson & Sons introduced the TV dinner in 1954, relieving mom of responsibility for the entire meal (except for the My-T-Fine tapioca pudding she favored for dessert), she must have thought the world a mighty fine place indeed.

If convenience was the mother of my mother’s contentment, the mother of the TV dinner was that old serial procreator, necessity. In 1953, someone at Swanson colossally miscalculated the level of the American appetite for Thanksgiving turkey, leaving the company with some 260 tons of frozen birds sitting in ten refrigerated railroad cars. Enter the father of invention, Swanson salesman Gerry Thomas, a visionary inspired by the trays of pre-prepared food served on airlines. Ordering 5,000 aluminum trays, concocting a straightforward meal of turkey with corn-bread dressing and gravy, peas and sweet potatoes (both topped with a pat of butter), and recruiting an assembly line of women with spatulas and ice-cream scoops, Thomas and Swanson launched the TV dinner at a price of 98 cents (those are Eisenhower-era cents, of course). The company’s grave doubts that the initial order would sell proved to be another miscalculation, though a much happier one for Swanson; in the first full year of production, 1954, ten million turkey dinners were sold.

The original marketing campaign for TV dinners was, if you will allow me, tray chic. A typical magazine ad showed a stylish woman wearing a smart green suit, a pert feathered hat and black gloves taking a TV dinner out of a grocery bag. In the background sits her smiling husband, in a tan suit and bow tie, comfortably reading his newspaper. The copy line for this bit of Ozzie and Harriet heaven reads: “I’m late—but dinner won’t be.”

My mother, every bit as well turned out as Madison Avenue’s version of the happy housewife, didn’t serve TV dinners every night, of course—the shame factor for failing to provide home cooking was considerably higher then than it is today. But she was quick to see in this manna from Swanson a magic that made it more pleasing to her children (though perhaps not to my father) than a meatloaf or roast chicken done from scratch. At the risk of trying to read the mind of the kid I was at the time, I suspect that the orderliness of the three precisely separated servings contrasted with the general turmoil of growing up, or the specific chaos of my bedroom. And in a culture where packaging is paramount, the idea that a complete meal could be contained in one slim, stackable container appealed mightily to the American yearning for simplicity, economy and efficiency.

But beyond those obvious attractions, Swanson’s brave new product was aided immeasurably by its synergy with another increasingly powerful package, the television set. TV had already made inroads on the Norman Rockwell sanctity of the dinner hour. After all, once the day at school was discussed (reluctantly) by the kids, and the day at work was described (wearily) by father, and the weather and the state of the world were exhausted as subjects, the temptation arose, even in those more conversational days, to let the tube take over.

As home entertainment shifted from the piano (once a ubiquitous and nearly essential home accessory) to the big wooden box with its small flickering screen, the idea of watching—instead of listening to—programs at home seemed transformative, a tipping point into a changed world. Swanson’s marketers clearly realized that this was a medium you could tie your message to; after all, the company had not tried to market Radio Dinners. The idea of pre-prepared meals, heated up at the last moment, seemed to fit right in with the spontaneous excitement of gathering around the screen to watch Milton Berle, Jack Benny and a couple of endearing hand puppets, Kukla and Ollie, along with their human friend, Fran.

Much has changed since then. Having invented the form, Swanson, now owned by Pinnacle Foods in Mountain Lakes, New Jersey, retains only 10 percent of the annual $1.2 billion frozen dinner market. With the advent of microwave ovens, the aluminum tray was replaced by paper. And way back in 1962, Swanson dropped the “TV” from its product label. But those of us who were there at the beginning, when meals and Uncle Miltie fatefully merged, will always think of TV dinners as one of the great hits of television’s early years.

Your Grocery Store Apple Could Be a Year Old But That’s OK

H/T How Stuff Works.

I learned a little more about apples and how long some of them are stored.

There’s nothing like biting into a crisp, juicy apple to evoke the spirit of autumn. Though today they’re available year-round in many parts of the world, apples were once strictly a fall-time treat, and they remain one of the cornerstones of seasonal cooking in the U.S.

If you live in the United States, your apples probably didn’t travel too far to reach you. Only 5 percent of the apples sold in the U.S. are imported; the rest are grown domestically in temperate states like Washington, New York and Michigan.

But the apples in grocery store bins are usually not sold when they’re harvested. Instead, they might have been in storage for up to a year. Unless you take a trip to your local orchard, how do you know whether the apples you’re buying are actually fresh? And if they’re not, does that matter?

One Bad Apple

Picture yourself walking down the aisle of your local grocery store, strolling past piles upon piles of shiny round apples. How do you know which ones to buy? Start by looking at the surrounding apples.

“Apples are a climacteric fruit, meaning the fruits continue to develop and ripen after they are removed from the tree,” says Jessica Cooperstone, a food scientist at The Ohio State University, by email.

Apples and their ilk are highly sensitive to ethylene, the chemical compound that triggers fruits to convert starches into sugars (otherwise known as ripening). As they ripen, apples release more ethylene, which leads the fruit around them to ripen faster as well. In this way, one bad apple actually can spoil the whole bunch. Other climacteric fruits include bananas and avocados, while non-climacteric fruits include things like strawberries and cherries.

Since ethylene is pretty much a universal chemical signal for “ripe” in climacteric plants, it will even help ripen fruit across species. (You can harness its power for yourself: Try putting a hard avocado in the same bowl as an apple and see how much quicker the avocado ripens.)

Apple-harvesting season is very short (about two months in the fall), so in order to extend their lives after picking, apples are usually treated with a gaseous compound called 1-methylcyclopropene (1-MCP) that blocks the fruit’s response to ethylene.

That’s not all. “By modifying the environment that apples are stored in (mostly by modifying oxygen, carbon dioxide and ethylene and keeping apples cool), certain varieties of apples can be stored up to one year,” says Cooperstone. “This is a really impressive feat of post-harvest storage technology, and most of this development happened in the first part of the 20th century.”

It’s called Controlled Atmosphere storage. When apples are exposed to less oxygen and more carbon dioxide than what’s found in the air, they in a sense “go to sleep” and don’t finish the ripening process. So, these apples won’t spoil. The exact combination of gases and temperature will vary with the type of apple.

Perhaps unsurprisingly, the types of apples that can handle this process — like Fuji, Gala, Granny Smith, Honeycrisp and Red Delicious — are the ones you are most likely to encounter in a mass-market grocery store. But not every apple is equally storable. Some fragile-skinned types, like Cortland, Jonagold and Crispin, should be eaten soon after they are picked. Otherwise, they might become too soft and mealy for non-cooking applications. Of course, these days science can step in to create new types of more resilient fruit.

“Apple producers are always looking to develop new varieties that keep their fresh characteristics for as long as possible,” Cooperstone says. One example is the RubyFrost apple, which was developed by Cornell University especially for wintertime consumption. These hybrids — a cross between Braeburn and Autumn Crisp — are bred to reach peak sweetness in mid to late January, months after they’re harvested.

While some new types of apple, like the RubyFrost, are the product of careful selective breeding, others are the result of more direct genetic engineering. Arctic Apples, which are genetically modified to resist browning, became one of the first GMO fruits to be approved by the U.S. Department of Agriculture in 2015.

How Do You Like Them Apples?

There isn’t a way to tell when an apple was picked just by looking at it in a grocery store. And in some ways, it doesn’t matter. But you still want to make sure the apples you buy will be tasty and ripe. In general, a ripe apple is red or yellow. But some red varieties will be red even if not yet ripe. Touch might be a better bet. The apple should be firm but not hard (press it with your thumb) and not have bruises, according to the University of Wisconsin-Extension.

Once you bring them home, store the apples in a cool, dry place, like your refrigerator’s crisper drawer. But even non-crunchy apples have uses. “If I’ve kept apples for a long time and find they’re shriveling enough that I don’t want to eat them fresh, I will use them in a cooked application,” says Cooperstone, like oatmeal or a pie.

A slightly wrinkly apple may not look as pretty as a freshly picked one, but both are totally safe to eat. In the days before refrigeration, drying apples was a standard way to store up food for the winter.

While wrinkled or even bruised apples can still make for good eating, you definitely want to avoid apples that have mold growing on them or that have begun to ooze liquid. Your chances of getting food poisoning from an apple are slim, but not zero, so it’s important to wash your apples before you chow down as well.

Once your apples are out of storage and thoroughly cleaned, it’s time for some good old-fashioned fall cooking. From caramel to crumble to cider and cake, the possibilities are all delicious.

Simon the Cat Received A Military Honor After Suffering Injuries During the Yangtze Incident

H/T War History OnLine.

Simon is the only cat to have been awarded the Dickin Medal, the animal’s version of the Victoria Cross.

From March 1948 to November 1949, the British frigate HMS Amethyst was accompanied by a friendly cat named Simon. The crew of the ship took an immediate liking to Simon and looked after him as one of their own. Simon was injured during the Amethyst Incident and was not expected to live. But he survived the ordeal and was awarded the animal equivalent of the Victoria Cross: the Dickin Medal.

Simon

Simon the Cat
(Original Caption): While the men of the “Yangtse Incident” have been receiving a hero’s welcome home, Simon, the ship’s cat of HMS Amethyst and winner of the Dickin Medal… (Photo by PA Images via Getty Images)

Simon’s journey with Amethyst started in March of 1948 when he was found wandering around a Hong Kong dockyard by ordinary seaman George Hickinbottom. The cat was about one year old, underweight and sickly, but Hickinbottom thought he was the perfect tool to eradicate the rats aboard HMS Amethyst.

The 17-year-old Hickinbottom smuggled the cat onto the ship and kept him in his cabin. He was given the name Simon and quickly became popular among Amethyst’s crew; even the captain took a liking to him.

Simon’s presence raised morale and he was appreciated for his work on the lower decks catching rats. He had a cheeky side too, often leaving dead rats and mice in the crew’s beds as “presents.” He would also curl up in the captain’s hat.

Simon was so popular that when the then-commander, Ian Griffiths, was replaced by Lieutenant Commander Bernard Skinner, the ship kept the cat.

The Amethyst Incident

HMS Amethyst
HMS Amethyst, after action on the Yangtze River, 20th April 1949. (Photo by The Print Collector/Print Collector/Getty Images)

In April 1949 Amethyst made her way up the Yangtze River from Shanghai on what was Skinner’s first mission while in command of the ship. Her destination was Nanking, where she would relieve HMS Consort of her duties. At the time China was embroiled in a civil war between Mao Tse-tung’s People’s Liberation Army (PLA) and the ruling Nationalist Party of Chiang Kai-shek. Britain had not taken a side in the conflict, so Amethyst’s journey up the Yangtze was expected to be safe.

By the morning of April 20, she was still over 50 miles from Nanking. Suddenly, Amethyst was rocked by shellfire from a Chinese PLA field gun battery on the northern bank of the river. The ship was peppered with shells and battered by explosions.

One of the first rounds hit the captain’s cabin, fatally wounding Skinner. During the attack, Simon became a casualty too, receiving severe wounds.

The casualties totaled 19 dead and 27 wounded. Amethyst took shelter further up the river and negotiations began for the release of the ship.

At the time of the attack, Simon was likely curled up in the captain’s cabin. A shell tore a foot-wide hole in the ship, sending four pieces of shrapnel into Simon’s body and burning his face and whiskers. He either fled or was thrown by the explosion, and was found a few days later.

Simon the Cat and a war medal
(Original Caption) While the men of the “Yangtse Incident” have been receiving a hero’s welcome home, Simon, the ship’s cat of HMS Amethyst and winner of the Dickin Medal – the animals Victoria Cross – has been resting at the Hackbridge Quarantine Kennels in Surrey. (Photo by PA Images via Getty Images)

He was rushed to the sickbay and treated for his wounds like any other crew member would be. Medical officer Michael Fearnley proceeded to remove the shrapnel embedded in Simon’s legs and back and stitched him up. His burns were treated too. However, Fearnley doubted he would survive the night.

But survive Simon did, spending much of his recovery time in the sickbay with his fellow crew and helping improve morale.

Simon the hero

Simon The Cat Medal
The Dickin Medal was awarded to Simon the Cat (Photo by PA Images via Getty Images)

Amethyst remained on the Yangtze River for the next 100 days, with every attempt to move being met by Chinese shellfire. During this time the conditions on the ship deteriorated and food supplies ran dangerously low. Throughout this low period, Simon continued his morale-boosting and rat-catching duties, helping to keep down the ship’s ever-growing rat population.

On July 20, the ship’s new captain, Lieutenant Commander John Kerans, made a break for it under the cover of darkness. He managed to sail the ship 104 miles to open sea without any issues. Once he linked up with other Royal Navy ships, Kerans sent a message in typical British fashion: “Have rejoined the fleet south of Woo Sung. No damage or casualties. God save the King.”

After the incident, the crew was hailed as heroes, Simon in particular. He immediately became an international celebrity, captured in photographs and newsreels alongside his shipmates by the press. Letters addressed to Simon poured in from all over the world; there were so many that a dedicated “cat officer” was tasked with sorting through them all.

The PDSA put Simon in for the Dickin Medal, the highest award for animals. It was to be presented to him in December 1949, after the crew had returned to England.

The crew reached England in November, and, like all animals returning from war, Simon was placed in a six-month quarantine.

Sadly, he would not survive.

Shortly after being placed in quarantine, he contracted a virus that his body, weakened by his war wounds, was unable to fight off. Despite desperate attempts by medical staff, Simon passed away on November 28, 1949.

Simon the Cat Tombstone
Simon the cat’s grave. (Photo by Cate Gillon / Getty Images)

Today, Simon remains the only cat to have ever received the Dickin Medal.

The Stories Behind History’s Most Iconic War Photos

H/T War History OnLine.

Calling these photographs iconic is an understatement.

For nearly two centuries, photographers have been using pictures to document the horrors of war. This has led to some of history’s most famous photos, yet many are unaware of the events that led up to them. Here are the real stories behind nine of the most iconic war photos ever taken.

The Valley of the Shadow of Death (1855)

Early war photography was limited in scope, given the infancy of the technology, but that doesn’t make the images any less jarring. British photographer Roger Fenton was sent to cover the fighting between Britain and Russia during the Crimean War. He wasn’t permitted to photograph the combat as it happened, but did cover its aftermath.

Empty dirt road
The Valley of the Shadow of Death. (Photo Credit: Roger Fenton / Wikimedia Commons)

The area pictured above was dubbed “The Valley of the Shadow of Death” by the British, due to the amount of shelling that occurred there. It was often covered with cannonballs fired during the fighting between the two sides.

While considered the “first iconic photograph of war,” some question its authenticity, as there’s a secondary photo of the same area without cannonballs strewn across the ground. After some investigation, it was determined soldiers likely gathered and placed them in ditches to reuse later.

Hitler visiting Paris landmarks (1940)

Adolf Hitler was careful about his public image, meaning few were privy to his personal life. One such person was Heinrich Hoffmann, who, by 1940, was the only individual allowed to photograph the Führer. He was there in June 1940, when Hitler and his Nazi generals toured Paris’ famous landmarks.

Hitler and other Nazi officials walking in front of the Eiffel Tower
Hitler touring the Eiffel Tower. (Photo Credit: Heinrich Hoffmann / Wikimedia Commons CC BY-SA 3.0 DE)

This photo in front of the Eiffel Tower is one from Hitler’s first and only visit to the French capital. According to the Führer, the most impactful moment of the trip was a visit to Napoleon Bonaparte‘s tomb, after which he remarked, “That was the greatest and finest moment of my life.”

During the trip, he also ordered the destruction of two World War I monuments – of General Charles Mangin and Edith Cavell – as he didn’t want reminders of Germany’s prior defeat.

Warsaw Ghetto boy (1943)

The exact history surrounding this photograph isn’t known, but there are theories about the boy and the individual who took the photo. According to multiple sources, this image was likely captured by Franz Konrad, a Nazi photographer, and depicts those in the Warsaw Ghetto being rounded up and taken to concentration camps.

Polish Jews holding their hands up in surrender
Forcibly pulled out of bunkers. (Photo Credit: Wikimedia Commons)

It’s speculated the young boy is Tsvi Chaim Nussbaum, who hid in a bunker during the final liquidation of the ghetto before being found by German soldiers. SS-Rottenführer Josef Blösche is pointing a submachine gun in his direction to keep the boy and the rest of the crowd in line.

This photo became one of the most famous of the Holocaust, and the boy came to represent its Jewish victims and the children who suffered at the hands of the Nazis. If he is actually Nussbaum, he survived the war and went on to become a doctor in New York.

Raising the Flag on Iwo Jima (1945)

Arguably the most recognizable photograph from the Pacific War is Joe Rosenthal’s Raising the Flag on Iwo Jima. The image was snapped by the Associated Press photographer atop Mount Suribachi on February 23, 1945, and is a symbol of America’s resolve during their fight against the Japanese.

Soldiers raising the American flag on Iwo Jima
Raising the Flag on Iwo Jima. (Photo Credit: Joe Rosenthal / Wikimedia Commons)

On that day, US Marine commander Colonel Dave Severance was leading E Company, 2nd Battalion, 28th Marines, 5th Marine Division during the Battle of Iwo Jima. The fight was important, as the US needed the island’s strategically-placed airstrips.

After defeating the Japanese, Severance sent his company to the top of Mt. Suribachi to plant the American flag, an action initially photographed by Sergeant Louis Lowery. However, Secretary of the Navy James Forrestal wanted the flag as a memento, so the commander sent a second group up the mountain to install another flag. It was this effort that Rosenthal captured on film.

V-J Day in Times Square (1945)

Many are aware of this photograph by Alfred Eisenstaedt, taken in Times Square on August 14, 1945. It depicts a US Navy sailor kissing a stranger (a dental assistant) on Victory Over Japan Day – better known as “V-J Day” – in New York City.

Sailor kissing a woman in Times Square
V-J Day in Times Square. (Photo Credit: Alfred Eisenstaedt / Getty Images)

President Harry Truman was anticipated to announce the end of the war that evening, and a spontaneous celebration occurred in Times Square. According to Eisenstaedt, he was unable to collect the names of those he was photographing, given the speed at which everything was happening. All that’s known for certain is this image was shot south of 45th Street, looking north from where Broadway and Seventh Avenue converge, around 5:51 P.M.

Over the years, there have been attempts to identify the two individuals. Unfortunately, their identities remain a matter of speculation to this day.

Flower Power (1967)

The March on the Pentagon was a large-scale demonstration against the Vietnam War on October 21, 1967. More than 100,000 protestors attended a rally at the Lincoln Memorial, after which 50,000 marched to the Pentagon. It was there that The Washington Star photographer Bernie Boston snapped Flower Power, showing George Harris placing a carnation into the barrel of a soldier’s M14 rifle.

Men placing flowers in the stocks of military guns
Flower Power. (Photo Credit: The Washington Post / Getty Images)

The March on the Pentagon was organized by the National Mobilization Committee to End the War in Vietnam. Those who participated were met by soldiers from the 503rd Military Police Battalion, and it was at this point Harris stepped forward and started placing flowers in the M14 barrels.

The photo is seen as a symbol of the Flower Power movement, which began as a way to protest the Vietnam War. The movement relied on peaceful symbols like flowers, rather than violence, to voice its opposition.

“Tank Man” (1989)

The violence in Beijing in 1989 shocked the world. The student-led protests aimed to bring democracy to China, and many held firm, despite being faced with armed troops who fired at those blocking the military advance into Tiananmen Square. The Chinese government declared martial law and, on June 4, 1989, sent the People’s Liberation Army to occupy central Beijing. Thousands were killed and even more injured.

Lone male protestor blocking four tanks in the middle of the road
“Tank Man.” (Photo Credit: Archive Photos / Getty Images)

The most iconic image of the incident was taken the next day, when an unknown man stood in front of a row of tanks leaving Tiananmen Square. He continually shifted his position as the tanks tried to manoeuvre past him. Sadly, there is no reliable information regarding the fate of the protestor, as China has censored the image and the accompanying events.

Kuwaiti oil fires (1991)

With Iraqi forces retreating from Kuwait ahead of the US-led coalition’s offensive in 1991, Saddam Hussein ordered the destruction of the country’s oil fields. It’s reported that between 605 and 732 oil wells, along with an unspecified number of oil-filled areas, were destroyed by the Iraqi military.

Fighter jets flying over burning oil fields
F-16A, F-15C and F-15E fighter jets flying during Operation Desert Storm. (Photo Credit: U.S. Air Force)

While the fires began in January and February 1991, the first wasn’t extinguished until April, and the last wasn’t capped until November 6. While concrete figures aren’t available, it’s believed between four and six million barrels of crude oil were burnt per day, along with between 70 and 100 million cubic meters of natural gas.

This image, taken by the US Air Force, shows F-16A, F-15C and F-15E fighter jets patrolling Kuwait during the fires. The smoke not only caused a Royal Saudi Air Force C-130H to crash, but provided the Iraqi forces a smokescreen during the Battle of Phase Line Bullet.

Liberian militia commander Joseph Duo (2003)

Joseph Duo was a militia commander loyal to the Liberian government. This image, taken by Chris Hondros, shows the moment after he grabbed his rocket launcher and fired. It detonated amid a group of rebels, causing the militia leader to leap in joy. The photo came to define the strife within Liberia.

Joseph Duo jumping in the air while holding a grenade launcher
Joseph Duo, Liberian militia commander. (Photo Credit: Chris Hondros / Getty Images)

Hondros once said, “Sometimes a picture captures things that people respond to. This is a picture of fighting that shows some of the uncomfortable realities of war. One of those is that [some] people in war enjoy it – they get a bloodlust.”

While Duo says he was celebrating because he was defending his country, he doesn’t like looking at the photo today, saying, “It gives me the memories of war.”

The 8 Greatest Comebacks in Military History

H/T War History OnLine.

A military comeback is an opportunity rarely given to commanders. The chance to turn the tide of battle in one’s favor is incredibly rare, and equally difficult to seize, often requiring the alignment of random factors like weather, or significant external help.

This list features 8 important comebacks that had major implications on their respective wars or political climates.

Battle of Stirling Bridge

Battle of Stirling Bridge
A Victorian depiction of the battle. The bridge collapse suggests that the artist has been influenced by Blind Harry’s account. (Photo Credit: C Hanley, History Of Scotland / Wikipedia / Public Domain)

On 11 September 1297, during the First War of Scottish Independence, Scottish forces defeated the English near the River Forth. The Scottish king, John Balliol, had recently surrendered to King Edward I of England, who then undermined his rule. Scottish nobles overthrew Balliol and allied with France, prompting Edward to invade Scotland.

In 1297 William Wallace and Andrew de Moray led a revolt against the English, and the two sides met in battle at a bridge near Stirling. The outnumbered Scots managed to defeat the English forces, winning the first major Scottish victory in decades.

Battle of Saratoga

Battle of Saratoga
Battle of Saratoga. General Arnold was wounded in the attack on the Hessian Redoubt. (Photo Credit: Bettmann / Getty Images)

The Battle of Saratoga is considered to be a pivotal moment in the American Revolutionary War against the British. The British planned to cut off New England from the mid-Atlantic colonies by converging three armies on Albany. While en route, one of the three, led by Sir William Howe, abandoned the plan and instead attempted to invade Pennsylvania.

The army led by General John Burgoyne battled Continental forces at Freeman’s Farm and Saratoga, suffering heavy losses while waiting for reinforcements from an army that would never arrive. The defeat of British forces led to France officially becoming America’s ally.

The Defeat of the Spanish Armada

Spanish Armada Defeat
Contemporary Flemish interpretation of the launching of English fire-ships against the Spanish Armada, 7 August 1588 (Photo Credit: Royal Museums Greenwich Collections / Public Domain)

In 1588 Spain sent its formidable Armada against England in the hopes of invading the country and overthrowing Queen Elizabeth I, thereby removing Spain’s chief rival. However, England’s faster ships were able to successfully battle the Armada along the southern coast.

The Armada was devastated, with Spain losing some 15,000 men. The victory solidified England’s standing as a global force to be reckoned with.

Prussia during the Seven Years War

Seven Years War
The Battle of Fehrbellin was a battle at Fehrbellin of the Seven Years’ War between Swedish and Prussian forces fought on 28 September 1758, historical illustration. (Photo by: Bildagentur-online/Universal Images Group via Getty Images)

In 1756 Prussian King Frederick the Great invaded Saxony, kicking off the Seven Years’ War, which saw Prussia and Great Britain face off against Austria and France. The Anglo-Prussian alliance was outnumbered by the combined forces of Russia, France, Sweden, and Austria.

Just as it looked like Prussia would be defeated, Russia switched sides when Tsar Peter III ascended to the throne in 1762, sending reinforcements to Frederick. The war ended shortly after Russia’s change of allegiance.

Battle of Gettysburg

Battle of Gettysburg
July 1863: US Civil War 1861-65. A wide view of a portion of the Battle of Gettysburg, Pennsylvania, 1-3 July 1863. An 1884 color illustration. (Photo by Stock Montage/Stock Montage/Getty Images)

The Battle of Gettysburg was part of the Confederate invasion of the North, which the South hoped would earn it recognition from foreign nations. The battle was fought between July 1 and July 3, 1863, and came just weeks after the Confederate success at Chancellorsville in Virginia. The Confederate forces, led by Robert E. Lee, clashed with Union troops at the town of Gettysburg, with the battle initially leaning in the South’s favor.

But after a few days of savage fighting that claimed thousands of lives, Union forces held their ground and emerged victorious. Overall, the Battle of Gettysburg resulted in over 35,000 casualties.

Battle of Thermopylae

Battle of Thermopylae
The Battle of Thermopolye. Leonidas attacks. BPA 2 #2274 (Photo Credit: Bettmann / Getty Images)

The Battle of Thermopylae is one of the most famous battles in history, fought in 480 BC between King Leonidas I of Sparta and the Achaemenid Empire of Xerxes I. Leonidas was massively outnumbered by Persian forces, so he utilized a bottleneck that the Persians were forced to pass through. Days into the fight, a local resident betrayed the Greeks by revealing a path the Persians could use to outflank them.

Realizing they were about to be attacked from the rear, Leonidas instructed the bulk of his forces to retreat, while he and a small group of Spartan warriors remained behind and fought to the death.

Battle of Midway

Battle of Midway
The Japanese cruiser Mikuma during the Battle of Midway, Second World War, 1942. (Photo by: Marka/Universal Images Group via Getty Images)

The Battle of Midway took place between June 4 and June 7, 1942, and just like the Battle of Saratoga, it was a turning point for the belligerents involved. The Japanese aimed to lure US aircraft carriers into a trap and knock these powerful assets out of the war while capturing Midway, which would allow Japan to extend its reach across the Pacific.

If successful, the trap would be another in a series of Japanese victories in the early stages of the Pacific War.

However, US cryptographers had cracked Japanese communications weeks before, so the US knew where and when the Japanese would strike. The ensuing clash claimed four Japanese aircraft carriers, 3,000 men, and 300 aircraft. In return, the US lost the carrier Yorktown, around 360 men, and 145 aircraft. It has gone down as one of the greatest naval battles ever.

Battle of Waterloo

Battle at Waterloo
Center of the British army in action at Waterloo 18 June 1815, the last battle of the Napoleonic Wars. After W Heath. (Photo by Universal History Archive/Getty Images)

This battle took place on 18 June 1815 and brought about the end of the Napoleonic Wars. In 1814 Napoleon was forced to abdicate the throne after butting heads with powerful European countries. However, Napoleon briefly returned to power in 1815 and began the Hundred Days campaign. A large coalition of European nations formed to stop Napoleon, although he still emerged victorious over them a number of times.

This would change at the Battle of Waterloo, which saw Britain and its allies finally stop Napoleon. They were aided by poor weather, which slowed Napoleon’s movements. He abdicated four days later.

Who Really Invented the Electric Guitar?

H/T Popular Mechanics.

After 80 years, we still don’t really know.

The Rich History of a Favorite Dessert

H/T Cheesecake.com.

My favorite cheesecake is cherry, followed by strawberry.

Cheesecake is a beloved dessert around the world. While many assume that it has its origins in New York, it actually dates back much further. Let’s go back over 4,000 years to ancient Greece! Sit back, grab a creamy slice of cheesecake and learn all about this dessert’s rich history.

Cheesecake Travels the Globe

The first “cheese cake” may have been created on the Greek island of Samos, where archaeologists have excavated cheese molds dated to circa 2000 B.C. Cheese and cheese products had most likely been around for thousands of years before this, but anything earlier slips into prehistory (the period of human history before the invention of writing), so we will never really know. In Greece, cheesecake was considered a good source of energy, and there is evidence that it was served to athletes during the first Olympic Games in 776 B.C. Greek brides and grooms were also known to use cheesecake as a wedding cake. The simple ingredients of flour, wheat, honey and cheese were formed into a cake and baked – a far cry from the more complicated recipes available today!

The writer Athenaeus is credited with recording the first Greek cheesecake recipe in 230 A.D. (By this time, the Greeks had been serving cheesecake for over 2,000 years, but this is the oldest known surviving Greek recipe!) It was also pretty basic: pound the cheese until it is smooth and pasty, mix the pounded cheese in a brass pan with honey and spring wheat flour, heat the cheesecake “in one mass,” allow to cool, then serve.

When the Romans conquered Greece, the cheesecake recipe was just one spoil of war. They modified it, adding crushed cheese and eggs. These ingredients were baked under a hot brick, and the cake was served warm. Occasionally, the Romans would put the cheese filling in a pastry. The Romans called their cheesecake “libum,” and they served it on special occasions. Marcus Cato, a Roman statesman of the second century B.C., is credited with recording the oldest known Roman cheesecake recipe.

As the Romans expanded their empire, they brought cheesecake recipes across Europe. Cooks in Great Britain and Eastern Europe began experimenting with ways to put their own unique spin on cheesecake, and in each country the recipe took on a different cultural shape, using ingredients native to the region. In 1545, an early printed cookbook described cheesecake as a flour-based sweet food. Even Henry VIII’s chef did his part to shape the recipe. Apparently, he cut cheese into very small pieces and soaked those pieces in milk for three hours. Then, he strained the mixture and added eggs, butter and sugar.

It was not until the 18th century, however, that cheesecake would start to look like something we recognize in the United States today. Around this time, Europeans began to use beaten eggs instead of yeast to make their breads and cakes rise. Removing the overpowering yeast flavor made cheesecake taste more like a dessert treat. When Europeans immigrated to America, some brought their cheesecake recipes along.

Adding a Signature Ingredient

Cream cheese was an American addition to the cake, and it has since become a staple ingredient in the United States. In 1872, a New York dairy farmer was attempting to replicate the French cheese Neufchatel. Instead, he accidentally discovered a process which resulted in the creation of cream cheese. Three years later, cream cheese was packaged in foil and distributed to local stores under the Philadelphia Cream Cheese brand. The Philadelphia Cream Cheese brand was purchased in 1903 by the Phoenix Cheese Company, and then it was purchased in 1928 by the Kraft Cheese Company. Kraft continues to make this very same delicious Philadelphia Cream Cheese that we are all familiar with today.

New York Style Cheesecake

Of course, no story of cheesecake is complete without delving into the origins of the New York style cheesecake. The Classic New York style cheesecake is served with just the cake – no fruit, chocolate or caramel is served on the top or on the side. This famously smooth-tasting cake gets its signature flavor from extra egg yolks in the cream cheese cake mix.

By the 1900s, New Yorkers were in love with this dessert, and virtually every restaurant had its own version of cheesecake on the menu. New Yorkers have vied for bragging rights to the original recipe ever since. Even though he is best known for his signature sandwiches, Arnold Reuben (1883-1970) is generally credited with creating the New York style cheesecake. Reuben was born in Germany and came to America when he was young. The story goes that Reuben was invited to a dinner party where the hostess served a cheese pie. Allegedly, he was so intrigued by this dish that he experimented with the recipe until he came up with the beloved NY style cheesecake.

More Variations in America

New York is not the only place in America that puts its own spin on cheesecake. In Chicago, sour cream is added to the recipe to keep it creamy. Meanwhile, Philadelphia cheesecake is known for being lighter and creamier than New York style cheesecake, and it can be served with fruit or chocolate toppings. In St. Louis, they enjoy gooey butter cake, which pairs a cheesecake-like filling with an additional layer of cake.

Cheesecake Around the World

Each region of the world also has its own take on the best way to make the dessert. Italians use ricotta cheese, while the Greeks use mizithra or feta. Germans prefer cottage cheese, while the Japanese use a combination of cornstarch and egg whites. There are specialty cheesecakes that include blue cheese, seafood, spicy chilies and even tofu! In spite of all the variations, the popular dessert’s main ingredients – cheese, wheat and a sweetener – remain the same.

No matter how you slice it, cheesecake is truly a dessert that has stood the test of time. From its earliest recorded beginnings on Samos over 4,000 years ago to its current iconic status around the world, this creamy cake remains a favorite for sweet tooths of all ages.

9 Delicious Facts About Oysters

H/T Mental Floss.

Raw oysters: you either love them or hate them.

I personally enjoy smoked oysters.

Some people think oysters are slimy and taste far too salty. For others, they’re a delicacy. Oysters may provoke a love-hate response, but they also have impressive ecological properties, and the leftover shells have been used in some surprising ways. Here are 9 fascinating facts about the bivalve.

1. OYSTERS HAVE BEEN AROUND SINCE THE TRIASSIC PERIOD.

Oysters first appeared over 200 million years ago, when the earliest dinosaurs roamed Pangaea. Evidence of human oyster consumption dates back to about 164,000 years ago, according to a 2007 paper in Nature describing human ancestors’ first modern behaviors. A 2013 study found that Stone Age people in Denmark ate so many oysters that piles of the discarded shells show a marked decrease in the bivalves’ size over the years.

2. IN THE VICTORIAN ERA, OYSTERS WERE THE FOOD OF THE POOR, NOT THE RICH.

Pickled oysters were liberally consumed by London’s poor. They were sold as bar snacks and by stalls on street corners, and for those who couldn’t afford beef or mutton, oysters made up the protein in soups and stews. Oyster pie was also a popular dish with the lower classes.

3. A SCOTTISH ESTUARY ONCE HELD THE WORLD’S LARGEST NATIVE OYSTER BED.

The oyster-free Firth of Forth, Scotland. GEORGECLERK/ISTOCK VIA GETTY IMAGES

Covering more than 150 square kilometers (about 58 square miles), the oyster bed of the Firth of Forth on Scotland’s east coast, near Edinburgh, was a veritable gold mine of shellfish. Historians estimate that 30 million oysters could be harvested from it annually in the 1700s, to be sold in London and Europe. Sadly, over-harvesting meant that the oyster bounty from the Firth of Forth couldn’t last. By the late 19th century the beds were badly depleted, and only around 1200 oysters were harvested per year. Today there are no native oysters in the Forth.

4. DISCARDED OYSTER SHELLS WERE USED TO BUILD CITIES.

Speaking of Edinburgh, remnants of oyster shells found in walls have offered clues to Edinburgh’s culinary past. Residents of the Scottish capital reportedly put away 100,000 oysters daily during the 17th century, and walls containing oyster shells were uncovered during work on tenement buildings in the city. The shells, which appear to have been used as a filler between stone and brick, most likely came from a tavern located in the basement of the building, as oyster shells were typically left to pile up on floors.

5. A CONTAMINATED OYSTER KILLED THE DEAN OF WINCHESTER CATHEDRAL.

Oysters glean nutrients from seawater as it passes through their gills. They can filter more than 50 gallons of water a day, leaving a cleaner environment. But oysters can also be contaminated by substances in the water, and they developed a dangerous reputation in early 20th century England thanks to increasing water pollution. In 1902, the Dean of Winchester attended a mayoral banquet where oysters were served. The shellfish had been harvested from the Hampshire village of Emsworth, where a sewage spill had occurred, and the dean and several other guests died of enteric fever following the dinner. The food poisoning scandal devastated the oyster trade in Emsworth, leaving many jobless.

6. BALTIMORE OYSTER PACKERS INVENTED A KNIFE CALLED THE “CHESAPEAKE STABBER.”

The Baltimore area came to dominate the American oyster industry in the 19th century, with 90 percent of the country’s oyster packing industry—more than 100 companies—located in the Maryland city. Whole oysters were shipped by railroad from Baltimore to inland cities on ice. Later, canning extended the oysters’ shelf life and allowed them to be shipped greater distances cheaply. The packers developed a particular kind of oyster knife known as the Chesapeake stabber, with a straight, sharp, thin blade meant to separate the shells through the oyster’s lip. Today’s champion shuckers still use the Chesapeake stabber in their trade.

7. A LOCAL ESCARGOT SHORTAGE LED TO A CLASSIC OYSTER DISH.

Oysters (not snails) Rockefeller. SBOSSERT/ISTOCK VIA GETTY IMAGES

In 1889, a snail shortage drove the son of the founder of the famed New Orleans restaurant Antoine’s to get creative with an appetizer. He substituted oysters for snails, and Oysters Rockefeller was born. In this dish, instead of being served raw, the oysters are baked in the half shell along with spinach, butter, breadcrumbs, and herbs. Why “Rockefeller”? The story goes that a patron commented that the oysters tasted as rich as their namesake.

8. OYSTER SHELLS ARE RECYCLED TO HELP BUFFER COASTLINES FROM CLIMATE CHANGE.

The Coalition to Restore Coastal Louisiana launched an oyster shell recycling program in 2014, the first such initiative in the state. The shells are returned to the water to restore oyster reefs, which protect shorelines from erosion and storms. The oysters’ rough, ridged shells provide extra surface area to absorb wave energy better than dykes and levees; plus, the reefs provide a place for baby oysters to anchor themselves. The program has collected more than 4000 tons of shells so far.

In New York City, the Billion Oyster Project is restoring 100 million oysters to New York Harbor to mitigate the effects of storm surges. Organizers hope that the oyster beds will reduce flooding and provide a cleaner environment (through their filter feeding) for other species.
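Putting the two figures together gives a sense of scale. This is an illustrative back-of-envelope sketch only: the 50 gallons/day rate is the article's upper-bound figure, and real filtering varies with oyster size and water conditions.

```python
# Back-of-envelope: combined filtering capacity of the Billion Oyster
# Project's 100 million oysters, at the ~50 gallons/day upper-bound
# rate quoted earlier in the article.
oysters = 100_000_000
gallons_per_oyster_per_day = 50  # article's upper-bound figure
total_gallons_per_day = oysters * gallons_per_oyster_per_day
print(f"{total_gallons_per_day:,} gallons/day")  # 5,000,000,000 gallons/day
```

Even at a fraction of that rate, the restored beds would cycle billions of gallons of harbor water through their gills every day.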

9. THE JURY’S STILL OUT ON WHETHER OYSTERS ARE APHRODISIACS.

Oysters are particularly rich in zinc – which is known to be vital for sexual function in men – and have been thought of as aphrodisiacs for centuries. (Not to mention their resemblance to female genitalia.) The renowned seducer Giacomo Casanova supposedly ate multiple oysters for breakfast every day and is said to have regarded the mollusks as “the nectar of the gods.” These days, scientists remain unconvinced that there is a clear relationship between oysters and libido.

Making Tires Black, Instead of the Natural White Color of Rubber, Produces a Much Stronger and Longer-Lasting Tire

H/T Today I Found Out.

A bit of tire history.

Today I found out making tires black, instead of the natural white color of rubber, produces a much stronger and longer lasting tire.

Originally rubber tires were white, which is the natural color of rubber.  In the early 1900s, Binney & Smith began selling their carbon black chemicals to Goodrich Tire Company, as it was found that the use of carbon black in rubber manufacturing significantly increased certain desirable qualities for rubber meant to be turned into tires.  (Binney & Smith would later switch to making school products, and, eventually, re-name their company after their most popular product, Crayola Crayons.)

In any event, carbon black works as a reinforcing filler in rubber, increasing the durability and strength of the rubber.  Specifically, adding about 50% carbon black by weight increases the road-wear abrasion resistance of the produced tire by as much as 100-fold and improves the tensile strength of the tire by as much as 1008%.  For the uninitiated, tensile strength is the amount of force needed to pull something to its breaking or bursting point.
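To make those percentages concrete, here is a rough illustration. This is a sketch only: the 2.5 MPa baseline for unfilled rubber is an assumed round number (not from the article), and "50% by weight" is read here as 50 g of carbon black per 100 g of rubber, which is also an assumption.

```python
# Rough illustration of the reinforcement figures quoted above.
# The 2.5 MPa baseline is an assumed value; the 1008% improvement
# and the 50%-by-weight loading are the article's approximate claims.

def reinforced_tensile_strength(base_mpa: float, improvement_pct: float) -> float:
    """Tensile strength after a given percent improvement."""
    return base_mpa * (1 + improvement_pct / 100)

base = 2.5  # MPa, assumed baseline for unfilled natural rubber
filled = reinforced_tensile_strength(base, 1008)  # "as much as 1008%"
print(f"{base} MPa -> {filled:.1f} MPa")  # 2.5 MPa -> 27.7 MPa

# "About 50% by weight", read as 50 g of carbon black per 100 g of rubber:
rubber_g = 100.0
carbon_black_g = rubber_g * 0.50
print(f"{carbon_black_g:.0f} g carbon black per {rubber_g:.0f} g rubber")
```

In other words, a 1008% improvement means the filled rubber ends up roughly eleven times stronger than the unfilled baseline, whatever that baseline happens to be.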

Adding carbon black also helps conduct heat away from certain hot spots on the tire; specifically, in the tread and belt areas, which can get particularly hot at times while driving.  This reduces thermal damage on the tire, further extending its lifespan.

From a purely cosmetic standpoint, black tires are also much easier to keep looking clean, which makes them desirable over natural white rubber tires.  In modern times, white-wall or fully white tires are sometimes thought of as more luxurious, particularly on classic cars.  But when fully black tires first came out, they were considered the more prestigious option and tended to be found only on high-end luxury cars.

Carbon black itself is simply nearly pure elemental carbon in colloidal particle form.  It is classically made by simply charring any organic material.  Examples of this are Ivory Black, made by charring ivory or bones, and Lamp Black, made from the soot of oil lamps. Carbon black for industrial use today is typically produced as Furnace Black and Thermal Black.  Furnace Black is produced using heavy aromatic oils.  Thermal Black is produced using natural gas, generally methane, injected into a very hot furnace where, in the absence of much air, carbon black and hydrogen are produced.

Bonus Facts:

  • Rather than using carbon black in shoes, the more common additive to the rubber is fumed silica, which has similar reinforcing properties as carbon black, but leaves the rubber white.  The downside of using silica-based additives on automotive tires is that they have much worse abrasion wear properties than tires with carbon black.  However, they do offer better handling on wet surfaces and have a lower rolling loss, which increases fuel efficiency.  Because of this, there are some tires that are starting to be made with silica-based additives, instead of carbon black, but this is still relatively rare.
  • Around 70% of all carbon black pigment used in the world today goes into tires.  Another 20% goes into belts, hoses, and other such rubber items.  Most of the remaining 10% goes into black coatings for items, as well as inks and toner for printing.
  • Carbon black is not the same thing as activated carbon or soot.  Carbon black has a much higher surface area to volume ratio than soot and also has much less polycyclic aromatic hydrocarbon in it.
  • Despite research indicating carbon black may be a carcinogen, it is used in certain food coloring.
  • Binney & Smith, which later became Crayola, is not only credited with making tires black instead of white, but is also the company that originally created the red oxide pigment that became the traditional paint color for barns.
  • No one knows exactly where the word “tire” derives from.  The leading theories are that it either derives from “attire” or from “to tie”.  The earliest tires were simply bands of iron or other metal.  The application of the metal band on the wooden wheels was accomplished by heating the metal tire, then placing it over the wooden wheel.  Next, they would douse it in cold water, which would cause the metal to rapidly contract and secure itself to the wheel, with the outer ring “tying” the wheel together, hence the proposed “tie” origin.
  • Binney & Smith’s “carbon gas blacks” earned them a gold medal in the chemical and pharmaceutical arts.  The company was originally founded in 1864 and produced various charcoal and lamp blacks.
  • The first practical pneumatic tire was developed by John Boyd Dunlop, who was originally a veterinarian.  He created the tire to help his son who suffered from headaches when riding his bike.  The rubber tire made for a much smoother ride for him on rough roads than wooden wheels.
  • Around 1 billion tires are made annually.
  • The earliest available carbon black product used for commercial purposes was “lamp black”, produced by the Chinese over 3500 years ago.  However, these early forms of carbon black were relatively impure compared to modern carbon black.

A Brief History of the Rubber Band

H/T Gizmodo.com.

The rubber band has a very interesting history.

Cheap, reliable, and strong, the rubber band is one of the world’s most ubiquitous products. It holds papers together, prevents long hair from falling in a face, acts as a reminder around a wrist, is a playful weapon in a pinch, and provides a way to easily castrate baby male livestock… While rubber itself has been around for centuries, rubber bands were only officially patented less than two centuries ago. Here now is a brief history of the humble, yet incredibly useful, rubber band.

It has only recently been discovered that Mesoamerican peoples (including the Aztecs, Olmecs, and Maya) were making rubber (though they didn’t call it that) three thousand years ago. By mixing the milky-white sap known as latex from the indigenous Hevea brasiliensis trees (later called Para rubber trees) with juices from morning glory vines, they could create a solid that was surprisingly sturdy. These civilizations used this ancient rubber for a variety of purposes, from sandals to balls to jewelry. In fact, while Charles Goodyear is generally credited with the invention of vulcanized rubber (a more durable and non-sticky rubber compound made via the addition of sulfur and heat), it seems that the Aztecs were already varying the proportions of latex and morning glory juice to create rubbers of different strengths.

When Spanish explorers arrived in South America in the 16th century, they discovered for themselves the many uses of this elastic, malleable sap. When the French explorer Charles de la Condamine “discovered” it in the 1740s, he called it “caoutchouc”, a French word, but a variation on the South American word for latex. In attempting to figure out what it was exactly, Condamine came to a wrong conclusion – he thought it was condensed resinous oil. The name “rubber” was only attributed to this latex material when, in 1770, the famed British chemist Joseph Priestley (who also discovered oxygen) noted that the material rubbed pencil marks right off paper, thereby inventing the eraser and giving the “rubbing material” a name. By the end of the 18th century, the material was forever known as “rubber.”

In 1819, Englishman Thomas Hancock was in the stagecoach business with his brothers when he attempted to figure out better ways to keep his customers dry while traveling. He turned to rubber to develop elastic and waterproof suspenders, gloves, shoes, and socks. He was so enamored with the material that he began to mass produce it, but he soon realized he was generating massive amounts of wasted rubber in the process. So Hancock developed his “Pickling machine” (later called a masticator) to rip the leftover rubber into shreds. He then mashed the malleable rubber together into a new solid mass and put it into molds to make whatever he wanted. One of his first designs was a set of bands made out of rubber, though he never marketed or sold them, not realizing the practicality of rubber bands. Plus, vulcanization hadn’t been discovered yet (which we will discuss in a moment), so the bands would soften considerably on hot days and harden on cold days. In short, these rubber bands simply weren’t very practical at this stage of the game for many of the uses rubber bands would later be put to. Hancock didn’t patent his machine or the shreds of rubber it produced, instead hoping to keep the manufacturing process completely secret. This would end up being a rather large mistake.

In 1833, while in jail for failure to pay debts, Charles Goodyear began experimenting with India rubber. Within a few years, and after he got out of jail, Goodyear discovered his vulcanization process. Teaming with chemist Nathaniel Hayward, who had been experimenting with mixing rubber with sulfur, Goodyear developed a process of combining rubber with a certain amount of sulfur and heating it to a certain point; the resulting material was hard, elastic, non-sticky, and relatively strong. A few years later, in 1844, he had perfected the method and took out American patents for the vulcanization of rubber. He then traveled to England to patent his process overseas, but ran into a fairly large problem – Thomas Hancock had already patented a nearly identical process in 1843.

There seems to be conflicting reports on whether Hancock had developed the vulcanization process independently of Goodyear or if, as many claim, that he had acquired a sample of Goodyear vulcanized rubber and developed a slight variation on the process. Either way, Hancock’s patent stopped Goodyear from being able to patent his process in England. The ensuing patent battle dragged on for about a decade, with Goodyear eventually coming to England and watching in person as a judge proclaimed that, even if Hancock had acquired a sample prior to developing his own process for this type of rubber, as seems to have been the case, there was no way he could have figured out how to reproduce it simply by examining it. However, famed English inventor Alexander Parkes claimed that Hancock had once told him that running a series of experiments on the samples from Goodyear had allowed him to deduce Goodyear’s, at the time, unpatented vulcanization process.

But in the end, in the 1850s the courts sided with Hancock and granted him the patent, rather than Goodyear, quite literally costing Goodyear a fortune; had they decided otherwise, Goodyear would have been entitled to significant royalties from Thomas Hancock and fellow rubber pioneer Stephen Moulton.

Though he had a right to be bitter over the ruling, Goodyear chose to look at it thusly, “In reflecting upon the past, as relates to these branches of industry, the writer is not disposed to repine, and say that he has planted, and others have gathered the fruits. The advantages of a career in life should not be estimated exclusively by the standard of dollars and cents, as is too often done. Man has just cause for regret when he sows and no one reaps.”

Goodyear, though eventually receiving the credit he deserved, died in 1860 shortly after collapsing upon learning of his daughter’s death, leaving his family approximately two hundred thousand dollars in debt (about $5 million today).

The patent dispute with Goodyear also had a profound, ultimately negative, effect on Hancock. While he was entangled in the time-consuming mess for years, others began to reap the benefits of Hancock not patenting his masticator process or the seemingly useless bands it created. Specifically, in 1845, Stephen Perry, working for Messers Perry and Co, Rubber Manufacturers of London, filed a patent for “Improvements in Springs to be applied to Girths, Belts, and Bandages, and Improvements in the Manufacture of Elastic Bands.” He had discovered a use for those rubber bands – holding papers together. In the patent itself, Perry distanced himself and his invention from the ongoing vulcanized rubber dispute by saying,

“We make no claim to the preparation of the india rubber herein mentioned, our invention consisting of springs of such preparation of india rubber applied to the articles herein mentioned, and also of the peculiar forms of elastic bands made from such manufacture of india rubber.”

While the rubber band was invented and patented in the 19th century, at this point it was mostly used in factories and warehouses rather than in the common household. This changed thanks to William Spencer of Alliance, Ohio. The story goes, according to the Cincinnati Examiner, that in 1923, Spencer noticed the pages of the Akron Beacon Journal, his local newspaper, were constantly being blown across his and his neighbors’ lawns. So, he came up with a solution. As an employee of the Pennsylvania Railroad, he knew where to acquire spare rubber pieces and discarded inner tubes – the Goodyear Rubber Company, also located in Akron. He cut these pieces into circular strips and began to wrap the newspapers with these bands. They worked so well that the Akron Beacon Journal bought Spencer’s rubber bands to do the deed themselves. He then proceeded to sell his rubber bands to office supply, paper goods, and twine stores across the region, all the while continuing to work at the Pennsylvania Railroad (for more than a decade more) while he built his business up.

Spencer also opened the first rubber band factory in Alliance and, then, in 1944 the second one in Hot Springs, Arkansas. In 1957, he designed and patented the Alliance rubber band, which ultimately set the world rubber band standard. Today, Alliance Rubber is the number one rubber band manufacturer in the world, churning out more than 14 million pounds of rubber bands per year.

So, next time you are shooting a friend with this little elastic device, you can thank the Mayans, Charles de la Condamine, Thomas Hancock, Charles Goodyear, and William Spencer for the simple, yet amazingly useful rubber band.