With the help of Pearl Harbor survivors, Janet and Glen Tomlinson created the Home of the Brave Tours Museum, a one-of-a-kind WWII Military Base Tour along with the largest private collection of 1940s memorabilia in the Pacific. As curators of this extensive collection, the Tomlinsons have received numerous awards and accolades for their work in educating the public about the rich heritage, sacrifices and traditions of the United States military.
The Home of the Brave Museum is a one-of-a-kind treasure trove of artifacts, stories, and memories of our American Military that fought to save our country and liberate the world during our darkest hours. The extensive collection exists to preserve wartime legacies, as well as to honor the sacrifice and victory of our nation’s great servicemen and women.
Their goal is to maintain the extensive collection and expand the property into an interactive learning center to further promote awareness, gratitude, and documentation of America’s military heritage for public interest and educational purposes.
Last year, the revenue needed to operate the museum was cut off due to the termination of their exclusive military base tour. This was due to security concerns from Homeland Security and increased competition from larger tour operators who offer larger commission structures to the sales agents selling and promoting Pearl Harbor tours. The five-star "mom and pop" tour operation just couldn't compete with the "big boys."
The Foundation offers exciting and engaging ways to delve into America's military legacy as well as educational (hands-on history) and entertainment opportunities for school groups, senior centers, local, military, and island visitors.
“Our debt to the heroic men and valiant women in the service of our country can never be repaid. They have earned our undying gratitude. America will never forget their sacrifices.” – President Harry S. Truman
As the initial results of Germany's invasion of the Soviet Union on June 22, 1941, became clear, observers around the world had every reason to believe that Germany was on a course to win the war and become one of the most powerful nations in all of history.
Four million men crossed the border into the Soviet Union during the invasion and quickly claimed large swaths of territory and inflicted heavy casualties on the Soviet Union. By the end of the summer, the Wehrmacht had swept through the Baltic states, the Soviet portion of Poland, and the western half of Ukraine. German forces had advanced to the outskirts of Moscow by October, before the bitter cold set in and Soviet General Zhukov organized a successful defense of the city.
The following spring, Hitler devised a new offensive in the East that would target the oil fields of southern Russia and capture the city of Stalingrad on the Volga River. Capturing the city would disrupt supply routes along the Volga and allow German forces to turn north and once again encircle Moscow. The Soviets were equally determined to defend the city, an important industrial and transportation center that also carried psychological weight as the city bearing the name of the Soviet premier.
In September 1942, German forces entered Stalingrad, which provoked fierce house-by-house and street-by-street fighting. The brutality of the fighting is difficult to even imagine. 1.1 million Soviet soldiers became casualties, along with another 800,000 Axis fighters. But beyond the gruesome statistics, the battle for Stalingrad was the psychological turning point of the entire Second World War.
According to British historian Antony Beevor, Soviet soldiers shouted to German Prisoners of War after they had been captured, “This is how Berlin is going to look!” Advances to the east by German soldiers were about to be replaced with Soviet marches westward and no intelligent German believed that they could ever win a war of attrition, which is what Stalingrad became.
The fighting was also romanticized almost immediately. First, by the Soviet propaganda machine and later by Hollywood, which produced films like Enemy at the Gates. The center of the fighting is now strewn with monuments celebrating Communism’s great victory over Fascism.
The real legacy of Stalingrad, however, was the wasted lives of so many young men (and women). Germany did not need to make Stalingrad a city of vital strategic importance. The main aim of the campaign was the capture of oil fields in the Caucasus. That could have been achieved without taking Stalingrad. Hitler directly intervened to overrule his generals, who were about to withdraw from Stalingrad to capture targets further south en route to the Baku oil fields. Hitler's primary aim was the propaganda value of the city.
The Soviet Union's lack of preparedness for war was one of the primary reasons Germany was able to fight the Red Army so effectively in the opening months of the war. Stalin had only recently purged the armed forces of needed officers out of a desire to further consolidate his power. Zhukov was spared, but he later said he always kept an emergency bag packed in case the knock on the door ever came. Had he been purged, it seems likely the Soviet Union would not have been able to win the war.
Stalingrad was ultimately a microcosm of the broader war in the east: a war of ugliness, cruelty, hatred, racism, misogyny, rape, and plunder. It is a leading candidate for the most brutal war ever fought.
Of course, some very important military principles were either learned or reinforced at Stalingrad: military leaders must be cautious about spreading their troops too thinly; soldiers and civilians fighting for their own country will often prove more motivated and more willing to expend themselves; and guerrilla warfare is a completely different matter than conventional warfare.
But the symbolism of Stalingrad lives on in ways more inspiring than the simple cruelty born of the clashing egos of two very powerful men. The battle really did change the course of history, and it involved ordinary men and women, in addition to professional soldiers, fighting for their very lives. Today, in Volgograd, echoes of the conflict remain, as the descendants of those fighters carry on the legacy left to them.
Jet engines, air-to-air rockets, drones. World War II was filled with flashy technological breakthroughs that would change warfare, both during that conflict and in wars to follow. But it was one humble piece of equipment that got an early upgrade that may have actually tipped the war in America’s favor: the fuse.
Specifically, impact and timed fuses were switched out for a weapon that had been hypothetical until then: the proximity fuse.
Anti-aircraft guns fire during World War II. Air defenders using timed fuses had to fire a lot of rounds to bring anything down.
Anti-aircraft and other artillery rounds typically consist of an outer shell packed with a large amount of high explosives. These explosives are relatively stable, and require the activation of a fuse to detonate. Before World War II, there were two broad categories of fuses: impact and timed.
Impact fuses, sometimes known as crush fuses, go off when they impact something. A split-second later, this sets off the main explosives in the shell and causes it to explode in a cloud of shrapnel. This is great for hitting armored targets where you need the explosion pressed as closely as possible against the hull.
A U.S. bomber flies through clouds of flak with an engine smoking. While flak and other timed-burst weapons could bring down planes, it typically took entire batteries firing at high rates to actually down anything.
(U.S. Air Force)
But for anti-personnel, anti-aircraft, or just wide-area coverage fire, artillerymen want the round to go off a couple feet or a couple yards above the ground. This allows for a much wider spread of lethal shrapnel. The best way of accomplishing this until 1940 was with a timed fuse. The force of the shell being propelled out of the tube starts a timer in the fuse, and the shell detonates after a set duration.
The fuses could be set to different times, and artillerymen in the fire direction center would do the math to see what time setting was needed for maximum shrapnel burst.
But timed fuses were less than perfect, and small math errors could lead to a round going off too early, allowing the shrapnel to disperse and slow before reaching personnel and planes, or too late, allowing the round to get stuck deep into the dirt before going off — the dirt then absorbs the round’s energy and stops much of the shrapnel.
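As a rough illustration of the arithmetic a fire direction center had to get right, here is a minimal sketch in Python. The muzzle velocity and range figures are invented for the example, and the constant-velocity model ignores drag and the ballistic arc, but it shows how directly a small timing error turns into a large miss.

```python
# Hypothetical back-of-the-envelope version of a timed-fuse calculation.
# The muzzle velocity and range are invented for this example; real fire
# direction centers used firing tables that account for drag, elevation,
# and weather.

MUZZLE_VELOCITY = 850.0  # meters per second (assumed)
BURST_RANGE = 6000.0     # meters to the desired burst point (assumed)

# Simplest possible estimate: constant velocity along the trajectory.
fuse_setting = BURST_RANGE / MUZZLE_VELOCITY  # seconds

# A small timing error moves the burst point a long way.
timing_error = 0.25  # seconds early or late
burst_error = timing_error * MUZZLE_VELOCITY  # meters

print(f"Fuse setting: {fuse_setting:.2f} s")
print(f"A {timing_error} s error shifts the burst by roughly {burst_error:.0f} m")
```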
The Applied Physics Laboratory at Johns Hopkins University succeeded in creating a revolutionary fuse that would tip battles in America’s favor.
So, in 1940, the National Defense Research Committee asked the Carnegie Institution and Johns Hopkins University to take on a tricky project: a proximity fuse that worked by sending out radio waves and measuring the time it took for those waves to bounce back, allowing the fuse to detonate at a set distance from an object. This required shrinking a radio transmitter and receiver down until they were small enough to fit in the space allotted for a fuse.
This, in turn, required all sorts of breakthroughs, like shrinking down vacuum tubes and finding ways to cradle all the sensitive electronics when a round is fired out of the tube.
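As a very rough sketch of the time-of-flight idea described above (the trigger distance and timing figures here are assumed, and the real wartime fuse was a miniaturized analog radio circuit rather than software), the detonation decision amounts to converting an echo delay into a distance and comparing it against a threshold:

```python
# Simplified, hypothetical model of the time-of-flight principle described
# in the text. Values are invented for illustration only.

SPEED_OF_LIGHT = 3.0e8   # meters per second
TRIGGER_DISTANCE = 20.0  # detonate within this many meters of a target (assumed)

def distance_from_echo(delay_seconds: float) -> float:
    """Convert a round-trip echo delay into a one-way distance."""
    return SPEED_OF_LIGHT * delay_seconds / 2.0

def should_detonate(delay_seconds: float) -> bool:
    """Trigger when the reflecting object is inside the assumed kill radius."""
    return distance_from_echo(delay_seconds) <= TRIGGER_DISTANCE

# A 100-nanosecond round trip corresponds to a target about 15 meters away.
print(distance_from_echo(100e-9))  # 15.0
print(should_detonate(100e-9))     # True
```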
A round that exploded near its target about half the time may not sound like a great rate, but it was actually a bit of a miracle. Air defenders had to fire thousands of rounds on average to bring down any of the fast, single-engine bombers that were becoming more and more popular — and deadly. To suddenly have rounds that could potentially bring down an enemy plane in just a few dozen or a few hundred shots was a revelation.
This solved a few problems. Ships were now less likely to run out of anti-aircraft ammunition while on long cruises and could suddenly defend themselves much better from concerted bomber attacks.
Sailors man anti-aircraft guns during World War II on the USS Hornet.
In fact, for a good while after the rounds were deployed, gains were made only at sea, because the technology was deemed too sensitive to employ on land, where duds could be captured and reverse-engineered.
The fuses’ combat debut came at Guadalcanal where the USS Helena, one of the first three ships to receive it, fired on a dive bomber heading for its task force. The Helena fired two rounds and the fuses’ first victim burst into flame before plunging to a watery grave.
Two rounds, at a time when thousands used to fail to bring down an enemy plane.
From then on, naval commanders steered ships loaded with the advanced shells into the hearts of oncoming enemy waves, and the fuse was credited with 50 percent of the enemy kills the fleet attained even though only 25 percent of the ammo issued to the fleet had proximity fuses.
That means the fuse was outperforming traditional rounds three to one in routine combat conditions.
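That ratio follows from the figures above, assuming the remaining kills came from the other 75 percent of conventionally fused ammunition:

\[
\frac{\text{kills per proximity-fused round}}{\text{kills per conventional round}}
= \frac{0.50 / 0.25}{0.50 / 0.75}
= \frac{2}{2/3}
= 3
\]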
A fireball from a kamikaze attack engulfs the USS Columbia during a battle near the Philippines in 1945. The Columbia survived, but 13 crew members were killed.
It even potentially saved the life of one of its creators, Dr. Van Allen. During the Battle of the Philippine Sea, where U.S. planes and gunners brought down over 500 Japanese planes, Dr. Van Allen was exposed on the USS Washington when it came under kamikaze attack. He later described what happened next:
“I saw at least two or three 5-inch shell bursts in the vicinity of the plane, and then the plane dove into the water several hundred yards short of the ship,” he said. “It was so close I could make out the pilot of the plane.”
The rounds were finally authorized for ground warfare in 1944, and their greatest moment came during the Battle of the Bulge when Gen. George S. Patton ordered them used against a concentration of tank crews and infantry.
The rounds were set to go off approximately 50 feet above the ground. Shrapnel tore through men and light equipment and took entire armored and infantry units out of play due to the sheer number of wounded and killed service members.
“The new shell with the funny fuse is devastating,” General Patton later wrote to the War Department. “I’m glad you all thought of it first.”
Gen. William “Billy” Mitchell was an Army officer at the beginning of the 1900s who campaigned for a separate Air Force that would revolutionize warfare. While most of his predictions about American airpower ultimately came true, Mitchell was dismissed as a radical in his day and convicted of insubordination.
Mitchell rose through the ranks quickly and was named deputy commander of Army Aviation shortly after his promotion to major. He requested permission to become an Army pilot, but as a 38-year-old major he was declared too senior in age and rank to become a pilot.
Mitchell eventually got his wish, and a series of demonstrations was scheduled for June and July 1921 in which Mitchell's forces would bomb three captured German ships and three surplus U.S. ships.
The crown jewel of the test targets was the German battleship Ostfriesland, scheduled for bombing July 20-21. The tests were a resounding success. In full view of Navy brass and the American press, every ship was torn apart by aerial bombardment.
The Ostfriesland was hit with armor-piercing, 2,000-pound bombs specially designed for use against naval ships. Unfortunately, the Navy claimed that Mitchell had overstepped the parameters of the test, and Congress simply ignored the results.
The friction between Mitchell, the Navy, and Congress grew until two major Navy accidents. In one, three planes flying from the West Coast to Hawaii were lost; in another, the airship USS Shenandoah was destroyed with the loss of 14 sailors.
Mitchell took to the press to blast the Navy and Army brass who he believed had failed their subordinates.
“These incidents are the direct result of the incompetency, criminal negligence and almost treasonable administration of the national defense by the Navy and War Departments,” Mitchell said. “The bodies of my former companions in the air moulder under the soil in America, and Asia, Europe and Africa, many, yes a great many, sent there directly by official stupidity.”
His trial was a national sensation, attended by societal elites and crowds of veterans. Mitchell's lawyer tried to argue that Mitchell's freedom of speech trumped his duties as an officer, but the prosecution easily ripped through the argument by pointing out that allowing complete freedom of speech in the military could create anarchy.
Mitchell was sentenced to a five-year suspension without pay or duty, during which time he could not accept civilian employment. When the decision reached President Calvin Coolidge, he amended it to allow the general half pay and a subsistence allowance.
Mitchell opted to resign his commission instead. He launched a speaking tour that traveled around the country and promoted air power.
He died in 1936 and so never saw his prophecies come true in World War II. President Harry S. Truman authorized a special posthumous award for Mitchell in 1946, recognizing his work to create modern military aviation. The Air Force Association tried to get his conviction overturned in 1955, but the secretary of the Air Force left it in place on the grounds that Mitchell had, in fact, committed the offenses.
Your grandparents and great-grandparents fighting in World War II were hit with just as many safety rules as troops are today; it's just that those rules rarely make it into the history books.
But they weren’t always given their safety rules in boring briefings. When the 1940s War Department and Department of the Navy really wanted to drive safety rules home, they made snazzy safety videos and posters.
The Navy used “Ensign Dilbert,” a soup-sandwich who always breaks safety rules, to highlight the grisly results of incompetency in aviation.
And Dilbert does some truly stupid stuff. He mishandles his weapons, tows aerial targets into ground crews, and even accidentally kills a civilian on his first flight of the day. And the Navy isn't afraid to show the (PG-13) bodies of his victims.
Domestic animals are rarely associated with Antarctica. However, before non-native species (bar humans) were excluded from the continent in the 1990s, many travelled to the far south. These animals included not only the obvious sledge dogs, but also ponies, sheep, pigs, hamsters, hedgehogs, and a goat. Perhaps the most curious case occurred in 1933, when US Admiral Richard E. Byrd’s second Antarctic expedition took with it three Guernsey cows.
The cows, named Klondike Gay Nira, Deerfoot Guernsey Maid and Foremost Southern Girl, plus a bull calf born en route, spent over a year in a working dairy on the Ross Ice Shelf. They returned home to the US in 1935 to considerable celebrity.
Keeping the animals healthy in Antarctica took a lot of doing — not least, hauling the materials for a barn, a huge amount of feed and a milking machine across the ocean and then the ice. What could have possessed Byrd to take cows to the icy south?
Klondike the Guernsey cow waits on the dock in Norfolk, Virginia, alongside the alfalfa, beet pulp and dairy feed that would keep the cows alive in the far south
(With permission of Wisconsin Historical Society, WHS-127998, contact for re-use, CC BY-ND)
The answer we suggest in our recently published paper is multi-layered and ultimately points to Antarctica’s complex geopolitical history.
Solving the “milk problem”
The cows’ ostensible purpose was to solve the expedition’s so-called “milk problem”. By the 1930s, fresh milk had become such an icon of health and vigour that it was easy to claim it was needed for the expeditioners’ well-being. Just as important, however, were the symbolic associations of fresh milk with purity, wholesomeness and US national identity.
Powdered or malted milk could have achieved the same nutritional results. Previous expeditions, including those of Ernest Shackleton and Roald Amundsen, had survived just fine with such products. What’s more, William Horlick of Horlick’s Malted Milk sponsored Byrd’s second Antarctic expedition; the seaplane Byrd used was named for this benefactor.
Crates of Horlick’s Malted Milk destined for Byrd’s second expedition. With its carefully placed sledge, husky and sign, the shot seems posed for publicity purposes.
(With permission of Wisconsin Historical Society, WHS-23703, contact for re-use, CC BY-ND)
So if fresh milk was not actually a health requirement, and other forms were readily available, why go to the trouble of lugging three cows and their accoutrements across the ice?
The cows represented a first, and Byrd well knew that “firsts” in the polar regions translated into media coverage. The expedition was privately funded, and Byrd was adept at attracting media attention and hence sponsorship. His backers expected a return, whether in the form of photographs of their product on the ice or mentions in the regular radio updates by the expedition.
The novelty value that the cows brought to the expedition was a valuable asset in its own right, but Byrd hedged his bets by including a pregnant cow — Klondike was due to give birth just as the expedition ship sailed across the Antarctic Circle. The calf, named “Iceberg”, was a media darling and became better known than the expeditioners themselves.
The celebrity attached to the cows helped the expedition remain in the headlines throughout its time in Antarctica. Although the unfortunate Klondike, suffering from frostbite, had to be put down mid-expedition, her companions made it home in good condition. They were feted on their return, meeting politicians in Washington, enjoying "hay cocktails" at fancy hotels, and making the front page of The New York Times.
It would be easy, then, to conclude that the real reason Byrd took cows south was for the publicity he knew they would generate, but his interest in the animals may also have had a more politically motivated layer.
Eyeing a territorial claim
A third reason for taking cows to Antarctica relates to the geopolitics of the period and the resonances the cows had with colonial settlement. By the 1930s several nations had claimed sectors of Antarctica. Byrd wanted the US to make its own claim, but this was not as straightforward as just planting a flag on the ice.
According to the Hughes Doctrine, a claim had to be based on settlement, not just discovery. But how do you show settlement of a continent covered in ice? In this context, symbolic gestures such as running a post office — or farming livestock — are useful.
Domestic animals have long been used as colonial agents, and cattle in particular were a key component of settler colonialism in frontier America. The image of the explorer-hero Byrd, descended from one of the First Families of Virginia, bringing cows to a new land and successfully farming them evoked this history.
Richard Byrd with Deerfoot in a publicity shot taken before departure.
(With permission of Wisconsin Historical Society WHS-130655, contact for re-use, CC BY-ND)
The cows’ presence in Antarctica helped symbolically to turn the expedition base — not coincidentally named “Little America” — into a frontier town. While the US did not end up making a claim to any sector of Antarctica, the polar dairy represented a novel way of demonstrating national interest in the frozen continent.
The Antarctic cows are not just a quirky story from the depths of history. As well as producing milk, they had promotional and geopolitical functions. On an ice continent, settlement is performed rather than enacted, and even Guernsey cows can be more than they first seem.
The Cold War spawned decades’ worth of bizarre weapon ideas as the West and the Soviet Union strove towards gaining the strategic upper hand over their superpower rival.
The US was responsible for at least seven nuclear weapon designs during the Cold War that now seem outlandish or ill-advised. But the US wasn’t alone in its willingness to build seemingly absurd weapons systems to gain some kind of advantage over the Soviets.
In the 1950s, the UK designed a nuclear landmine that would be placed in West Germany to stop a hypothetical Soviet assault on the rest of Europe, the BBC reports. The landmine, dubbed Operation Blue Peacock, would be operated remotely so that it could be detonated at the moment when it could inflict maximal damage on the invading Red Army.
But the weapon had a major hitch. Buried underground, the mine might become so cold that the detonator would be unable to trigger a nuclear blast. In 1957, British nuclear physicists found a solution: chickens.
"The birds would be put inside the casing of the bomb, given seed to keep them alive and stopped from pecking at the wiring," the BBC notes. The chickens' body heat would be enough to maintain the triggering mechanism's working temperature. The chickens were expected to survive for about a week, after which the bomb might again cool to an inoperable state.
In all, the landmines designed in Operation Blue Peacock were thought to yield a 10-kiloton explosion which would produce a crater 375 feet in diameter, according to the American Digest. Such destructive potential ultimately led to the abandonment of the project as the British realized that there would be an unacceptable amount of nuclear fallout from such a blast — never mind the complicated issue of burying nuclear weapons within the territory of an allied nation.
By 1958, after the production of only two prototypes, Operation Blue Peacock was abandoned.
This article originally appeared on Business Insider.
In the hours following the Japanese attack on Pearl Harbor, the forces of the Empire of Japan also struck a number of other strategic targets. But those weren't surprise hit-and-run attacks. Japan's army and navy invaded Thailand, the Philippines, Guam, Wake Island, the Gilbert Islands, Borneo, British Hong Kong, Malaya and the Dutch East Indies.
That was just in 1941. The following year, Japan also invaded New Guinea, Singapore, Burma, India, the Solomon Islands, Timor, Christmas Island and the Andaman Islands.
The defenders of these Pacific possessions had mixed success in holding off or repelling their attackers, but many fell to the surprise attacks. In all the Japanese took 140,000 Allied prisoners during the war. An estimated 36,000 were sent back to Japan but many would not survive the trip.
One of the biggest reasons for this was the transports they were packed into. These notorious transports were called "Hell Ships" by the prisoners aboard them – and for a good reason.
Prisoners taken by the Japanese were beaten and starved, if not killed outright when captured. Those who did survive captivity were often pushed into forced labor, used in mines and factories all over Imperial Japan and its newly-acquired territories.
When prisoners were taken at any one of the battles fought to "acquire" the new Japanese possessions, Allied forces expected treatment in line with the rules regarding POWs under the 1929 Geneva Convention, which forbade their use in wartime production and in hostilities against their home countries. The convention was intended not to punish prisoners for being taken captive, but only to prevent their further participation in the war.
When captured by the Japanese, however, POWs were not treated humanely as the Geneva Convention prescribed. The Japanese saw surrender as a dishonorable act and treated their prisoners as if they were dishonored.
As a result, an estimated 40% of American prisoners taken by Japan died in captivity. The Hell Ships that transported them to all regions of the empire are indicative of why. Like the Bataan Death March and the conditions of tropical prisons on land, prisoners on hell ships endured the harsh treatment of their overseers, a lack of food and water, and all the diseases found among large groups of forcibly incarcerated people.
Unlike the prison camps and the death march, the prisoners aboard hell ships also had to contend with little access to air and proper ventilation. They had to endure the extreme heat of being held in a cargo hold. Worst of all, the ships also carried war supplies and auxiliary troops so they couldn’t be flagged as a non-combatant ship.
As a result, hell ships were frequently targeted by Allied air and naval forces, meaning they (and the prisoners of war aboard them) could be strafed, torpedoed, and sunk by aircraft, submarines and other Allied naval ships.
These attacks happened much more often than anyone would like to admit. An estimated 20,000 Allied prisoners went down aboard hell ships targeted by friendly forces. When a ship was attacked or sunk, the prisoners' fate was far from certain: some who survived the initial attack were killed trying to escape the incoming water, and those who did get off a sinking ship had no guarantee the Japanese would try to rescue them. Those who were rescued found themselves right back in captivity, often in the same horrifying conditions they had just escaped.
At least 14 hell ships were sunk by the Allies during the war, killing thousands of Allied POWs.
The loss of the nuclear attack submarine USS Scorpion (SSN 589) was the last peacetime loss of a Navy vessel until the Avenger-class mine countermeasures vessel USS Guardian ran aground off the Philippines. Unlike the case of the Guardian, 99 sailors lost their lives when USS Scorpion sank after an explosion of undetermined origin.
For the time, America’s Skipjack-class submarines were very fast. According to the “13th Edition of Ships and Aircraft of the U.S. Fleet,” these 3,075-ton submarines had a top speed of over 30 knots. Armed with six 21-inch torpedo tubes capable of firing anything from World War II-vintage Mk 14 torpedoes to the early versions of the multi-role Mk 48, this sub was as lethal as they come.
The USS Scorpion was the second of the six vessels to be completed and was commissioned in 1960. According to GlobalSecurity.org, she carried out a number of patrols between then and 1967 before being slated for an overhaul. However, this overhaul was cut short by operational needs. The Scorpion was sent out on Feb. 15, 1968, for what would become her last patrol.
After operating in the Mediterranean Sea, she began her return voyage, diverting briefly to monitor a Soviet naval force. The last anyone heard from the sub was on May 21, 1968. Six days later the Scorpion failed to arrive at Norfolk, where families of the crew were waiting.
The Navy would declare her to be “overdue and presumed lost,” the first time such an announcement had been made since World War II. The sub would not be found until October of that year.
The Navy would look into the disaster, but the official court of inquiry said the cause of the loss could not be determined with certainty. But there are several theories on what might have happened.
One centered around a malfunction of a torpedo. But others suspect poor maintenance may have been the culprit, citing the rushed overhaul.
Believe it or not, some of the greatest pioneers in the use of military helicopters were Coast Guardsmen. These early breakthroughs took place during World War II when the Navy was too busy expanding traditional carrier operations to focus on rotary wing, and the Army had largely sequestered helicopters to an air commando group. The Coast Guard, meanwhile, was working on what would be the first-ever helicopter carrier.
USCGC Governor Cobb underway after its conversion into a helicopter carrier.
(U.S. Coast Guard)
Obviously, we’re talking about a ship that carries helicopters, not an aircraft carrier that flies like a helicopter. The Avengers aren’t real (yet).
The potential advantages of helicopters in military operations were clear to many of the military leaders who witnessed demonstrations in the early 1940s. Igor Sikorsky had made the first practical helicopter flight in 1939, and the value of an aircraft that could hover over an enemy submarine or take off and land in windy or stormy weather was obvious.
But the first helicopters were not really up to the most demanding missions. For starters, they simply didn’t have the power to carry heavy ordnance. And it would take years to build up a cadre of pilots to plan operations, conduct staff work, and actually fly the missions.
The Army was officially given lead on testing helicopters and developing them for wartime use, but they were predominantly interested in using it for reconnaissance with a secondary interest in rescuing personnel in areas where liaison planes couldn’t reach.
So, the Coast Guard, which wanted to develop the helicopter for rescues at sea and for their own portion of the anti-submarine fight, saw a potential opening. They could pursue the maritime uses of helicopters if they could just get a sign off from the Navy and some money and/or helicopters.
The commandant of the Coast Guard, Vice Adm. Russell R. Waesche, officially approved Coast Guard helicopter development in June 1942. In February 1943, he convinced Chief of Naval Operations Navy Adm. Ernest King to direct that the Coast Guard had the lead on maritime helicopter development. Suddenly, almost every U.S. Navy helicopter was controlled by the Coast Guard.
A joint Navy-Coast Guard board began looking into the possibilities with a focus on anti-submarine warfare per King’s wishes. They eventually settled on adapting helicopters to detect submarines, using their limited carrying capacity for sensors instead of depth charges or a large crew. They envisioned helicopters that operated from merchant ships and protected convoys across the Atlantic and Pacific.
The Coast Guard quickly overhauled the steam-powered passenger ship named Governor Cobb into CGC Governor Cobb, the first helicopter carrier. The Coast Guard added armor, a flight deck, 10 guns of various calibers, and depth charges. Work was completed in May 1943, and the first detachment of pilots was trained and certified that July.
Coast Guard Lt. Cmdr. Frank A. Erickson stands beside an HNS-1 Hoverfly while his co-pilot, Lt. Walter Bolton, sits inside.
(U.S. Coast Guard)
The early tests showed that the HNS-1 helicopters were underpowered for rough weather and anti-submarine operations, but were exceedingly valuable in rescue operations. This was proven in January 1944 when a destroyer exploded between New Jersey and New York. Severe weather grounded fixed-wing aircraft, but Coast Guard pilot Lt. Cmdr. Frank A. Erickson took off in an HNS-1.
He strapped two cases of plasma to the helicopter and took off in winds up to 25 knots and sleet, flew between tall buildings to the hospital and dropped off the goods in just 14 minutes. Because the only suitable pick-up point was surrounded by large trees, Erickson had to fly backward in the high winds to get back into the air.
According to a Coast Guard history:
"Weather conditions were such that this flight could not have been made by any other type of aircraft," Erickson stated. He added that the flight was "routine for the helicopter." The New York Times lauded the historic flight, stating: "It was indeed routine for the strange rotary-winged machine which Igor Sikorsky has brought to practical flight, but it shows in striking fashion how the helicopter can make use of tiny landing areas in conditions of visibility which make other types of flying impossible. … Nothing can dim the future of a machine which can take in its stride weather conditions such as those which prevailed in New York on Monday."
Still, it was clear by the end of 1944 that a capable anti-submarine helicopter would not make it into the fight in time for World War II, so the Navy slashed its order for 210 helicopters down to 36, just enough to satisfy patrol tasks and the Coast Guard’s early rescue requirements.
This made the helicopter carrier Governor Cobb surplus to requirements. It was decommissioned in January 1946. The helicopter wouldn’t see serious deployment with the Navy’s fleet until Sikorsky sent civilian pilots in 1947 to a Navy fleet exercise and successfully rescued four downed pilots in four events.
But the experiment proved that the helicopters could operate from conventional carriers, no need for a dedicated ship. Today, helicopters can fly from ships as small as destroyers and serve in roles from search and rescue to anti-submarine and anti-air to cargo transportation.
These days, aircraft designers aren’t exactly household names. Quick, can you tell me who designed the F-22? How about the F-35? No? Don’t worry, not too many can.
Back in the day, aircraft designers were big names. Kelly Johnson of Lockheed is rightly famous for designing the SR-71 and P-38, among other planes. But only one man can say that he designed aircraft that helped avenge both Pearl Harbor and the 9/11 attacks.
His name is Ed Heinemann, and he holds the distinction of having designed both the plane that won one of the most pivotal battles in naval history and today’s best multi-role fighter. According to the National Aviation Hall of Fame, he became the chief engineer at Douglas Aircraft Corporation’s El Segundo plant in the 1940s.
While there, he designed the A-20 Havoc and, more notably, the SBD Dauntless. The SBD is most famous for what it did in the span of roughly five minutes on the morning of June 4, 1942, about 175 miles north-northwest of Midway Atoll. In that timeframe, three Japanese aircraft carriers, the Akagi, Soryu, and Kaga, were fatally damaged by dive-bombers launched from aircraft carriers USS Enterprise (CV 6) and Yorktown (CV 5).
The SBD wasn’t all. While with Douglas Aircraft Corporation, Heinemann also designed some Cold War standbys of the United States: The A-3 Skywarrior and the A-4 Skyhawk.
Heinemann left Douglas in 1962 to join General Dynamics. In the wake of the Vietnam War, that company would be one of two asked to develop a lightweight fighter for the United States Air Force that took into account lessons learned from fighting the Communists. Heinemann oversaw the project team until his retirement in 1973, and that team would produce a multi-role fighter that became almost as widely exported as the Skyhawk.
That plane, Ed Heinemann’s last aviation creation, would win the competition, and even get to star in a movie, all while becoming the backbone of the United States Air Force in Desert Storm (where it served alongside some modified A-3s) as well as the War on Terror. According to a book he co-authored on aircraft design, Ed Heinemann, in the last days of his career, oversaw the development of the F-16 Fighting Falcon.
According to Disney, princes are the most charming, handsome men in all the land. Historically, that’s far from the truth. Royal families were typically pretty obsessed with power. No matter how much they had, they wanted more, and they wanted to keep it. One way to do that was by keeping it in the family; AKA, they slept with their cousins. Back then, incest wasn’t so taboo. Marriages between uncles and nieces and other close relations happened frequently.
Unfortunately, it wasn't just power that was passed down to future generations. Genetic disorders that were uncommon among the general population were concentrated in royal bloodlines to the point that sickness was as much of a royal inheritance as wealth. The result? A ton of really weird royals, including the infamous Henry VIII, who was known for his paranoia and tyrannical behavior. Keep scrolling to discover all the strange effects that inbreeding had on the royal families of yesteryear.
The Habsburg Jaw
The German-Austrian Habsburg family had an empire encompassing everything from Portugal to Transylvania, partially because they married strategically to consolidate their bloodline. Because of their rampant incest, the Habsburgs accidentally created their own trademark facial deformities, collectively known as the Habsburg jaw. Those who inherited the deformity typically had oversized jaws and lower lips, long noses, and large tongues. It was most prevalent in male monarchs, with female family members experiencing fewer external deformities. Charles II had such a severe case that he had trouble speaking and frequently drooled…yikes.
For most people, cuts and bruises are no big deal. For those with hemophilia, a scraped knee can turn serious. Hemophilia is a rare blood disorder in which the body doesn't produce enough clotting factor. When someone with hemophilia starts to bleed, they don't stop. The disease is recessive and carried on the X chromosome, so it's very uncommon: a daughter must inherit the gene from both parents to develop symptoms, while a son needs only a carrier mother. Unfortunately, it was easy for inbred royals to produce unfortunate gene combinations.
Queen Victoria, who married her first cousin Prince Albert, carried the gene for hemophilia. Their son Leopold struggled with the disease until it eventually killed him at age 30. The gene was also passed down to Russian Czar Nicholas II's family: his son and heir, Alexei, suffered from hemophilia inherited from his great-grandmother, Queen Victoria. Even in the early 1900s, the life expectancy of someone with hemophilia was only about 13 years.
Spanish royalty was particularly prone to hydrocephalus, a condition in which fluid builds up deep in the brain. The extra fluid puts pressure on the brain and spinal cord, causing everything from mild symptoms to death. It occurs most frequently in infants, and inbred royal infants were especially at risk. The royal children who suffered from it were born with abnormally large heads and often suffered from growth delays, malnourishment, muscular atrophy, poor balance, and seizures.
Hydrocephalus also affected British royalty, including Prince William, the oldest surviving child of Queen Anne and Prince Consort George of Denmark. The two royals were cousins, and they were so genetically similar that they struggled to reproduce any healthy offspring, losing 17 children to genetic disease. You’d think they’d figure it out after the first few, but they were determined to produce an heir. Prince William made it until age 11, when he died of hydrocephalus combined with a bacterial infection.
Royal inbreeding existed before the European monarchy was even a thing. Ancient Egyptians practiced marriage within the royal family with the intent of keeping their bloodline pure, and it backfired big time. King Tutankhamun, AKA King Tut, was one of Egypt's most famous pharaohs, but he was a bit of a genetic mess. Modern-day studies showed that he had a cleft palate, a club foot, and a strangely elongated skull. Some researchers believe King Tut's mother wasn't really Queen Nefertiti, but King Akhenaten's sister. Sibling-sibling inbreeding tends to have severe effects, giving poor King Tut a compromised immune system that led to his eventual death.
King Charles II of Spain married twice, yet he never successfully fathered an heir. Like many other royals, he struggled with fertility, likely the result of his inbred heritage. Queen Anne, the first monarch of Great Britain, was a great ruler, but not so great at producing healthy children. Only one of her 18 offspring made it past the toddler years, with eight miscarried and five stillborn. Considering the great pressure to produce heirs to inherit the throne, infertility caused a great deal of royal strife. In some ways, however, it was a boon. Since Charles II never had children, his laundry list of genetic issues, including the infamous Habsburg jaw, died with him.
Speaking of Charles II, he didn't say a word until he was four and didn't learn how to walk until he was eight. He was the child of Philip IV of Spain and Mariana of Austria, who were uncle and niece. His family's history of inbreeding was so extensive that he was more inbred than he would have been had his parents been siblings. While inbreeding doesn't automatically lower intelligence, it does make it more likely to inherit recessive genes linked to low IQ and cognitive disabilities, resulting in a royal family with just as many mental challenges as physical ones.
George III was King of England at the time of the American Revolution, and many wonder if his mental illness had something to do with his failure as a ruler. Another member of the same highly inbred family tree that would later produce Queen Victoria, George III was known for his manic episodes and the nickname "The Mad King." Initially, historians believed that he had porphyria, a chronic liver disease that results in bouts of madness and causes bluish urine. Today, it's believed that George III actually suffered from bipolar disorder, causing his sudden manic episodes and rash decision-making.
Other royals suffered from mental illness as well, including Queen Maria the Pious. She was so obsessively devout that when her church's confessor died, she screamed for hours about how she would be damned without him. She was treated by the same doctor as King George III, a physician who employed all kinds of strange and ineffective treatments, like ice baths and laxatives.
Joanna of Castile, also known as Joanna the Mad, also struggled with irrational behavior and uncontrollable moods. Like most women, she was furious when she discovered her husband’s mistress. Unlike most people, she proceeded to stab her in the face. She remained obsessed with her husband after his infidelity, however. She loved him so much that she slept beside him even after he died. You read that right. She snuggled a corpse. M’kay then.
Monarchs have a reputation for reckless, harsh, and sometimes cruel behavior. Is it possible that many of their worst deeds were tied to inbred insanity? Totally. Does that make their tyrannical reign any less terrifying? Not even a little bit. While their stories are fascinating to read about, let’s keep the inbreeding and dictatorships in the history books, okay? Okay.
One of the biggest questions of the Revolutionary War is this: How did the British of 1776, with immense advantages in troops and ships and an effective plan, manage to lose the war?
When you look at the material state of affairs, the 13 colonies really didn’t stand a chance. So, how did the British lose the war despite all of their advantages?
The reason was not a lack of strategy. After the battles of Lexington and Concord, the British assumed that the American uprising was a number of local rebellions. It wasn't until 1776 that they realized they were dealing with a unified rebellion across all 13 colonies. Granted, some states were more rebellious than others (Massachusetts being the most notable), but the British had a big problem due to the sheer size of the East Coast.
At the Battle of Long Island, the actions of the Delaware Regiment kept the American defeat from becoming a disaster. Fighting alongside the 1st Maryland Regiment, the soldiers from Delaware may well have prevented the capture of the majority of Washington’s army — an event that might have ended the colonial rebellion. (Image courtesy of DoD)
So, they came up with a strategy. The British plan was to first seize New York City to use as a forward base. Next, they’d move one force north while a second force, from Canada, moved south. The goal was to meet somewhere near Albany in 1777. This would cut New England off from the rest of the colonies and, hopefully, strangle the rebellion.
This was not a bad strategy. The problem was, after coming up with the plan, they flubbed the execution. They seized New York and, in fact, George Washington had a close call trying to escape the British. But then Washington, with a successful Christmas strike against Hessian mercenaries at Trenton and a follow-up victory over British regulars at the Battle of Princeton, drew the attention of General Howe. Instead of going north, Howe chased after Washington's army and the Continental Congress, completely discarding the strategy. There was no on-scene commander-in-chief to rein him in.
The British force moving south from Canada was eventually defeated at the Battle of Saratoga and forced to surrender. Howe, meanwhile, managed to seize Philadelphia but failed to capture the Continental Congress, and Washington's army fought well at the Battle of Germantown even though it lost the field. The British defeat at Saratoga, combined with the Continental Army's strong showing at Germantown, doomed the British strategy. The French and Spanish, now convinced the colonists had a chance, joined in and forced Britain into a multi-front war.