One of the reasons why American weapons systems have been so dominant is computer power. Whether it’s helping the M1 Abrams keep its gun on target or helping secure communications, computers give American troops an edge. Now, the Pentagon wants to bring the next generation of computers, quantum computers, into the fight.
You’re probably asking yourself, “What, exactly, is quantum computing, and how would it give our troops an edge?” Well, here’s a quick rundown.
As explained by one of America’s top tech companies, IBM, quantum computing is a form of computing that uses quantum mechanics — the mathematics of subatomic particle behavior. Current processors distill all information down, eventually, to a simple ‘0’ or ‘1.’ Quantum processors, however, use quantum bits that can exist as ‘0’ and ‘1’ at the same time. In short, this has the potential to greatly increase the baseline speeds at which computers operate. To put that increase into perspective, it’s the difference between using a horse to go from New York to San Diego and using an SR-71 Blackbird for that same trip.
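That “0 and 1 at once” idea can be illustrated with a toy Python sketch (purely illustrative — real quantum hardware is not programmed this way): a classical bit holds a single value, while a qubit in an equal superposition is described by two amplitudes, and measuring it collapses the state to ‘0’ or ‘1’ with probabilities given by the squared amplitudes.

```python
import math
import random

# A classical bit stores exactly one value: 0 or 1.
classical_bit = 0

# A qubit's state is a pair of amplitudes (a, b) with a^2 + b^2 = 1.
# In an equal superposition, the qubit is effectively '0' and '1' at once.
a = b = 1 / math.sqrt(2)

def measure(a, b):
    """Collapse the superposition: 0 with probability a^2, otherwise 1."""
    return 0 if random.random() < a * a else 1

# Measuring many identically prepared qubits gives roughly half 0s, half 1s.
results = [measure(a, b) for _ in range(10_000)]
print(sum(results) / len(results))  # close to 0.5
</antml```

The speedups described above come from a quantum processor manipulating many such amplitudes at once before any measurement is made.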
Joint Direct Attack Munitions, currently dependent on GPS, could become more accurate thanks to quantum clocks.
(U.S. Air Force photo by Staff Sgt. Michael B. Keller)
So, how might this translate to military operations? Well, one application could be a replacement for the Global Positioning System. The satellite-based system relies on multiple updates per day, and there have been concerns the system could be vulnerable to attack. Quantum clocks could provide GPS-like accuracy when the satellite system is down.
Quantum computing could help make satellite communications more secure.
A number of other countries, including the United Kingdom, Israel, Canada, and Australia, are also working on quantum computing programs. The Air Force Research Laboratory expects to have working prototypes in five years, with other systems rolling out later. In one sense, this program is an urgent one: China is also working on quantum computers, and has reportedly launched a purportedly unhackable satellite using that technology — and it’s not a good idea to be technologically outgunned if tensions should boil over.
The Lahti anti-tank rifle looks a little unusual, showing a pair of skis on the front. But then again, it does come from Finland.
According to Modernfirearms.net, the Lahti L-39, also known as the Norsupyssy — or “elephant gun” — fired a 20x138mm round from a 10-round magazine. While not effective against more modern tanks, like the Soviet T-34, the rifle proved useful against bunkers and other materiel targets. One variant was a full-auto version used as an anti-aircraft gun.
This semi-auto rifle was kept in Finnish military stocks until the 1980s, when many were scrapped. This makes the M107 Barrett used by the United States military look like a mousegun.
A number of these rifles were declared surplus and sold in the United States in the early 1960s. The Gun Control Act of 1968, though, placed them under some very heavy controls — even though none were ever used in crimes.
In this video, the punch this rifle packed is very apparent. The people who set up the test put up 16 quarter-inch steel plates. You can see what that shell does to the plates in this GIF.
When the PlayStation 2 was first released to the public, it was said the computer inside was so powerful it could be used to launch nuclear weapons. It was a stunning claim. In response, Iraqi dictator Saddam Hussein reportedly tried to buy up thousands of the gaming consoles — so many that the U.S. government had to impose export restrictions.
But it seems Saddam gave the Air Force an idea: building a supercomputer from many PlayStations.
Just 10 years after Saddam Hussein tried to take over the world using thousands of gaming consoles, the United States Air Force took over the role of mad computer scientist and created the world’s 33rd-fastest computer inside its own Air Force Research Laboratory. Only instead of PlayStation 2s, the Air Force used 1,760 Sony PlayStation 3 consoles. They called it the “Condor Cluster,” and it was the Department of Defense’s fastest computer.
The USAF put the computer in Rome, New York near Syracuse and intended to use the computer for radar enhancement, pattern recognition, satellite imagery processing, and artificial intelligence research for current and future Air Force projects and operations.
Processing imagery is the computer’s primary function, and it performs that function remarkably well. It can analyze ultra-high-resolution images very quickly, at a rate of billions of pixels per minute. But why use PlayStation consoles instead of an actual computer or other proprietary technology? Because a PlayStation cost $300 at the time, and the latest and greatest tech in imagery processing would have run the USAF a far heftier cost per unit. Together, the PlayStations formed the core of the computer for a cost of roughly $1 million.
The result was a 500-teraFLOPS heterogeneous cluster powered by PS3s but connected to subcluster heads of dual quad-core Xeons with multiple GPGPUs. The video game consoles consumed about 90 percent less energy than comparable alternatives; had the Air Force built a processing center from more traditional components, it could have paid upwards of $10 million, and the system would not have been as energy-efficient.
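The economics work out with quick arithmetic (the console price and totals are the figures reported above; the breakdown is approximate):

```python
# Figures reported above: 1,760 PS3 consoles at roughly $300 apiece,
# versus an estimated $10 million for a traditionally built system.
consoles = 1760
price_per_console = 300

console_cost = consoles * price_per_console
print(console_cost)  # 528000 -- just over half a million dollars

# The ~$1 million total also covered the Xeon/GPGPU subcluster heads
# and the networking that tied the consoles together.
total_cost = 1_000_000
traditional_cost = 10_000_000
print(traditional_cost // total_cost)  # roughly 10x cheaper
</antml```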
It was the PlayStation’s ability to install other operating systems that allowed for this cluster — and it’s also what endangered the program.
In 2010, Sony pushed a PlayStation firmware update that revoked the device’s ability to install alternate operating systems, like the Linux OS the Air Force used in its supercomputer cluster. The Air Force had unboxed hundreds of PlayStations and imaged each unit to run Linux, only to see Sony push the update a few weeks later. The Air Force, of course, didn’t need the firmware update, nor could Sony force it onto those devices. But if one of the USAF’s PlayStations went down, it would be the end of the cluster: any device refurbished or newly purchased would lack the ability to run Linux.
The firmware update was the death knell for the supercomputer and others like it that had been produced by academic institutions. There was never any word on whether Saddam ever created his supercomputer.
The Navy tends to be very strict when people recover items from sunken wrecks. In fact, when an Enigma machine was taken from the wreck of U-85, the Navy intervened. It even tried to reclaim a plane it had left lying around in a North Carolina swamp for over 40 years.
According to a 2004 AP report, the plane in question was very valuable. It was the only known surviving Brewster F3A “Corsair.” Well, let’s be honest here. The F3A can best be described as a Corsair In Name Only, or CINO. Brewster’s Corsairs had problems — so much so that in July 1944, the Navy cancelled the contract, and Brewster went out of business less than a month after D-Day.
Brewster was also responsible for the F2A Buffalo, a piece of crap that got a lot of Marine pilots killed during the Battle of Midway.
According to that AP report, the story began with a fatal accident on Dec. 19, 1944, which killed Lt. Robin C. Pennington, who was flying a training mission in the F3A. The Navy recovered Pennington’s body and some gear from the Corsair, then left the wreck. Eventually, in 1990, the plane was recovered by Lex Cralley, who began trying to restore it. A simple case of “finders keepers, losers weepers,” right?
Nope. The Navy sued Cralley in 2004 to get the plane back. After the report appeared, comments were…not exactly favorable towards the Navy at one normally pro-military forum.
Eventually, then-Representative Walter Jones (R-NC) got involved. According to a May 28, 2004 report by Hearst News Service, Jones eventually authored an amendment that settled the lawsuit by having the Navy turn the F3A over to Cralley.
The Navy has usually been very assertive with regard to wrecks. According to admiraltylawguide.com, in 2000, the Navy won a ruling in the 11th Circuit Court of Appeals preventing Doug Champlin from salvaging a TBD Devastator that had survived both the Battle of the Coral Sea and the Battle of Midway.
In spring 2019, Brig. Gen. Anthony Potts, head of PEO Soldier, plans to brief the Army’s senior leadership for a decision on whether to move forward with a new version of the Enhanced Small Arms Protective Insert, or ESAPI, that features a more streamlined design.
“We are looking at a plate with the design that we refer to as a shooter’s cut,” he told reporters recently. “We believe that an increase in mobility provides survivability just as much as coverage of the plate or what the plate will stop itself.”
Potts said the new design offers slightly less coverage in the upper chest closest to the shoulder pocket.
The Modular Scalable Vest being demonstrated at Fort Carson.
(U.S. Army photo by Staff Sgt. Lance Pounds)
“Our soldiers absolutely love it, and the risk to going to a higher level of injury is .004 meters squared. I mean, it is minuscule, yet it takes almost a full pound off of the armor,” he said.
Potts said he plans to brief Army Vice Chief of Staff James C. McConville in the next couple of months on the new plate design, which also features a different formula limiting back-face deformation — or how much of the back face of the armor plate is allowed to move in against the body after a bullet strike.
“Obviously, when a lethal mechanism strikes a plate, the plate gives a little bit, and we want it to give a little bit — it’s by design — to dissipate energy,” Potts said. “The question is, how much can it give before it can potentially harm the soldier?”
The Army has tested changing the allowance for back-face deformation to a 58mm standard instead of the 44mm standard it has used for years.
“We have found what we believe is the right number. We are going to be briefing the vice chief of staff of the Army, and he will make the ultimate decision on this,” Potts said.
“But right now, with the work that we have done, we think we can achieve, at a minimum, a 20 percent weight reduction. … We have been working with vendors to prove out already that we know we can do this,” he said.
This article originally appeared on Military.com. Follow @militarydotcom on Twitter.
Before the advent of stealth aircraft, the U.S. military had a very different approach to operating its planes in contested airspace. That approach could be summarized in two words: speed and altitude.
In those early years of air defense system development, the U.S. was less interested in developing sneaky aircraft and more concerned with developing untouchable ones — platforms that leveraged high altitude, high speed, or both to beat out air defenses of all sorts, whether surface-to-air missiles or air superiority fighters.
Lockheed’s legendary Kelly Johnson, designer of just about every badass aircraft you can imagine, from the C-130 to the U-2 spy plane, was the Pentagon’s go-to guy when it came to designing platforms that could evade interception through speed and altitude. His U-2, designed and built on a shoestring budget in a span of just a few months, first proved the concept of flying above enemy defenses, but then America needed something that could also outrun anything Russia could throw its way. The result was the Blackbird family of jets, including the operational SR-71 — an aircraft that remains the fastest operational military plane ever to take to the sky.
You could make a list of 1,000 amazing facts about the SR-71 without breaking a sweat — but here are three even a few aviation nerds may not have heard before:
The Blackbird had over 4,000 missiles fired at it. None ever hit their target.
The SR-71 Blackbird remained in operational service as a high-speed, high-altitude surveillance platform for 34 years — flying at speeds in excess of Mach 3 at altitudes of around 80,000 feet. This combination of speed and altitude made it all but untouchable to enemy anti-air missiles, so even when a nation knew there was an SR-71 flying in its airspace, there was next to nothing it could do about it. According to Air Force data collected through pilot reports and other intelligence sources, more than 4,000 missiles were fired at the SR-71 during its operational flights, but none ever managed to catch the fast-moving platform.
Its windshield got so hot it had to be made of quartz.
Flying at such high speeds and altitudes puts incredible strain on the aircraft and its occupants, which forced Lockheed to find creative solutions to problems as they arose. One such problem was the immense amount of heat — often higher than 600 degrees Fahrenheit — that the windshield of the SR-71 would experience at top speeds. Designers ultimately decided that using quartz for the windshield was the best way to prevent any blur or window distortion under these conditions, so they ultrasonically fused the quartz to the aircraft’s titanium hull.
The SR-71 was the last major military aircraft to be designed using a ‘slide rule.’
There are countless incredible facts about the SR-71 that would warrant a place on this list, but this is one of the few facts that pertains specifically to the incredible people tasked with developing it. Not long after the SR-71 took to the sky, the most difficult mathematical aspects of aircraft design were handed off to computers that could crunch the numbers more quickly and reliably — but that wasn’t the case for the Blackbird. Kelly Johnson and his team used their “slide rules,” which were basically just specialized rulers with a slide that designers could use to aid them in their calculations in designing the mighty Blackbird. Years later, the aircraft was reviewed using modern aviation design computers only to reveal that the machines would not have suggested any changes to the design.
Just for fun, here’s Major Brian Shul’s incredible “Speed Check” story about flying the Blackbird.
Major Brian Shul, USAF (Ret.) SR-71 Blackbird ‘Speed Check’
If you ever watched “The Jetsons,” an animated sitcom (1963-1964) about a family living in fictional Orbit City in the 2060s, you likely remember the iconic depiction of a futuristic utopia complete with flying cars and robotic contraptions to take care of many human needs. Robots, such as sass-talking housekeeper Rosie, could move through that world and perform tasks ranging from the mundane to the highly complex, all with human-like ease.
In the real world, however, robotic technology has not matured so swiftly.
What will it take to endow current robots with these futuristic capabilities? One place to look for inspiration is in human behavior and development. From birth, each of us has been performing a variety of tasks over and over and getting better each time. Intuitively, we know that practice, practice, and more practice is the only way to become better at something.
We often say we are developing a “muscle memory” of the task, and this is correct in many ways. Indeed, we are slowly developing a model of how the world operates and how we must move to influence the world. When we are good at a task—that is, when our mental model well captures what actually happens—we say the task has become second nature.
‘WHAT A PIECE OF WORK IS A MAN’
Let’s consider for a moment several amazing tasks performed by humans just for recreational purposes. Baseball players catch, throw, and hit a ball that can be moving faster than 100 miles per hour, using an elegant fusion of visual perception, tactile sensing, and motor control. Responding to a small target at this speed requires that the muscles react, at least to some degree, before the conscious mind fully processes visually what has happened.
The most skilled players of the game typically have the best mental models of how to pitch, hit, and catch. A mental model in this case contains all the prior knowledge and experience a player has about how to move his or her body to play the game, particularly for the position.
The execution of an assumed mental model is called “feed forward control.” A mental model that is incorrect or incomplete, such as one used by an inexperienced player, will reduce accuracy and repeatability and require more time to complete a task.
We can assume that even professional baseball players would need significant time to adjust if they were magically transported to play on the moon, where gravity is much weaker and air resistance is nonexistent. Similarly, another instance of incorrect models can be observed in the clumsy and uncoordinated movements of quickly growing children; their mental models of how to relate to the world must constantly change and adapt because they are changing.
Nevertheless, humans are quite resilient to change and, with practice, they can adapt to perform well in new situations.
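Feed-forward control and model mismatch can be sketched with a toy physics example (the throwing model and numbers are invented for illustration): the controller picks a launch speed from its internal model of gravity. With the right model it lands on target; the same Earth-trained model overshoots wildly on the moon, just as an unadapted player would.

```python
import math

G_EARTH = 9.81  # m/s^2
G_MOON = 1.62   # m/s^2

def feedforward_throw(target_m, g_model):
    """Feed-forward control: choose launch speed from the *assumed* model.
    For a 45-degree throw with no air resistance, range = v^2 / g."""
    return math.sqrt(target_m * g_model)

def actual_landing(v, g_actual):
    """Where the ball really lands under the true gravity."""
    return v * v / g_actual

target = 30.0  # meters

# Correct mental model: Earth model applied on Earth -> on target.
v = feedforward_throw(target, G_EARTH)
print(round(actual_landing(v, G_EARTH), 1))  # 30.0

# Incorrect mental model: Earth model applied on the moon -> wild overshoot.
print(round(actual_landing(v, G_MOON), 1))   # ~181.7
</antml```

In practice, feedback from the senses corrects these errors over time — which is exactly the "practice" that rebuilds the mental model.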
A major focus of current research at the U.S. Army Research Laboratory (ARL) is moving toward creating a robot like Rosie, capable of learning and executing tasks with the best precision and speed possible, given what we know about our own abilities.
NOT QUITE ‘INFINITE IN FACULTY’
In general, we can say that Rosie-like robot performance is possible given sufficient advances in the areas of sensing, modeling self-motion, and modeling interactions with the world.
Robots “perceive” the world around them using myriad integrated sensors. These sensors include laser range scanners and acoustic ranging, which provide the distance from the robot to obstacles; cameras that permit the robot to see the world, similar to our own eyes; inertial measurement sensing that includes rate gyroscopes, which sense the rate of change of the orientation of the robotic device; and accelerometers, which sense acceleration and gravity, giving the robot an “inner ear” of sorts.
All these methods of sensing the world provide different types of information about the robot’s motion or location in the environment.
Sensor information is provided to the algorithms responsible for estimating self-motion and interaction with the world. Robots can be programmed with their own versions of mental models, complete with mechanisms for learning and adaptation that help encode knowledge about themselves and the environment in which they operate. Rather than “mental models,” we call these “world models.”
‘IN FORM AND MOVING HOW EXPRESS AND ADMIRABLE,’ SORT OF
Consider a robot acting while assuming a model of its own motion in the world. If the behavior the robot actually experiences deviates significantly from the behavior the robot expects, the discrepancy will lead to poor performance: a “wobbly” robot that is slow and confused, not unlike a human after too many alcoholic beverages. If the actual motion is closer to the anticipated model, the robot can be very quick and accurate with less burden on the sensing aspect to correct for erroneous modeling.
Of course, the environment itself greatly affects how the robot moves through the world. While gravity can fortunately be assumed constant on Earth, other conditions can change how a robot might interact with the environment.
For instance, a robot traveling through mud would have a much different experience than one moving on asphalt. The best modeling would be designed to change depending on the environment. We know there are many models to be learned and applied, and the real issue is knowing which model to apply for a given situation.
Robots today are developed in laboratory environments with little exposure to the variability of the world outside the lab, which can cause a robot’s ability to perceive and react to fail in the unstructured outdoors. Limited environmental exposure during model learning, and the poor adaptation or performance that follows, is the result of “over-fitting”: using a model created from a small subset of experiences to maneuver through a much broader set of conditions.
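A minimal sketch of that over-fitting failure, with invented calibration numbers: a wheel-slip model learned only from asphalt data badly over-predicts how far the robot travels through mud.

```python
# Hypothetical odometry calibration: meters actually traveled per meter
# commanded, on two surfaces (numbers invented for illustration).
asphalt_samples = [0.99, 1.00, 0.98, 1.01, 0.99]  # almost no wheel slip
mud_samples = [0.62, 0.55, 0.60, 0.58, 0.65]      # heavy slip

# "Learn" a motion model from lab (asphalt-only) experience.
slip_model = sum(asphalt_samples) / len(asphalt_samples)

def predict_distance(commanded_m):
    """Predict travel using the single model the robot learned in the lab."""
    return commanded_m * slip_model

# Deploy outdoors: command 10 m of travel through mud.
predicted = predict_distance(10)
actual = 10 * (sum(mud_samples) / len(mud_samples))
print(round(predicted - actual, 1))  # over-predicts by ~3.9 m
</antml```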
At ARL, we are researching specific advances to address these areas of sensing, modeling self-motion, and modeling robotic interaction with the world, with the understanding that doing so will enable great enhancements in the operational speed of autonomous vehicles.
Specifically, we are working on knowing when and under what conditions different methods of sensing work well or may not work well. Given this knowledge, we can balance how these sensors are combined to aid the robot’s motion estimation.
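One classic way to balance two imperfect sensors is a complementary filter, sketched below with invented readings (this shows the general idea, not ARL’s actual algorithms): the gyroscope is trusted over short timescales, while the accelerometer’s gravity-derived angle corrects the gyro’s long-term drift.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend drift-prone gyro integration with a noisy but drift-free
    accelerometer angle; alpha sets how much the gyro is trusted."""
    gyro_estimate = angle + gyro_rate * dt  # integrate the turn rate
    return alpha * gyro_estimate + (1 - alpha) * accel_angle

# Invented scenario: the robot holds a steady 10-degree tilt, but its
# gyro has a +0.5 deg/s bias that would accumulate if used alone.
angle = 10.0
dt = 0.01
for _ in range(1000):  # 10 seconds of sensor updates
    angle = complementary_filter(angle, gyro_rate=0.5, accel_angle=10.0, dt=dt)

# Pure gyro integration would have drifted to 15 degrees by now;
# the fused estimate stays near the true 10-degree tilt.
print(round(angle, 1))  # ~10.2
</antml```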
Faster estimates also become available through techniques that automatically learn accurate models of the world and of robot self-motion. With those learned models applied, the robot can act and plan on a much quicker timescale than would be possible with direct sensor measurements alone.
Finally, we know that these models of motion should change depending on which of the many diverse environmental conditions the robot finds itself in. To further enhance robot reliability in a more general sense, we are working on how to best model the world such that a collection of knowledge can be leveraged to help select an appropriate model of robot motion for the current conditions.
If we can master these capabilities, then Rosie can be ready for operation, lacking only her signature attitude.
DR. JOSEPH CONROY is an electronics engineer in ARL’s Micro and Nano Materials and Devices Branch. He holds a doctorate, an M.S. and a B.S., all in aerospace engineering and all from the University of Maryland, College Park.
MR. EARL JARED SHAMWELL is a systems engineer with General Technical Services LLC, providing contract support to ARL’s Micro and Nano Materials and Devices Branch. He is working on his doctorate in neuroscience from the University of Maryland, College Park, and holds a B.A. in economics and philosophy from Columbia University.
This article will be published in the January – March 2017 issue of Army ALT Magazine.
You’ve heard the jokes about the French. Their surplus rifles have never been fired, just dropped once. Raise your right hand if you like the French, raise both hands if you are French.
But there is one thing that isn’t a joke: France’s “force de frappe.” No, this isn’t some fancy drink that McDonald’s or Starbucks is serving. The force de frappe — translated as “strike force” — is France’s nuclear deterrence force.
The French nuclear force is often ignored, though it did play a starring role in Larry Bond’s 1994 novel Cauldron, where an attempted nuclear strike on American carriers resulted in the U.S. taking it out.
France’s nuclear deterrence is a substantial force, though.
According to a 2013 CNN report, France has about 300 nukes. According to the Nuclear Weapons Archive, these are presently divided between M51 and M45 submarine-launched ballistic missiles, and ASMP missiles launched from Super Etendard naval attack planes, Mirage 2000N bombers, and Rafale multi-role fighters.
When launching a nuke, the French have options.
France’s submarine-launched missiles are carried by the Le Triomphant-class nuclear-powered ballistic missile submarines. According to the 16th Edition of Combat Fleets of the World, three of these submarines each carry 16 M45 ballistic missiles, which have a range of just over 3,100 miles and deliver six 150-kiloton warheads.
The fourth carries 16 M51 ballistic missiles, each with six 150-kiloton warheads and a range of almost 5,600 miles. The first three subs will be refitted to carry the M51.
The ASMP is a serious nuke, with a 300-kiloton warhead that is about 20 times as powerful as the one dropped on Hiroshima. It has a range of 186 miles and a top speed of Mach 3, according to Combat Fleets of the World.
Furthermore, the fact that it can be used on Super Etendard and Rafale fighters means that the French nuclear-powered aircraft carrier Charles de Gaulle now serves as a potential strategic nuclear strike weapon.
While Globalsecurity.org notes that F/A-18s from American aircraft carriers can carry nuclear gravity bombs like the B61, the retirement of the AGM-69 Short-Range Attack Missile in 1990 and the cancellation of the AGM-131 SRAM II mean that the United States lacks a similar standoff nuclear strike capability from its carriers.
In other words, France’s carrier can do something that the carriers of the United States Navy can’t.
The “Bermuda Triangle” is a region of ocean roughly bounded by Miami, Florida; San Juan, Puerto Rico; and the tiny island of Bermuda. Nearly everyone who goes to the Bahamas can tell you that crossing it doesn’t necessarily mean you’ll die a horrible death.
From 1946 to 1991, there were over 100 disappearances in the area. Here are some of the military ships and aircraft lost in the Bermuda Triangle.
1. USS Cyclops – March 4th, 1918
One of the U.S. Navy’s largest fuel ships at the time made an unscheduled stop in Barbados on its voyage to Baltimore. The ship was carrying 100 tons of manganese ore more than it could typically handle, though all reports before leaving port said this was not a concern.
The new path took the Cyclops straight through the Bermuda Triangle. No distress signal was sent. Nobody aboard answered radio calls.
This is one of the deadliest incidents in U.S. Navy history outside of combat: all 306 sailors aboard were declared deceased by then-Assistant Secretary of the Navy Franklin D. Roosevelt.
2. and 3. USS Proteus and USS Nereus – November 23rd and December 10th 1941
Two of the three sister ships of the USS Cyclops, the Proteus and the Nereus, both carried cargoes of bauxite, and both left St. Thomas in the Virgin Islands along the exact same path. Bauxite was used to create the aluminum for Allied aircraft.
Original theories focused on a surprise attack by German U-Boats, but the Germans never took credit for the sinking, nor were they in the area.
According to research by Rear Adm. George van Deurs, the acidic coal these ships had long hauled would seriously erode the longitudinal support beams, making them more likely to break under stress. The fourth sister ship — to the Cyclops, Proteus, and Nereus alike — was the USS Jupiter, which was recommissioned as the USS Langley and became the Navy’s first aircraft carrier.
4. Flight 19 – December 5th, 1945
The most well-known and best-documented disappearance was that of Flight 19. Five TBM Avenger torpedo bombers left Ft. Lauderdale on a routine training exercise. A distress call received from one of the pilots said: “We can’t find west. Everything is wrong. We can’t be sure of any direction. Everything looks strange, even the ocean.”
Later, pilot Charles Taylor sent another transmission: “We can’t make out anything. We think we may be 225 miles northwest of base. It looks like we are entering white water. We’re completely lost.”
After a PBM Mariner Flying Boat was lost on this rescue mission, the U.S. Navy’s official statement was “We are not even able to make a good guess as to what happened.”
5. MV Southern Districts – December 5th, 1954
The former U.S. Navy landing ship was acquired by the Philadelphia and Norfolk Steamship Co. and converted into a cargo carrier. During its Navy service, the LST took part in the invasion of Normandy.
Its final voyage was from Port Sulphur, Louisiana, to Bucksport, Maine, carrying a cargo of sulfur. It lost contact as it passed through the Bermuda Triangle. No one heard from the Southern Districts again until four years later, when a single life preserver washed up on Florida’s shores.
6. Flying Box Car out of Homestead AFB, FL – June 5th, 1965
The Fairchild C-119G and her original crew of five left Homestead AFB at 7:49 PM, carrying four more mechanics to aid another C-119G stranded on Grand Turk Island. The last radio transmission was received just off Crooked Island, 177 miles from its destination.
A month later, on July 18, debris washed up on the beach of Gold Rock Cay, just off the shore of Acklins Island (near where the crew sent its last transmission).
The most plausible theory of the mysterious disappearances in the Bermuda Triangle points to confirmation bias. If someone goes missing in the Bermuda Triangle, it’s immediately drawn into the same category as everything else lost in the area. The Coast Guard has stated that “there is no evidence that disappearances happen more frequently in the Bermuda Triangle than in any other part of the ocean.”
Of course, it’s more fun to speculate that one of the most traveled waterways near America may be haunted, may host alien abductions, or may hide a secret Atlantean empire beneath Bimini.
The sea is a terrifying place. When sailors and airmen go missing, it’s a heartbreaking tragedy. Pointing to an easily debunked theory cheapens the loss of good men and women.
Flight equipment is undergoing a major overhaul. The biggest change: the gear is now being designed with measurements taken from female aviators.
Joint Base Langley-Eustis, Virginia, held a Female Fitment Event, June 4, 2019, where Air Force and Navy female aviators gathered to have their measurements taken, which will be used to design new prototypes for female flight equipment.
“We wanted to bring together a large enough group of women to get our different sizing both in our uniforms, helmets and masks,” said Lt. Col. Shelly Mendieta, plans and requirements officer. “When you go to a squadron to go to a fitment event, there’s usually only a couple of women, so to get a full spectrum of what is going to work for women aviators, we needed to bring them all together in one place.”
In the past, flight equipment was designed to the measurements of males because there are statistically more male aviators, so male measurements dominated the design data. Department of Defense leadership hopes to change that.
A female aviator has her measurements taken while in a flight suit during a Female Fitment Event at Joint Base Langley-Eustis, Va., June 4, 2019.
(U.S. Air Force photo by Airman 1st Class Marcus M. Bullock)
“The chief of staff of the United States Air Force is committed to seeing us make progress and better integrate humans into the machine environment mix,” said Brig. Gen. Edward Vaughan, assistant to the director of the Air Force directorate of readiness and training. “What has happened over the years is that a lot of our data and information we use to design these systems have traditionally been based on men.”
Female aviators using flight equipment designed to the specifications of males presents a problem for their combat effectiveness. When it comes to the mission, the tools airmen use play a big role in mission success.
Vaughan explained that if flight equipment, from harness straps to flight suits, does not meet the needs of the human, as well as of the various machines used for our missions, then service members are not going to be as effective and ready for combat.
The information gathered from the event will be crucial to the development not only of female flight equipment, but of female aviation as a whole across multiple branches.
A group of Air Force and Navy female aviators discuss some of the improvements they want to see made to their flight equipment during a Female Fitment Event at Joint Base Langley-Eustis, Va., June 4, 2019.
(U.S. Air Force photo by Airman 1st Class Marcus M. Bullock)
“The goal is to ensure that the equipment that we are developing is going to fit properly, so that we have a safe and ready force,” Mendieta said. “By measuring a spectrum of women at different stages in their career, we can ensure that we have better equipment.”
Many officers participating in this event are hoping to be able to disseminate information to other bases regarding female flight equipment.
“When I look across the enterprise, this is an historic event and it’s important that we get this word out,” Vaughan said. “It’s not just the data that we are collecting and the fact that we are going to improve the equipment we use in combat, it’s also important to make people aware that this is one of the challenges that we are facing right now. It’s an airmen challenge.”
For many female aviators, this marks a monumental push to ensure they are combat ready and their opinions are being heard.
“Women have been flying in the Air Force for a very long time,” Mendieta said. “We have made progress but this is the first time in my 20-year career that we have had the kind of momentum that we have to get this right. We have the opportunity to get this right and we have to grab that and take it for all it’s worth.”
The Lockheed SR-71 was an awesome plane. It could go fast, it could go high, and it was very hard to detect on radar. The problem was, the United States didn't build many of them: just 32 in all. The SR-71 was retired in 1990 under President George H. W. Bush, briefly brought back later in the '90s, and then sent out to pasture for good.
But there have been rumors of a replacement — something called the “Aurora.” This rumored replacement appeared in a 1985 budget line item in the same category as the U-2 Dragon Lady and the SR-71. The name stuck as the speculated successor to the SR-71, which the Air Force seemed all too happy to retire.
In 2006, Aviation Week editor Bill Sweetman declared he'd found budgetary evidence that the Aurora had been operating, saying:
My investigations continue to turn up evidence that suggests current activity. For example, having spent years sifting through military budgets, tracking untraceable dollars and code names, I learned how to sort out where money was going. This year, when I looked at the Air Force operations budget in detail, I found a $9-billion black hole that seems a perfect fit for a project like Aurora.
But there is another successor — one that doesn’t require a crew. This is the SR-72, and it may be twice as fast as the Mach 3 Blackbird. The Mach 6 drone is said to be able to reach some of the same heights as the SR-71. What’s unique about this unmanned aircraft is that it will carry two types of engines. There will be a normal jet engine to get the plane up to Mach 3 and a ramjet to push it to its top speed.
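The turbojet-to-ramjet handoff described above can be sketched in a few lines. This is an illustrative model only: the Mach 3 handoff point and Mach 6 top speed come from the article, while the function name and everything else are assumptions, not real SR-72 engineering data.

```python
# Illustrative sketch of a combined-cycle propulsion scheme like the one
# described for the SR-72: a conventional turbojet works up to ~Mach 3,
# then a ramjet takes over to the ~Mach 6 top speed. The thresholds are
# the article's figures; all other details here are hypothetical.

def active_engine(mach: float) -> str:
    """Return which engine a turbojet/ramjet combined cycle would use."""
    if mach < 3.0:
        return "turbojet"  # normal jet engine handles takeoff to ~Mach 3
    if mach <= 6.0:
        return "ramjet"    # ramjet needs high-speed intake air to operate
    return "beyond design top speed"

print(active_engine(0.9))  # turbojet
print(active_engine(4.5))  # ramjet
```

A ramjet has no compressor, so it only produces useful thrust once the aircraft is already moving at high speed, which is why a second engine is needed for the low-speed part of the flight.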
The SR-72 may be slated to enter service in 2030, but Popular Mechanics reported that Lockheed had announced progress on the project. More tellingly, that same publication reported that a demonstrator was seen at the Skunk Works plant. America’s super-fast eye in the sky may be here sooner than expected.
Learn more about this new plane in the video below:
The first time you select afterburner in a fighter is an experience you’ll never forget. Over a decade later, I can still remember every second of it.
I had made it through the attrition of pilot training and was now in the 9-month B-Course learning to fly the F-16. After several months of academics, going over every system on the jet and how to troubleshoot malfunctions, it was time to finally get in the air.
The way the jet is configured makes a big difference in terms of its performance. Usually, there are several weapons, pods, and fuel tanks hanging off the jet, which makes it much more capable in combat. However, they add a significant amount of weight and drag to the airframe.
The squadron leadership had decided to completely clean off the jets for our initial phase of flying—nothing external would be added, making it the stripped-down hot-rod that John Boyd famously envisioned back in the ’70s. It’s a rare configuration that I’ve only seen a handful of times during my career.
On the day of the flight, after I strapped in, I started the engine and could feel the F-16 coming to life: the slow groan of the engine transforming into a shrieking roar.
After the ground-ops checks, my instructor and I taxied to the end of the runway—as a wingman, my job was to follow him throughout the sortie. Once we received clearance to take off, he taxied onto the runway and pushed the throttle into afterburner.
I could see the nozzle of his engine clamp down as the engine spun up into full military power—the highest non-afterburning setting. The nozzle then rapidly opened as the afterburner kicked in and a 10-foot bluish-red flame shot out of the back of the engine. Looking into the engine, I could only see a few feet of the nozzle before it disappeared into a whitish-yellow fire, similar to the sun. As he rapidly accelerated down the runway, I taxied into position.
After 15 seconds, I pushed the throttle forward until it hit the military power stop. I then rotated the throttle outward, which allowed me to push it further into the afterburner settings. Nothing happened for what seemed like a minute, but in reality, it was only a few seconds. It was enough time for me to look down to make sure nothing was wrong when, suddenly, the thrust hit me in the chest.
Before flying the F-16, I had flown a supersonic jet trainer called the T-38, so I was familiar with high-performance aircraft… But this acceleration was on another level. Before I knew it, a second jolt of thrust hit me, further increasing my acceleration—and the engine wasn’t even at full thrust yet.
There are five rings in the back of the engine that make up the afterburner. Each ring has hundreds of holes, through which fuel is sprayed at high pressure and then ignited. In order to not flood the engine, each ring sequentially lights off. So far, only two of the five rings had started spraying fuel.
The interesting thing about the way a jet accelerates is that as it goes faster, it accelerates faster (to a point). This is unlike a car, which accelerates hardest from a standstill and then tapers off. As each afterburner ring lit off, my acceleration further increased. Before I knew it, I was at my rotation speed of 150 knots, or about 173 mph. As soon as I was airborne, I began retracting my gear, reducing my drag, which further increased my acceleration. Even though it takes just a few seconds to retract the gear, I came dangerously close to overspeeding the 300-knot limit.
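A toy model helps show why the acceleration kept increasing: each afterburner ring adds a step of thrust, while drag grows with the square of airspeed, so net acceleration a = (T - D)/m can climb even as the jet speeds up. Every number below is invented purely for illustration; none are real F-16 figures.

```python
# Toy model of the takeoff roll: thrust steps up as afterburner rings
# light sequentially, drag grows with speed squared, and the net
# acceleration is (thrust - drag) / mass. All values are made up.

def net_acceleration(speed_ms: float, rings_lit: int,
                     mil_thrust_n: float = 70_000.0,
                     thrust_per_ring_n: float = 11_000.0,
                     drag_coeff: float = 2.0,
                     mass_kg: float = 12_000.0) -> float:
    thrust = mil_thrust_n + rings_lit * thrust_per_ring_n
    drag = drag_coeff * speed_ms ** 2
    return (thrust - drag) / mass_kg

# Faster, yet accelerating harder, because more rings are now lit:
print(round(net_acceleration(30.0, 2), 2))  # 7.52 m/s^2 with 2 rings
print(round(net_acceleration(60.0, 5), 2))  # 9.82 m/s^2 with all 5
```

With these invented numbers, the thrust added by lighting three more rings outweighs the extra drag at the higher speed, matching the sensation of the jolts described above.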
The one thing that stands out about that takeoff is that even though I was operating way behind the jet, I was smiling the whole time. It was an awesome experience that I'll never forget.
NASA’s Voyager 2 probe exited our solar system nearly a year ago, becoming the second spacecraft to ever enter interstellar space.
It followed six years behind its sister spacecraft, Voyager 1, which reached the limits of the solar system in 2012. But a plasma-measuring instrument on Voyager 1 had been damaged, so that probe could not gather crucial data about the transition from our solar system into interstellar space.
Voyager 2, which left the solar system with its instruments intact, completed the set of data. Scientists shared their findings for the first time on Oct. 4, 2019, via five papers published in the journal Nature Astronomy.
The analyses indicate that there are mysterious extra layers between our solar system’s bubble and interstellar space. Voyager 2 detected solar winds — flows of charged gas particles that come from the sun — leaking from the solar system. Just beyond the solar system’s edge, these solar winds interact with interstellar winds: gas, dust, and charged particles flowing through space from supernova explosions millions of years ago.
“Material from the solar bubble was leaking outside, upstream into the galaxy at distances up to a billion miles,” Tom Krimigis, a physicist who authored one of the papers, said in a call with reporters.
The new boundary layers suggest there are stages in the transition from our solar bubble to the space beyond that scientists did not previously understand.
An image of Uranus taken by Voyager 2 on January 14, 1986, from a distance of approximately 7.8 million miles.
The place where solar and interstellar winds interact
On Nov. 5, 2018, Voyager 2 left what’s known as the “heliosphere,” a giant bubble of charged particles flowing out from the sun that sheathes our solar system. In doing so, the probe crossed a boundary area called the “heliopause.” In that area, the edge of our solar system’s bubble, solar winds meet a flow of interstellar wind and fold back on themselves.
It took both spacecraft less than a day to travel through the entire heliopause. The twin probes are now speeding through a region known as the “bow shock,” where the plasma of interstellar space flows around the heliosphere, much like water flowing around the bow of a moving ship.
This illustration shows the position of NASA’s Voyager 1 and Voyager 2 probes outside the heliosphere, a protective bubble created by the sun.
Both Voyager probes measured changes in the intensity of cosmic rays as they crossed the heliopause, along with the transition between magnetic fields inside and outside the bubble.
But because so much of the transition from our solar system to the space beyond is marked by changes in plasma (a hot ionized gas that’s the most abundant state of matter in the universe), Voyager 1’s damaged instrument had difficulty measuring it.
Now the new measurements from Voyager 2 indicate that the boundaries between our solar system and interstellar space may not be as simple as scientists once thought.
The data indicates that there’s a previously unknown boundary layer just beyond the heliopause. In that area, solar winds leak into space and interact with interstellar winds. The intensity of cosmic rays there was just 90% of their intensity farther out.
“There appears to be a region just outside the heliopause where we’re still connected — there’s still some connection back to the inside,” Edward Stone, a physicist who has worked on the Voyager missions since 1972, said in the call.
An illustration of a Voyager probe leaving the solar system.
Other results from the new analyses also show a complicated relationship between interstellar space and our solar system at its edges.
The scientists found that beyond the mysterious, newly identified layer, there’s another, much thicker boundary layer where interstellar plasma flows over the heliopause. There, the density of the plasma jumps up by a factor of 20 or more for a region spanning billions of miles. This suggests that something is compressing the plasma outside the heliosphere, but scientists don’t know what.
“That currently represents a puzzle,” Don Gurnett, an astrophysicist who authored one of the five papers, said in the call.
What’s more, the new results also showed that compared with Voyager 1, Voyager 2 experienced a much smoother transition from the heliopause to a strong new magnetic field beyond the solar system.
“That remains a puzzle,” Krimigis said.
The scientists hope to continue studying these boundaries over the next five years, before the Voyager probes run out of power.
“The heliopause is an obstacle to the interstellar flow,” Stone added. “We want to understand that complex interaction on the largest scale as we can.”
The Voyager 2 spacecraft launches from NASA’s Kennedy Space Center on August 20, 1977.
NASA launched the Voyager probes in 1977. Voyager 2 launched two weeks ahead of Voyager 1 on a special course to explore Uranus and Neptune. It is still the only spacecraft to have visited those planets.
The detour meant that Voyager 2 reached interstellar space six years after Voyager 1. It is now NASA’s longest-running mission.
“When the two Voyagers were launched, the Space Age was only 20 years old, so it was hard to know at that time that anything could last over 40 years,” Krimigis said.
Now, he said, scientists expect to get about five more years of data from the probes as they press on into interstellar space. The team hopes the Voyagers will reach the distant point where space is undisturbed by the heliosphere before they run out of power.
After the spacecraft die, they’ll continue drifting through space. In case aliens ever find them, each Voyager probe contains a golden record encoded with sounds, images, and other information about life on Earth.
In the future, the researchers want to send more probes in different directions toward the edges of our solar system to study these boundary layers in more detail.
“We absolutely need more data. Here’s an entire bubble, and we only crossed at two points,” Krimigis said. “Two examples are not enough.”
This article originally appeared on Business Insider. Follow @BusinessInsider on Twitter.