After about ten to twelve years, it’s usually time for a military working dog (MWD) to retire.
Unlike us, they don’t get out and start celebrating life immediately. Hundreds of them are sent to Lackland Air Force Base near San Antonio, Texas every year. Before November 2000, most of the dogs were euthanized or simply abandoned on the battlefield when troops left (because despite the rank and funeral honors, they’re classified as equipment).
Thankfully, “Robby’s Law” opens up adoption to their former handlers, law enforcement, and civilian families.
When a dog is retired, it’s usually because of injury or sickness, and the best person to care for it is the handler. More than 90% of these good dogs are adopted by their handlers. It makes sense: calling a military working dog your “battle buddy” seems less awkward when the buddy is a Labrador Retriever.
Next in the order of precedence for MWD adoption is law enforcement. The dogs’ services are invaluable to police forces because they are trained to do exactly what police need them to do. However, the adopting department must contractually agree to own the dogs, and only such departments are allowed to have the dogs perform patrol, security, or substance-detection work; the DoD strictly restricts that work otherwise.
Sadly, even the police won’t take the rest of the military working dogs because of their age or injuries. This is where civilians come in. Bear in mind, adoption isn’t a quick process and applicants are carefully screened. It may take about a year on the waiting list to get your first interview.
They are not some goofy pug you can just adopt and take home. These dogs have usually deployed and show symptoms of Post Traumatic Stress. They were trained to sniff out roadside bombs and to fight the Taliban, and now they have trouble socializing with other dogs and aren’t as playful as they once were.
The MWD selection process picks the most energetic and playful puppies for combat training. After years of fighting, these old dogs show signs of nervous exhaustion and distress. If that pulls at your heartstrings because it hits close to home, research has shown that dogs (including these MWDs) can aid and benefit those with Post Traumatic Stress.
If you don’t mind the wait, have an appropriate living space for a large dog, and are willing to aid these four-legged veterans, there are organizations that can help. Save-A-Vet and Mission K9 Rescue are great places to start.
What brought this to their attention was the medal count between Audie Murphy – long regarded as the most decorated U.S. soldier ever – and a little-known WWII veteran and Medal of Honor recipient named Matt Urban, whose medal count matched Murphy’s.
But no one knew that Urban had matched the well-known Murphy until 36 years after the end of WWII because Urban’s recommendation and supporting paperwork were lost in the bureaucratic shuffle.
He was also awarded the French Croix de Guerre and the Legion of Merit but never knew it until his military records were reviewed for his Medal of Honor.
And there were a lot of actions to review.
President Carter called then retired Lt. Col. Matt Urban “The Greatest Soldier in American History” as he presented the Medal of Honor to Urban in 1980. The soldier’s Medal of Honor citation alone lists “a series of actions” – at least 10 – that go above and beyond the call of duty.
The Nazis called Urban “The Ghost” because he just seemed to keep coming back to life every time they thought they had killed him. The soldier’s seven Purple Hearts can attest to that.
Urban joined the Army ROTC at Cornell in 1941. It was just after the Japanese attack on Pearl Harbor and unfortunately for the Nazis, Urban graduated in time to land in North Africa in 1942.
He was ordered to stay aboard a landing craft off the Tunisian coast, but when he heard his unit encountered stiff resistance on the beaches, he hopped in a raft and rowed to the fight. There he replaced a wounded platoon leader.
Later, at the Battle of the Kasserine Pass, Urban destroyed a German observation post, then led his company in a frontal assault on a fortified enemy position. During one German counterattack, Urban killed an enemy soldier with his trench knife, then took the man’s machine pistol and wiped out the rest of the oncoming Germans. He was wounded in his hands and arm.
In North Africa, his actions earned him two Silver Stars, a Bronze Star, and two Purple Hearts.
It was in France where Urban would distinguish himself and earn his nickname. His division landed at Normandy on D-Day, and later at the French town of Renouf he spearheaded another gallant series of events.
On June 14, 1944, two tanks and small arms began raking Urban’s men in the hedgerows, causing heavy casualties. He picked up a bazooka and led an ammo carrier closer to the tanks.
Urban then exposed himself to the heavy enemy fire as he took out both tanks. His leadership inspired his men who easily bested the rest of the German infantry.
Later that same day, Urban took a direct shot in the leg from a 37mm tank gun. He continued to direct his men to defensive positions. The next day, still wounded, Urban led his troops on another attack. He was wounded again and flown back to England.
In July 1944, he learned how much the fighting in the French hedgerows devastated his unit. Urban, still in the hospital in England, ditched his bed and hitchhiked back to France. He met up with his men near St. Lo on the eve of Operation Cobra, a breakout effort to hit the German flanks and advance into Brittany.
He found his unit held down by a German strong point with two of his tanks destroyed and a third missing its commander and gunner. Urban hatched a plan to remount the tank and break through but his lieutenant and sergeant were killed in their attempts – so he mounted the tank himself.
“The Ghost” manned the machine gun as bullets whizzed by and devastated the enemy.
He was wounded twice more in August, refusing to be evacuated even after taking artillery shell fragments to the chest. He was promoted to battalion commander.
In September 1944, Urban’s path of destruction across Europe was almost at an end. His men were pinned down by enemy artillery while trying to cross the Meuse River in Belgium. Urban left the command post and went to the front, where he reorganized the men and personally led an assault on a Nazi strongpoint. Urban was shot in the neck by a machine gun during the charge across open ground. He stayed on site until the Nazis were completely routed and the Allies could cross the Meuse.
In a 1974 interview with his hometown newspaper, the Buffalo News, he credited his survival to accepting the idea of dying in combat.
“If I had to get it,” Urban said, “it was going to be while I was doing something. I didn’t want to die in my sleep.”
The reason he didn’t receive the Medal of Honor sooner was that the recommendation was lost in the paperwork shuffle. His commander, Maj. Max Wolf, filed the recommendation, but it was lost when Wolf was killed in action.
It was the enlisted men who fought with Urban who started asking about “The Ghost’s” Medal of Honor.
“The sight of him limping up the road, all smiles, raring to lead the attack once more, brought the morale of the battle-weary men to its highest peak,” wrote Staff Sgt. Earl G. Evans in a 1945 letter to the War Department that was also lost.
Matt Urban died in 1995 at age 75 and is interred at Arlington National Cemetery.
When I was speaking at a university a few years ago, a student who DJ’d at the local college radio station and had read my book asked me to come on as a guest. He had me put together a list of music I listened to in Iraq, and then interviewed me between songs. It was a really cool experience for me to revisit my deployment through music.
This isn’t limited to my time in Iraq, but is evocative of both my deployment and homecoming. Here it is:
1. Live, “Mental Jewelry”
I started listening to Live in high school and have fond memories of seeing them play. For some reason, the lyrics came into my mind often in Iraq, always making me feel a little melancholy.
2. Bad Religion, “The Process of Belief”
This album came out while I was at DLI, and I listened to it throughout the summer of 2002 while I was at AIT in Texas. Once we got to Iraq, this song in particular made me ache.
3. “Story of My Life,” Social Distortion, Social Distortion
This is one of my favorite albums. Went to see them play in Dallas the summer of 2002 – and spent the whole time feeling a little alienated from civilians. As for this particular song, I left my hometown when I was 15, and every time I’ve gone back have felt that weird sensation of my old neighborhood not being the same. That got even stronger after I joined the Army. I like how this song captures a particular feeling of frustration.
4. “So What,” Ministry, The Mind is a Terrible Thing to Taste
I was angry as a teenager, and spent a lot of time angry while I was in the Army, too. This is a great song to be really pissed off to. (Random aside: I saw the movie this song has samples from on Mystery Science Theater 3000 once, which was awesome. It’s totally absurd, you should check it out: The Violent Years.)
5. “Holiday in Cambodia,” Dead Kennedys
So there isn’t a lot of DK on Spotify that I could find. The song I wanted to put was “Life Sentence” (the lyrics “you don’t do what you want to but you do the same thing every day” could describe half my time in the Army!), but this is a good one, too. Fits in with the theme of anger.
6. “Jaded,” Operation Ivy, Operation Ivy
As angry as I got, I never gave up those hopeful kernels, and still clung to that conviction that I could make the world a better place. “Sound System” is another good one off that album, about how music can bring you back up when you feel shitty.
7. “Cactus,” Pixies, Surfer Rosa
I have no idea why this particular Pixies song is the one that I got totally fixated on in Iraq. The mention of the desert? Who knows.
8. “Then She Did,” Jane’s Addiction, Ritual De Lo Habitual
When I was younger and, um, enjoyed experimenting with mind-altering substances, the song “Three Days” was what I loved the most – it took me on this whole mental odyssey. But in Iraq I fell in love with this one, a more reserved and introspective one.
9. “In the Arms of Sleep,” The Smashing Pumpkins, Mellon Collie and the Infinite Sadness
I would listen to this one over and over and over in Iraq, longing to … be there, have those feelings.
10. “I Know, Huh?,” The Vandals, Hitler Bad, Vandals Good
This reminds me of the giddy, heady, happy days of being just home from Iraq, before the bad parts of reintegration kicked in. I have memories of driving around with Zoe singing along with this, being goofy and ridiculous.
11. “8 Mile,” Eminem, 8 Mile
When things started to get really shitty, I would listen to this song (oh, so cheesy! I know!) and tell myself I could push on for just a little longer and couldn’t give up.
Okay, maybe not entirely. But the first written use of the acronym “OMG” — meaning Oh My God, for those not hip with the kids’ lingo — came from an admiral in the Royal Navy.
In 1917, Lord John Fisher, who had resigned his commission in 1915 over Churchill’s Gallipoli Campaign during the First World War, wrote to Churchill, then Minister of Munitions, about his concerns regarding the Navy’s ability to conduct a major campaign to keep the Germans from flanking the Russians via the Baltic Sea.
Also, a tapis was a kind of tablecloth, and the phrase “on the tapis” meant the idea was under consideration. As for “Shower it on the Admiralty,” I think we can all figure out what that means.
When you think about it, it makes sense an acronym would come from the military, because no one produces three-letter acronyms (TLAs) like the armed forces.
The problems the Marine Corps is having with its F/A-18 Hornet force have been a boon to one plane that was originally slated to go to the boneyard much earlier.
According to Foxtrot Alpha, the AV-8B Harrier has recently gained a new lease on life as upgrades are keeping the famed “jump jet” in service. In fact, the Harrier force has become more reliable in recent years, even as it too sees the effects of aging.
The Marine Corps is planning to replace both the F/A-18C/D Hornets and the AV-8B Harriers with the F-35B Lightning II, the short take-off and vertical landing (STOVL) version of the Joint Strike Fighter. The F-35B has already been deployed to Japan, while the F-35A, operating from conventional land bases, just recently deployed to Estonia.
Originally, the Harriers were slated to be retired first, but F-35 delays and a review gave them a new lease on life. The review not only changed how the Marines used the Harrier, but also discovered that the airframes had far more flight hours left in them than originally thought.
As a result, the Marines pushed through upgrades for the Harrier force, including newer AMRAAM missiles and the GBU-54 Laser Joint Direct Attack Munition, a 500-pound weapon that combines GPS guidance with a laser seeker. Other upgrades will keep the Harriers flying well into the 2020s.
Capt. Jonathan Lewenthal and Capt. Eric Scheibe, AV-8B Harrier pilots with Marine Attack Squadron 231, Marine Aircraft Group 14, 3rd Marine Aircraft Wing (Forward), fly over southern Helmand province, Afghanistan after conducting an aerial refuel Dec. 6, 2012. (U.S. Marine Corps photo by Cpl. Gregory Moore)
The Harrier has been a Marine Corps mainstay since 1971, often providing close-air support for Marines in combat from Desert Storm through the War on Terror. The Harrier and Sea Harrier first made their mark in the Falklands War, where the jump jets helped the United Kingdom liberate the disputed islands after Argentine military forces invaded.
While the World Wide Web was initially invented by one person (see: What was the First Website?), the genesis of the internet itself was a group effort by numerous individuals, sometimes working in concert, and other times independently. Its birth takes us back to the extremely competitive technological contest between the US and the USSR during the Cold War.
The Soviet Union sent the satellite Sputnik 1 into space on October 4, 1957. Partially in response, the American government created the Advanced Research Projects Agency (ARPA) in 1958, known today as DARPA, the Defense Advanced Research Projects Agency. The agency’s specific mission was to
…prevent technological surprises like the launch of Sputnik, which signaled that the Soviets had beaten the U.S. into space. The mission statement has evolved over time. Today, DARPA’s mission is still to prevent technological surprise to the US, but also to create technological surprise for our enemies.
To coordinate such efforts, a rapid way to exchange data between various universities and laboratories was needed. This brings us to J. C. R. Licklider, who is largely responsible for the theoretical basis of the Internet, which he called an “Intergalactic Computer Network.” His idea was to create a network in which many different computer systems would be interconnected to quickly exchange data, rather than having individual systems set up, each one connecting to some other individual system.
He thought up the idea after having to deal with three separate terminals connecting to computers in Santa Monica, at the University of California, Berkeley, and at MIT:
“For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them…. I said, oh man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.”
So, yes, the idea for the internet as we know it partially came about because of the seemingly universal human desire to not have to get up and move to another location.
With the threat of a nuclear war, it was necessary to decentralize such a system, so that even if one node was destroyed, there would still be communication between all the other computers. The American engineer Paul Baran provided the solution to this issue; he designed a decentralized network that also used packet switching as a means for sending and receiving data.
Many others also contributed to the development of an efficient packet switching system, including Leonard Kleinrock and Donald Davies. If you’re not familiar, “packet switching” is basically just a method of breaking down all transmitted data—regardless of content, type, or structure—into suitably sized blocks, called packets. So, for instance, if you wanted to access a large file from another system, when you attempted to download it, rather than the entire file being sent in one stream, which would require a constant connection for the duration of the download, it would get broken down into small packets of data, with each packet being individually sent, perhaps taking different paths through the network. The system that downloads the file would then re-assemble the packets back into the original full file.
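Packet switching is easy to see in miniature. Below is a toy sketch in plain Python (no real networking; the packet size and message are made up for illustration): it breaks a message into numbered packets, shuffles them to mimic out-of-order arrival over different paths, and reassembles the original.

```python
import random

PACKET_SIZE = 4  # bytes per packet; real links use roughly 1,500-byte packets

def packetize(data):
    """Break data into (sequence_number, chunk) packets."""
    return [(seq, data[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(data), PACKET_SIZE))]

def reassemble(packets):
    """Reorder packets by sequence number and rejoin the payload."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"LO AND BEHOLD"
packets = packetize(message)
random.shuffle(packets)  # packets may arrive in any order
assert reassemble(packets) == message
```

The sequence numbers are what make independent routing possible: the network can send each packet however it likes, because the receiver has enough information to put everything back in order.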
ARPANET, the platform Licklider envisioned above, was based on these ideas and was the principal precursor to the Internet as we think of it today. It was installed and operated for the first time in 1969 with four nodes, located at the University of California, Santa Barbara; the University of California, Los Angeles; the Stanford Research Institute (SRI); and the University of Utah.
The first use of this network took place on October 29, 1969 at 10:30 pm and was a communication between UCLA and the Stanford Research Institute. As recounted by the aforementioned Leonard Kleinrock, this momentous communiqué went like this:
We set up a telephone connection between us and the guys at SRI… We typed the L and we asked on the phone,
“Do you see the L?”
“Yes, we see the L,” came the response.
We typed the O, and we asked, “Do you see the O?”
“Yes, we see the O.”
Then we typed the G, and the system crashed… Yet a revolution had begun.
By 1972, the number of computers that were connected to ARPANET had reached twenty-three and it was at this time that the term electronic mail (email) was first used, when a computer scientist named Ray Tomlinson implemented an email system in ARPANET using the “@” symbol to differentiate the sender’s name and network name in the email address.
Alongside these developments, engineers created more networks using different protocols, such as X.25 and UUCP. The original communication protocol used by ARPANET was NCP (Network Control Protocol). A protocol that could unite all the many networks was needed.
In 1974, after many failed attempts, a paper published by Vint Cerf and Bob Kahn, also known as “the fathers of the Internet,” resulted in the protocol TCP (Transmission Control Protocol), which by 1978 would become TCP/IP (with the IP standing for Internet Protocol). At a high level, TCP/IP is essentially just a relatively efficient system for making sure the packets of data are sent and ultimately received where they need to go, and in turn assembled in the proper order so that the downloaded data mirrors the original file. So, for instance, if a packet is lost in transmission, TCP is the system that detects this and makes sure the missing packet(s) get re-sent and are successfully received. Developers of applications can then use this system without having to worry about exactly how the underlying network communication works.
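That detect-and-resend loop can be sketched in a few lines. This is a toy model, not the actual TCP state machine (the payloads and the "lost packet" are made up): the receiver compares the sequence numbers it holds against what was sent, and keeps requesting the gaps until the message is complete.

```python
# Packets as (sequence_number, payload); packet 2 is "lost" in transit.
sent = [(0, b"Hel"), (1, b"lo,"), (2, b" wo"), (3, b"rld")]
received = {seq: data for seq, data in sent if seq != 2}

# Receiver notices the gap and asks for retransmission until all arrive.
while len(received) < len(sent):
    missing = {seq for seq, _ in sent} - set(received)
    for seq, data in sent:
        if seq in missing:
            received[seq] = data  # the retransmitted copy arrives

# Reassemble in sequence order, mirroring the original data.
message = b"".join(received[seq] for seq in sorted(received))
assert message == b"Hello, world"
```

Real TCP layers acknowledgments, timeouts, and flow control on top of this basic idea, which is exactly the machinery application developers get to ignore.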
On January 1, 1983, “flag day,” TCP/IP would become the exclusive communication protocol for ARPANET.
Also in 1983, Paul Mockapetris proposed a distributed database of internet name and address pairs, now known as the Domain Name System (DNS). This is essentially a distributed “phone book” linking a domain’s name to its IP address, allowing you to type in something like todayifoundout.com, instead of the IP address of the website. The distributed version of this system allowed for a decentralized approach to this “phone book.” Previous to this, a central HOSTS.TXT file was maintained at Stanford Research Institute that then could be downloaded and used by other systems. Of course, even by 1983, this was becoming a problem to maintain and there was a growing need for a decentralized approach.
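The delegation idea behind that distributed "phone book" can be sketched with ordinary dictionaries. This is a toy model (the domain names are examples and the addresses are reserved RFC 5737 test addresses, not real ones), and real DNS adds root servers, caching, and many record types; but the shape is the same: no single file holds every name, and a resolver walks from the top-level domain down to the zone that is authoritative for the record.

```python
# Each "zone" is authoritative only for its own names -- a stand-in
# for DNS delegation. Addresses are made-up test values.
zones = {
    "com": {"example.com": "203.0.113.1"},
    "org": {"example.org": "203.0.113.7"},
}

def resolve(name):
    """Walk from the top-level domain down to the record,
    like a resolver querying one zone's server at a time."""
    tld = name.rsplit(".", 1)[-1]   # "example.org" -> "org"
    return zones[tld][name]

assert resolve("example.org") == "203.0.113.7"
```

Because each zone maintains only its own entries, the system scales in a way the central HOSTS.TXT file never could.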
This brings us to 1989 when Tim Berners-Lee of CERN (European Organization for Nuclear Research) developed a system for distributing information on the Internet and named it the World Wide Web.
What made this system unique from existing systems of the day was the marriage of the hypertext system (linked pages) with the internet; particularly the marriage of one directional links that didn’t require any action by the owner of the destination page to make it work as with bi-directional hypertext systems of the day. It also provided for relatively simple implementations of web servers and web browsers and was a completely open platform making it so anyone could contribute and develop their own such systems without paying any royalties. In the process of doing all this, Berners-Lee developed the URL format, hypertext markup language (HTML), and the Hypertext Transfer Protocol (HTTP).
Around this same time, one of the most popular alternatives to the web, the Gopher system, announced it would no longer be free to use, effectively killing it with many switching to the World Wide Web. Today, the web is so popular that many people often think of it as the internet, even though this isn’t the case at all.
Also around the time the World Wide Web was being created, the restrictions on commercial use of the internet were gradually being removed, which was another key element in the ultimate success of this network.
Next up, in 1993, Marc Andreessen led a team that developed a browser for the World Wide Web, named Mosaic. This was a graphical browser developed via funding through a U.S. government initiative, specifically the “High Performance Computing and Communications Act of 1991.”
This act was partially what Al Gore was referring to when he said he “took the initiative in creating the Internet.” All political rhetoric aside (and there was much on both sides concerning this statement), as one of the “fathers of the internet,” Vinton Cerf said, “The Internet would not be where it is in the United States without the strong support given to it and related research areas by the Vice President [Al Gore] in his current role and in his earlier role as Senator… As far back as the 1970s, Congressman Gore promoted the idea of high speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship… His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.” (For more on this controversy, see: Did Al Gore Really Say He Invented the Internet?)
As for Mosaic, it was not the first web browser, as you’ll sometimes read, simply one of the most successful until Netscape came around (which was developed by many of those who previously worked on Mosaic). The first ever web browser, called WorldWideWeb, was created by Berners-Lee. It had a nice graphical user interface; allowed for multiple fonts and font sizes; allowed for downloading and displaying images, sounds, animations, and movies; and let users edit the web pages being viewed in order to promote collaboration. However, this browser only ran on the NeXTSTEP operating system, which most people didn’t have because of the extremely high cost of NeXT computers. (NeXT was owned by Steve Jobs, so you can imagine the cost bloat… ;-))
In order to provide a browser anyone could use, the next browser Berners-Lee developed was much simpler and, thus, versions of it could be quickly developed to be able to run on just about any computer, for the most part regardless of processing power or operating system. It was a bare-bones inline browser (command line / text only), which didn’t have most of the features of his original browser.
Mosaic essentially reintroduced some of the nicer features found in Berners-Lee’s original browser, giving people a graphic interface to work with. It also included the ability to view web pages with inline images (instead of in separate windows as other browsers at the time). What really distinguished it from other such graphical browsers, though, was that it was easy for everyday users to install and use. The creators also offered 24 hour phone support to help people get it setup and working on their respective systems.
And the rest, as they say, is history.
Bonus Internet Facts:
The first domain ever registered was Symbolics.com on March 15, 1985. It was registered by the Symbolics Computer Corp.
The “//” forward slashes in any web address serve no real purpose according to Berners-Lee. He only put them in because, “It seemed like a good idea at the time.” He wanted a way to separate the part the web server needed to know about, for instance “www.todayifoundout.com”, from the other stuff which is more service oriented. Basically, he didn’t want to have to worry about knowing what service the particular website was using at a particular link when creating a link in a web page. “//” seemed natural, as it would to anyone who’s used Unix based systems. In retrospect though, this was not at all necessary, so the “//” are essentially pointless.
Berners-Lee chose the “#” for separating the main part of a document’s url with the portion that tells what part of the page to go to, because in the United States and some other countries, if you want to specify an address of an individual apartment or suite in a building, you classically precede the suite or apartment number with a “#”. So the structure is “street name and number #suite number”; thus “page url #location in page”.
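These seams Berners-Lee chose are still exactly how URLs get parsed today. A quick sketch using Python’s standard `urllib.parse` module (the URL itself is just an example):

```python
from urllib.parse import urlparse

parts = urlparse("http://www.todayifoundout.com/articles/history#internet")

assert parts.scheme == "http"                     # the service, before "://"
assert parts.netloc == "www.todayifoundout.com"   # what the web server needs to know
assert parts.path == "/articles/history"          # the document on that server
assert parts.fragment == "internet"               # after "#": location within the page
```

Note that the fragment never gets sent to the server at all; the browser uses it locally to jump to the right spot on the page, matching the “#suite number” analogy.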
Berners-Lee chose the name “World Wide Web” because he wanted to emphasize that, in this global hypertext system, anything could link to anything else. Alternative names he considered were: “Mine of Information” (Moi); “The Information Mine” (Tim); and “Information Mesh” (which was discarded as it looked too much like “Information Mess”).
Pronouncing “www” as individual letters “double-u double-u double-u” takes three times as many syllables as simply saying “World Wide Web.”
Most web addresses begin with “www” because of the traditional practice of naming a server according to the service it provides. So outside of this practice, there is no real reason for any website URL to put a “www” before the domain name; the administrators of whatever website can set it to put anything they want preceding the domain or nothing at all. This is why, as time goes on, more and more websites have adopted allowing only putting the domain name itself and assuming the user wants to access the web service instead of some other service the machine itself may provide. Thus, the web has more or less become the “default” service (generally on port 80) on most service hosting machines on the internet.
The earliest documented commercial spam message on the internet is often incorrectly cited as the 1994 “Green Card Spam” incident. However, the actual first documented commercial spam message was for a new model of Digital Equipment Corporation computers and was sent on ARPANET to 393 recipients by Gary Thuerk in 1978.
The famed Green Card Spam was sent April 12, 1994 by a husband-and-wife team of lawyers, Laurence Canter and Martha Siegel. They bulk-posted advertisements for immigration law services to Usenet newsgroups. The two defended their actions citing freedom of speech rights. They also later wrote a book titled “How to Make a Fortune on the Information Superhighway,” which encouraged and demonstrated to people how to quickly and freely reach over 30 million users on the Internet by spamming.
Though it wasn’t called spam back then, telegraphic spam messages were extremely common in the 19th century, particularly in the United States. Western Union allowed telegraphic messages on its network to be sent to multiple destinations, so wealthy American residents tended to receive numerous unsolicited investment offers and the like via telegram. This wasn’t nearly as much of a problem in Europe, where telegraphy was regulated by post offices.
The word “internet” was used as early as 1883 as a verb and adjective to refer to interconnected motions, but almost a century later, in 1982, the term would, of course, be used to describe a worldwide network of fully interconnected TCP/IP networks.
In 1988, the very first massive computer virus in history called “The Internet Worm” was responsible for more than 10 percent of the world’s Internet servers shutting down temporarily.
The term “virus,” as referring to self-replicating computer programs, was coined by Frederick Cohen, who was a student at the University of Southern California’s School of Engineering. He wrote such a program for a class. This “virus” was a parasitic application that would seize control of the computer and replicate itself on the machine. He then specifically described his “computer virus” as “a program that can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself.” Cohen went on to be one of the first people to outline proper virus defense techniques. He also demonstrated in 1987 that no algorithm could ever detect all possible viruses.
Though it wasn’t called such at the time, one of the first ever computer viruses was called “Creeper” and was written by Bob Thomas in 1971. He wrote this virus to demonstrate the potential of such “mobile” computer programs. The virus itself wasn’t destructive and simply printed the message “I’m the creeper, catch me if you can!” Creeper spread about on ARPANET. It worked by finding open connections and transferring itself to other machines. It would also attempt to remove itself from the machine that it was just on, if it could, to further be non-intrusive. The Creeper was ultimately “caught” by a program called “the reaper” which was designed to find and remove any instances of the creeper out there.
While terms like “Computer Worm” and “Computer Virus” are fairly commonly known, one less commonly heard term is “Computer Wabbit.” This is a program that is self-replicating, like a computer virus, but does not infect any host programs or files. The wabbits simply multiply themselves continually until eventually causing the system to crash from lack of resources. The term “wabbit” itself references how rabbits breed incredibly quickly and can take over an area until the environment can no longer sustain them. Pronouncing it “wabbit” is thought to be in homage to Elmer Fudd’s pronunciation of “rabbit.”
Computer viruses/worms don’t inherently have to be bad for your system. Some are designed to improve a system as they infect it. For instance, as noted previously, the Reaper was designed to seek out and destroy every instance of the Creeper it found. Another virus designed by Cohen would spread itself to all executable files on a system. Rather than harm them, though, it would simply safely compress them, freeing up storage space.
Al Gore was one of the so called “Atari Democrats.” These were a group of Democrats that had a “passion for technological issues, from biomedical research and genetic engineering to the environmental impact of the greenhouse effect.” They basically argued that supporting development of various new technologies would stimulate the economy and create a lot of new jobs. Their primary obstacle in political circles, which are primarily made up of a lot of “old fogies,” was simply trying to explain a lot of the various new technologies, in terms of why they were important, to try to get support from fellow politicians for these things.
Gore was also largely responsible for popularizing the term “Information Superhighway” in the 1990s. He first used it publicly back in 1978, at a meeting of computer industry workers. Originally the term didn’t mean the World Wide Web; rather, it meant a system like the Internet. With the Web’s rise in popularity, however, the three terms became synonymous. In that speech, Gore used “Information Superhighway” as an analogy to the Interstate Highways, referencing how they stimulated the economy after the passage of the National Interstate and Defense Highways Act of 1956, a bill introduced by Gore’s father. The act created a boom in the housing market, an increase in citizens’ mobility, and a subsequent boom in new businesses along the highways. Gore felt an “information superhighway” would have a similar positive economic effect.
An Air Force B-1B Lancer strategic bomber taking part in a training exercise with South Korean forces was threatened by the Chinese while in international airspace.
According to a report by FoxNews.com, the threat came while the Lancer was over the East China Sea. China set up an air-defense identification zone over the East China Sea in 2013, according to the state news agency Xinhua.
The B-1B Lancer carried out its training mission despite the threat. The United States and South Korea are conducting Foal Eagle, an annual joint exercise that North Korea has long protested. A DOD release from earlier this month notes that more than 30,000 American and South Korean troops are taking part.
China has had a history of harassing American aircraft and naval vessels in the South China Sea, including the 2001 EP-3 incident, when an EP-3E Aries II electronic surveillance plane collided with a People’s Liberation Army Navy J-8 Finback fighter. The Chinese pilot was killed in the collision, while the EP-3E made an emergency landing. The crew was held for ten days by the Chinese.
While the South China Sea is a well-known flashpoint, the East China Sea is also the location of maritime disputes, including one between China and Japan over the Senkaku Islands.
The Army and Air Force once conducted an air-to-air combat experiment between jet fighters and attack helicopters. Called J-CATCH, or Joint Countering Attack Helicopter, it was not the first experiment of its kind, but it was the most conclusive conducted with modern technology.
The results showed attack helicopters proved remarkably deadly when properly employed against fighter aircraft. And it wasn’t even close.
The Army had first conducted such tests using MASH-era Sikorsky H-19s, airframes developed in the 1940s and ’50s, but the modern J-CATCH test began in 1978, as the Soviet Union expanded its helicopter forces. Of special concern was the development of the Mil Mi-24 “Hind” helicopter gunship. The four-phase J-CATCH experiment started in earnest with the Army, Marines, and Air Force participating in simulations at NASA’s Langley labs.
The second phase was a field test pitting three AH-1 Cobras and two OH-58 Scouts against a Red Team force of UH-1 Twin Hueys and CH-3E Sea Kings. It developed many new helicopter air-to-air tactics and maneuvers designed to counter the Russian Hind.
Phase Three is where the fighters came in. The Air Force chose F-4, A-7, A-10, and F-15 fighter aircraft to counter whatever the Army could muster in the exercise. The F-4 and F-15 were front line fighters with anti-air roles while the A-7 and A-10 had air-to-ground missions.
For two weeks, the helicopters trounced the fighter aircraft. Fighter pilots in the test runs sometimes didn’t even know they had been attacked or “destroyed” until the exercise’s daily debriefing. The Army pilots were so good they had to be ordered to follow Air Force procedures and tell their fixed-wing targets over the radio that they were under attack. This only increased the kill ratio, which by the end of the exercise had risen to 5-to-1 in favor of the helicopters.
The fourth phase of the exercise saw the final outcome of the test: fighters should avoid helicopters at all costs, unless they have superiority of distance or altitude.
If there’s such a thing as revenge served warm, the story of Robert Smalls best describes it. Smalls was born into slavery in Beaufort, South Carolina, in 1839. By the age of 12, he had been hired out by his master in Charleston, working the hotels, docks, and wharves of Charleston Harbor.
It was while working in the hotels that he met his wife, Hannah Jones, whom he married in 1856. She already had a daughter, and the couple had a son and a daughter of their own. At the outbreak of the Civil War in 1861, Smalls was pressed into service aboard the CSS Planter, a Confederate transport. This is where he would make history.
While the Planter’s three white officers were ashore, the seven slave crewmen decided to make a break for the Union blockade. The escape wasn’t a spur-of-the-moment decision. They planned it meticulously, even picking up their families, who were hiding near the southern wharf.
Smalls brought the ship and its cargo of cannon and ammunition to the Union, along with the Confederate Navy’s code book and a map of Charleston’s harbor defenses.
President Lincoln and the U.S. Congress awarded the prize money for the capture of the Planter to Smalls and his crew. Smalls’ bravery and skill became an instrumental argument for allowing black troops to fight for the Union.
“My race needs no special defense, for the past history of them in this country proves them to be equal of any people anywhere,” Smalls said. “All they need is an equal chance in the battle of life.”
Smalls himself enlisted with the Union as a Naval pilot, eventually ending up back on the Planter, now a Union transport, as a free man. He piloted the USS Keokuk during a major attack on Fort Sumter in Charleston Harbor.
When the Keokuk’s skipper wanted to surrender during the failed assault, Smalls took command and got the ship to safety. For this, he was promoted to the Keokuk’s captain. When the war ended, Smalls took the Planter back to Charleston for the ceremonial raising of the American flag at Fort Sumter.
He returned to Beaufort, S.C., as a free man during Reconstruction. He opened a store for newly freed slaves and purchased his old master’s house, eventually allowing his old master’s wife to move back in shortly before her death. The house still stands.
Smalls went on to serve in the South Carolina House of Representatives and Senate as well as the South Carolina militia as a major general. He was eventually elected to represent South Carolina in the U.S. House of Representatives and served for four years before his death in 1915.
Becker, a 33-year-old native of Novi, Michigan, was a pilot for the squadron. He is survived by his spouse, mother and father, the release said.
Dellecker, 26, was a co-pilot from Daytona Beach, Florida. He is survived by his mother and father.
Dalga, 29, was a combat systems officer from Goldsboro, North Carolina. He is survived by his spouse, son and mother.
The crash occurred a quarter-mile east of Clovis Municipal Airport at 6:50 p.m. on Tuesday, according to a release from the base. The cause of the accident is under investigation.
Kyle Berkshire, director of the airport, told local NBC affiliate KOB4 News on Wednesday the plane was observed performing “touch and goes” on the runway during a training sortie.
“We are deeply saddened by this loss within our Air Commando family,” Col. Ben Maitre, the base commander, said in a release on Wednesday. “Our sympathies are with the loved ones and friends affected by this tragedy, and our team is focused on supporting them during this difficult time,” he said.
The 318th was activated in 2008 under Air Force Special Operations Command to provide “battlefield mobility for our special operations forces,” according to then-Col. Timothy Leahy, the former wing commander.
The unit is tasked with flying a variety of light and medium aircraft known as non-standard aviation, according to a service release. The squadron operates PC-12 aircraft — designated as the U-28A in the Air Force — for intra-theater airlift missions, the release said.
The U-28A is operated by the 319th, 34th and 318th Special Operations squadrons, according to the Air Force. Training is conducted by the 5th and 19th Special Operations squadrons. The units are located at Cannon and at Hurlburt Field, Florida.
When Kary Kleman decided in 2015 to move his family from their home in Dubai to war-torn Syria, he assured relatives back in the U.S. that he had only good intentions.
“He said he could not live in a life of luxury knowing what was going on in Syria, and that nobody was helping the people there,” said his mother, Marlene, on April 26. “We believe he has a good heart.”
When told of his situation by the Guardian, Kleman’s family denied that he had joined Isis and said he had been trying to make his way to the American embassy in Istanbul and return to the U.S.
Not long after arriving in Syria, Kleman told them he had learned the information that led him there “was all a scam,” according to his mother, and his situation became confusing to his family.
Relatives said that about 18 months ago, they alerted the FBI that Kleman may be in danger. An agent told them the bureau needed to look into whether he had become involved with wrongdoing, according to Kleman’s sister, Brenda, who said she “completely agreed” with their caution.
“I told Kary that you have to work with them, and if you’ve done everything right, be calm and it will work out,” she said.
The U.S. state department and the FBI’s field office in Jacksonville, Florida, had not responded to questions about Kleman and his alleged activities by the time this article was published.
Kleman, who converted to Islam about 15 years ago, was born in Wisconsin in July 1970, according to official records. He attended West High School in the city of Wausau. He later moved to northern Florida, where he met Denise Eberhardy, a divorcee. The couple had a son, Spencer, in June 1991.
Kleman and Eberhardy were married at the Glad Tidings church in Jacksonville in January 1997. But Kleman filed for divorce in 2001. In May that year, a circuit judge agreed that the marriage was “irretrievably broken”, and granted a dissolution.
Marlene Kleman said on April 26 it was around this time that her son converted to Islam. A friend, whom she could only name as Dave, had converted after marrying a woman from the United Arab Emirates, and guided Kleman into the faith during a difficult time. Kleman grew a beard and became devout.
Through his mosque, Kleman met Maher Abdelwahab, a local Egyptian American businessman, and began working for Abdelwahab’s company, which imported and sold fresh produce.
Abdelwahab told him about a daughter he had back in Egypt, according to Kleman’s mother. He showed Kleman photographs, and soon the pair were talking over email. Kleman went to Egypt and the couple married and had a son. But the relationship soured and Kleman came to believe he was being exploited.
After a spell back in Florida, Kleman moved in 2011 to Dubai to be near his friend Dave, who had by then emigrated with his wife. He met a Syrian girlfriend; they married and had three children.
As the long civil war raged in his wife’s homeland, however, Kleman grew troubled, according to his family. He told his mother that he was taking his wife and children to Syria. As they departed around August 2015, he said he wanted to help the people affected by the conflict, possibly working as a handyman or setting up a business.
At the time, Isis was continuing a brutal series of suicide bombings and massacres to defend territory it had seized in Syria, while coming under bombardment from U.S. airstrikes. Gruesome video footage of abducted Americans being beheaded by Isis fighters had shocked the U.S. public through 2014.
Initially, his stated plan seemed to go smoothly. His wife had a job teaching English, according to Kleman’s mother, and things were going OK.
“Then everything went bad,” said Marlene Kleman. “They were saying Isis had taken control of the city and that Russia was bombing the city, so that’s when they planned to escape.”
Up to 30,000 foreign fighters are thought to have crossed into Syria to fight with Isis. The U.S. government estimates that as many as 25,000 of them have since been killed.
You can run, but you can’t hide – especially in the age of satellites, hand-held GPS devices, Google Earth and inexpensive, camera-bearing drones.
So with easy surveillance tools in the hands of a technologically unsophisticated enemy, how does a unit hide its command post?
During the recent Large Scale Exercise 2016, I Marine Expeditionary Force experimented with a new tent setup for its command post, or CP, that included big swaths of tan-and-drab camouflage netting draped over hard structures and tents.
The idea, of course, was to disguise – if not hide – the presence and footprint of the command post that I MEF Headquarters Group set up for the exercise, a de facto MEF-level command wargaming drill that ran Aug. 14 to 22. During a similar exercise in February 2015, its top commander acknowledged the large footprint occupied by his field command post, then set up in a field at Camp Pendleton, California, but without any camo netting.
The command post was, frankly, large, and it was obvious the tents and structures were something important to the battle effort. That makes it a big target, whether seen on the ground by line of sight or from the air by drones, aircraft or satellite imagery, officials say.
This year, intent on better concealment, headquarters group Marines looked at ways to hide the lines and structures of the CP. They came up with a new camo netting design and refined it with some bird’s-eye scrutiny.
The Leathernecks went “back to basics,” one officer said.
“We flew a drone over it. Now, it’s a little bit more ambiguous,” Col. Matthew Jones, the I MEF chief of staff, said last week as the command worked through the exercise’s final day from its CP set up in a dusty field. “It’s just camouflaged, it’s a lot better concealed.”
MEF officials declined to reveal the secret sauce of the new CPX camo set they used. “This is the state of the art right now,” said Jones.
Still, he acknowledged camouflage netting has some limitations, saying, “I won’t say it won’t look like a hard military installation.”
“The fact is, it’s clearly visible from space,” he added. “You can’t mistake it. Even if it’s camouflaged. … It’s big enough to be worth shooting at.”
In fact, camouflage and concealment are as basic to warfighting – whether on the offensive or defense – as weaponry.
It’s all about deception – hiding your capabilities and your location, which taken together might help spell out your intentions, unintentional as that may be. Deception like camouflage can mask your true force strength, combat power and, more so these days, technological capabilities. But a collection of tents and structures, and the presence of radio antennas, satellite dishes, power generators and containers, can spell out the obvious presence of an important headquarters.
“If you can be seen, you will be attacked,” Gen. Robert Neller, the commandant of the Marine Corps, told a Center for Strategic and International Studies audience on Aug. 6.
Neller relayed I MEF’s experience with camouflaging the field CP, which despite netting efforts still had the vulnerability of detection from light shining off concertina wire that encircled the facilities. He wants Marines to get back to the basics of fieldcraft, like “digging a hole, preparing a defensive position, and camouflaging that, living in the field, and not going back to a [forward operating base] overnight to check your email.”
That will be more relevant, top leaders have noted, as more Marines deploy and operate in the dispersed, distributed battlefield of the near future.
And it’s not just the physical look that I MEF and the Marine Corps want to change. Trendy gadgets and new technologies make it easier to detect and interfere with electronic signals. Such electronic surveillance poses real threats to military command networks and command and control.
“We are working really hard on our electronic signatures … that would make it easier for the enemy to detect you,” Jones said. It’s especially critical if U.S. forces get into a fight against a peer or near-peer adversary with similar surveillance capabilities, so “maybe we need to be thinking of other ways.”