Rear Admiral Zhang Zhaozhong is a leader in the Chinese Navy, a professor at Beijing’s National Defense University, Chief Weapons Specialist and Strategist, and what some call “the Head of the Strategic Fool You Agency.”
He earned the nickname because Chinese netizens came to realize that Admiral Zhang’s pronouncements tended to mean the opposite of what actually happened. Even as China commissioned its first aircraft carrier, he said the Chinese would use fishermen on wooden boats to take out the new Zumwalt-class destroyers, and that the Chinese defense against U.S. submarines would be “ropes of seaweed,” a threat the U.S. did not foresee. He also publicly claimed the Chinese were not developing a fifth-generation stealth jet shortly before China flight-tested its J-20 fighter in January 2011.
Admiral Zhang once criticized U.S. media for overestimating the threat of Chinese power. He said China could not keep pace with the U.S. even if it wanted to, which it doesn’t. He acknowledges the need to increase the strength of the Chinese military, but only because of “provocation” from the United States.
“American media like to make claims about how fast China’s military will surpass the United States,” Zhang told Want China Times. “What I have to say is that China is not going to catch up with the United States even if it stopped all military projects.”
The admiral publicly stated China should do everything in its power to protect Iran from U.S.-Israeli aggression, “even if it means a third world war.” In response to the U.S. deploying a laser weapon on the USS Ponce, he said he believes the smog covering Chinese cities is the best defense against laser weapons.
“Under conditions where there is no smog, a laser weapon can fire [at a range of] 10km (6 miles),” he said, adding, “When there’s smog, it’s only 1km. What’s the point of making this kind of weapon?”
Zhang “retired” from the PLAN in 2015 and is now the most well-known and most senior military commentator on China’s state television.
The Department of Defense identified a sailor killed in action on Nov. 24 during Operation Inherent Resolve as Senior Chief Petty Officer Scott C. Dayton.
The 42-year-old from Woodbridge, Virginia, died when an improvised explosive device detonated in northern Syria, near Ayn Issa, according to a release from the headquarters of Combined Joint Task Force Inherent Resolve, which is coordinating the fight against the Islamic State of Iraq and Syria, also known as the Islamic State of Iraq and the Levant, or ISIL.
The Wall Street Journal reports that Dayton was killed north of Raqqa, a key battleground pitting Syrian government forces, rebel units and militants aligned with ISIS against one another.
Dayton was assigned to Explosive Ordnance Disposal Mobile Unit Two, based out of the Norfolk area. According to the Navy Expeditionary Combat Command website, explosive ordnance disposal personnel specialize in rendering explosive hazards safe, and have done anything from dismantling IEDs in Iraq and Afghanistan to helping secure the Olympics to supporting a local police department.
Navy bomb technicians like Dayton often are assigned to special operations teams like SEAL Team 6, which is known to be operating with rebel units deep inside war-torn Syria.
EOD is one field that can be very busy, even in peacetime, often due to unexploded ordnance from past wars. In recent years, BALTOPS exercises have come across live mines left over from World War II, and some Civil War souvenirs have caused major kerfuffles in the United States.
“I am deeply saddened by the news on this Thanksgiving Day that one of our brave service members has been killed in Syria while protecting us from the evil of ISIL. It is a painful reminder of the dangers our men and women in uniform face around the world to keep us safe,” Secretary of Defense Ashton Carter said in a statement released by the Defense Department.
Combined Joint Task Force – Operation Inherent Resolve commander Lt. Gen. Stephen J. Townsend, said, “The entire counter-ISIL Coalition sends our condolences to this hero’s family, friends and teammates.”
Over the Thanksgiving Day weekend, members of the anti-ISIS coalition launched a total of 90 strikes, 19 of which were around the northern Iraqi city of Mosul. Those 19 strikes destroyed or damaged a number of targets, including 15 mortar systems, eight vehicles, four vehicle-borne IEDs, 22 supply routes, five caches and two heavy machine guns.
Six strikes took place near Ayn Issa, engaging four ISIS “tactical units,” destroying a vehicle storage facility, a vehicle-borne IED and a vehicle-borne IED facility, and damaging a fighting position.
But the fad didn’t make its debut on a famous red carpet or in an elegant fashion show — it’s the brilliant invention of the U.S. Navy.
Although no one has been officially credited with inventing the bell-bottom trouser, the flared look was introduced for sailors in 1817. The new design allowed the young men who washed down the ship’s deck to roll their pant legs up above their knees to protect the material.
This modification also reduced the time it took to remove the trousers when sailors needed to abandon ship at a moment’s notice. The trousers could even double as a life preserver when the pant legs were knotted.
The Luftwaffe terrorized Europe during WWII. Blitzkrieg attacks by panzers and motorized infantry were supported by German fighters and bombers. Bearing the names of their designers, Junkers, Heinkel, and Messerschmitt became infamous among the Allied nations. Messerschmitt was best known for its fighter planes including the Luftwaffe’s primary fighter, the Bf 109, and the jet-powered Me 262. Although the company survived the war, it was barred from producing aircraft for ten years.
The war left Germany in a poor state. Its economy was in shambles, infrastructure was badly damaged, and manufacturing was nearly nonexistent. As the country and the continent rebuilt, fears of roadway congestion weighed heavy on people’s minds. Coupled with the scarcity and high cost of resources, European engineers turned to a radical new automobile design: the micro car.
Fritz Fend was a former Luftwaffe aeronautical engineer and technical officer. In 1948, he began building invalid carriages for disabled people. He noticed that his most popular model, the gasoline-powered Fend Flitzer tricycle, was also being purchased by able-bodied people for personal transport. Fend concluded that a two-seater model would be even more popular and adapted his design. He struck a deal with Messerschmitt to produce his new micro car at their Regensburg factory.
In 1953, Messerschmitt introduced the Kabinenroller, or “Cabin Scooter.” Based on the Flitzer, the Kabinenroller featured a monocoque chassis and a bubble canopy. Contrary to popular belief and despite their design similarities, the Kabinenroller canopies were not surplus Messerschmitt fighter canopies. The Kabinenroller platform was used to make the Messerschmitt KR175, the more powerful KR200, and the KR201 roadster. In 1956, another German company named FMR took over Kabinenroller production from Messerschmitt. Although the KR series micro cars still bore the Messerschmitt name and logo, Fend later adapted the platform into a sports car that was badged FMR.
Introduced in 1958, the Tg500 featured the same monocoque chassis, tandem seating, and bubble canopy as the Kabinenroller tricycles. However, it was fitted with a larger engine for increased speed and four wheels for improved performance. Unofficially, the “Tg” stood for Tiger, a name that stuck with the car. Confusingly, the name “Tiger” was not only the name of the most feared German tank of WWII, but also the name of a post-war truck produced by former tank maker Krupp. Despite being manufactured by FMR, the micro car Tiger is sometimes referred to as the Messerschmitt Tiger, a name that can confuse even the most ardent of WWII enthusiasts.
Because three-wheeled cars could be driven with a more affordable motorcycle license, Kabinenrollers were extremely popular in Britain, where they still maintain a loyal following. Overall, though, the Kabinenroller was not a commercial success. Today, surviving Kabinenrollers are novelties that can fetch tens of thousands of dollars depending on their condition.
North Korea’s awful record of human rights violations may place it as the worst regime in the world in how it treats its people, but first-hand tales of the abuses rarely slip past the secretive country’s borders.
In the video, women defectors who formerly served in North Korea’s military sit down with a South Korean host in a military-themed restaurant famous for its chicken. The cultural divide between the two Korean women becomes palpable when the North Korean points to mock ammunition decorating the restaurant, and the South Korean says she recognizes them from comics.
“Aww, you’re so adorable,” the North Korean replied.
The defector explained that all North Korean women must serve in the military for six years, and all men must serve for 11. During that time, she said she was fed three spoonfuls of rice at mealtimes.
Unsurprisingly, malnutrition is widespread across all sectors of North Korea. And despite North Korea being a communist country, the defector still said that even within the military, people badly want money and withhold or steal each other’s state-issued goods, like military uniforms.
The defector said that in North Korea, women are taught that they’re not as smart, important, or as strong as men.
A second defector said that the officers in charge of uniform and ration distribution would often leverage their position to coerce sex from female soldiers. “Higher-ranked officers sleeping around is quite common,” said the second woman.
But the first defector had a much more personal story.
“I was in the early stages of malnutrition… I weighed just around 81 pounds and was about 5’2,” said the defector. Her Body Mass Index, though not a perfect indicator of health, works out to about 15, where a healthy body is considered to have a BMI of about 19-25.
“The major general was this man who was around 45 years old and I was only 18 years old at the time,” she said. “But he tried to force himself on me.”
“So one day he tells everyone else to leave except for me. Then he abruptly tells me to take off all my clothes,” she said. The officer told her he was inspecting her for malnutrition, possibly to send her off to a hospital where undernourished soldiers are treated.
“So since I didn’t have much of a choice, I thought, well, it’s the Major General. Surely there’s a good reason for this. I never could have imagined he’d try something,” she said. But the Major General asked her to remove her underwear, and “then out of nowhere, he comes at me,” she said.
The Major General then proceeded to beat her while she loudly screamed, so he covered her mouth. She said he hit her so hard in the left ear, that blood came out of her right ear. She said the beating was so severe her teeth were loose afterwards.
“How do you think this is going to make me look?” the Major General asked her after the beating. He then instructed her to get dressed and tell no one what happened, or he would “make [her] life a living hell.”
“There wasn’t really anyone I could tell or report this to,” she said. “Many other women have gone through something similar.”
“I don’t know whether he’s dead or alive, but if Korea ever gets reunified, I’m going to find him and even if I can’t make him feel ten times the pain I felt, I want to at least smack him on the right side of his face the same way he did to me,” she said.
If you take a peek at a list of pilots who were considered flying aces during WW2, you’ll notice that the top of the list is dominated by Luftwaffe pilots, some of whom scored hundreds of aerial victories during the war. While their skill and prowess in the air is undeniable, it’s arguable that the finest display of aerial combat during WW2 was achieved, mostly by luck, by an American B-24 co-pilot who scored a single enemy kill with nothing but a handgun, at about 4,000-5,000 feet (roughly 1.2 to 1.5 km) in altitude, and without a plane. This is the story of Owen Baggett.
Born in 1920 in Texas, after finishing high school, Baggett moved to the city of Abilene to enroll in Hardin–Simmons University. While we were unable to discern what Baggett studied from the sparse amount of information available about his early life, the fact that he went to work at Johnson and Company Investment Securities in New York after graduating suggests he studied finance, business, or another similar subject.
Whatever the case, while still working at the investment firm in New York in December of 1941, Baggett volunteered for the Army Air Corps and reported for basic pilot training at the New Columbus Army Flying School.
After graduating from basic training, Baggett reported for duty in India, just a stone’s throw away from Japanese occupied Burma with the Tenth Air Force. Baggett eventually became a co-pilot for a B-24 bomber in the 7th Bomb Group based in Pandaveswar and reached the rank of 2nd Lieutenant. During his time with the 7th Bomb Group, Baggett’s duties mainly consisted of flying bombing runs into Burma and helping defend allied supply routes between India and China.
Baggett’s career was mostly uneventful, or at least as uneventful as it could be given the circumstances, for around a year until he was called upon to take part in a bombing run on March 31, 1943. The mission itself was fairly simple- Baggett and the rest of the 7th Bomb Group were to fly into Burma and destroy a small, but vital railroad bridge near the logging town of Pyinmana.
However, shortly after taking off, the (unescorted) bombers of the 7th Bomb Group were attacked by a few dozen Japanese Zero fighters. During the ensuing dogfight, the plane’s emergency oxygen tanks were hit, severely damaging the craft. Ultimately, 1st Lt. Lloyd Jensen gave the order for the crew to bailout. Baggett relayed the order to the crew using hand signals (since their intercom had also been destroyed) and leapt from the aircraft with the rest of the surviving crew.
Not long after the crew bailed out, the attacking Japanese Zeros began training their guns on the now-defenceless crewmen lazily floating toward the ground.
Baggett would later recall seeing some of his crewmates being torn to pieces by gunfire (in total, 5 of the 9 aboard the downed bomber were killed). As for himself, a bullet grazed his arm, but he was otherwise fine. In a desperate bid to stay that way, Baggett played possum, hanging limp in his parachute’s harness.
According to a 1996 article published in Air Force Magazine, this is when Baggett spotted an enemy pilot flying his plane slowly, nearly vertical and close to stalling, to check whether Baggett was dead or not, even opening his canopy to get a better look. When the near-stalling plane came within range, Baggett ceased playing dead, pulled his M1911 from its holster, aimed it at the pilot, and squeezed the trigger four times. The plane soon stalled out, and Baggett didn’t notice what happened after, thinking little of the incident, as he was more concerned with the other fighters taking pot shots at him and his crew.
After safely reaching the ground, Baggett regrouped with Lt. Jensen and one of the bomber’s surviving gunners. Shortly thereafter, all three were captured, at which point Baggett soon found himself being interrogated. After recounting the events leading up to his capture to Major General Arimura, commander of the Southeast Asia POW camps, Baggett was, very oddly (no one else in his little group was given the opportunity), offered the chance to die with honour by committing harakiri, an offer he refused.
Later, while still a POW, Baggett had a chance encounter with one Col. Harry Melton. Melton informed him that the plane Baggett had shot at had crashed directly after stalling out near him, and that the pilot’s body had (supposedly) been thrown from the wreckage. When the body was recovered, the pilot appeared to have been killed, or at least seriously injured, by a gunshot, at least according to Colonel Melton.
Despite the fact that the plane had crashed after his encounter with it, Baggett remained skeptical that one (or more) of his shots had actually landed, figuring something else must have caused the crash. Nevertheless, his compatriots speculated that this must have been the reason Baggett alone had been given the chance to die with honour by committing harakiri after being interrogated.
Baggett never really talked about his impressive feat after the fact, remaining skeptical that he’d scored such a lucky shot. He uneventfully served the rest of his time in the war as a POW, dropping from a hearty 180 pounds and change to just over 90 during the near two years he was kept prisoner. The camp he was in was finally liberated on September 7, 1945 by the OSS and he continued to serve in the military for several years after WW2, reaching the rank of colonel.
The full details of his lucky shot were only dug up in 1996 by John L. Frisbee of Air Force Magazine. After combing the records to verify or disprove the tale, it turned out that while Col. Harry Melton’s assertion that the pilot in question had been found with a .45 caliber bullet wound could not be verified by any documented evidence, it was ultimately determined that Baggett must have managed to hit the pilot. You see, the plane in question appears to have stalled at approximately 4,000 to 5,000 feet (ample time for the pilot to have recovered from the stall had he been physically able), and, based on official mission reports by survivors, there were no Allied fighters in the vicinity to have downed the fighter and no reports of friendly fire at the slow-moving plane before its ultimate demise. Further, even with some sort of random engine failure, the pilot should still have had some control of the plane, instead of reportedly heading more or less straight down and crashing after the stall.
Five years ago, a phone rang in the 28th Bomb Wing vice commander’s office and made history.
Less than 72 hours later, on March 27, 2011, more than 1,100 maintenance personnel launched four B-1B Lancer bombers from the Ellsworth Air Force Base flightline in blizzard conditions to support Operation Odyssey Dawn. It was the first time the aircraft had ever launched from a continental U.S. location in support of combat operations.
Two B-1s and their eight crew members would continue on and strike targets in Libya; however, executing the mission required round-the-clock communication and personnel.
“I was about halfway through the planning process (of a training sortie), and rumors were making their way around about base leadership convening at the command post,” said Maj. Matthew, a weapons system officer for the operation’s lead B-1. “At about 1 p.m., I was called to the command post with a pilot in my squadron. We were both qualified mission commanders, which clued me in that whatever was going on was likely a real-world event.”
Matthew and many aviators within the 34th and 37th bomb squadrons, as well as maintenance and munitions personnel, were briefed that preparations were underway to organize a strike mission more than 6,000 miles away in Libya.
In less than 20 hours, the conventional munitions element built approximately 145 munitions, enough to load seven B-1s. On the aviation side of the base, aircrews were preparing for takeoff.
“We had the pre-brief, and flew a practice profile in the simulator as well to make sure everyone on the crew had the opportunity to practice the bomb runs,” said Maj. Christopher, co-pilot for the operation’s lead B-1. “The biggest thing going through my mind was trying to absorb every bit of information so that we didn’t mess it up.”
This specific weapons build was the first time many had ever built bombs that would leave a CONUS location to bomb targets.
“Seeing these guys doing their job for real, I was proud of them. I couldn’t have asked for a better crew at the time,” said Master Sgt. Matthew, the 28th Munitions Squadron munitions control section chief.
Maintenance personnel and aircrew were executing their duties in the worst imaginable weather. It was roughly 35 degrees outside with heavy fog, and pilots on the runway could see only one hash-mark ahead.
Maj. Brian, a weapons system officer for the operation’s lead B-1, confessed to slipping multiple times on his way to transportation vehicles, while Maj. Matthew added the most memorable part of the mission was takeoff.
Brian said it was an honor to be selected as one of the crew members, and that he felt it was his duty to reward the faith previous commanders put in him by executing the mission to a weapons officer level.
B-1s arrived in the Libya area of operations 12 hours after takeoff and the crews checked in with command and control. Many aspects had changed between pre-brief and check-in, but the crews divvied up targets and went in for their first strike.
“The mission was the deepest strike made into Libya during OOD, which kept us in hostile airspace for over an hour and a half,” Maj. Matthew said. “(Previous missile strikes) alerted the enemy to our presence, and we immediately saw anti-aircraft artillery fire coming from the ground. It was the first time any of us had seen AAA.”
Poorly aimed artillery fire didn’t concern the aviators, who hit their marks and recovered at a forward operating location. Twenty-four hours later, the second launch began. Nearly 100 targets were hit during the two days.
Executed just 72 hours after that phone call, the mission marked a significant milestone, not only for Ellsworth AFB, but also for the B-1 fleet as a whole.
Maj. Matthew added the mission solidified the B-1 and its aircrew members’ role as a flexible, rapidly-deployable strategic asset. Brian agreed that it showed the skill, dedication and professionalism of the 28th Maintenance Group.
“The fact they were able to generate five green jets, build 145 munitions, all while in the middle of a snow storm on only two days’ notice still amazes me to this day,” Brian said. “We train every day to do precisely that, but the maintainers and weapons troops can’t simulate extreme weather and harsh temperatures. They were the MVPs of Odyssey Dawn in my opinion.”
Master Sgt. Matthew, who led the munitions crew, added the lessons learned from the operation are always an example he brings up when training his fellow munitions Airmen.
“It’s hard to overstate how important the ground support teams were to our success,” Maj. Matthew said. “Without all of the support agencies, from maintenance to airfield operations, transportation, etc., we wouldn’t have been nearly as successful.”
According to mission planners, the B-1 was the only aircraft that could meet the demands of the mission, such as the timeframe and the number of weapons required to hit that many targets.
“Executing the strike proved the aircraft is capable of holding any target in the world at risk, at any time,” said Maj. Donavon, commander of the operation’s lead B-1.
Editor’s note: Last names were removed due to security concerns.
In honor of Russian Aerospace Force Day, the Russian Ministry of Defense has released its first official footage of the fifth-generation stealth aircraft, the PAK FA Sukhoi T-50.
The government unveiled the montage of its prized stealth fighters launching from an aircraft carrier’s ski-jump ramp, along with several other aircraft such as the MiG-29KUB naval fighter and the Su-35S.
Although the T-50s only appear for a few brief seconds, it’s enough to make out the two camouflage patterns of the first new fighters produced after the Cold War.
However, despite the fancy paint job and Russia touting the T-50, critics say its features may fall short of earning it the prized moniker of “fifth-generation aircraft.”
For one, the evolutionary technology onboard the T-50 doesn’t represent the quantum leap that other aircraft, such as China’s Chengdu J-20 or the US’s F-35 Lightning II, have made. Instead, it seems to have inherited the same engine as the Su-35, an aircraft considered to be 4++ generation (between fourth- and fifth-generation).
Additionally, the primary trait of fifth-generation aircraft, namely stealth, is also called into question when compared with others around the world. According to RealClearDefense, in 2010 and 2011, sources close to the program claimed that the T-50’s radar cross section, the measurement of how detectable on radar an object is, was estimated to be 0.3 to 0.5 square meters.
Although these figures may sound impressive, they pale in comparison with the US Air Force F-22 Raptor’s 0.0001-square-meter RCS and the F-35’s 0.001-square-meter RCS.
Despite the controversy, the T-50 excels where other fifth-generation aircraft have not: cost. At more than $50 million per unit, it’s considered a bargain compared with the F-22’s $339 million and the F-35’s $178 million price tags.
While the World Wide Web was initially invented by one person (see: What was the First Website?), the genesis of the internet itself was a group effort by numerous individuals, sometimes working in concert, and other times independently. Its birth takes us back to the extremely competitive technological contest between the US and the USSR during the Cold War.
The Soviet Union sent the satellite Sputnik 1 into space on October 4, 1957. Partially in response, the American government created the Advanced Research Projects Agency in 1958, known today as DARPA (Defense Advanced Research Projects Agency). The agency’s specific mission was to
…prevent technological surprises like the launch of Sputnik, which signaled that the Soviets had beaten the U.S. into space. The mission statement has evolved over time. Today, DARPA’s mission is still to prevent technological surprise to the US, but also to create technological surprise for our enemies.
To coordinate such efforts, a rapid way to exchange data between various universities and laboratories was needed. This brings us to J. C. R. Licklider, who is largely responsible for the theoretical basis of the Internet, which he called an “Intergalactic Computer Network.” His idea was to create a network in which many different computer systems would be interconnected so they could quickly exchange data, rather than having individual systems set up, each one connecting to some other individual system.
He thought up the idea after having to deal with three separate terminals connecting to computers at S.D.C. in Santa Monica, at the University of California, Berkeley, and at MIT:
For each of these three terminals, I had three different sets of user commands. So if I was talking online with someone at S.D.C. and I wanted to talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them…. I said, oh man, it’s obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go where you have interactive computing. That idea is the ARPAnet.
So, yes, the idea for the internet as we know it partially came about because of the seemingly universal human desire to not have to get up and move to another location.
With the threat of a nuclear war, it was necessary to decentralize such a system, so that even if one node was destroyed, there would still be communication between all the other computers. The American engineer Paul Baran provided the solution to this issue; he designed a decentralized network that also used packet switching as a means for sending and receiving data.
Many others also contributed to the development of an efficient packet switching system, including Leonard Kleinrock and Donald Davies. If you’re not familiar, “packet switching” is basically just a method of breaking down all transmitted data—regardless of content, type, or structure—into suitably sized blocks, called packets. So, for instance, if you wanted to access a large file from another system, when you attempted to download it, rather than the entire file being sent in one stream, which would require a constant connection for the duration of the download, it would get broken down into small packets of data, with each packet being individually sent, perhaps taking different paths through the network. The system that downloads the file would then re-assemble the packets back into the original full file.
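The splitting-and-reassembly idea above can be sketched in a few lines of Python; the packet size and the sequence-number header here are invented for this illustration, not anything the ARPANET actually used.

```python
# Illustrative sketch of packet switching: split a message into
# fixed-size packets, deliver them out of order, and reassemble.
# Packet size and the (seq, payload) format are invented here.

def packetize(data: bytes, size: int = 4) -> list[tuple[int, bytes]]:
    """Break data into (sequence_number, payload) packets."""
    return [(i, data[i * size:(i + 1) * size])
            for i in range((len(data) + size - 1) // size)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the original data regardless of arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"LOGIN TO SRI"
packets = packetize(message)
packets.reverse()  # simulate out-of-order arrival over different paths
assert reassemble(packets) == message
```

Because each packet carries its own sequence number, the receiver can rebuild the file even when packets arrive out of order over different routes, which is what frees the network from needing one constant end-to-end connection.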
The platform mentioned above by Licklider, ARPANET, was based on these ideas and was the principal precursor to the Internet as we think of it today. It was installed and operated for the first time in 1969 with four nodes, located at the University of California at Santa Barbara, the University of California at Los Angeles, the Stanford Research Institute (SRI), and the University of Utah.
The first use of this network took place on October 29, 1969 at 10:30 pm and was a communication between UCLA and the Stanford Research Institute. As recounted by the aforementioned Leonard Kleinrock, this momentous communiqué went like this:
We set up a telephone connection between us and the guys at SRI… We typed the L and we asked on the phone,
“Do you see the L?”
“Yes, we see the L,” came the response.
We typed the O, and we asked, “Do you see the O?”
“Yes, we see the O.”
Then we typed the G, and the system crashed… Yet a revolution had begun.
By 1972, the number of computers that were connected to ARPANET had reached twenty-three and it was at this time that the term electronic mail (email) was first used, when a computer scientist named Ray Tomlinson implemented an email system in ARPANET using the “@” symbol to differentiate the sender’s name and network name in the email address.
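Tomlinson’s convention survives unchanged today: everything before the “@” names the user, everything after it names the host machine. A trivial Python illustration (the address below is hypothetical):

```python
# Tomlinson's "@" convention: user@host. A naive split, ignoring the
# many edge cases real address parsing must handle.
def split_address(address: str) -> tuple[str, str]:
    user, _, host = address.partition("@")
    return user, host

# Hypothetical ARPANET-style address for illustration
assert split_address("ray@bbn-tenexa") == ("ray", "bbn-tenexa")
```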
Alongside these developments, engineers created more networks using different protocols, such as X.25 and UUCP. The original communication protocol used by ARPANET was NCP (Network Control Protocol). What was still missing was a protocol that could unite all the many disparate networks.
In 1974, after many failed attempts, a paper published by Vint Cerf and Bob Kahn, also known as “the fathers of the Internet,” resulted in the protocol TCP (Transmission Control Protocol), which by 1978 would become TCP/IP (with the IP standing for Internet Protocol). At a high level, TCP/IP is essentially just a relatively efficient system for making sure the packets of data are sent and ultimately received where they need to go, and in turn assembled in the proper order so that the downloaded data mirrors the original file. So, for instance, if a packet is lost in transmission, TCP is the system that detects this and makes sure the missing packet(s) get re-sent and are successfully received. Developers of applications can then use this system without having to worry about exactly how the underlying network communication works.
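The retransmission idea at the heart of that reliability can be illustrated with a toy stop-and-wait sketch in Python; real TCP uses sequence numbers, sliding windows, and timers, and every name and number below is invented for illustration:

```python
import random

# Toy illustration of TCP-style reliability: resend a packet until
# the (simulated lossy) channel acknowledges it. Real TCP is far more
# involved; this shows only the core detect-and-retransmit idea.

def lossy_send(packet: bytes, loss_rate: float, rng: random.Random) -> bool:
    """Pretend channel: True means an ACK came back, False means loss."""
    return rng.random() > loss_rate

def send_reliably(packets: list[bytes], loss_rate: float = 0.3,
                  seed: int = 0) -> list[bytes]:
    rng = random.Random(seed)
    delivered = []
    for pkt in packets:
        while not lossy_send(pkt, loss_rate, rng):
            pass  # no ACK received: retransmit the same packet
        delivered.append(pkt)
    return delivered  # data arrives complete and in order

data = [b"LO", b"GI", b"N"]
assert send_reliably(data) == data
```

The application above `send_reliably` never sees the lost packets, which mirrors how TCP lets application developers ignore the unreliability of the underlying network.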
On January 1, 1983, “flag day,” TCP/IP would become the exclusive communication protocol for ARPANET.
Also in 1983, Paul Mockapetris proposed a distributed database of internet name and address pairs, now known as the Domain Name System (DNS). This is essentially a distributed “phone book” linking a domain’s name to its IP address, allowing you to type in something like todayifoundout.com, instead of the IP address of the website. The distributed version of this system allowed for a decentralized approach to this “phone book.” Previous to this, a central HOSTS.TXT file was maintained at Stanford Research Institute that then could be downloaded and used by other systems. Of course, even by 1983, this was becoming a problem to maintain and there was a growing need for a decentralized approach.
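Conceptually, the lookup DNS performs is just a name-to-address mapping; a minimal Python sketch (the table and addresses below are made up, using reserved documentation ranges):

```python
# A toy "phone book" mapping domain names to IP addresses, the core
# idea behind DNS. Real DNS is a distributed, hierarchical database
# with caching and delegation; these addresses are invented (drawn
# from the reserved documentation ranges in RFC 5737).

hosts: dict[str, str] = {
    "todayifoundout.com": "203.0.113.5",
    "example.org": "192.0.2.1",
}

def resolve(name: str) -> str:
    """Return the IP address for a domain, or raise if unknown."""
    try:
        return hosts[name.lower().rstrip(".")]
    except KeyError:
        raise LookupError(f"no record for {name}") from None

assert resolve("Example.org.") == "192.0.2.1"
```

The old HOSTS.TXT file worked much like this single dictionary; DNS’s innovation was to shard and delegate the table so no one site had to maintain it all.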
This brings us to 1989 when Tim Berners-Lee of CERN (European Organization for Nuclear Research) developed a system for distributing information on the Internet and named it the World Wide Web.
What made this system unique among existing systems of the day was the marriage of hypertext (linked pages) with the internet; in particular, the use of one-directional links, which, unlike the bi-directional hypertext systems of the day, required no action by the owner of the destination page to make a link work. It also allowed for relatively simple implementations of web servers and web browsers, and it was a completely open platform, so anyone could contribute and develop their own such systems without paying any royalties. In the process of doing all this, Berners-Lee developed the URL format, the Hypertext Markup Language (HTML), and the Hypertext Transfer Protocol (HTTP).
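Those three inventions still work together the same way today: a URL names a resource, HTTP fetches it, and HTML describes the page, one-directional links and all. A minimal self-contained sketch using Python’s standard library, running entirely on localhost:

```python
# A tiny web server and client on localhost, showing URL + HTTP + HTML
# working together. The page content and link target are made up.
import http.server
import threading
import urllib.request

HTML = b"<html><body><a href='/other.html'>a one-directional link</a></body></html>"

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):                     # respond to HTTP GET requests
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(HTML)            # the HTML describes the page
    def log_message(self, *args):         # keep the demo's output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"  # the URL names the resource
with urllib.request.urlopen(url) as response:    # HTTP fetches it
    page = response.read()
server.shutdown()

assert b"one-directional link" in page
```

Note that the link in the served page needs nothing from the destination it points at, which is exactly the one-directional property described above.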
Around this same time, one of the most popular alternatives to the web, the Gopher system, announced it would no longer be free to use, effectively killing it, as many users switched to the World Wide Web instead. Today, the web is so popular that many people often think of it as the internet, even though this isn’t the case at all.
Also around the time the World Wide Web was being created, the restrictions on commercial use of the internet were gradually being removed, which was another key element in the ultimate success of this network.
Next up, in 1993, Marc Andreessen led a team that developed a browser for the World Wide Web, named Mosaic. This was a graphical browser developed via funding from a U.S. government initiative, specifically the “High Performance Computing and Communications Act of 1991.”
This act was partially what Al Gore was referring to when he said he “took the initiative in creating the Internet.” All political rhetoric aside (and there was much on both sides concerning this statement), one of the “fathers of the internet,” Vint Cerf, said, “The Internet would not be where it is in the United States without the strong support given to it and related research areas by the Vice President [Al Gore] in his current role and in his earlier role as Senator… As far back as the 1970s, Congressman Gore promoted the idea of high speed telecommunications as an engine for both economic growth and the improvement of our educational system. He was the first elected official to grasp the potential of computer communications to have a broader impact than just improving the conduct of science and scholarship… His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.” (For more on this controversy, see: Did Al Gore Really Say He Invented the Internet?)
As for Mosaic, it was not the first web browser, as you’ll sometimes read, simply one of the most successful until Netscape came around (which was developed by many of those who had previously worked on Mosaic). The first ever web browser, called WorldWideWeb, was created by Berners-Lee himself. This browser had a nice graphical user interface; allowed for multiple fonts and font sizes; allowed for downloading and displaying images, sounds, animations, movies, etc.; and had the ability to let users edit the web pages being viewed, in order to promote the collaborative sharing of information. However, this browser only ran on NeXT computers under the NeXTSTEP operating system, which most people didn’t have because of the extremely high cost of these machines. (NeXT was owned by Steve Jobs, so you can imagine the cost bloat… ;-))
In order to provide a browser anyone could use, the next browser Berners-Lee developed was much simpler, so versions of it could be quickly developed to run on just about any computer, largely regardless of processing power or operating system. It was a bare-bones, line-mode browser (command line, text only), which lacked most of the features of his original browser.
Mosaic essentially reintroduced some of the nicer features found in Berners-Lee’s original browser, giving people a graphical interface to work with. It also included the ability to view web pages with inline images (instead of in separate windows, as other browsers of the time did). What really distinguished it from other graphical browsers, though, was that it was easy for everyday users to install and use. The creators also offered 24-hour phone support to help people get it set up and working on their respective systems.
And the rest, as they say, is history.
Bonus Internet Facts:
The first domain ever registered was Symbolics.com on March 15, 1985. It was registered by the Symbolics Computer Corp.
The “//” forward slashes in any web address serve no real purpose, according to Berners-Lee. He only put them in because, “It seemed like a good idea at the time.” He wanted a way to separate the part the web server needed to know about, for instance “www.todayifoundout.com”, from the rest, which is more service oriented. Basically, when creating a link in a web page, he didn’t want to have to worry about what service the particular website was using at that link. “//” seemed natural, as it would to anyone who’s used Unix-based systems. In retrospect, though, this was not at all necessary, so the “//” is essentially pointless.
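You can see the role of the “//” (and the other URL pieces) by splitting an address with Python’s standard library, whose parser follows Berners-Lee’s original format (the path, query, and fragment in this example URL are made up for illustration):

```python
# Split a web address into the pieces Berners-Lee describes.
from urllib.parse import urlsplit

parts = urlsplit("http://www.todayifoundout.com/some/page?q=1#top")
print(parts.scheme)    # -> http  (the service, before the "://")
print(parts.netloc)    # -> www.todayifoundout.com  (what the "//" marks off)
print(parts.path)      # -> /some/page
print(parts.query)     # -> q=1
print(parts.fragment)  # -> top  (the "#" part, handled by the browser)
```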
Berners-Lee chose the “#” for separating the main part of a document’s url with the portion that tells what part of the page to go to, because in the United States and some other countries, if you want to specify an address of an individual apartment or suite in a building, you classically precede the suite or apartment number with a “#”. So the structure is “street name and number #suite number”; thus “page url #location in page”.
Berners-Lee chose the name “World Wide Web” because he wanted to emphasize that, in this global hypertext system, anything could link to anything else. Alternative names he considered were: “Mine of Information” (Moi); “The Information Mine” (Tim); and “Information Mesh” (which was discarded as it looked too much like “Information Mess”).
Pronouncing “www” as individual letters “double-u double-u double-u” takes three times as many syllables as simply saying “World Wide Web.”
Most web addresses begin with “www” because of the traditional practice of naming a server according to the service it provides. Outside of this practice, there is no real reason for any website URL to put a “www” before the domain name; the administrators of a website can set it to anything they want preceding the domain, or nothing at all. This is why, as time goes on, more and more websites allow you to type just the domain name itself, assuming you want to access the web service rather than some other service the machine may provide. Thus, the web has more or less become the “default” service (generally on port 80) on most service-hosting machines on the internet.
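A small illustration of that default using Python’s standard library: a URL with no explicit port parses with none, and HTTP clients then fall back to the web’s conventional ports.

```python
# A URL with no port written in it parses with port=None...
from urllib.parse import urlsplit
import http.client

parts = urlsplit("http://todayifoundout.com/")
print(parts.port)              # -> None (no port given in the URL)

# ...so clients fall back to the conventional defaults for the scheme.
print(http.client.HTTP_PORT)   # -> 80
print(http.client.HTTPS_PORT)  # -> 443
```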
The earliest documented commercial spam message on the internet is often incorrectly cited as the 1994 “Green Card Spam” incident. However, the actual first documented commercial spam message was an advertisement for a new model of Digital Equipment Corporation computers, sent on ARPANET to 393 recipients by Gary Thuerk in 1978.
The famed Green Card Spam incident itself took place on April 12, 1994, perpetrated by a husband and wife team of lawyers, Laurence Canter and Martha Siegel, who bulk posted advertisements for immigration law services on Usenet newsgroups. The two defended their actions by citing freedom of speech rights. They later wrote a book titled “How to Make a Fortune on the Information Superhighway,” which encouraged and demonstrated to people how to quickly and freely reach over 30 million users on the Internet by spamming.
Though it wasn’t called spam back then, telegraphic spam was extremely common in the 19th century, particularly in the United States. Western Union allowed telegraphic messages on its network to be sent to multiple destinations, so wealthy American residents tended to receive numerous unsolicited telegrams presenting investment offers and the like. This wasn’t nearly as much of a problem in Europe, where telegraphy was regulated by post offices.
The word “internet” was used as early as 1883 as a verb and adjective to refer to interconnected motions, but almost a century later, in 1982, the term would, of course, be used to describe a worldwide network of fully interconnected TCP/IP networks.
In 1988, the first massive piece of self-replicating malware to hit the network, a worm known as “The Internet Worm” (today usually called the Morris worm), was responsible for more than 10 percent of the world’s Internet servers temporarily shutting down.
The term “virus,” as referring to self-replicating computer programs, was coined by Frederick Cohen, then a graduate student at the University of Southern California’s School of Engineering. He wrote such a program for a class. This “virus” was a parasitic application that would seize control of the computer and replicate itself on the machine. He specifically described his “computer virus” as “a program that can ‘infect’ other programs by modifying them to include a possibly evolved copy of itself.” Cohen went on to become one of the first people to outline proper virus defense techniques, and in 1987 he demonstrated that no algorithm can ever detect all possible viruses.
Though it wasn’t called such at the time, one of the first ever computer viruses, called “Creeper,” was written by Bob Thomas in 1971. He wrote the program to demonstrate the potential of such “mobile” computer programs. The virus itself wasn’t destructive and simply printed the message “I’m the creeper, catch me if you can!” Creeper spread about on ARPANET by finding open connections and transferring itself to other machines. To be even less intrusive, it would also attempt to remove itself from the machine it had just left, if it could. Creeper was ultimately “caught” by a program called “Reaper,” which was designed to find and remove any instances of Creeper out there.
While terms like “Computer Worm” and “Computer Virus” are fairly commonly known, one less commonly heard term is “Computer Wabbit.” This is a program that is self-replicating, like a computer virus, but does not infect any host programs or files. The wabbits simply multiply themselves continually until eventually causing the system to crash from lack of resources. The term “wabbit” itself references how rabbits breed incredibly quickly and can take over an area until the environment can no longer sustain them. Pronouncing it “wabbit” is thought to be in homage to Elmer Fudd’s pronunciation of “rabbit.”
Computer viruses/worms don’t inherently have to be bad for your system. Some are even designed to improve a system as they infect it. One example, as noted previously, is the Reaper, which was designed to go out and destroy all instances of the Creeper it found. Another virus designed by Cohen would spread itself to all executable files on a system; rather than harming them, it would simply safely compress them, freeing up storage space.
Al Gore was one of the so called “Atari Democrats.” These were a group of Democrats that had a “passion for technological issues, from biomedical research and genetic engineering to the environmental impact of the greenhouse effect.” They basically argued that supporting development of various new technologies would stimulate the economy and create a lot of new jobs. Their primary obstacle in political circles, which are primarily made up of a lot of “old fogies,” was simply trying to explain a lot of the various new technologies, in terms of why they were important, to try to get support from fellow politicians for these things.
Gore was also largely responsible for the “Information Superhighway” term becoming popular in the 1990s. The first time he used the term publicly was way back in 1978 at a meeting of computer industry workers. Originally, this term didn’t mean the World Wide Web. Rather, it meant a system like the Internet. However, with the popularity of the World Wide Web, the three terms became synonymous with one another. In that speech, Gore used the term “Information Superhighway” to be analogous with Interstate Highways, referencing how they stimulated the economy after the passing of the National Interstate and Defense Highways Act of 1956. That bill was introduced by Al Gore’s father. It created a boom in the housing market; an increase in how mobile citizens were; and a subsequent boom in new businesses and the like along the highways. Gore felt that an “information superhighway” would have a similar positive economic effect.
What’s not to like about chaplains, right? They hold good conversations, are generally nice, and most keep some extra hygiene products and pogey bait around for troops who wander by the chapel. Oh, they also perform religious services and counsel service members in need.
Some of them have distinguished themselves by going far beyond their earthly call of duty. Despite not being allowed to carry weapons, these six chaplains risked their lives to save others.
1. Chaplain Capodanno ignored his amputation and ran into machine gun fire to recover the wounded.
Navy Reserve Lt. (Chaplain) Vincent R. Capodanno was in a company command post on Sept. 4, 1967, in Vietnam when he learned a platoon was being overrun. He ran to the battle and began delivering last rites and treating the wounded, continuing even when a mortar round took off part of his right hand.
He refused medical treatment and tried to save a wounded corpsman under heavy machine gun fire, but was gunned down in the attempt. He was posthumously awarded the Medal of Honor.
2. Chaplain Newman gave away his armor, assisted the wounded, and held religious services ahead of the front line.
In March of 1953, Lt. j.g. (Chaplain) Thomas A. Newman, Jr. was supporting a series of assaults in Korea. He continuously exposed himself to enemy fire while assisting stretcher bearers. When he came across a Marine whose vest was damaged, Newman gave up his own and continued working on the front line. Throughout the mission, he was known for holding services ahead of the front lines. He received the Silver Star and the Bronze Star.
3. Chaplain Watters repeatedly walked into the enemy’s field of fire to recover wounded soldiers.
Army Reserve Maj. (Chaplain) Charles J. Watters was moving with a company of the 173rd Airborne Brigade when they came under fire from a Vietnamese battalion. During the ensuing battle, he frequently left the outer perimeter to recover wounded soldiers, distribute food, water, and medical supplies, and administer last rites. On one such trip to assist the wounded, he was killed. He posthumously received the Medal of Honor.
4. Chaplain Kapaun interrupted an execution after staying with the American wounded despite facing certain capture.
When a battalion of cavalry found itself nearly surrounded and vastly outnumbered by attacking Chinese forces on Nov. 1, 1950, the unit still managed to rebuff the first assault. But when they realized they couldn’t possibly withstand another assault, they ordered the retreat of all able-bodied men.
Army Capt. (Chaplain) Emil J. Kapaun elected to stay with the wounded. The Chinese soon broke through the beleaguered defensive line and began fighting hand-to-hand through the camp. Kapaun found a wounded Chinese officer and convinced him to negotiate the safe surrender of American troops. After Kapaun was captured, he shoved a Chinese soldier preparing to execute an American, saving the American’s life. Kapaun died in captivity and received the Medal of Honor for his actions.
5. Chaplain Liteky evacuated wounded, directed helicopters, and shielded soldiers in Vietnam.
Capt. (Chaplain) Charles J. Liteky was accompanying a company in the 199th Infantry Brigade in Vietnam on Dec. 6, 1967 when the company found itself in a fight with an enemy battalion. Under heavy enemy fire, Liteky began crawling around the battlefield to recover the wounded. He personally carried over 20 men to the helicopters and directed medevac birds as they ferried wounded out. He received the Medal of Honor, but later renounced it.
6. Chaplain Holder searched enemy held territory for wounded and dead Americans.
Soldiers with the 19th Infantry Regiment in Nov. 1950 were desperately looking for soldiers lost during a heavy enemy assault in the Korean War. Volunteer patrols repeatedly pushed to the unit’s former positions to find the wounded and killed Americans. Capt. (Chaplain) J. M. Holder joined many of the patrols and continued searching even while under heavy enemy fire, according to his Silver Star citation.
On May 7th, 1945, Nazi Germany signed an unconditional surrender of its armed forces, effectively bringing an end to the second world war in Europe. As news spread across the globe, raucous parties soon followed. From Paris to London to Rome, over to the United States and even Canada, citizens took to the streets to celebrate the Allied victory.
In the Soviet Union, the news came at 1:10 AM on May 9th, 1945, when the announcement of Germany’s surrender was delivered by Yuri Levitan, the chief announcer of Radio Moscow. “Moscow is speaking,” the broadcast began. “Fascist Germany is destroyed!” (Even if you don’t understand Russian, it’s still pretty neat to hear the tone of this message.)
And then things got really crazy. Despite the late hour, seemingly all of Russia flocked to the streets immediately. Citizens ran through Moscow in their pajamas, soon joined by the staffs of the Allied nations’ embassies. Celebratory gunfire shot through the sky as searchlights illuminated the dark night. “It was impossible to describe everything that happened that day,” remembered one Muscovite. “We drank to the victory and to those killed, wishing to never see such a massacre again.”
By the time Joseph Stalin addressed the elated nation twenty-two hours later, the Russian people faced a new problem: they’d polished off the country’s entire supply of vodka. As one reporter noted, “There was no vodka in Moscow on May 10, we drank it all.”
The United Kingdom’s current drone fleet is made up primarily of aircraft purchased from the U.S.
But the country is now working on its own unmanned aerial vehicle dubbed “The Protector” which will feature specialized sensors and will be armed with Britain’s Brimstone missile, a low-collateral-damage version of America’s Hellfire missile.
The Protector drone is based on the Predator-B and is being created by the Predator’s manufacturer, General Atomics Aeronautical Systems.
Britain owns 10 Reaper drones but has never been able to fly them in European airspace. That’s because current drones lack certain equipment required to fly in American and European civil airspace, such as a detect-and-avoid system and an airborne “due regard” radar.
General Atomics is working on the required radar upgrades as part of the contract with the U.K., but the technology will also support U.S. projects like the MQ-4C, a surveillance UAS for the U.S. Navy.
The Protector will also fly on longer wings that will increase its lift capacity as well as its maximum fuel and weapons payload. The design is a compromise which will lower the Protector’s maximum altitude — 45,000 feet versus 50,000 feet in the Predator B — and top speed — 200 knots versus 240 knots.
The other significant upgrade that the Protector will boast is the ability to carry Britain’s Brimstone missile.
It carries a 14-pound warhead that creates less collateral damage than the Hellfire’s 20-pound warhead, but that also limits its effectiveness against the main battle tanks the Hellfire was designed to kill.
On Sep. 8, 1945, U.S. troops arrived in Korea to partition the country.
During World War II, the Allies determined that Korea, then controlled by Japan, should become an independent country. In August 1945, the Soviet Union declared war on Japan and invaded northern Korea; one month later, per the agreement, U.S. forces entered southern Korea.
The occupation of the country was meant to be temporary but the division would become permanent as plans for reunification dissolved into fighting between the communist north and newly elected government in the south.
After the Soviets established a communist regime under Kim Il-Sung in 1948, they withdrew, and shortly thereafter, U.S. troops followed suit. Then, in 1950, North Korea attempted reunification by force, launching the three-year Korean War.
In the end, approximately 36,000 U.S. troops died and another 100,000 were wounded. Reportedly, 620,000 soldiers from both North and South Korea were killed and a staggering 1.6 million civilians perished during the bloody conflict.