Steeped in long traditions, navies tend to be very conservative organizations. And yet, there are times when they exhibit a remarkable willingness to blaze new technological trails. Two examples, one from the history of the Royal Canadian Navy (RCN) and the other from the U.S. Navy (USN), illustrate this boldness. Nearly 65 years ago, both navies had emerged from World War II with a heightened sense that long-established analog design paradigms would be inadequate for naval effectiveness in the postwar era. Electronic design based on digital techniques seemed to hold the key to a whole new generation of naval command & control systems. In the context of the late 1940s and early 1950s, the idea was extremely daring. In today’s world, where digital techniques support every facet of our material existence, it is hard to appreciate the enormous gamble these navies were taking. Few people, if any, had ever heard of digital electronics, and even fewer knew how it worked. The uncertainties of the gamble were compounded by the two navies’ desire to introduce digital computation into ship weapon systems, to link all ship computers through a wireless digital communications system, and to have it all respond in “real time” to enemy threats. The end results were two groundbreaking technological achievements that presaged the age of real-time computer/communication networks: the RCN’s Digital Automated Tracking and Resolving (DATAR) system and the USN’s Naval Tactical Data System (NTDS). Both stories share a common thread: the sea created a specific set of needs that demanded bold innovation. And yet, these stories also illustrate how national contexts shaped the details of the innovation process.
The Royal Canadian Navy’s DATAR Story
Early in World War II, the Royal Canadian Navy (RCN) won a role that was absolutely vital to the success of the Allies in Europe. After the Nazis had swept through the European continent, Great Britain stood alone as the last hope of democracy in Europe. The “island fortress” faced constant aerial attack and impending invasion. If she fell, all would be lost in Europe. Her survival depended on the constant trans-Atlantic flow of much-needed supplies from Canada and the United States. The Nazis believed that if they could sever this lifeline from North America, it would only be a matter of time before Britain fell. When the German surface fleet failed to command the seas around Britain, Hitler turned to the submarine, long advocated by Admiral Karl Dönitz. It wasn’t long before Dönitz’s submarine “wolf packs” were taking a terrible toll on Allied shipping to Britain. By 1943, the struggle to control the shipping lanes, which came to be called the “Battle of the Atlantic,” had become the pivotal battlefield of World War II.
Oil tanker hit by German U-boat.
Source: IEEE Global History Network (https://ethw.org/)
The Allies had concluded that safety lay in numbers, so supplies were moved in large convoys. These convoys moved slowly and could spread out over many miles. The notorious storms of the North Atlantic would further scatter the convoy and leave the transport vessels isolated and even more vulnerable to attack. Keeping track of where everyone was and protecting the convoy as it moved across the vast expanse of the Atlantic Ocean was a formidable task. Using highly maneuverable ships called “corvettes” and sonar (or ASDIC as the British and Canadians called it), the RCN moved through the convoy playing a deadly game of cat and mouse with the German submarines. The circumstances of WWII had thrust Canada into a more prominent role than ever before. What would Canada’s role be in the postwar era?
With the end of WWII, Canada hoped to retain an important role in the new world order, and the RCN wanted to retain its prominence within the Western military alliance that would become NATO. The RCN presented plans to build a full battle fleet: aircraft carriers, destroyers, battleships, etc. When political and budget realities killed this grandiose aspiration, a group of mid-level, technically trained naval officers in the Development Section of the Electrical Engineer-in-Chief’s Directorate (EECD) suggested that the RCN use its wartime anti-submarine expertise to carve out an important defense role with its British and American allies. At the same time, they realized that rapid advances in submarine technology would very quickly render current methods of anti-submarine warfare obsolete. The inability to capture, extract, display, communicate and share accurate tactical information in a timely manner had severely limited the effectiveness of anti-submarine warfare. The long human chain needed to convert sonar, radar and other tactical data into useful information for command and control was slow and often unreliable. In a highly fluid battle involving many ships, submarines and aircraft, this human-intensive chain would seriously compromise anti-submarine operations, and with a new generation of faster and deadlier submarines on the horizon, it undermined all postwar anti-submarine planning. These young naval officers reasoned that if the RCN could revolutionize anti-submarine operations by automating the production, processing and communication of tactical data, Canada would be assured of a prominent military role in the postwar North Atlantic alliance.
As early as 1947, the engineers in the Development Section had zeroed in on electronic digital computation and communications as the foundation for their automated tactical data system. The entire basis for their gamble was ENIAC, which had been developed during the war and only became fully operational in 1946. Throughout ENIAC’s development, these Canadian officers had access to the secret and non-secret U.S. government reports related to it. They were attracted to the high speed and high precision of ENIAC’s computation. Since no other fully operational, program-controlled, electronic digital computer was known to exist in the world at this time, the EECD’s strategy to modernize anti-submarine warfare by putting a computer on every ship was a remarkably bold leap for the otherwise traditional culture of the RCN. But before they could proceed, the idea that data could be digitally communicated between ships had to be demonstrated. In 1949, the EECD’s ambitions came to the attention of Sir Vincent Ziani de Ferranti, the president of the British firm Ferranti Ltd. He was very much intrigued by the idea, since his company had just decided to commercialize the Mark I computer pioneered by Frederic Williams, Tom Kilburn and others at the University of Manchester. After meeting with EECD representatives, Ferranti agreed to set up a separate all-Canadian R&D team in his Canadian subsidiary, Ferranti Electric, in Toronto. The DATAR project was launched. That same year, the DATAR team demonstrated the feasibility of digital techniques for communicating tactical data. They used an exotic idea proposed by British engineer Alec Reeves in 1937, while he was working for ITT in Paris, as a way to transmit voice digitally: pulse-code modulation (PCM). Nothing had come of it then, but in 1949 the DATAR team used PCM to transmit analog radar data from Toronto and display it on a screen in Ottawa.
The only other previous implementation of PCM was the top secret SIGSALY encryption equipment developed by Bell Labs in 1943.
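The idea behind PCM is simple to state: sample an analog waveform at regular intervals and quantize each sample into a fixed-length binary code word, which can then be sent as a train of pulses. The sketch below is a minimal illustration of that encoding step only; the function name, bit depth and voltage range are assumptions for the example, not details of DATAR’s actual equipment.

```python
import math

def pcm_encode(samples, n_bits=8, v_max=1.0):
    """Quantize analog samples to n_bits each and emit binary code words.

    Each sample in [-v_max, v_max] is mapped to one of 2**n_bits levels.
    """
    levels = 2 ** n_bits
    words = []
    for v in samples:
        v = max(-v_max, min(v_max, v))                 # clip to full-scale range
        code = int((v + v_max) / (2 * v_max) * (levels - 1))
        words.append(format(code, f"0{n_bits}b"))      # fixed-length code word
    return words

# Sample one cycle of a sine wave at 16 points and encode it.
signal = [math.sin(2 * math.pi * k / 16) for k in range(16)]
bits = pcm_encode(signal)
```

A receiver reverses the mapping (code word back to voltage level), which is why PCM tolerates noise so well: a pulse need only be distinguishable as 0 or 1, not measured precisely.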
In 1953, the prototype of the entire DATAR system underwent successful sea trials. The highest echelon of the RCN was there, as were senior officials from the U.S. Navy and the Director of the U.S. Office of Naval Research. The test involved two Canadian Bangor-class minesweepers; the presence of submarines was simulated from an installation on shore. Each ship was equipped with an electronic digital computer, and a sophisticated display on each ship depicted aircraft, surface ships and submarines as distinctly different icons. Taking the motion of all the ships into account, the computer presented data relative to the ship’s own reference frame. PCM communications ensured that all data was shared in real time. DATAR was a distributed system in that all the ships’ computers were equal nodes in the network. By means of a new device called a trackball, invented by the DATAR team, a cursor could be moved over any target on the monitor, and speed, direction, range and bearing data for the target would be displayed and refreshed in real time.
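Presenting every contact relative to a moving ship amounts to a coordinate transformation: subtract own-ship position to get a displacement, then express its bearing relative to own-ship heading. The toy sketch below shows the geometry under simple assumptions (a flat x/y plot with y as true north); the function and its conventions are illustrative, not DATAR’s implementation.

```python
import math

def to_ship_frame(target_xy, ship_xy, ship_heading_deg):
    """Convert a target's absolute position into own-ship relative
    range and bearing, with bearing measured clockwise from the bow."""
    dx = target_xy[0] - ship_xy[0]
    dy = target_xy[1] - ship_xy[1]
    rng = math.hypot(dx, dy)
    # atan2(east, north) gives a compass bearing from true north
    abs_bearing = math.degrees(math.atan2(dx, dy)) % 360
    rel_bearing = (abs_bearing - ship_heading_deg) % 360
    return rng, rel_bearing

# A target 5 units due north of a ship steaming east (heading 090)
# sits 5 units away, broad on the port side (relative bearing 270).
rng, brg = to_ship_frame((0.0, 5.0), (0.0, 0.0), 90.0)
```

Doing this continuously for every track, on every ship, as all the ships maneuver, is what made the computation demanding in 1953.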
Prototype (circa 1951) of trackball used in the 1953 sea trials of DATAR. The ball floated on air-suspension. This is probably the earliest known Trackball. Source: IEEE Global History Network (https://ethw.org/)
Operator console for DATAR (1953). Screens that displayed the movement of friendly and enemy ships are on the surface of the console. Source: IEEE Global History Network (https://ethw.org/)
Although the test was an unqualified success, it failed to win U.S. buy-in. From the very beginning, the RCN understood that Canadian requirements alone could not justify the high development costs needed to move from a prototype to a full-scale naval system. The DATAR prototype, like all computers of the early 1950s, ran on vacuum tubes. Cramming thousands upon thousands of tubes, with all the ancillary power and cooling equipment, into the tight confines of a warship did not bode well in the inhospitable marine environment: the tubes consumed a great deal of energy and failed frequently. During the sea trials, sweating engineers, stripped to the waist and armed with cartridge belts filled with vacuum tubes, ran around below deck replacing failed tubes. Everyone knew that the system would have to be miniaturized, and there were plans to use the still-new transistor. Canada alone could not underwrite a full-scale transistorization of DATAR; sales to the United States were essential. But in the context of the Cold War, the U.S. Navy was not about to outsource its command & control technology to another country. Another factor in the U.S. Navy’s decision may have been its preoccupation with defending against massive air assaults, in part a legacy of the disaster at Pearl Harbor, which DATAR did not deal with directly. Though DATAR never came to full-scale fruition, the effort nevertheless spawned a whole series of breakthroughs in Canada’s civilian computer industry. And though the United States did not adopt the Canadian system, the U.S. Navy saw the overall design concepts of DATAR as the way to go in its own planning.
The U.S. Navy NTDS Story
While the RCN arrived at the idea of an automated naval tactical data system from its anti-submarine experience in the North Atlantic, the USN came to it from its own WWII experience in the Pacific and the role of radar in defending a fleet from heavy Japanese air attacks. Despite the Navy’s slow-changing traditional culture, radar won instant acceptance. Radar’s great utility for ships was evident even to the most conservative of the USN’s senior officers, and they supported any R&D that would increase the effectiveness of shipboard radar. Naval radar was capable of showing 300 aircraft stacked up from the horizon to 30,000 ft., but World War II had demonstrated the inability of humans to process the large amounts of radar data that flooded in during the heat of battle. “Every element of the information,” writes naval historian David Boslaugh, “was handled manually on radar scopes, plotting sheets, status boards, note pads, maneuvering boards, and in men’s minds.” All calculations were done manually. Under enormous stress, the people processing the radar data were overwhelmed; they could only process a small fraction of the information presented to them. In 1945, the Chief of Naval Operations, Admiral Ernest J. King, put the situation bluntly: “The display of information was slow, complicated and incomplete, rendering it difficult for the human mind to grasp the entire situation either rapidly or correctly. Weak communications prevented information from being properly collected or disseminated either internally aboard ships or externally between ships.” The USN needed new ways to automate the processing of radar information. The arrival of fast-moving jet aircraft and missiles on the naval combat scene further underscored the urgency.
In 1951, the Navy Electronics Laboratory (NEL) turned to electronic digital computers. Like their counterparts in Canada, its engineers saw great potential in this embryonic technology. The NEL started a research project to see if a special-purpose computer system could be developed to simultaneously record, store, and display the range and bearing of a large number of aircraft in the form of electronically generated symbols. From the radar data, the system would also have to display a velocity vector for each target. Working with the Teleregister Company, the NEL concluded that the state of electronic digital technology was still too immature to handle these complex tasks reliably. The NEL then looked at analog techniques as a way of speeding up the handling of radar data, but in the end the analog paradigm also proved inherently unsatisfactory. Ironically, the team working on digital radar processing did not know that some of the world’s best experts in digital computers were working elsewhere in their own organization: in great secrecy, Navy code breakers had been building some of the world’s most powerful digital computers.
In 1954, Project Lamplight rekindled the USN’s hope of introducing electronic digital computers into the processing and display of radar data. Earlier that year, the Secretary of Defense had requested that the tri-service Joint Research and Development Board establish a study group to review and improve the combined capabilities of the services to provide a continental air defense system for the United States. A specific item in this study group’s mandate was to see if elements of SAGE (Semi-Automatic Ground Environment), an integrated aircraft tracking system still under development, could be extended to ships at sea. This study became known as Project Lamplight and was directed by the SAGE managers at MIT’s Lincoln Laboratory. However, six months into the study, members of the NEL became disenchanted with the way it was going. From the Navy’s perspective, the study had not addressed radar data automation, and, even more importantly, there had been no discussion of how one could apply SAGE concepts to ships at sea. SAGE was a land-based system that depended on very large centralized data processing, and a centralized computer/communications architecture was inherently dangerous in the fluid context of a battle fleet: the fleet would be blind if the ship carrying the computer was sunk.
By 1954, computer technology had advanced enough that the NEL felt it was time to revisit the design of a computerized command & control system suited specifically to naval needs. The success of the DATAR sea trials in Canada further solidified the USN’s belief in the feasibility of an automated tactical data system based on digital computation and digital radio communication. Even though the USN did not want to buy a Canadian solution, DATAR’s overall digital computation and communications design philosophy did inspire and inform the American approach.
In 1954, Lieutenant Commander Irvin McNally, who had been the key champion within the NEL for a digital approach, put together a concept paper for what he called the Naval Tactical Data System (NTDS). Remembering the WWII Japanese saturation air attacks, the NTDS concept paper called for a system in which each ship could simultaneously process 1,000 target tracks (later reduced to 250), show whether they were air, surface or submarine tracks, and show whether they were friendly, hostile or unidentified. The concept also called for the computer to assess the relative threat of each hostile target and then assign the most appropriate response: the guns or missiles on specific ships, or an airborne interceptor. Finally, there had to be real-time sharing of data among all the ships via digital radio links. Jamming all this equipment into the narrow confines of a frigate meant that it had to be designed around transistors.
Diagram prepared by David Boslaugh.
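The threat-assessment step in the concept paper can be caricatured as ordering hostile tracks by how soon each could reach the ship. The toy sketch below is purely illustrative: the field names, units and ranking rule are assumptions for the example, not NTDS’s actual logic.

```python
def time_to_go(track):
    """Minutes until a closing target reaches the ship; opening or
    stationary targets are treated as non-urgent (infinite time)."""
    if track["closing_kts"] <= 0:
        return float("inf")
    return track["range_nm"] / track["closing_kts"] * 60.0

def rank_threats(tracks):
    """Return hostile tracks ordered most-urgent first."""
    hostiles = [t for t in tracks if t["iff"] == "hostile"]
    return sorted(hostiles, key=time_to_go)

tracks = [
    {"id": 1, "iff": "hostile",  "range_nm": 40, "closing_kts": 600},
    {"id": 2, "iff": "friendly", "range_nm": 10, "closing_kts": 0},
    {"id": 3, "iff": "hostile",  "range_nm": 15, "closing_kts": 300},
]
ranked = rank_threats(tracks)  # the slower but closer track 3 ranks first
```

Note that raw range alone is a poor urgency measure: the distant, fast track here is less urgent than the nearer, slower one, which is exactly the kind of judgment overwhelmed human plotters struggled to make in real time.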
In the context of 1954, the NTDS concept paper was calling for the most ambitious application of transistors to computers ever attempted. From the outset, it was also decided that NTDS could sacrifice the higher performance of a special-purpose computer in exchange for the flexibility of a general-purpose, programmable one. The NTDS project was approved in late 1955 and work started in 1956. Univac won the contract to design and build the NTDS computers that would be put in each ship. The NTDS computer became known as the AN/USQ-17.
Computer area of the NEL NTDS test site. U. S. Navy photo. Two of the AN/USQ-17 computers can be seen. Source: IEEE Global History Network (https://ethw.org/)
Seymour Cray was placed in charge of the AN/USQ-17’s design. He had worked on the Navy’s code-breaking computer called Atlas II. While at Remington Rand, he was put in charge of the Athena project, the ground guidance computer for the Air Force’s Titan ICBM; Athena was Remington Rand’s first venture into transistorized computers. Later, he and other computer engineers left Remington Rand to join a new company, the Control Data Corporation (CDC), where Cray would become legendary for his design of supercomputers. He eventually left CDC to form his own company, Cray Research, which quickly became the premier supercomputer manufacturer. The total cost of developing and testing NTDS on five ships added up to $136 million, spread over many contractors, including UNIVAC, Hughes Aircraft, Collins Radio, Hazeltine, Western Electric, and the University of Illinois. UNIVAC’s development and testing of the NTDS computer system cost $60 million. The USQ-17 computer never actually went to sea: testing at the NEL concluded that it could be improved with a new breed of transistors, and it was reworked. The resultant shipboard computers were designated CP-642 unit computers, although they embodied the architecture and instruction set of the USQ-17. NTDS went into operation in 1962.
Naval Tactical Data System (NTDS) training in full scale mock-up of a shipboard Combat Information Center. Source: IEEE Global History Network (https://ethw.org/)
In DATAR, the Royal Canadian Navy came to automated tactical data systems because of its wartime experience in anti-submarine warfare; the U.S. Navy came to NTDS from its wartime experience of facing massive air attacks in the Pacific. The USN had always felt that anti-submarine operations and sonar data would have to be an important part of an automated tactical data system, but faced with the complexity of the air defense problem, it thought it wiser to wait until that was working properly before tackling anti-submarine operations. It wasn’t until 1964 that the U.S. Navy decided to expand NTDS’s capabilities to anti-submarine warfare. Considerable new design work was needed to build new computational capabilities and interfaces for the expanded version of NTDS, and there was considerable debate over design philosophy. By 1964, DATAR had faded from most people’s memories, but the Royal Canadian Navy, which still maintained a deep interest in anti-submarine warfare, was asked for its input during the design specification stage.
NTDS left a very important technological and organizational legacy in the United States. It was the first militarized, solid-state computer, and the first system to use distributed computers with high-speed interconnects. The NTDS project led to the development of the Navy’s first project management office to handle complex, large-scale electronics system development. This office pioneered management techniques by which a very small group could effectively lead large engineering projects. In the end, all of the USN’s future automated fire control and command & control capabilities were heirs to the early work done to design and first deploy NTDS.
These two stories are taken from the extensive work done by historians John Vardalas and David Boslaugh. Vardalas’s work on DATAR was first published in The Computer Revolution in Canada (Cambridge, MA: The MIT Press, 2001). Boslaugh’s work on NTDS was first published in When Computers Went to Sea (Los Alamitos, CA: The IEEE Computer Society, 1999). Both authors have also added their research on these topics to the IEEE’s Global History Network (GHN, https://ethw.org/). Although most of the material for this article, including photos, came from the GHN, it represents only a small fraction of the information on these topics there. Go to the GHN, log on with your IEEE web account user name and password, and search for DATAR and NTDS to learn more. If you have served in, or had any association with, these aspects of the U.S. or Canadian navies, you are encouraged to contribute. David Boslaugh has single-handedly contributed a wealth of information on NTDS, and you are invited to participate as well.