Some Movements of Models

Simulation from Mechanism to Information, 1914–1977 1

Nicholas de Monchaux

If we follow the movement of models across the last century—their operation, shifting influence, and architectural transformation—we see them pull away from the world, even as they take an ever-more central part in shaping it.

Synchronized Rotation and the SCR-584

We start between a pair of electrical motors on an unseasonably cold English cliffside in July 1944. The motors are synchronized by alternating current to share their precise angular position with each other, “feeding back” into their position the shifting angle of the microwave reflection of a flying bomb. This missile, a German V1, contains two thousand pounds of high-explosive Amatol and a clockwork pilot. It is propelled by forty-five explosions per second from a clattering pulse-jet engine. If left untouched, the missile’s timer will shut off this engine over London, or Leeds, or Birmingham, and the high-explosive payload will explode on contact with Earth—shattering lives, homes, and neighborhoods.

Exterior view of radio set SCR-584, a mobile radar unit. In: AAF Manual 105-101-2, Radar Storm Detection, by Headquarters, Army Air Forces, August 1945. Courtesy NOAA.

View of operating positions of the SCR-584 radar of the 124th AAA Battalion at Dymchurch, Kent, England, on 4 August 1944. Note the Plan Position Indicator (PPI) CRT at the right-hand operating position. National Archives and Records Administration, College Park.

This is why—driving a radar dish following the position of the missile’s reflection through the sky—these aligned motors do not only track the position of the bomb but also, through a web of relays and further connected motors, align and position the barrel of an artillery cannon, calculating and anticipating the missile’s trajectory. Triggering the launch of a supersonic artillery shell, the web of signals creates an improbable, microsecond encounter of destruction.2
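For readers inclined to think in code, the prediction at the heart of this encounter, stripped of its analog machinery, reduces to extrapolating a target’s position over a shell’s time of flight. The following is a deliberately simplified sketch; the function name and its constant-velocity assumption are illustrative inventions, not the M-9’s actual continuous, analog solution:

```python
# Simplified sketch of the prediction problem a gun director solves:
# estimate velocity from two radar fixes, then extrapolate the target
# to the moment the shell arrives. Constant-velocity assumption only;
# the actual M-9 smoothed noisy fixes continuously and consulted
# ballistic data for the shell's time of flight.
def predict_intercept(p0, p1, dt, time_of_flight):
    """p0, p1: (x, y) positions dt seconds apart; returns the aim point."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * time_of_flight,
            p1[1] + vy * time_of_flight)

# A V1 at ~150 m/s along x, fixes one second apart, 4 s shell flight.
aim = predict_intercept((0.0, 1000.0), (150.0, 1000.0), 1.0, 4.0)
print(aim)  # (750.0, 1000.0)
```

This two-fix, straight-line version only names the problem; the improbable microsecond encounter described above depended on performing such extrapolation continuously, in real time, in electromechanical hardware.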

This assembly did not have a single name or creator. It was a novel combination of interconnected technologies—among them the SCR-584 radar and M-9 gun director—produced by an equally interconnected set of academic and industrial organizations over the course of World War II. The work of its creation centered around the Radiation Lab (Rad Lab) at MIT but also involved the Sperry Gyroscope Company, the Ford Instrument Company, Chrysler, and Bell Telephone Laboratories, and the US Military’s own feuding branches and technicians.

The result was a ballet of systems, mechanisms, and organizations—but also human senses. A radar operator was a crucial observer of the cathode ray projection of the radar dish’s signals, sorting real “pips” from noise in the system and initiating the tracking process by “pip matching” the horizontal and vertical tracking mechanisms to the target’s signal.

Locks, Interlocks, and the Legibility of Control

The history of this electromechanical choreography leads back not just to the extensive logistics of air defense but also to a miniature world an ocean away, lodged in the space between two oceans. 

In 1914, a visitor to one of the three control rooms of the locks of the Panama Canal would have been confronted with a gargantuan miniature: a scale model of the enormous concrete canal lock in refined marble, brass, silk, and steel. As The New York Times (1914) explained:

The locks are operated by electricity and the controlling switchboards reproduce in miniature on the board, by synchronous indicators, every detail of operation so that the man in charge sees the complete movement of all gates, valves, fender chains, &c., reproduced before his eyes, eliminating any errors which might otherwise occur. And in addition to this the control switches are so interlocked that an improper sequence of operations is impossible.

At the time, electric power was a rarity, even in factories. With the world still mostly lit by fire, it had been just a year since the first manufacturing plant in the United States was converted from steam and mechanical power. Yet the far greater precision that electric controls offered over driveshafts and steam governors was already apparent. As the popular historian of the canal, David McCullough (1977, 601), concludes, “The chief virtue of electricity was in the degree of control it afforded.”

Gatun Lock Controls, 1922. Courtesy General Electric

Like the systems of World War II radar defense that evolved from them, the control systems of the Panama Canal were the physical manifestation of large-scale collaboration—in this case between General Electric (GE) and the US Army. A special division within GE focused exclusively on the technical problems of the canal and collaborated directly with the Corps of Engineers’ mechanical and electrical design staff. The teams traveled regularly between GE’s offices in Schenectady and the Canal Zone.

The result of this collaboration was the lock controls—one for each set of gates, filling the length and width of a bowling lane. Every part of the lock was represented in miniature detail and moved in synchronization with the relevant part of the full-scale seaway—from the fender chains that protected against runaway ships, to the layers of canal gates, to the valves rising and falling to course water through the lock’s massive chambers. The control for each part of the mechanism was installed alongside its modeled equivalent on the long assembly, and the position of each mechanism in the field was wired back into the scale-model indicators. Overlaid with a system of interlocking switches, the system’s electronics ensured that no control could be triggered outside the switches’ correct sequence of operation.

This system relied on a new technology: accurately synchronized motors. In a 1921 article for GE’s in-house technical journal, the lock controls’ inventor, Edward Hewlett (1921, 210), explains that the model “depends for its operation on a Selsyn generator and a Selsyn motor so constructed and interconnected . . . that every angular movement of the generator rotor is duplicated instantly by a similar movement of the . . . motor.”
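Hewlett’s description amounts to a feedback loop: the follower motor repeatedly reduces its angular error against the generator’s rotor. A minimal numerical sketch, with a hypothetical gain constant standing in for the actual electrodynamics of the AC machines, illustrates the convergence:

```python
# Illustrative sketch: a follower shaft that closes the angular gap to
# a reference shaft on each update, as in a Selsyn repeater loop.
# The gain constant is hypothetical; in a real Selsyn the restoring
# torque arises from the machines' interconnected stator windings.
import math

def step(follower_angle, reference_angle, gain=0.5):
    """One feedback update: move the follower a fraction of the error."""
    error = reference_angle - follower_angle
    return follower_angle + gain * error

def track(reference_angle, steps=50):
    """Run the loop until the follower has converged on the reference."""
    angle = 0.0
    for _ in range(steps):
        angle = step(angle, reference_angle)
    return angle

# Turn the "generator" rotor to 90 degrees; the follower converges.
print(round(track(math.pi / 2), 6))
```

The essential property, for both the canal model and the radar systems that followed, is that the error term is self-correcting: every angular displacement of the reference is duplicated, without a human in the loop, at the follower.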

An interior fire-control room on the USS Natoma Bay, February 1945 (RG 806, National Archives and Records Administration Still Photo Collection, College Park, MD).

What the Control Room Encloses

Once developed in Panama, this electromechanical framework for simulation and control expanded its reach. In 1918, the Bureau of Ordnance of the US Navy contracted with GE to adapt its canal-control systems to guide the position and elevation of guns on its newest battleships (Mindell 2002, 47). Edward Hewlett himself would spend three years “apprenticed” to the Bureau of Ordnance, adapting the role of the Selsyn motors from the scale-model canal-control system (where they primarily served to display and signal positions) to the precise movement of the pivoting guns aboard ships such as the West Virginia, Colorado, and Maryland (Mindell 2002, 48). There, what the Navy called “synchros” (as opposed to GE’s trademark “Selsyn”) connected to existing, analog “fire-control” computers to create a model of the ship, wind, sea, and target. Like the radar systems that evolved from them, these systems both simulated and shaped the reality around them.3 From the switches and dials of the canal locks to the sophisticated operation of the SCR-584/M-9 system, an idea was present from the outset. Electromechanical controls enabled a model of reality that directly shaped, in turn, the reality they modeled.

With this development of control systems came a perceptible architectural movement. At the beginning, like the canal-lock controls, such models were adjacent to, and visible from, the landscapes they influenced. But across their development in the 1930s and 1940s, their growing complexity—embedded in targeting computers and then radar equipment—was concealed inside interior bulkheads and land-based bunkers, both to shield the systems from attack and to spare their operators distraction in parsing their subtle interfaces. The better the system became at sensing and shaping reality, the more it began to distance itself from that reality.

The Task of Theory

As the interchange and overlay of self-synchronized motion became more complex, so too did the delicate connections and adjustments necessary to keep their operation accurate and responsive. Unexpected behavior crept into control systems as an emergent property of their growing complexity; an example is “hunting,” an unexpected, shivery back-and-forth movement in a large mechanism as it seeks equilibrium through over-complicated feedback loops.

As technicians developed increasingly sophisticated practical techniques to manage electromechanical controls, their expertise collected, along the way, into new theoretical frameworks. As historian David Mindell argues, however, these theories—information theory and cybernetics—did not presage the invention of control systems or the mindset they created. Rather, they emerged, fittingly, in a kind of constant feedback with the systems’ growing technological complexity (see Mindell 2002, 6; a related argument is found in Galison 1994).

Even as they followed the developing physical systems’ logic, the abiding goal of the theories of control and communication that emerged from the wartime milieu—Norbert Wiener’s Cybernetics: Or Control and Communication in the Animal and the Machine of 1948, and Claude Shannon and Warren Weaver’s Mathematical Theory of Communication of 1949—was to separate the signals conveyed by interconnected devices from their physical circumstance, or even presence, in a framework that was equally electronic, mechanical, or neurological (Shannon and Weaver 1949; Wiener 1948). Particularly when so liberated, these ideas were free to exert a deep influence on the rest of postwar reality.

 1 This essay is a version of a lecture on the history of simulation, given to the Antikythera Studio in Los Angeles in March 2023 in preparation for a memorable field trip to the US Army National Training Center and its battlefield simulations at Fort Irwin, CA. Its content connects both recent, unpublished research and interviews with material from several previously published books and articles. The latter include Spacesuit: Fashioning Apollo (de Monchaux 2011), Local Code: 3,659 Proposals About Data, Design, and the Nature of Cities (de Monchaux 2016), and “A Long Time Ago in a City Far, Far Away” (de Monchaux 2019). The author’s thanks go to Benjamin Bratton for the invitation to contribute this material to the Antikythera journal, along with Stephanie Sherman and Nicolay Boyadjiev for their shared insight and collaboration. Thanks to Haley Albert and Dasha Silkina for their patience and organizational efforts, and finally, enormous thanks to Daniel Gross and Joris Maltha of Catalogtree for their ever-delightful creative collaboration.

2 Controlled initially by a timed fuse set by a human operator, by late 1944 the shells had their own miniature radar sensor at the tip, a proximity fuse that would trigger an explosion when its trajectory coincided, for an instant, with that of the missile.

3 In a 1944 pamphlet, the Navy explains: “G.E. would call it a SELSYN; to Kollsman it is a TELETORQUE; the Bendix version is AUTOSYN; but in the Navy it is a SYNCHRO” (Navy Department 1944).

Pedestrian Motion

At MIT, from 1953, the framework of concepts that shaped and were shaped by electromechanical control systems would be used to trace paths through the city—pedestrian journeys through the delicately threaded neighborhoods of Boston, “dynamic in nature, a sequence over time” (Lynch 1955b). The authors of this experiment conducted, for example, “a photographic analysis of Copley Square . . . including a sequence for each approach at fifty foot intervals, a recording of the ‘walls’ of the space, and a time sequence of photos showing the rhythm of activity over a twenty-four hour period.” Such systematic documentation was supplemented by recorded interviews of pedestrians, both during and after the navigation of the square (Lynch 1955b, 9).

The project did not literally involve electromechanical inputs and outputs—though it was influenced by them and would influence projects that did. Rather, it reconceptualized the urban pedestrian as a radar operator, or even a guided device, receiving inputs from the environment and, when adequate signals were correctly interpreted, effectively navigating to a target. The project was conceived by two MIT faculty, artist György Kepes and architect and urban designer Kevin Lynch. It would appear in published form seven years later as a book authored by Lynch, The Image of the City (Lynch 1960).4

György Kepes, Kevin Lynch, Nichan Bichajian (photographer). One of a series of photographs documenting the north and south facades of Newbury Street at 50-foot intervals from Arlington to Berkeley Streets. Courtesy MIT Distinctive Collections.

Kepes, a Hungarian émigré and protégé of László Moholy-Nagy, had developed, in conversation with Lynch, a fascination with urban movement, terming it “the most dominant experience factor in an urban environment” (Kepes 1955a). Lynch, in turn, hoped that a systematic study of motion and perception in the city through methods developed with Kepes could address what he viewed as a disorienting, and disoriented, postwar landscape (Lynch 1955a).

When it appeared in 1960, The Image of the City did not include the photographic experiments in urban documentation conceived by Kepes, favoring instead exercises in map-drawing and interviews developed by Lynch. Progress reports through the 1950s either describe Kepes’s favored image-based techniques as “inconclusive” or stress the time and effort needed to document even a small portion of the urban environment (Lynch 1955b). Even so, The Image of the City is frequently cited in a direct intellectual lineage that flows from cybernetic theory, from Wiener, to Kepes, and then Lynch—particularly in highlighting the city as a system of signs and signals. Throughout his life, Kepes acknowledged his own intellectual debt to Wiener, crediting him with an “inner revolution” in his own thinking (ADC 1981). In the cybernetically inflected vision of Lynch and Kepes, the pedestrian is a continual interpreter of, and actor on, inputs generated by the urban landscape around them (Martin 2004). From the project’s original attempts to model the perception of the built environment, the published Image of the City settled on diagramming the city’s influence on its orienting subject in plan form. Yet the language of simulation remained—and would emerge, spectacularly, in the project’s later influence.

Camouflage and Cambridge

Just as in the history of electromechanical motor-control systems, the concerns of Kepes and Lynch can be traced to their direct experiences of movement, control, and administration in wartime. Kepes’s seminal 1944 book Language of Vision is exemplary. Instrumental to his invitation to launch a program in “visual studies” at MIT, the book’s analysis of pictorial form and arguments for resulting approaches to “coherence” in the postwar environment deeply shaped The Image of the City (Kepes et al. 1944). Kepes traced his first book’s origin to a single experience during his service on a “camouflage committee” advising the Army in 1942: He and other Chicago Bauhaus faculty joined a fateful tour above Chicago in a military airplane that Kepes credited with inspiring the book. “I became newly alert to the . . . environment,” he later recalled, “by looking at camouflage from the air, at what light patterns could do” (ADC 1981).

Lynch spent the years 1941 to 1946 in a more typical military environment—in the US Army Corps of Engineers, supporting the work of aircraft throughout the Pacific conflict, from Peleliu through the Philippines and Japan. Mustered out in 1946, Lynch capped an eclectic assembly of incomplete undergraduate training (Yale and Taliesin for architecture, RPI for engineering) with MIT’s new bachelor’s degree in city planning in 1947 (Southworth and Banerjee 1995). Decamping to Greensboro, NC, for a brief period as a city planner in public service, Lynch was called back to MIT by an offer to join the faculty in 1949. But, in an important sense, he still had not left the Army. During Lynch’s studies at MIT, after his return in 1949, and through the 1950s, the Institute was not just a military-tinged environment; it was in essential ways indistinguishable from the encampments that Lynch had lived in and helped construct in the previous decade. Uniforms were ubiquitous—MIT dropped its Reserve Officers’ Training Corps requirement for all male undergraduates only in the mid-1950s. Temporary military structures filled the central spine of the campus and housed the continued work of the Rad Lab and other military-industrial collaborations.

MIT's Building 22 with Building 20 in the background, 1944. Courtesy MIT Lincoln Laboratory.

Boston vs. the Army

In the context of these highly controlled environments—created and occupied by Lynch for a decade and a half before the book’s conception—Image of the City becomes especially notable for the care and analysis it brings to the structure of one of the most spatially complex, and least conventionally ordered, cities in North America—the conurbation of Cambridge and Boston. While Lynch’s extensive studies of Boston appear in Image of the City alongside seemingly equivalent analyses of Jersey City and Los Angeles, these latter studies were done relatively quickly (in months and even weeks) by Lynch’s students and protégés, using the extensive Boston study as a model. In contrast to Wiener’s brief foray into city planning—a chilling 1950 feature in Life magazine, calling for the dispersal of urban settlement into circular, highway-ring “life belts” against the possibility of nuclear attack—Lynch presented as his primary example a dense, urbane, and multilayered environment as far as one could imagine from military-industrial optimization.

This highlights a further, relevant irony. The asbestos-gray formations of MIT’s wartime buildings began as a regulated environment for classified research.5 But it would be their long-term, casual chaos and resulting creativity that would leave them world-renowned before their eventual demolition in 1998—a half century behind schedule. While commentators from inside and outside MIT have often cited the “urbane” flexibility of the structures as a key part of this tradition of invention, another key factor was the availability of inexpensive and reusable synchronized motors, relays, and other discarded equipment from the Rad Lab, which facilitated the development of MIT’s “hacker” culture (see Brand 1994).

4  Lynch emphasizes Kepes’s contributions in the introduction: “One name should be on the title page with my own, if only he would thereby not be made responsible for the shortcomings of the book. That name is Gyorgy Kepes.”

5  In fact, relative to most military and MIT laboratories, historian Robert Buderi argues, the “Rad Lab” was informal and flexible, and this helped create the environment of innovation essential to wartime efforts. See Buderi (1997).

Synchronized Star Fields

Working from 1966 to 1967 at MGM’s Borehamwood production facility northwest of London, special-effects technician Douglas Trumbull did not have access to the surplus radar equipment of MIT’s Building 20. But he did have access to the downstream residue of the river of parts and equipment that had flowed from the wartime radar effort into and around southeast England. So supplied, over eighteen months of feverish activity, Trumbull devised a unique series of wires, gears, tracks, automated film cameras, and the same self-synchronizing motors that had guided the SCR-584 and M-9 gun director against coastal attacks one hundred miles away. As Trumbull would later recall, it was ready access to the surplus control systems of air-defense installations, as well as the quality of wartime-trained machine-shop technicians, that helped fuel the technical innovation of the film that resulted—1968’s 2001: A Space Odyssey, whose effects were credited to Stanley Kubrick, the film’s director, alongside Trumbull, Wally Veevers, and Con Pederson.6

Trumbull was hired initially as an animator for 2001, and his first months in England were spent mass-producing the flashing displays of the Discovery spacecraft interior. But it was Trumbull’s familiarity with machining and control systems that proved essential to his growing role in the film’s complex visual effects. This was not the product of a formal education but of a childhood spent in the workshop of his father, Donald Trumbull, who had his own history in visual effects—including 1939’s The Wizard of Oz—before wartime found him a new career in LA’s burgeoning aviation industry, where he crafted pneumatic and hydraulic systems, as well as epoxy-based part manufacturing. His tinkering resulted in seventeen patents and an indelible influence on his son, which can be traced, quite literally, in the movement of stars and spacecraft in 2001.7

One of the only surviving photographs of the 55-foot Discovery model at Borehamwood. Courtesy Douglas Trumbull.

Creeping Toward Infinity

Kubrick’s insistence on clarity and detail across the 70 mm film frame meant that as many visual effects as possible needed to be produced within 2001’s “Cinerama” camera itself—each frame a long exposure through a ruthlessly small aperture. The Discovery was modeled at the scale of a building: fifty-five feet long, with extensive foundations—long concrete pilings cast into the floor of Borehamwood’s Studio 3—all to ensure its complete stillness. Surrounding the model was a deep-black velvet curtain on velvet-covered scaffolding—the inky void onto which star fields would be introduced by double exposure. The shot’s motion was provided by the camera, slowly driven by a Trumbull-engineered assembly of self-synchronizing motors, inching forward between countless individual frames.

Unlike previous, more primitive versions of in-camera visual effects, this electromechanical system enabled consistent, repeatable motions between takes. This was essential for the multiple passes needed to expose different parts of the film frame—incorporating separate exposures, for example, for the ship’s illuminated body and tiny scenes of interiors projected onto its windows. The system also had specific limitations. As revealed during the use of Selsyn motors in military control systems, the accuracy of a synchronized, analog motor-control system declines with increasing degrees of motion. As a result, Kubrick and Trumbull agreed on a simple, linear trajectory for the Discovery, produced by driving the camera motion on a long, straight track set obliquely to the giant model, propelled by a twenty-foot drive gear machined in Detroit and airfreighted to the set. Reporting on 2001’s technical innovations, American Cinematographer would term the result “The Mechanical Monster with the Delicate Touch” (Lightman 1968).

The long exposures and small aperture needed to capture the ship model in focus also limited the interval between frames: at any substantial speed, the lack of motion blur in the crisp individual film frames began to undermine the illusion of motion. Steady, languid progress became the film’s signature. A further contributor to the stately speed of the Discovery was the moving star field added to the dark velvet background of each frame before it was developed.8 With their sideways shift a subtle contrast to the movement of the spacecraft, these bright pinpricks (in reality a fine mist of Trumbull-applied airbrush paint on black cardboard) gave much-needed depth to the frame. Yet beyond a ponderous speed, the fine painted stars would unexpectedly shimmer and separate, giving an illusion of doubling or tripling across the transition between frames. In Trumbull’s words, the “speed limit” of the film was set (Lightman 1968).

Sausages and Stargates

Trumbull’s Selsyn-guided machinery would leave its mark on many more of the film’s key sequences. A so-called “sausage machine” enabled the synchronized compositing of separately recorded sequences on a film-animation stand—including the background starscapes to model shots.9 Apart from featured shots where the Discovery moved obliquely, it was the “sausage machine” that slowly moved 8x10" photographic transparencies of models and planets across hundreds of film exposures, producing motion in languid elevation.

A young Douglas Trumbull attends to the “Jupiter Machine.” An 8x10 plate camera (left) took a six-hour exposure of the machine, which slowly moved the thin, white, curved projection screen through a full swing of movement with one Selsyn motor, while two interconnected motors projected shifting transparencies onto the screen from the upper and lower projectors shown. American Cinematographer, 1968.

An even more complex device was used to create the film’s celebrated “stargate” sequence. When used to layer the motion of camera and rear-projected transparency through a single vertical slit, an interconnected series of synchronized motors would produce an architecture of streaking color on the frame, depicting astronaut Dave Bowman’s entry into the enormous, Jupiter-orbiting monolith. A related mechanism was used to produce images of Earth and the red planet; the “Jupiter Machine” synchronized the projection of a semicircular slit of changing light with the motion of a recording camera to produce illuminated spheres that played the part of heavenly bodies. In each case, the film’s reality was just that—recorded and visible only on the frame of the film itself, sometimes assembled, over more than a year, on a single frame. To make this complex assembly possible, film stock was deeply refrigerated between exposures to minimize deterioration (Lightman 1968; see also Benson 2018). Across the landscape of the film’s surface, Trumbull’s electromechanical ingenuity used coordinated motion in the real world to create experiences that existed only on a 70 mm strip of layered silver halide and acetate.

Trumbull’s aim was to depict new kinds of images more than to model technical systems—but these two ideas were quickly bound together. When American Cinematographer editor Herb Lightman described 2001 and its elaborate special effects to his own technical audience of readers in 1968, he took pains to distinguish between science fiction and the “science fact” that resulted from the film’s combination of technical research and painstaking visual effects. Deploying the language of simulation, he explained that “the results . . . are an accurate forecast of things to come” (Lightman 1968).

 6 Douglas Trumbull in conversation with the author, October 24, 2021. Video recording.

7  After 2001, Donald Trumbull, who became known as “Pappy” to distinguish him from his son, would join Douglas Trumbull’s team for Silent Running, and then that of John Dykstra (see below), for whom he worked until the 1990s, first at George Lucas’s Industrial Light and Magic (ILM) and then at Dykstra’s own firm, Apogee Productions.

8  For this, and other complex compositing that could not be done in-camera, the Technicolor master was separated into three black-and-white exposures, so-called “yellow-cyan-magenta masters,” which were recomposited together with the additional exposure. A team of so-called “blobbers” created frame-by-frame masks for this process, and compositing was done, in turn, by a Selsyn-guided mechanism.

9  In 1968, Trumbull wrote, “To produce exactly the same movement on each successive exposure, all movement drives and film advances were [S]elsyn synchronized. The mammoth device designed to produce this effect we nicknamed the ‘Sausage Factory,’ because we expected the machine to crank out shots at a very fast rate. This turned out to be wishful thinking however, and shooting became very painstaking and laborious work” (Trumbull 1968).

Apollo’s Interior

2001’s veracity may have centered on the film frame, but it touched institutional realities as well. The precipitous drop in NASA funding that followed the assembly of Apollo’s hardware in 1966 led to staff reductions and frustrations at the agency—but allowed Kubrick to hire consultants directly from NASA’s advanced, interplanetary projects. Frederick Ordway III (Kubrick’s head technical consultant) and Harry Lange (designer of much of the Discovery’s hardware) were lifted directly from Wernher von Braun’s staff at the Marshall Space Flight Center. Their design for the Discovery was based on concepts in the nuclear propulsion of deep-space vehicles developed at NASA’s Lewis Research Center under the direction of engineer Harold Finger. While Finger never worked for Kubrick, he was caught up in the same circuit between aerospace and civic design encountered elsewhere in this history: he would leave NASA in 1968 to become the first director of research and technology for the new cabinet department of Housing and Urban Development (HUD), working under its founding secretary, George Romney.10

Walter Cronkite at the HAL 10000. Courtesy Joel Banow Collection, National Air and Space Museum, Smithsonian Institution (JBC-SI).

Such overlapping circles of influence expanded into the public broadcast of the Apollo missions. When most of the United States watched the landing of Apollo 11 on the moon in July 1969, they did so on CBS. It was Frank Stanton, CBS’s president, alongside scientific luminaries like MIT’s Jerome Wiesner, who first advised Lyndon Johnson, then vice president and head of John F. Kennedy’s National Space Council, on a space achievement that could compete with Yuri Gagarin’s 1961 orbit of Earth. Johnson recommended a manned landing on the moon as a singular goal with “great propaganda value” (The Papers of President Kennedy 1956–1961).

And our circles grow together. On the CBS soundstage, venerable host Walter Cronkite presented from a special set designed by Douglas Trumbull; the CBS staff nicknamed the complex and mercurial set piece “HAL 10000,” a presumed successor to 2001’s psychopathic AI, HAL 9000. The set’s screens and surfaces featured displays and simulations that reflected both the elaborate training for the mission and the unexpected realities of the landing’s broadcast, drawing throughout on Apollo’s intensive use of simulation.

Aloft on Organ Pipes

The simulators of Apollo have as their first point of origin an improbable lineage of pipe organs and pianolas. These mechanisms were repurposed by Edwin Link, scion of a Binghamton, NY, pipe-organ company, to create what he patented in 1931 as “An apparatus . . . in simulation of an airplane” (Link Jr. 1931). Link successfully marketed his “trainer” to the Navy and Army Air Forces in the 1930s, and by World War II the devices were a ubiquitous part of military flight training. Yet the complex organ-pipe systems of such devices meant each trainer needed to be kept in “tune” manually, and each aircraft simulated needed to have a dedicated pneumatic system to capture its motion and behavior. As a result, the wartime research establishment explored improving pneumatic simulators with the same synchronized controls as guided artillery.

In 1943, the head of the Navy’s “Special Devices Desk” in charge of simulators (whose use was termed “synthetic training”), Commander Luis de Florez, went one step further, commissioning a group of MIT faculty to study the creation of an electronically driven trainer that could be switched between the behaviors of different aircraft—or even anticipate the behavior of aircraft not yet constructed (Redmond and Smith 1980). In 1944, MIT’s Servomechanisms Laboratory—a group whose pioneering work earlier in the war had solved fundamental problems of feedback and tracking in the application of self-synchronizing motor controls to aircraft defense—was commissioned to produce the device (see Mindell 2002, 207–30). In December, laboratory director Gordon S. Brown assigned the construction of “Device 2-K, Aircraft Stability and Control Analyzer” (ASCA) to one of his associate directors, Jay Forrester (Redmond and Smith 1980, 13).

A Link Trainer in 1945 at the Navy training facility in Pensacola, Florida. A stylus marks the virtual position of the trainer (at rear) on the desk in foreground. National Archives, College Park.

Sowing Whirlwind

By the time Forrester stepped down from leading the project over a decade later, his allotted task had changed completely. Yet it had also returned to the Lab’s air-defense roots. In 1946, Forrester abandoned the analog control methods that had defined the Servo Lab’s expertise in favor of new techniques of digital computing; at the time the province of punch cards, digital methods had never been engineered for the real-time computation needed for simulation (Redmond and Smith 1980, 41). Two years later, the surplus B-24 cockpit that had been set aside for the simulator’s use was itself scrapped; the project had a new name, “Whirlwind,” and a new focus—the real-time digital computer alone (Redmond and Smith 1980, 60).

After another two years, with the Navy’s funds and patience nearly exhausted, Forrester found new funding for the project in the Air Force’s Air Defense System Engineering Committee. This group was charged with protecting the United States against the perceived threat of Soviet nuclear bombing. Now, the real-time computer would become the core of America’s continental defense, the so-called Semi-Automatic Ground Environment, or SAGE (Redmond and Smith 1980).

Harvesting SAGE

Deployed at twenty-three sites across North America from the 1950s to the 1980s, SAGE was built to protect the United States against airborne attack by Soviet bombers. Each installation interpreted radar signals from across a large sector of the country to create a digital map of moving aircraft and to sort and identify potentially hostile targets. Through its digital interface, SAGE operators interacted with this digital model of the airspace and could deploy surface-to-air defenses against any potential target. Initially, these commands traveled by automated teleprinter to surface-to-air missile batteries; later in the system’s life, the launch systems were connected directly to SAGE’s consoles.

As a system designed to protect against bomber attack, SAGE was rendered practically obsolete as soon as the Soviet Union began to produce a ballistic missile that could reach the United States, the R-7, in 1956: the speed and near-vertical approach of the ballistic missile rendered it invulnerable to radar-based defense.11 Nevertheless, the system remained operational, including all its computers and mainframes, until the 1980s.

SAGE Consoles at McGuire Air Force Base, New Jersey, 1957. NARA, College Park.

Nervous Systems

SAGE stored and tracked its aerial targets as coordinates in binary digits, recorded in the polarity of thousands of tiny, donut-shaped magnets in a three-dimensional grid of wiring—the antecedent of all real-time computer memory today. These recorded positions were connected to keyboards, displays, and teleprinters across miles of wiring and hundreds of tons of electronics. The only movement across the system was that of electrical signals. The 1950 report recommending the creation of SAGE took pains to explain the result not as an object, but as a “system . . . [like] the ‘nervous system’ . . . a structure composed of distinct parts so constituted that the functioning of the parts and their relations to one another is governed by their relation to the whole” (Hughes 1998, 21, quoting Air Defense Systems Engineering Committee 1950, 2–3).

Each SAGE installation had at its heart an expanded version of Forrester’s Whirlwind computer, constructed by IBM: the AN/FSQ-7. This machine was the world’s first large and reliable computer capable of real-time calculation. Holding nearly 50,000 vacuum tubes alongside its thousands of tiny memory cores, it weighed almost 300 tons.12 Surrounding, servicing, and interfacing with the rack-mounted components of this “mainframe” were hundreds of smaller, connected “console” computers, developed at the Rad Lab’s successor, Lincoln Laboratory, in Lexington, MA. At each console, human operators would identify targets on a cathode-ray screen with a light gun—just as their wartime predecessors had “pip-matched” in the SCR-584. However, while the “pips” of the radar system were signals coming directly from a microwave receiver, the signals on the screen of the SAGE console were only an abstracted interface, created by the computer from the interpretation of data from across multiple radar installations.

Here, on the SAGE console screen, the ghostly perception of wartime radar gave way to a fully digital phantom—albeit one that represented the most devastating and un-ephemeral prospects of nuclear violence. The system reduced all the complex movements of a conflict to a string of sixteen binary digits—electronic fragments—which could be manipulated, displayed, and selected for action on screens inside a massive, windowless interior.13 Crucially, this meant that nuclear attacks were constantly simulated for training, indistinguishable from actual Armageddon. The resulting toll on SAGE’s young Air Force operators was recorded in graffiti visible on surviving consoles: “Don’t you feel useless?” asks one. “I can’t stand it any longer,” concludes another.14

The last movie Douglas Trumbull directed, 1983’s Brainstorm, depicts a relevant scenario: a simulation technology so comprehensive it can record and depict the entire human sensorium. When the film, like Silent Running (discussed below), proved a commercial failure, Trumbull found a new career designing theme-park rides and other simulated entertainments. In this work, he would take particular care that mismatches in the information flowing to human subjects did not cause discomfort, or even alienation from the experience.15 In the operators of SAGE, however, we see the first inkling of the opposite challenge—the vertigo caused when a simulation cannot be distinguished from the terrifying reality it aims to represent.

The LEM Crew Procedures Simulator, Houston. NASA Image S67-33955, Courtesy Johnson Space Center

Space Race

The horror of nuclear weapons—underlined by their grisly simulation—produced the space race. Unlike other proxy conflicts of the Cold War, this battle deployed even the actual hardware of nuclear delivery—but with astronauts and cosmonauts for warheads and a battlefield inside the popular imagination. In this battle, the United States’ prestige depended on flawlessly executing a manned landing on the moon’s surface—which could not, by definition, be practiced before the fact. The result was a massive investment in simulation.

The simulators of Apollo and its preparations would combine both the physical legacy of “synthetic” trainers and the digital legacy of SAGE. They would become especially crucial because of a fateful decision taken early in the design of Apollo’s systems: for all of the moon landing’s anticipated automation, the astronaut would be “in the loop” for any emergency and could opt for manual control during the most critical moments of rendezvous and landing (Mindell 2008, 105). As a result, for every physical manifestation of the Apollo program—every capsule and control environment—there existed a proliferating variety of virtual analogs. In the words of Apollo astronaut Michael Collins, these artificial environments became “the very heart and soul of the NASA system” (Collins 1974, 191).

For the command module and lunar lander, simulators were built by the same Link company that had first supplied “synthetic trainers” to the Navy in the 1930s. Far more sophisticated, the Apollo simulators incorporated digital electronics, gimbals, and hydraulics into massive, inhabited assemblies. Poised around and above their interiors were complex optical display systems, incorporating virtual parallax, transparencies, miniature cameras, closed-circuit television, and models of star fields and the lunar surface. By 1966, some versions of the Lunar Excursion Module (LEM) simulator eschewed extensive physical models in favor of one of the earliest three-dimensional digital displays, built by GE and capable of showing a square-edged abstraction of craters and lunar maria.

Twin Spaces

To better integrate the hundreds of hours of simulation needed for each mission, three of these simulators were located adjacent to and wired into Houston’s Mission Control. There, what often seems a singular environment was really two stacked, identical control spaces in MSC Building 30, with one housing the simulation of upcoming missions in the Gemini and Apollo sequence even while actual launches were being supervised in its twin (see de Monchaux 2011, chap. 12 and 18).

Starting with Gemini, the IBM 7094 mainframes running Mission Control were programmed to run calculations on active missions and simulate upcoming missions at the same time. When IBM’s new System/360 mainframes replaced the 7094 for Apollo, the computers would not only simulate their own operation but would contain virtual versions of other computers—from the circuits girdling the vast Saturn V rocket to the small Apollo Guidance Computer built by MIT and Raytheon. And it worked. “Throttle down. Better than the simulator,” remarked Buzz Aldrin just minutes before Apollo 11 landed on the lunar surface (National Aeronautics and Space Administration 1969, 313). The focus on safety and mission success that created NASA’s simulators indirectly produced another key movement in this history: the need to simulate the lunar landing for the global television audience.

Neil Armstrong stepping onto the lunar surface. Screen capture from television broadcast. Courtesy Joel Banow Collection, National Air and Space Museum, Smithsonian Institution (JBC-SI)

Safety Signals

In 1961, when design work on Apollo’s communications infrastructure—the Unified S-Band, or USB—commenced, engineers allocated only 500 kHz for TV broadcast, a fraction of the bandwidth apportioned to safety-related signals like telemetry or biomedical monitoring, and only a tenth that of an earthbound TV signal. It would be several years before a “desperate” NASA administration realized that the decision profoundly compromised its ability to broadcast what had been conceived from the outset as a televised event.

This myopia had two consequences. First, while there was no inherent technical reason that a high-resolution color TV image could not be transmitted from the moon, the images viewed by millions on earth were grainy and ghostlike, produced by special, low-resolution, low-framerate cameras developed by the Radio Corporation of America (RCA) and Westinghouse; only the black-and-white RCA version was ready for Apollo 11. Second, CBS’s production team was so unconvinced by the visual quality of the broadcast graphics provided by NASA’s press office that it decided to “go it alone” and prepare its own simulated visuals for the TV broadcast of Apollo—where “going it alone” ironically required commissioning NASA’s own technical suppliers of simulator content.16

Which brings us back to HAL 10000 and the CBS lunar stage set. Coming only several years after Walter Cronkite’s evening news had expanded from fifteen to thirty minutes, the thirty-one-hour Apollo 11 broadcast was an extended visual feast—with a healthy serving of simulation. This included views of actors in space suit costumes on a stage set at LEM manufacturer Grumman in Bethpage, Long Island, with mining slag standing in for the lunar surface. Broadcast views also combined transparencies and mechanisms sourced from NASA contractors with new techniques in noise reduction and chromakey (“greenscreen”) technology for live broadcast. A prime example was the view broadcast as the lunar module Eagle descended to the moon’s surface, which combined a transparency of the rising earth, another of a hovering lunar lander, and a scrolling, latex-belt lunar surface—all under the prominent caption “CBS NEWS SIMULATION.”

A still from the CBS lunar broadcast. (JBC-SI)

The latex rubber moonscape belt used in CBS simulations. The belt was cast from the same molds as the landscapes of NASA’s Apollo simulators. (JBC-SI)

Robert Wussler in the Control Room of the CBS Lunar Broadcast (JBC-SI)

CNN and Capricorn One

The partially simulated broadcast produced its own, new reality. Bob Wussler, the director of CBS’s Apollo broadcast, would go on to cofound CNN, assembling much of our current media reality from the blend of interviews, prerecorded clips, and days-long broadcasts developed for Apollo. CBS’s Apollo broadcast had another, stranger legacy as well: during its transmission, young assistant producer Peter Hyams observed that “There was one event of really enormous importance, that had almost no witnesses . . . [only] a TV camera.” Hyams’s resulting screenplay, depicting a (secretly) simulated Mars landing, would become the 1978 movie Capricorn One, indelibly driving the conspiracy theory that the moon landing was produced on a soundstage (The New York Times 1978).17 In a 2022 poll, more than a quarter of Americans responded that they did not believe NASA astronauts landed on the surface of the moon, or that they were “unsure” that they did (Hamilton 2022). In this context, as in much of contemporary culture, the idea of easily simulated reality quickly compromises civic discourse—the moon landing the ultimate “fake news.” Yet, in the same decade as this seed of skepticism was planted, there existed an urban design effort that was deeply informed by technologies of simulation and envisioned their contribution to improved civic discourse. It would have its own, unexpected effect on Hollywood as well.

10 For more on this history, see Light (2003).

11  The same rocket was used to launch Sputnik in 1957 and, in modified form, Yuri Gagarin in 1961.

12 The initials stood for Army Navy/Fixed Special eQuipment; see Ulmann (2014).

13  In the prototype Whirlwind, for example, the azimuth of a target was converted into an eight-bit value, or a resolution of ~1.4 degrees, and range data into a seven-bit value, or up to 127 miles, “padded” with an additional zero bit to match the length of the azimuth. See Ulmann (2014).

14  Graffiti recorded and displayed at the Computer History Museum, Mountain View, CA.

15  Douglas Trumbull in conversation with the author, October 24, 2021, video recording.

16  Mark Kramer, CBS Director of Special Events and head researcher, Apollo Broadcasts, in conversation with the author, New York, December 8, 2002, audio recording. Also, Joel Banow, CBS Apollo Broadcast Director, interview, February 1, 2000, Joel Banow Collection, National Air and Space Museum Archives, Suitland, MD.

17  The success of Capricorn One would give Hyams a new career in film, which included cowriting (with Arthur C. Clarke) and directing a 1984 sequel to 2001: A Space Odyssey, 2010: The Year We Make Contact. The sequel is notable for its (by contrast to 2001) dialogue-heavy script and a perceptible difficulty replicating the flawless sets and interiors of the spaceship Discovery—to which its characters return—because of its less gravity-defying budget.

Marin Meandering

In 1974, flooded by synchronized fluorescent stage lights in a hot, windowless room under the main staircase of UC Berkeley’s Wurster Hall, a twenty-six-foot hexagonally profiled gantry inched along a set of parallel, fifty-six-foot rails. A “gondola” at the center of the gantry trailed a long, thin optical probe, its thin tip barely grazing a cardboard model roadway below. Every one and a half seconds, the probe moved a fraction of an inch on room-spanning, low-vibration bearings. At each pause between movements, a 16 mm Arriflex camera would record a single, long exposure. Sped up, the film reproduced precisely the visual field of driving down the roadway—not just looking ahead but circling the view around corners and toward the hillsides enclosing a miniature valley.

An overall view of the Berkeley Environmental Simulation Laboratory (BESL), showing the complete overhead gantry and a partially constructed model of Marin County, c. 1972. Donald Appleyard Collection (DAC), Bancroft Library, University of California

A detail view of the optical probe and the 1"=30' model (DAC)

The roadway and its adjoining buildings had been built two years previously. They formed part of an elaborate model of two square miles of Marin County at 1"=40'0" scale. Though sheltered in an architecture school, the model resembled in its precision and materials a model train set, with an “emphasis on foreground detail” (Institute for Urban and Regional Development 1975). Tiny model cars populated the curbside; signs from supermarkets and gas stations were photo-etched in thin, colored plexiglass. And around the horizon, slopes of sawdust stood in for the sun-bleached grasses of the Marin headlands.

Road Shows

While more sophisticated, the model was a direct descendant of the experiments that Gyorgy Kepes and Kevin Lynch had undertaken at MIT in the 1950s, filtered through new possibilities in digitally driven control systems. The idea of reproducing urban motion in film—particularly new kinds of roads and highways—had existed since the earliest plans for Kepes and Lynch’s study of urban perception. While the simulation of perception had given way in The Image of the City to its plan-based representation, the project’s original seed of inspiration would flower on these model hillsides.

“With a better understanding of the motion picture as a tool of ‘visual recording,’ ” Gyorgy Kepes had written in 1955, “one can hope to tackle . . . new aspects of our present urban environment” (Kepes 1955a, 4). In 1958, Lynch’s student Donald Appleyard began work on a separate study of urban highways, focusing on multiple techniques to picture and analyze the cinematic experience of automotive travel through the city. The resulting book, The View from the Road (1964), featured both the use and an explanation of a new language of graphic notation for speed, view, and direction, as well as a flip-book-style filmstrip of Appleyard’s drawings along one page corner (Appleyard et al. 1964). By 1966, Appleyard was able to conduct tests with a true, small-scale film setup, including an Optex “modelscope,” a 16 mm Bolex film camera, and bicycle and model train tracks (see Young 1968).

An early Lynch/Appleyard film test showing a model of Storrow Drive. Courtesy MIT Libraries.

Cable and Community

By 1967, Appleyard was a new faculty member at UC Berkeley, where he declared his goal “to develop a way of simulating extensive urban areas on small-scale models and recording journeys through them” (Appleyard 1967). For Appleyard, this goal was important for the same reason Luis de Florez had sought to simulate unbuilt airplanes in 1944—to allow new designs to be vetted and evaluated before the expense and investment of construction. As Appleyard explained in a letter to simulator pioneer Edwin Link’s younger sister, the result would be a “Communications Model of the Planning Process” in which “proponents,” “opponents,” and “evaluators” would be connected by “simulation media” through “questions, evaluations, decisions, perceptions and [again] evaluations” (Appleyard 1974, 1977, 47). In the context of Berkeley in the late 1960s and early 1970s, Appleyard believed such an approach was essential to remove undue political concerns from the planning process. In his vision of the future, the use of urban simulation would expand and combine with new technologies like two-way cable television to create a full “communications model of community participation” (Appleyard 1977, 47). As Appleyard imagined, we would come to regularly confront simulated versions of our daily reality—but in a controlled and calibrated manner, to envision and shape a collective process of urban design. While simulation in a military-industrial context was coming to represent a lack of agency, Appleyard in 1960s and 1970s Berkeley sought to demonstrate its possibilities for democratic cocreation.

To facilitate this vision, Appleyard proposed and received the 1973 National Science Foundation (NSF) grant “Environmental Dispositions and the Simulation of Environments.” Totaling $348,000 (or over $2.5 million today), the funds paid for the enormous Wurster Hall gantry—or, as it became officially known, the Berkeley Environmental Simulation Laboratory (BESL). As outlined by Appleyard and his coauthor, psychologist Kenneth H. Craik, the grant proposed not just the creation of an elaborate simulation apparatus but also its validation through psychological research. “Research participants,” it was proposed, would arrive in Marin County by car from Berkeley. After arrival in the project area, they would be subjected to either a real drive or a simulated journey along the identical route. Afterward, “extensive debriefing sessions will examine how the travelers describe the environment they pass through, what they notice in it, what they recall of it, how they feel about it, how they evaluate elements in it, and how they mentally organize its subareas and features” (Appleyard et al. 1973, 2). Only when the precise difference between our experience of a real and a simulated version of the same architectural context was “calibrated,” Appleyard proposed, could urban design proposals be scientifically considered by prospective publics.

At the time of the NSF grant, Appleyard was not satisfied with any of his efforts at producing simulated driving footage. His early Boston experiments were “jerky” and “unrealistic.”18 A surplus driving simulator purchased from Yale and the Connecticut Department of Transportation produced only a single, unstable sequence in 1970 (Institute for Urban and Regional Development 1975). With the NSF grant, Appleyard was determined to produce his own, superior filming apparatus from scratch. With 2001 still the gold standard of visual model photography, Appleyard arranged a meeting with Douglas Trumbull in fall 1973.

Silent Signals

Trumbull was at a professional loose end. After obtaining wide admiration (if not, in his view, appropriate screen credit) for his contributions to 2001, Trumbull had produced visual effects for lower-end science fiction films: the Ringo Starr-starring alien sex comedy Candy and the (moderately) higher-brow microbiological caper The Andromeda Strain. On the strength of these outings, he was commissioned by Universal Pictures to direct his own feature, Silent Running (1972).

An outer-space eco-fable with Bruce Dern as an isolated astronaut, accompanied by original Joan Baez songs and robot mascots operated by bilateral amputees, Silent Running imagined Earth’s remaining supply of plants in isolation beyond the rings of Saturn. The film was a low-budget, technical tour de force. Trumbull and his team built and filmed a twenty-four-foot model of Dern’s starship, the Valley Forge, with an ingeniously updated version of the linear camera-control system from 2001. Crafted by Trumbull and a young visual effects supervisor, John Dykstra, the system combined front- and rear-projection with linear motor control. Instead of synchronized Selsyn motors, however, the Silent Running rig paired newer, incremental stepper motors with custom transistor-based controls, enabling an additional axis of movement at the same stately pace of interplanetary motion that had defined 2001. For all its technical wizardry, however, Silent Running was a commercial flop. As a result, when he spoke to Appleyard in 1973, Trumbull was without work for himself or his assistants. Reluctant to leave Los Angeles, Trumbull recommended the young John Dykstra to Appleyard. Dykstra would spend the next year in Berkeley devising the control system for Appleyard’s gantry-mounted camera.19

Maneuvering Minicomputers

At Berkeley, Dykstra looked for help. He found it in the staff of an aging Federal Aviation Administration landing simulator located seven miles north at the UC’s Richmond Field Station. Constructed in 1961 using synchronized motor systems and a full-size cockpit on overhead tracks, the simulator building sported an enormous, slanted profile covering a 1:10-scale model runway. As a war-surplus cockpit slid down tracks hung from the building’s roof, the space could be filled with fog to simulate low-visibility landings and test a variety of visual safety systems. The system’s “Principal Electronic Technician,” Karl Mellander, had already consulted with Appleyard on the NSF proposal; Dykstra would collaborate even more closely with two of the simulator’s staff, Jerry Jeffress and Alvah Miller.

While an analog, Selsyn-controlled system was extensively studied for BESL’s gantry, the system Dykstra, Jeffress, and Miller developed would draw instead from the digital legacy of SAGE.20 Beginning in 1955, members of Forrester’s team at MIT’s Lincoln Laboratory had experimented with a series of SAGE’s components—a combination of magnetic-core memory, the scale and modular construction of SAGE’s small console machines, their interactive displays, and newly available transistor electronics. The (relatively) small and capable machines they built, the Lincoln Labs TX-0 and its successor, the TX-2, inspired Forrester deputy Ken Olsen and his lieutenant Harlan Anderson to form the Digital Equipment Corporation (DEC) in 1957. From 1960 into the 1970s, DEC marketed a series of increasingly sophisticated so-called “minicomputers,” the PDP series. (The term “minicomputer” was a deliberate play on the contemporary “miniskirt”: small, modern, and self-consciously revolutionary; see Ceruzzi 2003.)

If IBM’s mainframe dominance was grounded in its experience with the massive AN/FSQ-7, then the DEC PDP sustained the interactive modularity of SAGE’s consoles (Ceruzzi 2003, 53, 140). Where IBM dispatched its own technicians to perform maintenance and upkeep, DEC instead delivered reams of newsprint-printed manuals, instructions, and suggestions for modifications. Appleyard’s own reflections on the purchase of a computer for the project reveal the two firms’ contrasting approaches: “IBM was not interested in our ‘small requirements,’ our funding being too modest from their point of view. DEC on the other hand continually contacted us and helped with design questions” (Institute for Urban and Regional Development 1975, 36).

Driving Targets

When the BESL system was completed in 1974, its operator would first use a downward-facing TV camera to record a two-dimensional “rail” along one of the model’s roads, centering on the roadway using a “bombsight” graphic that quietly recalled the system’s military genealogy. This “rail” would be recorded to a disk drive as a series of 16-bit x-y coordinates (a sensor on the bottom of the modelscope would ultimately control the z-coordinate, keeping the eye, as it were, on the road). Using a second series of controls, the speed and orientation of the camera would be added, including its vertical position, rotation, and angle of view, all centered on the “pupil”—the technical term for the very center—of a tiny lens sitting just one-tenth of an inch above the model’s surface. This second set of controls was crucial to the idea of vehicle-based simulation, recording not just the pathway of the vehicle but also the view cone of the driver, swiveling to anticipate turns and new trajectories.

A view of the BESL control room from the system's 1975 manual (DAC)

After each parameter was set, the PDP-11 would calculate the position and orientation of the camera for every frame of a 16 mm film of up to several minutes, delete all intermediary points from the trajectory to save memory, and record the result to disk. Then, reminiscent of Trumbull’s work on 2001, the long process of exposure began. To minimize vibration, the camera probe would crawl from one coordinate to the next, then pause to quiet the enormous mechanism completely before exposing a single frame of film for one to one and a half seconds through the tiny lens. To obtain an exposure of even this length, 100,000 watts of lighting flooded onto the model’s surface from tubes that coated the room’s ceiling. AC power was delivered in three phases across adjacent tubes to even out the 120 Hz flicker of each individual light source.
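The move-pause-expose cycle described above can be illustrated in a few lines of modern code. This is a hypothetical sketch, not the BESL’s actual PDP-11 software: the hardware hooks (`step_to`, `open_shutter`, `close_shutter`) and the default timing values stand in for stepper-motor and camera controls that the sources describe but do not document.

```python
import time

# Hypothetical sketch of BESL's move-pause-expose filming cycle.
# The callables stand in for the real stepper-motor and shutter controls.

def film_trajectory(frames, step_to, open_shutter, close_shutter,
                    settle=1.0, exposure=1.25):
    """Expose one frame of film at each precomputed camera pose.

    frames   -- list of (x, y, z, pan, tilt, roll) tuples, one per frame
    settle   -- seconds of stillness to let the gantry's vibration die down
    exposure -- seconds of open shutter ("one to one and a half seconds")
    """
    for pose in frames:
        step_to(pose)          # crawl to the next precomputed coordinate
        time.sleep(settle)     # quiet the enormous mechanism completely
        open_shutter()
        time.sleep(exposure)   # long exposure through the tiny lens
        close_shutter()
```

Sped up on playback, a film exposed this way reproduces continuous motion from a sequence of perfectly still, long-exposure frames.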

When filming of the Marin sequence was complete in 1974, Dykstra returned to LA, and Appleyard and Craik began the process of studying and “validating” the efficacy of their simulator-based approach with experimental users. No comprehensive review of the study was published before Appleyard’s untimely death in a traffic accident in 1982. But the BESL simulator would have an astonishing afterlife in global popular culture, as the technologies developed by Dykstra, Jeffress, and Miller at Berkeley rapidly returned to the world of Hollywood special effects.

The final Marin film, courtesy Bancroft Library

18  Peter Bosselman in conversation with the author, November 15, 2024, audio recording.

19  John Dykstra in conversation with the author, April 4, 2019, recording.

20  John Dykstra in conversation with the author, April 4, 2019, recording.

Digital Dogfight

“Every time there was a war movie on television,” reflected George Lucas of his evenings in the early 1970s, “I would watch it—and if there was a dogfight sequence, I would videotape it” (Horton 2015). Eventually, Lucas “had twenty to twenty-five hours’ worth of videotape. . . I condensed that down . . . [and] transferred about an hour to 16 mm film, and then I cut that down to about eight minutes” (Rinzler and Lippincott 2007, 50). Before he had even completed the script for the movie that would become Star Wars, this eight-minute sequence of whirling airplanes represented his goal for the visual language of the film’s space battles—unachievable by the special effects of the time. Throughout 1975, Lucas and his Star Wars producer, Gary Kurtz, showed the footage to a series of special-effects artists, each of whom demurred from attempting to reproduce the sequences as space battles. The quest ended, however, with John Dykstra, newly returned to Los Angeles from Berkeley.

Dykstra’s memory of the meeting is vivid: Lucas “wanted to be able to create the illusion of photographing spaceships, flying like WW2 fighters, from a handheld camera . . . not just showing what it looked like, but what it felt like. . . I said I could do it,” Dykstra recalls laconically, “but only because I knew it could be done.” 21

John Dykstra in an undated double-exposure showing part of the PDP-11 motion-control system and an X-wing fighter model manipulated by a motion-control arm.

A view of the Dykstraflex system filming the Death Star trench scene, itself based on the 1955 film The Dam Busters, which used a combination of archival and then-new airplane footage.

Dykstraflexing

Over the course of 1975 and 1976, in a warehouse in Van Nuys, Dykstra and his former Berkeley colleagues Miller and Jeffress would adapt the control system of the BESL to produce footage for Lucas’s new special-effects company, Industrial Light and Magic (ILM). One of their first employees was Douglas Trumbull’s father, Don.

Instead of an overhead gantry, they used a PDP-11 and stepper-motor control system similar to the BESL’s to slowly move a large, articulated camera arm, pivoting an enormous 35 mm “VistaVision” camera around the same center-lens “pupil” that had formed the datum of the BESL modelscope. For the more complex sequences, up to sixteen channels of computer-controlled motion were combined.

A crucial innovation was the creation of motion blur. This was added by leaving the camera shutter open for a fraction of the motion between frames. Unlike the stately pace of 2001, the blur enabled the rapid, twisting shots of Lucas’s real-world dogfight reel to be credibly reproduced. But the sequences’ ultimate realism comes from the insight explored by Appleyard’s driving simulation: the camera following our gaze as much as the motion of the spacecraft, placing us, improbably, in the space between tumbling worlds.
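The open-shutter technique can be put numerically. In the sketch below, each frame’s blur is the segment of the camera’s path swept while the shutter remains open; the two-axis poses, the linear interpolation, and the 0.5 default “shutter fraction” are illustrative assumptions, not documented Dykstraflex parameters.

```python
# Minimal numeric sketch of motion blur via an open shutter: for each
# frame, the camera moves from pose p0 toward p1, and the shutter stays
# open over a fraction of that motion, smearing the image along the path.

def blur_interval(p0, p1, shutter_fraction=0.5):
    """Return the start and end poses swept while the shutter is open.

    A fraction of 0 reproduces crisp, stop-and-shoot frames in the
    manner of 2001; larger fractions yield progressively more blur.
    """
    start = p0
    end = tuple(a + shutter_fraction * (b - a) for a, b in zip(p0, p1))
    return start, end
```

For example, moving from (0, 0) to (2, 4) with a 0.5 fraction exposes the frame along the segment from (0, 0) to (1, 2), half of the inter-frame motion.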

The complex system took over a year to develop before a single shot was produced—resulting in mounting tension between Lucas and Dykstra, and in the continued use of cuts from Lucas’s late-night dogfight reel in previews of the film until late 1976. Yet, at first slowly and then in a flurry of last-minute effort, over 350 separate visual effects sequences were created for Star Wars with Dykstra’s system. They would produce the only spontaneous applause at the movie’s 1977 premiere.22 In the process, what was first just wordplay on camera nomenclature—“Dykstraflex”—became the system’s official name. For its development, Dykstra, Miller, and Jeffress would receive a special Scientific and Engineering Award at the 1978 Oscars.

Sound and Light(sabers)

A last control-system cameo is worth recording. Just as the surfaces of Star Wars’ models were filled with found objects and kit-model parts taken out of context, so too were the soundscapes of the revolutionary film. Built by Ben Burtt from a legion of found and repurposed recordings—from leaky air conditioners to the hum of high-tension electricity wires—the soundtrack of Star Wars created a dense, consistent environment as part of the place-making process Walter Murch had dubbed “Worldizing.”23

Two of the essential, unidentifiable sounds of the film were made from recordings of control systems. The sound of the Millennium Falcon was produced from a recording of the motion-control system of the same Dykstraflex camera that enabled its simulated movement. And, befitting the origins of simulation itself, the sound of lightsabers came from a pair of antiquated self-synchronizing motors in a film projection room at the University of Southern California (Rinzler 2010, 66).

21  John Dykstra in conversation with the author, April 4, 2019, recording.

22  Patricia Rose Duigan in conversation with the author, May 3, 2019, notes.

23  The term can be traced to an interview Murch gave to the Filmmakers Newsletter in 1974; see Sturhahn 1974, 23.

Conclusion: Inside and Outside

In the link between these two motors, in another confined interior, our record of movement concludes as it began. Across this pathway from 1914 to 1977, one can trace an uninterrupted architectural movement through the landscape of simulation and control that continues today—and maintains, in extremis, the same trajectory outlined above. From the sunlit stage of the Panama Canal lockhouse into successively layered interiors, this movement is ever-inward, often ever-more claustrophobic. But it is still inhabitable. Soon, however, we come to an interior that becomes difficult to enter. This threshold was foreshadowed along the first leg of our journey, between the development of electromechanical controls for the Panama Canal locks in 1914 and their interconnection with fire control and radar.

Deeds and Boxes

On August 28, 1940, before the entry of the United States into World War II but after the United Kingdom had begun to fight the Battle of Britain, a small group of scientists joined more than a thousand sailors on the Duchess of Richmond from Liverpool to Halifax, Nova Scotia. With the scientists was a closely guarded “black, metal deed box” containing cavity magnetron E1189, serial number 12, which had been developed by the British General Electric Company (GEC). While radar technology was then being developed in the US, Germany, UK, and Japan, the E1189 microwave tube was the first to produce a signal powerful enough, and at a high enough frequency, to make detailed, mobile microwave radar possible.

Through a complex process of technological diplomacy, the black box was taken to Washington from Halifax, and thence to MIT, where the device it contained formed the basis of developments in the newly chartered Rad Lab (Von Hilgers 2011). There, whether via this specific container or the similar black-speckled equipment containers that enclosed much of the facility’s complex equipment, the phrase “black box” became a term of general use in the Rad Lab (Galison 1994, 246n46). By 1945, Wiener was using the term in a theoretical context, contrasting technological “black boxes” (whose inputs and outputs were understood but whose inner workings were not open to view) with a “white box” whose interior could be easily revealed (Galison 1994, 245).

In the systems-engineering context of SAGE and the ICBM defenses that superseded it, the “black box” became, once more, a practical term. In a complex assembly of controls and subsystems (often literally enclosed in dark metal), a “black box” was a portion of the whole (like a guidance system) defined not by its workings but by its inputs and outputs; how it arrived at these could be changed and upgraded as long as the relationships between the device and the larger supersystem (of, say, a missile) were maintained (see de Monchaux 2011, 47). Thus, the “black box” described not so much Wiener’s notion of unknowability as it did a new definition of any technology as a series of defined relationships between mutable things—in contrast to the tangible objects that had mostly defined technological reality to date. Yet, like a black metal deed box, a mechanism can become not only difficult to enter physically; it can become impossible to understand.

The Loss of Finding Our Way

The way-finding problem that The Image of the City analyzed—the use of landmarks, edges, signs, and pathways to negotiate complex urban reality—was viewed anew through a cybernetic lens, but also as (seemingly) immutable, like the idea of streets themselves. Decades later, however, a visit to the Boston Common finds no one at all asking for directions or navigating by the spire of a church; instead, they hold small boxes with screens, on which an algorithm predicts an optimized route. We are no longer reading signals from the urban landscape but in the urban landscape, preoccupied with signals on the surface of a box we cannot enter. And divorced from our environment even so.

In his 1968 review of 2001: A Space Odyssey, American Cinematographer editor Herb Lightman found only one element of the technological development depicted in the film difficult to believe—the erratic artificial intelligence of HAL 9000 (Lightman 1968). Fifty years on, however, a reality-challenged AI reads instead as the most credible part of 2001’s vision. Yet, unlike the illuminated, weightless interior of the film’s second-act finale, where HAL’s Perspex-slotted brain is deconstructed by Dave Bowman, the interiors of today’s responsive algorithms are, as a rule, closed to view—even as they model key components of human cognition.

This presents the key irony of this tale. Our current interaction with civic and digital systems is defined by simulation—of attention, of traffic patterns, of the filling, defining, and interpolation that define the current mathematical approach of machine intelligence. We are surrounded by these black boxes, but we cannot enter them; simulation has become like a locked-room murder mystery, with us, as detectives, on the outside.

Pneumatic Knowledge and the Paradox of Simulation

When Edward Link modeled the behavior of aircraft with pipe-organ parts in the 1920s, he continued a centuries-long tradition of airborne simulation. Through the eighteenth and nineteenth centuries, a series of “androids,” or humanlike automata, were presented to the scientific authorities as key insights into life itself. From the first such devices presented to the French Academy by Jacques Vaucanson in 1738—including a working flute player with glove-leather lips, lungs, and tongue—to a set of elaborate speaking heads produced by the Abbé Mical in 1783, through which, it was proposed, the pronunciation of French could be permanently fixed, these clockwork-pneumatic simulations sought to elucidate the most delicate parts of our existence: breath and song.

The most celebrated of these devices manipulated baser substances: the duck crafted by Vaucanson, which, it seemed, ingested corn and produced a substance similar to feces—an achievement for which he was hailed by Voltaire as “Prometheus’ rival” (Riskin 2016, chap. 4; parts of this history are also considered in de Monchaux 2011, chap. 2). Like his Hungarian counterpart Wolfgang von Kempelen’s chess-playing Turk of 1770, with its hidden operator, Vaucanson’s duck was an illusion—it did not alter its food into feces but simply squeezed brown paste from a hidden compartment. But von Kempelen’s subsequent speaking machine, developed over twenty years, was again a pneumatic tour de force, accurately reproducing spoken language and providing key insights into the nature of speech. The notion that a model gave special insight into nature was a key belief in a world where—until the early nineteenth century—man himself was given by Diderot’s Encyclopédie as an example of a “machine,” instead of its opposite (Riskin 2016, chap. 4).

This history is relevant, because it outlines the opposite of our larger tracing. What we have followed above, since the 1930s, is the transformation of the model from an analog of, or insight into, its subject to an active agent in shaping the reality it charts. In this arc, the line between simulation and real experience—from SAGE to soundstage—begins not just to resonate but, ultimately, to collapse. Yet, as can be traced in the movement of such models into smaller and more closed interiors—like the algorithms in an iPhone, ultimately closed to view—the more ubiquitous and successful such a model, the more likely it is to contravene, precisely, a model’s original purpose: to create, through its analog, a more complete understanding of the whole. We are caught in a paradox: the more powerful a model in shaping our reality, the more closed to view it, and its movements, seem to become.

ADC. 1981. “György Kepes.” Global Awards & Club. Published online April 25, 2023. https://web.archive.org/web/20230425004847/http://adcglobal.org/hall-of-fame/gyorgy-kepes/.

Air Defense Systems Engineering Committee. 1950. “Air Defense System: ASDEC Final Report,” October 24. MITRE Corporation Archives.

Appleyard, Donald. 1967. “Letter to Alberto S. Trevino, Jr.” November 7, DAP.

Appleyard, Donald. 1974. “Letter to Miss Marilyn C. Link.” Link Foundation. February 12, DAP.

Appleyard, Donald. 1977. “Understanding Professional Media: Issues, Theory, and a Research Agenda.” In Human Behavior and Environment: Advances in Theory and Research, Vol. 2, edited by Irwin Altman and Joachim F. Wohlwill. Plenum Press.

Appleyard, Donald, Kenneth Craik, Merrie Klapp, and Alciera Kreimer. 1973. “The Berkeley Environmental Simulation Laboratory: Its Use in Environmental Impact Assessment.” Working paper 206. Institute for Urban and Regional Development, University of California, February.

Appleyard, Donald, Kevin Lynch, and John Randolph Myer. 1964. The View from the Road. MIT Press.

Banow, Joel. February 1, 2000. Transcript of Oral History Interview. Joel Banow Collection, National Air and Space Museum Archives, Suitland, MD.

Benson, Michael. 2018. Space Odyssey: Stanley Kubrick, Arthur C. Clarke, and the Making of a Masterpiece. Simon & Schuster.

Brand, Stewart. 1994. How Buildings Learn: What Happens After They’re Built. Viking.

Buderi, Robert. 1997. The Invention That Changed the World: How a Small Group of Radar Pioneers Won the Second World War and Launched a Technological Revolution. Sloan Technology Series. Simon & Schuster.

Ceruzzi, Paul E. 2003. A History of Modern Computing. MIT Press.

Collins, Michael. 1974. Carrying the Fire: An Astronaut’s Journey. Farrar, Straus & Giroux.

de Monchaux, Nicholas. 2011. Spacesuit: Fashioning Apollo. MIT Press.

de Monchaux, Nicholas. 2016. Local Code: 3,659 Proposals About Data, Design and the Nature of Cities. Princeton Architectural Press.

de Monchaux, Nicholas. 2019. “A Long Time Ago in a City Far, Far Away.” New Geographies 11 (May 2020): 129–139.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision.” Critical Inquiry 21 (1): 228–66.

Hamilton, Lawrence. 2022. “Conspiracy vs. Science: A Survey of U.S. Public Beliefs.” Carsey School of Public Policy, April 21. https://carsey.unh.edu/publication/conspiracy-vs-science-survey-us-public-beliefs.

Hewlett, E. M. 1921. “The Selsyn System of Position Indication.” General Electric Review 24 (3): 210–18.

Horton, Cole. 2015. “From World War to Star Wars: Dogfights!” Interview with George Lucas, Star Wars website. https://web.archive.org/web/20190606182227/https://www.starwars.com/news/from-world-war-to-star-wars-dogfights.

Hughes, Thomas Parke. 1998. Rescuing Prometheus. Pantheon Books.

Institute for Urban and Regional Development. 1975. The Berkeley Environmental Simulation Laboratory. Technical Report: Preliminary Report. DAP.

Kepes, Gyorgy. 1955a. “Notes Headed Gyorgy Kepes 3/22/55.” Box 1-1, KLP.

Kepes, Gyorgy. 1955b. “Notes Headed Gyorgy Kepes 4/26/55.” Box 1-1, KLP.

Kepes, Gyorgy. 1956. The New Landscape in Art and Science. Theobald.

Kepes, Gyorgy, Siegfried Giedion, and Samuel Ichiyé Hayakawa. 1944. Language of Vision. P. Theobald.

Light, Jennifer S. 2003. From Warfare to Welfare: Defense Intellectuals and Urban Problems in Cold War America. Johns Hopkins University Press.

Lightman, Herb. 1968. “Filming 2001: A Space Odyssey.” American Cinematographer, June: 416–61. http://archive.org/details/Article-Filming-2001-A-Space-Odyssey-Herb-Lightman-American-Cinematographer.

Link, Edwin A. Jr. 1931. Combination training device for student aviators and entertainment apparatus. United States Patent US1825462A, filed March 12, 1930, and issued September 29, 1931. https://patents.google.com/patent/US1825462A/en.

Lynch, Kevin. 1955a. “Note Headed K.L. 8-8-55.” Box 1-1, 01-General Notes. KLP.

Lynch, Kevin. 1955b. “THE PERCEPTUAL FORM OF THE CITY: PROGRESS REPORT AND PLAN FOR FUTURE STUDIES • June, 1955.” Box 1-1, General Statements. KLP.

Lynch, Kevin. 1959. “Summary of Accomplishments, Research Project on the Perceptual Form of the City.” Box 1-1, General Statements. KLP.

Lynch, Kevin. 1960. The Image of the City. M.I.T. Press.

Lynch, Kevin. 1995. City Sense and City Design: Writings and Projects of Kevin Lynch. Edited by Michael Southworth and Tridib Banerjee. MIT Press.

Martin, Reinhold. 2004. “Environment, c. 1973.” Grey Room, no. 14: 79–101.

McCullough, David G. 1977. The Path between the Seas: The Creation of the Panama Canal, 1870–1914. Simon and Schuster.

Mindell, David A. 2002. Between Human and Machine: Feedback, Control, and Computing before Cybernetics. Johns Hopkins University Press.

Mindell, David A. 2008. Digital Apollo: Human and Machine in Spaceflight. MIT Press.

National Aeronautics and Space Administration. 1969. Apollo 11 Technical Air-to-Ground Voice Transcription (Goss Net 1), Tape 66/8 04:06:39:37. Manned Spacecraft Center, July 1969. https://ntrs.nasa.gov/citations/20160014392.

Navy Department. 1944. “Ordnance Pamphlet 1303: United States Navy Synchros – Description and Operation.” Prepared for the Bureau of Ordnance and Bureau of Ships by the RCA. December 15.

The New York Times. 1914. “ELECTRICITY’S WONDERS REACH THEIR ZENITH ALONG PANAMA CANAL; Systems of Generation, Distribution and Control Used There Are New Departures in Electrical History.” March 15. https://www.nytimes.com/1914/03/15/archives/electricitys-wonders-reach-their-zenith-along-panama-canal-systems.html.

The New York Times. 1978. “What If a Moon Landing Were Faked? Ask Peter Hyams.” May 28, D10.

The Papers of President Kennedy. 1956–1961, President’s Office Files, Box 30, Special Correspondence. “Johnson, Lyndon B. 1/56-11/61.” Folder 62. John F. Kennedy Presidential Library.

Redmond, Kent C., and Thomas M. Smith. 1980. Project Whirlwind: The History of a Pioneer Computer. Digital Press.

Rinzler, J. W. 2010. The Sounds of Star Wars. Chronicle Books.

Rinzler, J. W., and Charles Lippincott. 2007. The Making of Star Wars: The Definitive Story Behind the Original Film: Based on the Lost Interviews from the Official Lucasfilm Archives.

Riskin, Jessica. 2016. The Restless Clock: A History of the Centuries-Long Argument over What Makes Living Things Tick. University of Chicago Press.

Shannon, Claude Elwood, and Warren Weaver. 1949. The Mathematical Theory of Communication. University of Illinois Press.

Southworth, Michael, and Tridib Banerjee. 1995. “Introduction.” In Kevin Lynch, City Sense and City Design: Writings and Projects of Kevin Lynch. Edited by Michael Southworth and Tridib Banerjee. MIT Press.

Sturhahn, Larry. 1974. “The Art of the Sound Editor. An Interview with Walter Murch.” Filmmakers Newsletter 8 (2): 22–25.

Ulmann, Bernd. 2014. AN/FSQ-7: The Computer That Shaped the Cold War. De Gruyter Oldenbourg.

Von Hilgers, Philipp. 2011. “The History of the Black Box: The Clash of a Thing and Its Concept.” Cultural Politics 7 (1): 41–58. https://doi.org/10.2752/175174311X12861940861707.

Wiener, Norbert. 1948. Cybernetics: Control and Communication in the Animal and the Machine. Wiley; Hermann et Cie.

Young, Barry James. 1968. “Experiments in the Perceptual Design of Expressways.” MA diss., Massachusetts Institute of Technology. https://dspace.mit.edu/handle/1721.1/78412.