NASA Glenn Research Center in Cleveland, Ohio USA — Construction and architecture

NASA John H. Glenn Research Center at Lewis Field is a NASA center, located within the cities of Brook Park and Cleveland between Cleveland Hopkins International Airport and the Cleveland Metroparks’s Rocky River Reservation, with a subsidiary facility in Sandusky, Ohio. Glenn Research Center is one of ten major NASA field centers, whose primary mission is to develop science and technology for use in aeronautics and […]

via NASA Glenn Research Center in Cleveland, Ohio USA — Construction and architecture

The Evolution of Computational Aerospace from 1968-2018 — Another Fine Mesh

Sometimes you write something that ends up on the cutting room floor (as my film-loving friends might say). Such is the case with an article I was asked to write for the 50th anniversary of the Society of Flight Test Engineers in 2018. Alas, plans change and the article went unused. I thought it turned […]

via The Evolution of Computational Aerospace from 1968-2018 — Another Fine Mesh

A Magnificent Man and his Flying Machines — The Logical Place

by Tim Harding, B.Sc. B.A. (An edited version of this article was published in the Sandringham & District Historical Society Newsletter, May 2019) Major Harry Turner Shaw OBE (1889-1973) was an Australian pioneer aviator, both in wartime and peace, and later a boat builder. He lived at ‘The Point’ mansion overlooking Ricketts Point, Beaumaris from around […]

via A Magnificent Man and his Flying Machines — The Logical Place

The Day Science Died

This website documents my family’s aviation and yachting activities. Together with our involvement in the deep foundation industry, the thing that ties all of this together is transportation: getting from one place to another. Integral to that was the desire to advance the methods and technology by which we do things, both in and out of transportation.

I recently read a book I picked up entitled Space Frontier by Wernher von Braun. It’s basically a series of articles he wrote for Popular Science from the early 1960’s until just before Apollo 11 in 1969, covering various aspects of the space program and accurately describing the moon mission that took place shortly afterwards. Von Braun was more than a rocket scientist: he was a visionary who saw us going to Mars in 1986, and had a good idea of what it would take to accomplish this.

When I read this book, the first thing that came back to mind was the tight relationship between NASA’s civilian efforts and that of the military.  That was inevitable, not only  because most of the early astronauts were military pilots, but also because rocketry was very much a province of the military.  I wish I had read this book before or during my time in the aerospace industry; it would have given me context for my work.

But the other thing that came to me in reading this book was an ache–an ache for a time when we were literally reaching for the stars (or at least the moon). The passing of that time–an effort that basically lost its momentum after the moon shots and never quite got it back–marks a point in history when something seriously died in this country: a general commitment to advancing our condition through science.


South Florida and the western Bahamas, from Gemini 12, November 1966. It turned out to be an overview of most of our yachting adventures. One of the astronauts on Gemini 12, Edwin “Buzz” Aldrin, later became the second man to walk on the moon with Apollo 11, and was featured at the 2019 State of the Union address.

I loved the space program, especially after we moved to Palm Beach and we were down the coast from the Kennedy Space Centre. Jack Kennedy, whose Palm Beach compound was not far from our house, had challenged a nation in shock over the Soviets’ early unmanned and manned (and womanned!) achievements to reach the moon by the end of the 1960’s. The Gemini program, which transitioned us to the moon shots, was favourite television viewing. (After Apollo 11, I went away to prep school, which hindered my ability to follow such things.) I was aware of the technological spin-offs of the program, such as fuel cells, solar panels (mostly for satellites) and many other things.

But by the time Armstrong and Aldrin set foot on the moon, the mood had changed. The 1960’s were a decidedly Luddite time; technology was blamed for despoiling the environment and creating the “few minutes to midnight” atmosphere of the Cold War. Those who plied their trade in technology were “nerds.” The space program collapsed and the aerospace industry went with it. A new generation turned away from technology to more “relevant” (and easier-to-climb) professions such as law and finance. Instead of landing on Mars in 1986, we were in angst (something we’ve gotten good at) over the explosion of the Challenger.

Fortunately there were two revolutions going on. It took some time (one wonders if pushing the space program would have sped it up) but the revolution in computing power was changing the landscape. Would the nerds get their revenge? Well, sort of…but people whose training is in the sciences were still very much in the back seat of our society, in contrast to other parts of the world.


It was quite a shock, therefore, when suddenly the spectre of climate change reared its ugly head in the late 1990’s. It was (and is) characterised as “settled science,” not to be disputed. Growing up in an era when that didn’t count for anything, one was tempted to ask, “so what’s the panic now?” But the worst thing about the whole movement is not the problem statement (which can be successfully defended if done in a rational manner) but the solutions that aren’t allowed. Instead of the obvious goal–producing energy in a manner that doesn’t produce carbon dioxide–we have been told that what we do must be “sustainable.” This has meant a combination of radical conservation (we’re back to Jimmy Carter’s sweater speech) and reliance on technologies such as wind and solar that aren’t quite ready for prime time (they might have been with help from space technology, but…) The one source of energy that could have eliminated most of these emissions to start with is nuclear power, but this is another bête noire of the hippie dreamers and has been since the days they trashed the space program.

One thing that gets overlooked with science and technology is that the latter is the validation of how well we understand the former. Evolutionists like to bandy about the billions of years different geological and biological periods lasted, and then use “belief” in evolution as a litmus test. And, as an old earther, that time frame is fine with me. But moving the marks of prehistory a billion years here, a billion years there (sounds like Everett Dirksen and the Federal budget!) really doesn’t change the state of things now. It is what it is. But applying science to technology and getting results is another matter altogether. Raising the level of carbon dioxide in the atmosphere is a technological problem and should have a technological solution, be that solution removing the carbon dioxide already there and/or reducing our emissions of same.

But that’s not what’s really being presented these days. Solving problems isn’t something our social and political systems are really good at, not only because actual scientists and engineers are incidental to the process, but also because solving a problem means ending a movement, something the movement organisers are loath to do. We are trapped in a system where science is turned into a religion and problem solving is subordinated to moral imperative, and the result is that we have neither solved our problems nor addressed our moral imperatives.

But that’s what happens when real science dies. We struggle to advance some of our basic sciences (especially physics) and wonder why things don’t move faster than they do. Some of the problem is in the research system we have, as I mentioned in another context:

…the piecemeal nature of our research grant system and the organisational disconnect among universities, contractors and owners incentivises tweaking existing technology and techniques rather than taking bolder, riskier steps with the possible consequence of a dead-end result and a disappointed grant source.

At this point we are too risk averse to take the bold steps we need to take.  Until that changes, and we engage with real science and real results, we will see the secular command of the planet pass to those who are prepared to take the risks and back them up with the science and technology to make them work.

HARM AGM-88A Missile

Although I don’t usually commemorate the date, on this day in 1977 I started my first job as an engineer for Texas Instruments in Dallas.

My first (and only) work there: design of the HARM AGM-88A missile for the U.S. Navy (actually, a joint development of both the Navy and the Air Force, but we interfaced mostly with the Navy.)

Overview of the Missile

There’s a lot out there for the very technically minded on this weapon (such as the Australian and Dutch sites here) but I’ll try to present the simple view.

HARM stands for High-speed Anti-Radiation Missile. “Radiation” in this case isn’t a nuclear facility but a radar installation. The missile’s purpose is to take out radar installations and thus blind the enemy combatant to incoming planes or whatever other airborne weaponry the U.S. military decides to deploy against an enemy.

The missile is the direct descendant of the Shrike and Standard ARM missiles used in Vietnam. The Shrike was produced by Texas Instruments and that is what put TI in the missile business. The Missile and Ordnance Division (which was contracted to develop the HARM) was at the company’s central facility in Dallas at the time, although it was later moved to Lewisville, TX.

The primary Navy point of contact for us was the Naval Weapons Centre in China Lake, CA.  Tests on the prototypes were conducted there and they were excellent people to deal with, although Navy projects in particular suffer from excessive mission expansion.


The missile (as shown in the photo above, with two of its wings removed to fit in the rack) is divided into four parts:

  1. The Seeker, at the very front of the missile.  A plastic nose cone (radome) covers the antenna, which seeks out and locates the radar installations.  The electronics to process this information are also there.
  2. The Warhead, where the explosive charge to destroy the target is contained.  During the test program, this was the Test Section, which contained telemetry (as was the case with the space program) to monitor the missile’s flight status and enable us to evaluate both its performance and our modelling of same.
  3. The Control Section, where the wings are rotated to alter the course of the missile during flight.
  4. The Rocket Motor, which propels the missile away from the aircraft from which it is launched (it’s an air-to-surface missile) and brings it up to the velocity necessary to reach its target. The HARM is ballistic in the sense that the rocket motor only operates during the first few seconds of flight.

The video below, an early one (from around 1980?), gives a good overview of the missile’s mission.

At the time the missile was developed, the main enemy was Soviet. However, most of the action it has seen has been, unsurprisingly, in the Middle East. Its first use came in 1986 in Libya; it was also used in the 1991 Gulf War and the 2003 Iraq invasion.


If you read the development history of this missile, one thing that strikes you is the length of time it took from start to finish.  Developing HARM took most of the 1970’s and early 1980’s, and this is a fairly simple weapon compared to, say, a fighter or a large warship.  There are two main reasons for this.

The first is, of course, the bureaucratic nature of government. It’s tempting to say this is the only reason but it isn’t. Much of that is due to getting funding through Congress, which can be an ordeal for all kinds of projects. And, of course, changes in administration don’t always help either. Right after I came to work at TI, Jimmy Carter was inaugurated, and funding for the project was put on some kind of “hold.” My job wasn’t affected but some people’s were.

The second is that our military doesn’t like to leave anything to the imagination or chance if it can help it. It wants to cover all of its bases and make sure whatever it buys is operational in all environments and meets all of the threats it’s intended to meet. With radar installations, this leads to the complicated sets of modes that you see described both in the linked articles and in the videos, including the obvious one: shutting off the radar to try to throw the missile off course. Given that the electronic counter-measures (ECM) environment is very fluid, this leads to a constant cycle of revision during development to meet changes in the field. In an era when such changes had to be hard-coded into the electronics, meeting this took time. (Later versions of the missile went to the “soft” coding that is routine today with virtually every electronic device.)

My Work

But another challenge–and one I was involved in–concerned the missile’s electronics and controlling the temperature they operate at, from the time the plane is launched until the missile hits its target. This is an easier problem to explain now than it was thirty years ago.

In order to function properly, electronic devices have to be kept below certain temperatures. There are two basic sources of heat. The first is the electronics themselves, as anyone who has tried to operate an aluminium MacBook or MacBook Pro wearing shorts will attest. Getting rid of that heat usually requires a fan of some kind, which isn’t an option on the missile. (The avionics for it, stored on the aircraft, are another story altogether; they’re similar to the box for a desktop computer, although they have to operate in the thin air of elevated altitudes.)
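A back-of-the-envelope way to see the constraint: in steady state, a device’s junction temperature is its ambient temperature plus the power it dissipates times its junction-to-ambient thermal resistance. Here is a minimal Python sketch of that relation; every number in it is hypothetical for illustration, not a figure from the HARM program:

```python
# Steady-state electronics cooling relation: T_junction = T_ambient + P * theta_ja.
# theta_ja is the junction-to-ambient thermal resistance in degrees C per watt.
# All values below are made-up examples, not data from any real device.

def junction_temp_c(t_ambient_c: float, power_w: float, theta_ja_c_per_w: float) -> float:
    """Steady-state junction temperature in degrees C."""
    return t_ambient_c + power_w * theta_ja_c_per_w

# A hypothetical 2 W part with theta_ja = 40 C/W in a 50 C enclosure:
tj = junction_temp_c(50.0, 2.0, 40.0)
print(tj)  # 130.0
```

If the result exceeds the part’s rated limit (125 °C is a common military-grade ceiling), the designer has to lower the ambient temperature, the dissipated power, or the thermal resistance, which is exactly the trade the missile forced without a fan.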

The second source is external. For most electronic devices on the earth, that means that when the room temperature is too high or the ventilation is inadequate, heat is either introduced to the unit or not allowed to escape. That’s why it’s important for your computer or any other heat-generating electronic device to be properly ventilated. With any kind of aircraft or spacecraft, at elevated speeds heat is generated by friction with the air. The most spectacular (and tragic) demonstration of this took place in the 2003 disintegration of the space shuttle Columbia. Since the HARM’s most sensitive components are located at the front of the missile, that only added to the challenge.
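The size of the external-heating problem can be estimated with the standard stagnation-temperature relation from compressible flow, T0 = T(1 + (γ−1)/2 · M²), which gives the temperature the air reaches where it is brought to rest against the airframe. A sketch, using an illustrative Mach number and a standard-atmosphere temperature rather than actual HARM flight figures:

```python
# Stagnation (total) temperature for compressible flow of air:
#   T0 = T * (1 + (gamma - 1)/2 * M^2), with gamma = 1.4 for air.
# The Mach number and altitude temperature below are illustrative only.

GAMMA = 1.4  # ratio of specific heats for air

def stagnation_temp_k(t_ambient_k: float, mach: float) -> float:
    """Total temperature in kelvin seen by a body at the given Mach number."""
    return t_ambient_k * (1.0 + (GAMMA - 1.0) / 2.0 * mach ** 2)

# At roughly 11 km the standard atmosphere is about 217 K (-56 C):
t0 = stagnation_temp_k(217.0, 2.0)
print(t0 - 273.15)  # about 117 C at Mach 2 -- already hot for electronics
```

Even at a modest supersonic Mach number the nose of the airframe sits well above typical electronics limits, which is why the components behind the radome were the hard part.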

To meet that challenge was, from the standpoint of most engineering in the 1970’s, “the shape of things to come.” We used simulation software for everything: flight, component stress, heat, you name it. The aerospace industry was the leader in the development and implementation of computer simulation techniques such as finite element and finite difference analysis, things that are routine in most design work today. Most of the work we did was in “batch” mode, and that meant punching a batch of Hollerith cards and taking them down to the computer centre for processing. Interactive modes via a terminal were just starting when I left, as were plotting graphics. Today most any flight-related wargame possesses the same kinds of simulation we did then, only more, and the graphics to watch what’s going on. That last was, in my view, the biggest lacuna of our simulation; we only saw and interpreted numbers.
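As a toy illustration of the finite difference approach (not the actual missile models, which were far more elaborate and ran on mainframes from card decks), here is an explicit one-dimensional heat-conduction marcher in Python. The geometry, diffusivity, and boundary temperatures are made up for the example:

```python
# Explicit finite-difference (FTCS) scheme for 1-D heat conduction,
# u_t = alpha * u_xx, with fixed-temperature (Dirichlet) ends.
# All physical values are illustrative, not from any real thermal model.

def heat_1d(u0, alpha, dx, dt, steps):
    """March the temperature profile forward in time; stable when
    r = alpha*dt/dx**2 <= 0.5."""
    u = list(u0)
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    for _ in range(steps):
        new = u[:]  # end nodes stay at their boundary temperatures
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new
    return u

# A 10-node rod, hot at one end (think nose cone), cold at the other:
profile = heat_1d([100.0] + [20.0] * 8 + [0.0],
                  alpha=1e-4, dx=0.01, dt=0.4, steps=200)
print([round(t, 1) for t in profile])  # heat diffusing in from the hot end
```

The real models coupled many more nodes in two and three dimensions with time-varying aerodynamic heating at the boundary, but this is the same numerical idea, and the output is exactly what we got then: a column of numbers to interpret, with no graphics.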

The Company

Texas Instruments was one of the early “high tech” (“semiconductor” was the more common term at the time) companies like Fairchild and later Intel. It had a breezy, informal (if somewhat spartan) work environment, complete with an automated mail cart which followed a (nearly) invisible stripe in the hallway to guide it to its stops. It encouraged innovation and creativity in its workforce through both its work environment and its compensation system. The only time coats and ties came out was when the “brass” (in this case military) came. That was, from a corporate standpoint, the biggest challenge: keeping the Missile and Ordnance Division, an extension of the government (as is the case with just about any defence contractor), creative, while at the same time trying to keep the bureaucratic mindset and procedures from oozing into the rest of the company. Our Division was, to some extent, “quarantined” from the rest of TI to prevent the latter from taking place.

For me, it was a great place to start a career, and I got a chance to work with great people on an interesting project.

My thanks to Jerry McNabb of the Church of God Chaplains Commission (and a former Navy chaplain) for the photos.