Getting the Reynolds Number Right

One of the most common mistakes I see in my Fluid Mechanics lab course is incorrect computation of the Reynolds number.  What follows is a brief treatment of the subject.

First, the Reynolds number is defined as

R_e = \frac{VD\rho}{\mu}

or

R_e = \frac{VD}{\nu}

where

  • R_e = Reynolds number (yes, I know the newer textbooks use a different notation, but I’m a dinosaur)
  • V = Average or reference velocity
  • D = “Diameter” or other governing dimension
  • \mu = Dynamic viscosity
  • \rho = Fluid Density
  • \nu = Kinematic viscosity

Let’s go over this variable by variable.

Velocity

The velocity is generally some kind of “average” velocity, which in effect treats the fluid velocity through or around the object under analysis as uniform.  In the case of closed ducts or pipes, such as are described here, it’s an average velocity, which can be computed by dividing the volumetric flow rate by the cross-sectional area of the pipe or duct through which the fluid flows.  For Wind Tunnel Testing (actual and virtual, such as in CFD) it’s the “free stream” velocity, or the velocity at which the airfoil or object moves through the fluid.
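To make the pipe case concrete, here’s a minimal Python sketch that computes the average velocity from the volumetric flow rate; the flow rate and diameter below are made-up illustrative values, not from any particular lab exercise.

```python
# Minimal sketch: average velocity in a circular pipe from volumetric flow rate.
# The flow rate and diameter are illustrative values (assumptions), not data.
import math

Q = 0.005   # volumetric flow rate, m^3/s (assumed)
D = 0.05    # pipe inside diameter, m (assumed)

A = math.pi * D**2 / 4   # cross-sectional area of the pipe, m^2
V = Q / A                # average velocity, m/s

print(f"A = {A:.6f} m^2, V = {V:.3f} m/s")   # V is about 2.55 m/s here
```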

Diameter

The “diameter” used to compute Reynolds numbers would seem to be clear-cut, but it’s more a matter of convention than anything else.  With straight pipes of uniform diameter, it’s just the diameter of the pipe.  For flow meters such as orifices or venturi meters, it’s customarily the diameter of the incoming pipe, although in the past that wasn’t always the case.  For airfoils the rules are more complicated and are described in Wind Tunnel Testing.

Density

This is simply the density at the point where the reference velocity is taken.  For incompressible fluids, that’s pretty simple.  For compressible fluids it’s more complicated.  With Wind Tunnel Testing it’s the free-stream density of the fluid.

Viscosity

This, I think, is where most students get into trouble.  If you use dynamic viscosity and density, it’s easy to get tangled up in the units.  My advice to students is to use the kinematic viscosity \nu whenever possible, which means that we use the second form of the Reynolds number.  The units for this (unless you’re given it in stokes or centistokes, in which case you’ll need to convert it) are \frac{length^2}{time} , which cancel nicely with the other dimensions.  For water and air, the two fluids we mostly test in my course, the properties are in my monograph Variation in Viscosity, along with a discussion of viscosity in general.
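As a worked illustration (not taken from the monograph), here’s a short Python sketch using the kinematic-viscosity form of the Reynolds number.  The property values are typical figures for water and air near 20°C, and the velocity and diameter roughly reuse the pipe example above; all are assumptions for illustration.

```python
# Minimal sketch: Reynolds number from the kinematic-viscosity form Re = V*D/nu.
def reynolds(V, D, nu):
    """Reynolds number; V in m/s, D in m, nu in m^2/s.
    If nu is given in centistokes, convert first: 1 cSt = 1e-6 m^2/s."""
    return V * D / nu

NU_WATER = 1.0e-6   # m^2/s, water at roughly 20 C (typical value)
NU_AIR   = 1.5e-5   # m^2/s, air at roughly 20 C (typical value)

Re = reynolds(V=2.55, D=0.05, nu=NU_WATER)   # values from the pipe sketch above
print(f"Re = {Re:.3e}")   # about 1.3e5 -- well into the turbulent regime for a pipe
```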

Conclusion

Reynolds numbers are important in fluid mechanics, but they can trip up a new student.  The advice above should help you avoid that.

Now the serious question: how many Reynolds numbers do you know?

 


Indicator Devices and Cards for Vulcan Hammers

vulcanhammer.info

The indicator card, and the devices that produced them, have been around about as long as there have been steam engines.  The basic idea is simple: as the piston of the engine moved, a pressure indicator moved a needle and pen up and down on a paper (usually a rotating drum) and produced what’s called in thermodynamics a pV diagram, shown below.

An indicator card, taken from A Practical Treatise on the Steam Engine Indicator and Indicator Diagrams by Amice, edited and enlarged by W. Worby Beaumont, 1888. The area of the central region would indicate the energy output of the engine. The displacement is noted on the x-axis and the pressure on the y-axis. The straight lines over the region are probably a method of graphical integration, although even then (before the advent of CAD and numerical integration) a planimeter would be much easier.


If You Really Want to Get Into Trouble, Read the Mediaevals

Although the clash between religion and science has been out there for a long time, these days it has been especially heated.  The simplest way to solve the problem would be to cancel elections and let the self-proclaimed know-it-alls run the show.  In this way they could ignore the religious “masses” and ensure the continuity of their funding.  Funded scientists are happy scientists…

But what would happen if there were a synergy between the two?  Basically the same thing that is happening between religion and science now: an academic slugfest.  (Reminds one of Rabbi Jonathan Sacks’ joke about “the tradition”…)  That’s pretty much the bottom line of the life of Georg Cantor (1845-1918) and his formulation of set theory.

He was born in St. Petersburg, Russia to parents who originally came from Denmark.  When he was eleven, they moved to Frankfurt, in the German Electorate of Hesse (the Hessians were the ones George Washington crossed the Delaware to defeat at Trenton).  As Carl Boyer notes in his A History of Mathematics:

His (Cantor’s) parents were Christians of Jewish background–his father had been converted to Protestantism, his mother had been born a Catholic.  The son Georg took a strong interest in the finespun arguments of medieval theologians concerning continuity and the infinite, and this militated against his pursuing a mundane career in engineering as suggested by his father.  In his studies at Zurich, Göttingen and Berlin the young man consequently concentrated on philosophy, physics and mathematics–a program that seems to have fostered his unprecedented mathematical imagination.

His central “claim to fame” is the elucidation of set theory (or, as the Germans are wont to call it, Mengenlehre).  It’s no exaggeration to say that set theory has come to dominate the teaching of mathematics and its conceptualisation, as I found out the hard way taking advanced linear algebra a couple of years ago, complete with the bizarre notation that has just about taken over math textbooks.  In the 1960’s it was the centrepiece of the “new math” that came into primary and secondary school curricula; that was controversial, but a great deal more useful than its critics would admit.

The controversy didn’t end with the sets themselves.  Cantor realised that set theory forced him to consider something that mathematicians had danced around for almost two centuries: infinite quantities, or more precisely transfinite quantities.  Sets can have an infinite number of elements, but just what that means was something Cantor plunged very deeply into.

It’s easy to get lost in Cantor’s reasoning, as the concepts he proposed are very profound.  I’ll try to keep things as uncomplicated as I can, taking the risk that I may oversimplify the business.

Let us consider the set of integers.  We know instinctively that there are an infinite number of integers.  Now let us consider the set of even integers.  You’d think that there are half as many even integers as all integers, right?  But both quantities are in fact infinite, and dividing infinity by two doesn’t mean much.  In fact Cantor proved that, if we considered the set of all integers and the set of even integers, we would have a one-to-one correspondence between the members of the two sets.  So the size of the two sets is equal, even though one set is a subset of the other.
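To make the pairing concrete, here’s a purely illustrative Python sketch that prints a finite window into Cantor’s correspondence: every integer n matches exactly one even integer 2n, so neither set runs out before the other.

```python
# Illustrative sketch of Cantor's pairing between integers and even integers.
# Each n pairs with 2n; the correspondence never skips or repeats a member.
for n in range(1, 8):        # a finite window into the infinite correspondence
    print(f"{n} <-> {2 * n}")
```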

Things get more complicated when we pass from integers and rational numbers to transcendental numbers like e and pi. Cantor proved that the number of transcendental numbers was larger than that of the integers or the rational numbers, even though all of these sets were infinite.  Cantor had shown, in effect, that not all infinities were equal to each other!

One device that Cantor, and just about anyone else who deals with transfinite numbers, uses is the limit.  But one major difference between Cantor and many of his contemporaries–and predecessors–is that Cantor showed that infinity was in fact an existing quantity, the problem with the transcendentals notwithstanding.

That lit several fuses.  Before Cantor’s time the German mathematician Gauss stated the following:

I protest against the use of an infinite magnitude as something completed, which is never permissible in mathematics.  Infinity is merely a way of speaking, the true meaning being a limit which certain ratios approach indefinitely close.

The deadliest grenade whose pin Cantor pulled, however, was that of Leopold Kronecker (1823-1891), after whom the famous Kronecker delta is named.  Kronecker, like Cantor of Jewish origins but a Christian, famously stated that “God made the integers, and all the rest is the work of man.” Kronecker made a career out of academically trashing Cantor, blocking appointments and delaying publications.  Cantor, not the scholarly pugilist the situation called for (he should have read Jerome with the mediaevals), had his first nervous breakdown in 1884.  After that time he published little, and died in a psychiatric hospital in 1918, although by then his work was receiving the recognition it deserved.  David Hilbert pithily stated that “From the paradise created for us by Cantor, no one will drive us out”.

So how did the mediaevals influence this revolution in mathematics? The problem of infinity wasn’t as far-fetched as you might think.  It had sat there since Newton and Leibniz set forth the calculus, which in turn hangs on infinitesimals.  Between two finite points there is an infinite number of infinitesimals.  The mediaevals have been jeered for wondering how many angels could dance on the head of a pin, but as long as the angels were infinitesimal, the answer for a finite pin is clear: infinitely many.  It was only a matter of someone putting infinitesimals and infinities together, and that person (with help from others) was Cantor.

Anyone who has explored the philosophy of the scholastics with a mathematical background will sooner or later consider the relationship between their ideas and the mathematics of infinity.  Coming off of a master’s degree, I found myself doing that in My Lord and My God.  Although I would not dare to rank myself anywhere near Cantor, I discovered that not all infinities were equal, and, although they could not have a finite ratio with finite quantities, they were not necessarily equal to each other.  That in turn helped me to see that subordination in God does not impair the deity of the subordinate persons, which solves many problems.  Unfortunately there are those who either can’t or–ahem–won’t see that relationship, and there is always the problem that prelates and seminary academics are often mathematically challenged.

Today we live in a world where science and religion are forcibly bifurcated.  But it was not always so.  Cantor–and Kronecker and others for that matter–allowed the two to intermingle, and before that Euler was more religiously conservative than Voltaire.  And the Nineteenth Century in Europe was a golden age in mathematics, where advances came one after the other.

But there’s a price.  If you want to get into serious trouble, read the mediaevals, and that’s true for mathematicians and theologians alike.

Note: in addition to Boyer’s book, I used Jane Muir’s Of Men and Numbers: The Story of the Great Mathematicians (Dover Books on Mathematics) in writing this piece.

The Day Science Died

On this website are documented my family’s aviation and yachting activities.  Coupled with our involvement in the deep foundation industry, the thing that ties all of this together is transportation: getting from one place to another.  Integral to that was the desire to advance the methods and technology by which we do things, both in and out of transportation.

I recently read a book I picked up entitled Space Frontier by Wernher von Braun.  It’s basically a series of articles he wrote for Popular Science from the early 1960’s until just before Apollo 11 in 1969, covering various aspects of the space program and accurately describing the moon mission that would shortly take place.  Von Braun was more than a rocket scientist: he was a visionary who saw us going to Mars in 1986, and had a good idea what it would take to accomplish this.

When I read this book, the first thing that came back to mind was the tight relationship between NASA’s civilian efforts and those of the military.  That was inevitable, not only because most of the early astronauts were military pilots, but also because rocketry was very much a province of the military.  I wish I had read this book before or during my time in the aerospace industry; it would have given me context for my work.

But the other thing that came to me in reading this book was an ache–an ache for a time when we were literally reaching for the stars (or at least the moon.)  The passing of that time–something that basically lost its momentum after the moon shots and never quite got it back–marks a point in history when something seriously died in this country, and that was a general commitment to the advancement of our condition through science.


South Florida and the western Bahamas, from Gemini 12, November 1966. It turned out to be an overview of most of our yachting adventures. One of the astronauts on Gemini 12, Edwin “Buzz” Aldrin, later became the second man to walk on the moon with Apollo 11, and was featured at the 2019 State of the Union address.

I loved the space program, especially after we moved to Palm Beach and we were down the coast from the Kennedy Space Centre.  Jack Kennedy, whose Palm Beach compound was not far from our house, had challenged a nation in shock over the Soviets’ early unmanned and manned (and womanned!) achievements to reach the moon by the end of the 1960’s.  The Gemini program, which transitioned us to the moon shots, was favourite television viewing.  (After Apollo 11, I went away to prep school, which hindered my ability to follow such things.)  I was aware of the technological spin-offs of the program, such as fuel cells, solar panels (mostly for satellites,) and many other things.

But by the time Armstrong and Aldrin set foot on the moon, the mood had changed.  The 1960’s were a decidedly Luddite time; technology was blamed for despoiling the environment and creating the “few minutes to midnight” atmosphere of the Cold War.  Those who plied their trade in technology were “nerds.”  The space program collapsed and the aerospace industry went with it.  A new generation turned away from technology to more “relevant” (and an easier way up) professions such as law and finance.  Instead of landing on Mars in 1986, we were in angst (something we’ve gotten good at) over the explosion of the Challenger.

Fortunately there were two revolutions going on.  It took some time (one wonders if pushing the space program would have speeded it up) but the revolution in computing power was changing the landscape.  Would the nerds get their revenge?  Well, sort of…but people whose training is in the sciences were still very much in the back seat of our society, in contrast to other parts of the world.


It was quite a shock, therefore, when suddenly the spectre of climate change reared its ugly head in the late 1990’s.  It was (and is) characterised as “settled science,” not to be disputed.  Growing up in an era when that didn’t count for anything, one was tempted to ask, “so what’s the panic now?”  But the worst thing about the whole movement is not the problem statement (which can be successfully defended if done in a rational manner) but the solutions that aren’t allowed.  Instead of the obvious goal–producing energy in a manner that doesn’t produce carbon dioxide–we have been told that what we do must be “sustainable.”  This meant a combination of radical conservation (we’re back to Jimmy Carter’s sweater speech) and reliance on technologies such as wind and solar that aren’t quite ready for prime time (they might have been with help from space technology, but…)  The one source of energy that could have eliminated most of these emissions to start with is nuclear power, but this is another bête noire of the hippie dreamers and has been since the days they trashed the space program.

One thing that gets overlooked with science and technology is that the latter is the validation of how well we understand the former.  Evolutionists like to bandy about the billions of years different geological and biological periods lasted, and then use “belief” in evolution as a litmus test.  As an old earther, I have no problem with that time frame.  But moving the marks of prehistory a billion years here, a billion years there (sounds like Everett Dirksen and the Federal budget!) really doesn’t change the state of things now.  It is what it is.  But applying science to technology and getting results is another matter altogether.  Raising the level of carbon dioxide in the atmosphere is a technological problem and should have a technological solution, be that solution the reduction of the carbon dioxide already there and/or reducing our emissions of same.

But that’s not what’s really being presented these days.  Solving problems isn’t something our social and political systems are really good at, not only because actual scientists and engineers are incidental to the process, but also because solving a problem means ending a movement, something the movement organisers are loath to do.  We are trapped in a system where science is turned into a religion and problem solving is subordinated to moral imperative, and the result is that we have neither solved our problems nor addressed our moral imperatives.

But that’s what happens when real science dies.  We struggle to advance some of our basic sciences (esp. physics) and wonder why things don’t move faster than they do.  Some of the problem is in the research system we have, as I mentioned in another context:

…the piecemeal nature of our research grant system and the organisational disconnect among universities, contractors and owners incentivises tweaking existing technology and techniques rather than taking bolder, riskier steps with the possible consequence of a dead-end result and a disappointed grant source.

At this point we are too risk averse to take the bold steps we need to take.  Until that changes, and we engage with real science and real results, we will see the secular command of the planet pass to those who are prepared to take the risks and back them up with the science and technology to make them work.

HARM AGM-88A Missile

Although I don’t usually commemorate the date, on this day in 1977 I started my first job as an engineer for Texas Instruments in Dallas.

My first (and only) work there: design of the HARM AGM-88A missile for the U.S. Navy (actually, a joint development of both the Navy and the Air Force, but we interfaced mostly with the Navy.)

Overview of the Missile

There’s a lot out there for the very technically minded on this weapon (such as the Australian and Dutch sites here) but I’ll try to present the simple view.

HARM stands for High-speed Anti-Radiation Missile.  “Radiation” in this case isn’t a nuclear facility but a radar installation.  The missile’s purpose is to take out radar installations and thus blind the enemy combatant to incoming planes or whatever other airborne weaponry the U.S. military decided to deploy against an enemy.

The missile is the direct descendant of the Shrike and Standard ARM missiles used in Vietnam.  The Shrike was produced by Texas Instruments, and that is what put TI in the missile business.  The Missile and Ordnance Division (which was contracted to develop the HARM) was at the company’s central facility in Dallas at the time, although it was later moved to Lewisville, TX.

The primary Navy point of contact for us was the Naval Weapons Centre in China Lake, CA.  Tests on the prototypes were conducted there and they were excellent people to deal with, although Navy projects in particular suffer from excessive mission expansion.


The missile (as shown in photos, with two of its wings removed to fit in the rack) is divided into four parts:

  1. The Seeker, at the very front of the missile.  A plastic nose cone (radome) covers the antenna, which seeks out and locates the radar installations.  The electronics to process this information are also there.
  2. The Warhead, where the explosive charge to destroy the target is contained.  During the test program, this was the Test Section, which contained telemetry (as was the case with the space program) to monitor the missile’s flight status and enable us to evaluate both its performance and our modelling of same.
  3. The Control Section, where the wings were rotated to alter the course of the missile during flight.
  4. The Rocket Motor, which propelled the missile away from the aircraft from which it was launched (it’s an air-to-surface missile) and brought it up to the velocity necessary to reach its target.  The HARM is ballistic in the sense that the rocket motor only operates during the first few seconds of flight.

The video below, from around 1980, gives a good overview of the mission of the missile.

At the time the missile was developed, the main enemy was Soviet.  However, most of the action it has seen has been, unsurprisingly, in the Middle East.  Its first use came in 1986 in Libya; it was also used in the 1991 Gulf War and the 2003 Iraq invasion.

Development

If you read the development history of this missile, one thing that strikes you is the length of time it took from start to finish.  Developing HARM took most of the 1970’s and early 1980’s, and this is a fairly simple weapon compared to, say, a fighter or a large warship.  There are two main reasons for this.

The first is, of course, the bureaucratic nature of government.  It’s tempting to say this is the only reason, but it isn’t.  Much of that is due to getting funding through Congress, which can be an ordeal for all kinds of projects.  And, of course, changes in administration don’t always help either.  Right after I came to work at TI, Jimmy Carter was inaugurated, and funding for the project was put on some kind of “hold.”  My job wasn’t affected, but some people’s were.

The second is that our military doesn’t like to leave anything to the imagination or chance if it can help it.  It wants to cover all of its bases and make sure whatever it buys is operational in all environments and meets all of the threats it’s intended to meet.  With radar installations, this leads to the complicated sets of modes that you see described both in the linked articles and in the videos, including the obvious one: shutting off the radar to try to throw the missile off course.  Given that the electronic counter-measures (ECM) environment is very fluid, this leads to a constant cycle of revision during development to meet changes in the field.  In an era when such changes had to be hard-coded into the electronics, meeting this took time.  (Later versions of the missile went to the “soft” coding that is routine today with virtually every electronic device.)

My Work

But another challenge–and one I was involved in–concerned the missile’s electronics and controlling the temperature at which they operate, from the time the plane is launched until the missile hits its target.  This is an easier problem to explain now than it was thirty years ago.

In order to function properly, electronic devices have to be kept below certain temperatures.  There are two basic sources of heat.  The first is the electronics themselves, as anyone who has tried to operate an aluminium MacBook or MacBook Pro wearing shorts will attest.  To get rid of that heat usually requires a fan of some kind, which isn’t an option on the missile.  (The avionics for it, stored on the aircraft, are another story altogether; they’re similar to the box for a desktop computer, although they have to operate in the thin air of elevated altitudes.)

The second source is external.  For most electronic devices on the earth, that means when the room temperature is too high, or the ventilation is inadequate, either heat is introduced to the unit or not allowed to escape.  That’s why it’s important for your computer or any other heat-generating electronic device to be properly ventilated.  With any kind of aircraft or spacecraft, at elevated speeds heat is generated by friction with the air.  The most spectacular (and tragic) demonstration of this took place in the 2003 disintegration of the space shuttle Columbia.  Since the HARM’s most sensitive components are located at the front of the missile, that only added to the challenge.
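For a sense of the magnitudes involved, here’s an illustrative Python sketch of the standard stagnation-temperature relation for air, which estimates how hot the air gets where it is brought to rest against a fast-moving body.  The Mach numbers and ambient temperature below are assumptions for illustration, not HARM flight data.

```python
# Illustrative sketch of aerodynamic heating via the stagnation-temperature
# relation for air: T0 = T * (1 + (gamma - 1)/2 * M**2).  The Mach numbers
# and ambient temperature are assumed values, not actual missile flight data.
GAMMA = 1.4   # ratio of specific heats for air

def stagnation_temperature(T_ambient, mach):
    """Total (stagnation) temperature in kelvin at a given flight Mach number."""
    return T_ambient * (1.0 + (GAMMA - 1.0) / 2.0 * mach**2)

T_amb = 220.0   # K, roughly the ambient temperature at high altitude (assumed)
for M in (0.8, 1.5, 2.0):
    print(f"M = {M}: T0 = {stagnation_temperature(T_amb, M):.0f} K")
# At M = 2 the air against the nose is already near 400 K -- hot for electronics.
```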

To meet that challenge was, from the standpoint of most engineering in the 1970’s, “the shape of things to come.”  We used simulation software for everything: flight, component stress, heat, you name it.  The aerospace industry was the leader in the development and implementation of computer simulation techniques such as finite element and finite difference analysis, things that are routine in most design work today.  Most of the work we did was in “batch” mode, and that meant punching a batch of Hollerith cards and taking them down to the computer centre for processing.  Interactive modes via a terminal were just starting when I left, as were plotting graphics.  Today almost any flight or flight-related wargame possesses the same kinds of simulation we did then, only more, and the graphics to watch what’s going on.  That last was, in my view, the biggest lacuna of our simulation; we only saw and interpreted numbers.
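For a flavour of what those batch runs computed, here’s a generic one-dimensional explicit finite-difference heat conduction sketch in Python.  It is a modern reconstruction of the kind of calculation involved, with made-up material properties and dimensions; it is not the code we actually ran.

```python
# A generic 1-D explicit finite-difference conduction model -- the kind of
# calculation that once went out as a deck of Hollerith cards.  All values
# here are illustrative assumptions, not actual missile material properties.
import numpy as np

alpha = 1.0e-4             # thermal diffusivity, m^2/s (assumed)
L, n = 0.10, 51            # slab thickness (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha   # time step chosen below the explicit stability limit (0.5)
r = alpha * dt / dx**2     # Fourier number per step (= 0.4 here)

T = np.full(n, 300.0)      # initial temperature everywhere, K
T[0] = 600.0               # heated outer surface held at a fixed temperature, K

for step in range(2000):
    # explicit update: T_new[i] = T[i] + r*(T[i+1] - 2*T[i] + T[i-1])
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]          # insulated (zero-gradient) inner boundary

print(f"inner-wall temperature after {2000 * dt:.1f} s: {T[-1]:.1f} K")
```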

The Company

Texas Instruments was one of the early “high tech” (“semiconductor” was the more common term at the time) companies like Fairchild and later Intel.  It had a breezy, informal (if somewhat spartan) work environment, complete with an automated mail cart which followed a (nearly) invisible stripe in the hallway to guide it to its stops.  It encouraged innovation and creativity in its workforce through both its work environment and its compensation system.  The only time coats and ties came out was when the “brass” (in this case military) came.  That was, from a corporate standpoint, the biggest challenge: keeping the Missile and Ordnance Division, an extension of the government (as is the case with just about any defence contractor,) creative, while at the same time trying to keep the bureaucratic mindset and procedures from oozing into the rest of the company.  Our Division was, to some extent, “quarantined” from the rest of TI to prevent the latter from taking place.

For me, it was a great place to start a career, and I got a chance to work with great people on an interesting project.

My thanks to Jerry McNabb of the Church of God Chaplains Commission (and a former Navy chaplain) for the photos.