HARM AGM-88A Missile

Although I don’t usually commemorate the date, on this day in 1977 I started my first job as an engineer for Texas Instruments in Dallas.

My first (and only) work there: design of the HARM AGM-88A missile for the U.S. Navy (actually a joint development of the Navy and the Air Force, but we interfaced mostly with the Navy).

Overview of the Missile

There’s a lot out there for the very technically minded on this weapon (such as the Australian and Dutch sites here) but I’ll try to present the simple view.

HARM stands for High-speed Anti-Radiation Missile.  “Radiation” in this case isn’t a nuclear facility but a radar installation.  The missile’s purpose is to take out radar installations and thus blind the enemy combatant to incoming planes or whatever other airborne weaponry the U.S. military decided to deploy against an enemy.

The missile is the direct descendant of the Shrike and Standard ARM missiles used in Vietnam.  The Shrike was produced by Texas Instruments, and that is what put TI in the missile business.  The Missile and Ordnance Division (which was contracted to develop the HARM) was at the company’s central facility in Dallas at the time, although it was later moved to Lewisville, TX.

The primary Navy point of contact for us was the Naval Weapons Centre in China Lake, CA.  Tests on the prototypes were conducted there and they were excellent people to deal with, although Navy projects in particular suffer from excessive mission expansion.

[Photo of the missile on its storage rack]

The missile (as shown in the photo above, with two of its wings removed to fit in the rack) is divided into four parts:

  1. The Seeker, at the very front of the missile.  A plastic nose cone (radome) covers the antenna, which seeks out and locates the radar installations.  The electronics to process this information are also there.
  2. The Warhead, where the explosive charge to destroy the target is contained.  During the test program, this was the Test Section, which contained telemetry (as was the case with the space program) to monitor the missile’s flight status and enable us to evaluate both its performance and our modelling of same.
  3. The Control Section, where the wings were rotated to alter the course of the missile during flight.
  4. The Rocket Motor, which propelled the missile away from the aircraft from which it was launched (it’s an air-to-surface missile) and brought it up to the velocity necessary to reach its target.  The HARM is ballistic in the sense that the rocket motor only operates during the first few seconds of flight.

The video below, from around 1980, is a good overview of the mission of the missile.

At the time the missile was developed, the main enemy was Soviet.  However, most of the action it has seen has been, unsurprisingly, in the Middle East.  Its first use came in 1986 in Libya; it was also used in the 1991 Gulf War and the 2003 Iraq invasion.

Development

If you read the development history of this missile, one thing that strikes you is the length of time it took from start to finish.  Developing HARM took most of the 1970’s and early 1980’s, and this is a fairly simple weapon compared to, say, a fighter or a large warship.  There are two main reasons for this.

The first is, of course, the bureaucratic nature of government.  It’s tempting to say this is the only reason, but it isn’t.  Much of that is due to getting funding through Congress, which can be an ordeal for all kinds of projects.  And, of course, changes in administration don’t always help either.  Right after I came to work at TI, Jimmy Carter was inaugurated, and funding for the project was put on some kind of “hold.”  My job wasn’t affected but some people’s were.

The second is that our military doesn’t like to leave anything to the imagination or chance if it can help it.  It wants to cover all of its bases and make sure whatever it buys is operational in all environments and meets all of the threats it’s intended to meet.  With radar installations, this leads to the complicated sets of modes that you see described both in the linked articles and in the videos, including the obvious one: shutting off the radar to try to throw the missile off course.  Given that the electronic counter-measures (ECM) environment is very fluid, this leads to a constant cycle of revision during development to meet changes in the field.  In an era when such changes had to be hard-coded into the electronics, meeting this took time.  (Later versions of the missile went to the “soft” coding that is routine today with virtually every electronic device.)

My Work

But another challenge–and one I was involved in–concerned the missile’s electronics and controlling the temperature they operate at, from the time the plane is launched until the missile hits its target.  This is an easier problem to explain now than it was thirty years ago.

In order to function properly, electronic devices have to be kept below certain temperatures.  There are two basic sources of heat.  The first is the electronics themselves, as anyone who has tried to operate an aluminium MacBook or MacBook Pro while wearing shorts will attest.  To get rid of that heat usually requires a fan of some kind, which isn’t an option on the missile.  (The avionics for it, stored on the aircraft, are another story altogether; the unit is similar to the box for a desktop computer, although it has to operate in the thin air of elevated altitudes.)

The second source is external.  For most electronic devices on the earth, that means that when the room temperature is too high or the ventilation is inadequate, heat is either introduced into the unit or not allowed to escape.  That’s why it’s important for your computer or any other heat-generating electronic device to be properly ventilated.  With any kind of air or space craft, at elevated speeds heat is generated by friction with the air.  The most spectacular (and tragic) demonstration of this took place in the 2003 disintegration of the space shuttle Columbia.  Since the HARM’s most sensitive components are located at the front of the missile, that only added to the challenge.
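
To put a rough number on this (an illustrative textbook estimate, not a figure from the HARM programme): air brought to rest against a body moving at Mach number M is heated from the ambient temperature T to roughly the stagnation temperature

T_{0}=T\left(1+{\frac{\gamma-1}{2}}\,{M}^{2}\right)

where \gamma\approx1.4 for air.  At Mach 2 and an ambient 288 K (15°C), that works out to about 518 K (245°C), far more than unprotected electronics can tolerate.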

Meeting that challenge was, from the standpoint of most engineering in the 1970’s, “the shape of things to come.”  We used simulation software for everything: flight, component stress, heat, you name it.  The aerospace industry was the leader in the development and implementation of computer simulation techniques such as finite element and finite difference analysis, things that are routine in most design work today.  Most of the work we did was in “batch” mode, and that meant punching a batch of Hollerith cards and taking them down to the computer centre for processing.  Interactive modes via a terminal were just starting when I left, as were plotting graphics.  Today most any flight or flight-related wargame possesses the same kinds of simulation we had then, only more, and the graphics to watch what’s going on.  That last was, in my view, the biggest lacuna of our simulation; we only saw and interpreted numbers.

The Company

Texas Instruments was one of the early “high tech” (“semiconductor” was the more common term at the time) companies like Fairchild and later Intel.  It had a breezy, informal (if somewhat spartan) work environment, complete with an automated mail cart which followed a (nearly) invisible stripe in the hallway to guide it to its stops.  It encouraged innovation and creativity in its workforce through both its work environment and its compensation system.  The only time coats and ties came out was when the “brass” (in this case military) came.  That was, from a corporate standpoint, the biggest challenge: keeping the Missile and Ordnance Division, an extension of the government (as is the case with just about any defence contractor), creative, while at the same time trying to keep the bureaucratic mindset and procedures from oozing into the rest of the company.  Our Division was, to some extent, “quarantined” from the rest of TI to prevent the latter from taking place.

For me, it was a great place to start a career, and I got a chance to work with great people on an interesting project.

My thanks to Jerry McNabb of the Church of God Chaplains Commission (and a former Navy chaplain) for the photos.


In Search of the Lost Movado

I spend a lot of time on this site talking about yachting around South Florida and the Bahamas. As a family we were privileged to do so in an era when things weren’t so crowded—or regulated—as they are now. We also got to miss the thrill of the piracy wave that swept over the region during the 1980’s with the drug trade.


Our last yacht leaving the Palm Beach Inlet, with Singer Island in the background. My father changed the naming convention for this one. 65′ long, attractive and comfortable, it nevertheless wasn’t the best craft for a storm, as we found out the hard way. Note that the sea just in front of the beach is a different (brown) colour from what our craft is going through. This is because Lake Worth was badly polluted at that time; when the tide went out, the foul water went with it. The line between the lake effluent and the ocean was usually very crisp, as one can see above. (Photo by Bernice Ransom Studios, Palm Beach.)

One of our ports of call was Key Biscayne, near Miami. It is basically the last barrier island in the chain that runs along all of Florida’s east coast before the break west of Fowey Rocks begins the Florida Keys. It’s a nice place to visit, or at least was in the late 1960’s when we tied up our yacht at the Key Biscayne Yacht Club. In those days it wasn’t as much of a problem to let me and my brother putter around the island a bit, although I preferred to fall into the drink trying to get into my Dilly Boat dinghy.


Slightly overloaded: on our last yacht, we had two dinghies. The smaller of the two is shown at the left. Called a “Dilly Boat,” it was an 8′ long, cathedral hull fibreglass boat, not really suitable for all of the three people occupying it in this photo, taken at the Ocean Reef Resort on Key Largo. One of the things that has changed dramatically since our years on the water is the engine horsepower that propels craft of all sizes. For me, it’s still hard to believe the power that’s put into boats now, large and small, and the speeds they routinely achieve. This craft, however, took slow to a new level: the outboard motor driving it was only a 3 hp Johnson with a self-contained fuel tank.

One night, however, it wasn’t the kids that got into trouble: it was my parents. They went out partying and “bar hopping.” In the midst of all of this revelry my mother’s Movado watch left her wrist, the victim of an unlatched band clasp and a broken safety chain.

Needless to say, when she realised this, panic ensued. All hands were on deck—or on the island, really, searching for this watch. My parents attempted to retrace their steps, but that wasn’t easy, as they were having a job of it trying to remember what their steps were. We kids were sent out literally along the roadsides to try to find the watch. As time went on, the fear grew that the Movado had become a victim of South Florida crime. Finally, there was victory: they found the bar where it had fallen off, the bartender having found and kept it, hoping for the return of its rightful owner.

There are few who haven’t experienced the loss or misplacement of a possession. Such a common phenomenon finds expression in the New Testament:

“What man among you who has a hundred sheep, and has lost one of them, does not leave the ninety-nine out in the open country, and go after the lost sheep till he finds it? And, when he has found it, he puts it on his shoulders rejoicing; And, on reaching home, he calls his friends and his neighbours together, and says ‘Come and rejoice with me, for I have found my sheep which was lost.’ So, I tell you, there will be more rejoicing in Heaven over one outcast that repents, than over ninety-nine religious men, who have no need to repent. Or again, what woman who has ten silver coins, if she loses one of them, does not light a lamp, and sweep the house, and search carefully until she finds it? And, when she has found it, she calls her friends and neighbours together, and says ‘Come and rejoice with me, for I have found the coin which I lost.’ So, I tell you, there is rejoicing in the presence of God’s angels over one outcast that repents.” (Luke 15:4-10)


Jesus compares such an event to God’s diligence in seeking lost souls on the earth. As with my brother and me, he even refers to searching the roadsides: “‘Go out,’ the master said, ‘into the roads and hedgerows, and make people come in, so that my house may be filled…” (Luke 14:23) For Christians, it is an impetus to do likewise, to spend as much time seeking the lost as we do looking for our stuff.

But what if the lost soul is you? Does that empty place in your life remain unfilled? Do you feel that you are of no worth? My mother’s Movado had diamonds around the face, but it was of little value to her while it was lost. When we find unity with God, the value that has been designed into us becomes real, and our life finds the purpose our Creator had for us from the very start. We all have diamonds mounted in our being, and when we enter into a relationship with Him who moves all things, our value in principle becomes our value in reality, and our existence becomes a real life.

For more information click here.

Merry Christmas from Chet Aero Marine


Our yacht, at anchor in the harbour at Hope Town, Abaco, Bahamas, shortly before its fateful encounter

This photo graced my family’s Christmas card in 1965.  It showed our second family yacht at anchor in Hope Town, Bahamas.  What it didn’t show was that, six months earlier and only a few days after the photo was taken, the boat almost went to the bottom down in Spanish Wells, as described in this post.


The boat itself at Christmastime, in Ocean Reef, Key Largo.

May God richly bless you at this time and all through the coming year.


Numerical Integration of a Function in a Two-Dimensional Space, and Its Solution with Conjugate Gradient

The purpose of this piece is to document the numerical integration of a function in a two-dimensional space using the conjugate gradient method.

Basic Problem Statement and Closed Form Solution

The boundary value problem is as follows:
{\frac{\partial^{2}}{\partial{x}^{2}}}u(x,y)+{\frac{\partial^{2}}{\partial{y}^{2}}}u(x,y)=-5\, y\sin(x)\sin(2\, y)+4\,\sin(x)\cos(2\, y)

where
u(x,0)=0
u(x,\pi)=0
u(0,y)=0
u(\pi,y)=0

A plot of the right hand side of this is shown below.

[Plot of the right hand side]

To solve this in closed form, we need to consider our solution method. Given the form of Poisson’s Equation above, it is tempting to use a double Fourier series (or alternatively a complex exponential solution with constant coefficients) with only a few (1-2) terms in the solution. Complicating this is the presence of the first-order term in y. One way to deal with this is to include the linear term in an assumed solution, as follows:

u(x,y)=y{e^{\sqrt{-1}\alpha\, x}}{e^{\sqrt{-1}\omega\, y}}

Transforming this into sines and cosines with a more general solution, we can write

u(x,y)=A_{{1}}y\cos(\alpha\, x)\cos(\omega\, y)-A_{{2}}y\sin(\alpha\, x)\sin(\omega\, y)+\sqrt{-1}A_{{3}}y\sin(\alpha\, x)\cos(\omega\, y)+\sqrt{-1}A_{{4}}y\cos(\alpha\, x)\sin(\omega\, y)

Applying the operator to this yields

{\frac{\partial^{2}}{\partial{x}^{2}}}u(x,y)+{\frac{\partial^{2}}{\partial{y}^{2}}}u(x,y)=-A_{{1}}y\cos(\alpha\, x){\alpha}^{2}\cos(\omega\, y)+A_{{2}}y\sin(\alpha\, x){\alpha}^{2}\sin(\omega\, y)-\sqrt{-1}A_{{3}}y\sin(\alpha\, x){\alpha}^{2}\cos(\omega\, y)-\sqrt{-1}A_{{4}}y\cos(\alpha\, x){\alpha}^{2}\sin(\omega\, y)-2\, A_{{1}}\cos(\alpha\, x)\sin(\omega\, y)\omega-A_{{1}}y\cos(\alpha\, x)\cos(\omega\, y){\omega}^{2}-2\, A_{{2}}\sin(\alpha\, x)\cos(\omega\, y)\omega+A_{{2}}y\sin(\alpha\, x)\sin(\omega\, y){\omega}^{2}-2\,\sqrt{-1}A_{{3}}\sin(\alpha\, x)\sin(\omega\, y)\omega-\sqrt{-1}A_{{3}}y\sin(\alpha\, x)\cos(\omega\, y){\omega}^{2}+2\,\sqrt{-1}A_{{4}}\cos(\alpha\, x)\cos(\omega\, y)\omega-\sqrt{-1}A_{{4}}y\cos(\alpha\, x)\sin(\omega\, y){\omega}^{2}

The idea now is to eliminate terms of sine and cosine combinations that do not appear in the second partial derivatives.

The simplest place to start is to state the following, based on the structure of the partial derivatives above:

\alpha=1
\omega=2

This harmonizes the sine and cosine functions. By inspection, we can also state that

A_{1}=A_{4}=0

since these terms do not appear in the problem statement. The values of A_{2} and A_{3} are not as obvious, but based on the previous statement our assumed solution (equated to the problem statement) reduces to

5\, A_{{2}}y\sin(x)\sin(2\, y)-5\,\sqrt{-1}A_{{3}}y\sin(x)\cos(2\, y)-4\, A_{{2}}\sin(x)\cos(2\, y)-4\,\sqrt{-1}A_{{3}}\sin(x)\sin(2\, y)=-5\, y\sin(x)\sin(2\, y)+4\,\sin(x)\cos(2\, y)

This can be reduced to a linear system as follows:

5\, A_{{2}}y-4\,\sqrt{-1}A_{{3}}=-5\, y

-5\,\sqrt{-1}A_{{3}}y-4\, A_{{2}}=4

In matrix form, the equation becomes

\left[\begin{array}{cc} 5\, y & -4\,\sqrt{-1}\\ \noalign{\medskip}-4 & -5\,\sqrt{-1}y \end{array}\right]\left[\begin{array}{c} A_{2}\\ A_{3} \end{array}\right]=\left[\begin{array}{c} -5\, y\\ \noalign{\medskip}4 \end{array}\right]

Inverting the matrix and multiplying it by the right hand side yields

\left[\begin{array}{c} A_{2}\\ A_{3} \end{array}\right]=\left[\begin{array}{cc} 5\,{\frac{y}{25\,{y}^{2}+16}} & -4\,\left(25\,{y}^{2}+16\right)^{-1}\\ \noalign{\medskip}4\,{\frac{\sqrt{-1}}{25\,{y}^{2}+16}} & 5\,{\frac{\sqrt{-1}y}{25\,{y}^{2}+16}} \end{array}\right]\left[\begin{array}{c} -5\, y\\ \noalign{\medskip}4 \end{array}\right]\\ =\left[\begin{array}{c} -25\,{\frac{{y}^{2}}{25\,{y}^{2}+16}}-16\,\left(25\,{y}^{2}+16\right)^{-1}\\ \noalign{\medskip}0 \end{array}\right]\\ =\left[\begin{array}{c} -1\\ 0 \end{array}\right]

This yields our solution,
u(x,y)=y\sin(x)\sin(2\, y)

A plot is shown below.

[Plot of the closed-form solution]
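
As a quick check of this result (a minimal sketch, not part of the original solution code; the program name and sample point are merely illustrative), the FORTRAN 77 fragment below compares a five-point central difference approximation of the Laplacian of the solution with the analytic right hand side at an interior point; the two agree to within truncation error.

c Spot check that u = y*sin(x)*sin(2y) satisfies the Poisson
c equation above. A five-point central difference approximates
c the Laplacian at a sample interior point; the result is then
c compared with the analytic right hand side.
      program chkpde
      implicit double precision (a-h,o-z)
c Statement functions for the closed form solution and the rhs
      u(x1,y1)=y1*sin(x1)*sin(2.0d0*y1)
      f(x1,y1)=-5.0d0*y1*sin(x1)*sin(2.0d0*y1)
     &         +4.0d0*sin(x1)*cos(2.0d0*y1)
c Sample interior point and step size
      x0=1.0d0
      y0=1.5d0
      h=1.0d-4
c Five-point approximation of the Laplacian
      xlap=(u(x0+h,y0)+u(x0-h,y0)+u(x0,y0+h)+u(x0,y0-h)
     &     -4.0d0*u(x0,y0))/h**2
      write(*,*)'discrete laplacian = ',xlap
      write(*,*)'analytic rhs       = ',f(x0,y0)
      stop
      end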

Solution by Conjugate Gradient

The residual norm history is shown below for n=9, 19 and 39, where n is the number of nodes on each axis of the grid. The plot shows the decrease in the 2-norm of the residual as the iterations progress. The iterations are stopped when they either exceed 2000 or the 2-norm falls below 10^{-4}.

[Residual norm history for n = 9, 19 and 39]

All of the discretisations converged within 40 steps. In any event, in exact arithmetic the conjugate gradient method will converge in no more steps than the order of the matrix; in this case the performance of the method far exceeded that, doubtless due in part to the fact that we employed a preconditioner to accelerate the solution.
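
For reference, the linear system being solved comes from the standard five-point discretisation of the Laplacian at the interior nodes of the grid, with spacing h=\pi/\left(n+1\right):

4\,u_{i,j}-u_{i+1,j}-u_{i-1,j}-u_{i,j+1}-u_{i,j-1}=-{h}^{2}f(x_{i},y_{j})

This is the origin of the 4’s on the diagonal and the -1’s off the diagonal in the block matrices assembled in the code below, as well as of the -h^{2} factor applied to the right hand side.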

As mentioned in the code comments, the method used is the preconditioned conjugate gradient method as outlined in Gourdin and Boumahrat (2003). The method begins by computing the initial residual:

r^{\left(0\right)}=b-Ax^{\left(0\right)}

We then solve the following equation

Cp^{\left(0\right)}=r^{\left(0\right)}

At this point we need to consider the conditioning matrix C. It is defined as follows:

C=TT^{t}

where T is the lower triangular portion of A (taken here to include the diagonal). As an aside, this is an incomplete Cholesky factorisation. Obviously T^{t} is the upper triangular portion, A being symmetric. Since we have an upper triangular inverter, it would make sense to compute and invert C using T^{t}.

To accomplish this, we invert this to yield

C^{-1}=\left(TT^{t}\right)^{-1}=\left(T^{t}\right)^{-1}\left(T\right)^{-1}

We can accomplish the first term of the right hand side with the information at hand. For the second term, considering

\left(T^{t}\right)^{-1}=\left(T^{-1}\right)^{t}

we transpose both sides to yield

\left(T\right)^{-1}=\left(\left(T^{t}\right)^{-1}\right)^{t}

and substituting

C^{-1}=\left(T^{t}\right)^{-1}\left(\left(T^{t}\right)^{-1}\right)^{t}

We can use the factoring method to invert T^{t}, then take its transpose, then multiply \left(T^{t}\right)^{-1} by its transpose to yield C^{-1}, which is in essence the inverse of our conditioning matrix and which can be used to solve for the vector p.
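
To spell out the factoring method (as implemented in the subroutine uminv in the code below): if V=\left(T^{t}\right)^{-1} denotes the inverse of the upper triangular matrix T^{t} with entries u_{ij}, the entries of V follow column by column from

V_{ii}={\frac{1}{u_{ii}}},\qquad V_{ij}=-{\frac{1}{u_{jj}}}\sum_{k=i}^{j-1}V_{ik}u_{kj}\quad\left(j>i\right)

with all entries below the diagonal zero.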

Returning to the algorithm, to save calculations later we define the vector

s=C^{-1}r

and solving

p^{\left(0\right)}=C^{-1}r^{\left(0\right)}

we equate
s^{\left(0\right)}=p^{\left(0\right)}

We now begin our iteration for k=0,1,2,\ldots,k_{max}. We first compute

\tilde{\alpha}=\frac{\left(r^{\left(k\right)}\right)^{t}s^{\left(k\right)}}{\left(p^{\left(k\right)}\right)^{t}Ap^{\left(k\right)}}

for which purpose we developed a special subroutine. We then update our step as follows:

x^{\left(k+1\right)}=x^{\left(k\right)}+\tilde{\alpha}p^{\left(k\right)}

r^{\left(k+1\right)}=b-Ax^{\left(k+1\right)}

Using the conditioning matrix we developed, we compute the following

s^{\left(k+1\right)}=C^{-1}r^{\left(k+1\right)}

and (using a similar routine we used for \tilde{\alpha})

\tilde{\beta}=\frac{\left(r^{\left(k+1\right)}\right)^{t}s^{\left(k+1\right)}}{\left(r^{\left(k\right)}\right)^{t}s^{\left(k\right)}}

and

p^{\left(k+1\right)}=s^{\left(k+1\right)}+\tilde{\beta}p^{\left(k\right)}

From here we increment k and repeat the cycle until we either reach k_{max} or meet our convergence criterion.  It should be noted that the efficiency of conjugate gradient was such that, particularly at the finer discretisations, the routine spent more time generating the preconditioning matrices than it did in actually going through the cycles!

Solution Code

The solution code was written in FORTRAN 77 and is presented below.  The traditional FORTRAN column spacing got messed up in the cut and paste. Also presented is some of the code which was incorporated using the INCLUDE statement of Open WATCOM FORTRAN. The size of the grid was varied using the parameter statement at the beginning of the program, and is generally set to 39 for the code presented. It was varied to 9 and 19 as needed.

We should also note that the routine was run in single precision.

 c Solution of Two-Dimensional Grid Using
 c Conjugate Gradient Iteration
 c Implementation of conjugate gradient method
 c as per Gourdin and Boumahrat (2003)
 c Includes preconditioning by construction of matrix C
 c Matrix size changed via parameter statement
 include 'mpar.for'
 parameter(pi=3.141592654,istep=2000)
 c Define arrays for main array, diagonal block, off-diagonal block,
 c and solution and rhs vectors
 dimension a(nn,nn),d(n,n),c(n,n),u(nn),b(nn)
 c Define conditioning and related matrices
 dimension cond(nn,nn),t(nn,nn),tt(nn,nn)
 c Define residual vectors and norm for residual
 dimension r(nn),rold(nn),p(nn),s(nn),sold(nn),xnorm(istep)
 c Coordinate Arrays
 dimension x(n,n),y(n,n)
 c Error Vector for Final Iterate
 dimension uerr(nn)
 c Statement function to define rhs
 f(x1,y1)=-5*y1*sin(x1)*sin(2*y1)+4*sin(x1)*cos(2*y1)
 c Statement function to define closed form solution
 fexact(x1,y1)=y1*sin(x1)*sin(2*y1)
 c Write Header for Output
 write(*,*)'Conjugate Gradient Iteration to solve',
 &' two-dimensional grid'
 write(*,*)'Grid Size = ',n,' x ',n
 write(*,*)'Matrix Size ',nn,' x ',nn
 call tstamp
 c Initialise diagonal block array
 do 20 i=1,n,1
 do 20 j=1,n,1
 if(i.eq.j) then
 d(i,j)=4.
 elseif(abs(i-j).eq.1) then
 d(i,j)=-1.
 else
 d(i,j)=0.
 endif
 20 continue
 c Initialise off-diagonal block array
 do 30 i=1,n,1
 do 30 j=1,n,1
 if(i.eq.j) then
 c(i,j)=-1.
 else
 c(i,j)=0.
 endif
 30 continue
 c Compute grid spacing h
 xh=pi/float(n+1)
 c Initialise rhs
 do 60 j=1,n,1
 do 60 i=1,n,1
 x(i,j)=float(i)*xh
 y(i,j)=float(j)*xh
 ii=(j-1)*n+i
 b(ii)=-f(x(i,j),y(i,j))*xh**2
 write(*,61)i,j,ii,x(i,j),y(i,j),b(ii)
 61 format(3i5,3f10.3)
 60 continue
 c Insert block matrices into main matrix
 call blkadd(d,c,a)
 c Initialise result vector
 do 70 ii=1,nn,1
 70 u(ii)=1.0
 c Compute first residual vector, using residual vector as temporary storage
 call xmvmat(a,u,r)
 do 75 ii=1,nn,1
 r(ii)=b(ii)-r(ii)
 75 continue
 c Construct upper triangular matrix tt by stripping the lower triangular
 c portion of matrix a
 do 76 ii=1,nn,1
 do 76 jj=1,nn,1
 if(jj-ii)78,77,77
 77 tt(ii,jj)=a(ii,jj)
 goto 76
 78 tt(ii,jj)=0.0
 76 continue
 c Invert upper triangular matrix tt by factorisation method
 call uminv(tt)
 c Construct inverted matrix t by transposing inverted matrix tt
 call trnspz(tt,t)
 c Multiply tt by its transpose t to obtain preconditioning matrix cond
 c (which is actually C**(-1))
 call xmvmul(tt,t,cond)
 c Compute initial vector p by multiplying cond (inverse of c) by residual
 call xmvmat(cond,r,p)
 c Set initial vector s to p
 do 79 ii=1,nn,1
 79 s(ii)=p(ii)
 do 80 kk=1,istep,1
 c set old values of vectors r and s
 do 190 ii=1,nn,1
 rold(ii)=r(ii)
 190 sold(ii)=s(ii)
 c Compute alpha for each step
 alpha1=alpha(a,p,r,s)
 c Update value of u
 do 181 ii=1,nn,1
 181 u(ii)=u(ii)+alpha1*p(ii)
 c Update residual for each step r = b-Au
 call xmvmat(a,u,r)
 do 82 ii=1,nn,1
 r(ii)=b(ii)-r(ii)
 82 continue
 c Update vector s
 call xmvmat(cond,r,s)
 c Compute value of beta for each step
 beta1=beta(r,rold,s,sold)
 c Update value of p vector
 do 195 ii=1,nn,1
 195 p(ii)=s(ii)+beta1*p(ii)
 c Invoke function to compute Euclidean norm of residual
 c Kicks the iteration out once tolerance is reached
 xnorm(kk)=vnorm(r,nn)
 if(xnorm(kk).lt.1.0e-04)goto 83
 write(*,*)kk,xnorm(kk)
 80 continue
 83 continue
 do 90 j=1,n,1
 do 90 i=1,n,1
 ii=(j-1)*n+i
 uerr(ii)=fexact(x(i,j),y(i,j))-u(ii)
 90 continue
 uerrf=vnorm(uerr,nn)
 write(*,*)'Number of iterations = ',kk
 write(*,*)'Euclidean norm for final error =',uerrf
 open(2,file='m5610p1d.csv')
 if(kk.gt.istep)kk=istep
 do 40 ii=1,kk,1
 write(2,50)ii,xnorm(ii)
 50 format(i5,1h,,e15.5)
 40 continue
 close(2)
 stop
 end
c Subroutine to insert nxn blocks into n**2xn**2 array
 c Assumes all diagonal blocks are the same
 c Assumes all off-diagonal blocks (upper and lower) are the same
 c Assigns zero values elsewhere in the array
 subroutine blkadd(d,c,a)
 include 'mpar.for'
 dimension a(nn,nn),d(n,n),c(n,n)
 c Insert blocks into array
 do 20 ii=1,n,1
 do 20 jj=1,n,1
 c Determine row and column index in master array for corner of block
 ic=(ii-1)*n+1
 jc=(jj-1)*n+1
 do 30 i=1,n,1
 do 30 j=1,n,1
 iii=ic+i-1
 jjj=jc+j-1
 if(ii.eq.jj)then
 a(iii,jjj)=d(i,j)
 elseif(abs(ii-jj).eq.1)then
 a(iii,jjj)=c(i,j)
 else
 a(iii,jjj)=0.
 endif
 30 continue
 20 continue
 return
 end
c Subroutine to multiply a nn x nn matrix with an nn vector
 c a is the nn x nn square matrix
 c b is the nn vector
 c c is the result
 c nn is the matrix and vector size
 c Result is obviously an nn vector
 c Routine loosely based on Gennaro (1965)
 subroutine xmvmat(a,b,c)
 include 'mpar.for'
 dimension a(nn,nn),b(nn),c(nn)
 do 20 i=1,nn,1
 c(i)=0.0
 do 20 l=1,nn,1
 20 c(i)=c(i)+a(i,l)*b(l)
 return
 end
c Subroutine to multiply a nn x nn matrix with another nn x nn matrix
 c a is the first nn x nn square matrix
 c b is the second nn x nn matrix
 c c is the result
 c nn is the matrix size
 c Result is obviously an nn x nn matrix
 c Routine based on Gennaro (1965)
 subroutine xmvmul(a,b,c)
 include 'mpar.for'
 dimension a(nn,nn),b(nn,nn),c(nn,nn)
 do 20 i=1,nn,1
 write(*,*)'Multiplying Row ',i
 do 20 j=1,nn,1
 c(i,j)=0.0
 do 20 l=1,nn,1
 20 c(i,j)=c(i,j)+a(i,l)*b(l,j)
 return
 end
c Subroutine to invert an nn x nn upper triangular matrix using factor method
 c u is input and output matrix
 c t is work matrix, written back into u
 c nn is matrix size
 c Result overwrites original matrix
 c Based on Gennaro (1965)
 subroutine uminv(u)
 include 'mpar.for'
 dimension u(nn,nn),t(nn,nn)
 c Zero all entries in matrix t except for diagonals, which are the reciprocals
 c of the diagonals of the original matrix
 do 20 i=1,nn,1
 do 20 j=1,nn,1
 if(i-j)11,10,11
 10 t(i,j)=1.0/u(i,j)
 goto 20
 11 t(i,j)=0.0
 20 continue
 c Compute strictly upper triangular entries of inverted matrix
 do 40 i=1,nn,1
 write(*,*)'Inverting Row ',i
 do 40 j=1,nn,1
 if(j-i)40,40,31
 31 l=j-1
 do 41 k=i,l,1
 t(i,j)=t(i,j)-t(i,k)*u(k,j)/u(j,j)
 41 continue
 40 continue
 c Write back result into original input matrix
 do 50 i=1,nn,1
 do 50 j=1,nn,1
 50 u(i,j)=t(i,j)
 return
 end
c Function to compute Euclidean norm of a vector
 function vnorm(V,no)
 dimension V(no)
 znorm=0.
 do 100 ia=1,no,1
 100 znorm=znorm+V(ia)**2
 vnorm=sqrt(znorm)
 return
 end
c Function to determine scalar multiplier alpha for conjugate gradient method
 c Function returns a scalar used to update the solution and residual
 c Function uses subroutine xmvmat to multiply matrix A by direction vector p
 c a is main matrix
 c p is direction vector
 c r is residual vector; s is conditioned residual vector
 c nn is size of matrix/vectors
 function alpha(a,p,r,s)
 include 'mpar.for'
 dimension a(nn,nn),r(nn),pt(nn),p(nn),s(nn)
 c Compute dot product of r and s for numerator
 rnumer=0.0
 do 10 ii=1,nn,1
 rnumer=rnumer+r(ii)*s(ii)
 10 continue
 c Compute matrix product of A * p
 call xmvmat(a,p,pt)
 c Premultiply matrix product by transpose of p vector for denominator
 rdenom=0.0
 do 20 ii=1,nn,1
 rdenom=rdenom+p(ii)*pt(ii)
 20 continue
 alpha = rnumer/rdenom
 return
 end
c Function to determine scalar multiplier beta for conjugate gradient method
 c Function returns a scalar which is multiplied by p vector
 c r,s are current residual and conditioned residual vectors
 c rold,sold are the same vectors from the previous step
 c nn is size of vectors
 function beta(r,rold,s,sold)
 include 'mpar.for'
 dimension r(nn),rold(nn),s(nn),sold(nn)
 c Compute dot product for numerator
 rnumer=0.0
 do 10 ii=1,nn,1
 rnumer=rnumer+r(ii)*s(ii)
 10 continue
 c Compute dot product for denominator
 rdenom=0.0
 do 20 ii=1,nn,1
 rdenom=rdenom+rold(ii)*sold(ii)
 20 continue
 beta = rnumer/rdenom
 return
 end
c Subroutine to transpose a matrix
 c Adapted from Carnahan, Luther and Wilkes (1969)
 subroutine trnspz(a,at)
 include 'm5610par.for'
 dimension a(nn,nn),at(nn,nn)
 do 14 ii=1,nn,1
 do 14 jj=1,nn,1
 14 at(jj,ii)=a(ii,jj)
 return
 end
include 'tstamp.for'

Included Routines

  • tstamp.for calls an Open WATCOM function and time stamps the output.
  • mpar.for sets the parameter statements and is shown below.
 parameter(n=39,nn=n*n)

References

  • Carnahan, B., Luther, H.A., and Wilkes, J.O. (1969) Applied Numerical Methods. New York: Wiley.
  • Gennaro, J.J. (1965) Computer Methods in Solid Mechanics. New York: Macmillan.
  • Gourdin, A., and Boumahrat, M. (2003) Applied Numerical Methods. New Delhi: Prentice-Hall India.

The Importance of Causality

In his book Introduction to the Differential Equations of Physics, German physicist Ludwig Hopf opens with the following statement:

Any differential equation expresses a relation between derivatives or between derivatives and given functions of the variables.  It thus establishes a relation between the increments of certain quantities and these quantities themselves.  This property of a differential equation makes it the natural expression of the principle of causality which is the foundation of exact natural science.  The ancient Greeks established laws of nature in which certain relations between numbers (harmony of spheres) or certain shapes of bodies played a privileged role.  The law was supposed to state something about a process as a whole, or about the complete shape of a body.  In more recent times (Galileo, Newton, etc.) a different concept has been adopted.  We do not try to establish a relation between all phases of a process immediately, but only between one phase and the next.  A law of this type may express, for example, how a certain state will develop in the immediate future, or it may describe the influence of the state of a certain particle on the particles in the immediate neighbourhood.  Thus we have a procedure for the description of a law of nature in terms of small (mathematically speaking, infinitesimal) differences of time and space.  The increments with which the law is concerned appear as derivatives, i.e., as the limits of the quotient of the increments of the variables which describe the process over the increment of space or time in which this development takes place.  A law of nature of this form is the expression of the relation between one state and the neighbouring (in time or space) states and therefore represents a special form of the principle of causality.

The whole issue of causality is an important one for both scientific and theological reasons, and I want to touch on one of each.

Every event that takes place in the universe is a result of an event before it.  Those events in turn are the results of those which have gone before.  All of these events form a chain which leads back to the first cause.  The need for the first cause is one of St. Thomas Aquinas’ proofs of God’s existence:

The second way is from the nature of the efficient cause. In the world of sense we find there is an order of efficient causes. There is no case known (neither is it, indeed, possible) in which a thing is found to be the efficient cause of itself; for so it would be prior to itself, which is impossible. Now in efficient causes it is not possible to go on to infinity, because in all efficient causes following in order, the first is the cause of the intermediate cause, and the intermediate is the cause of the ultimate cause, whether the intermediate cause be several, or only one. Now to take away the cause is to take away the effect. Therefore, if there be no first cause among efficient causes, there will be no ultimate, nor any intermediate cause. But if in efficient causes it is possible to go on to infinity, there will be no first efficient cause, neither will there be an ultimate effect, nor any intermediate efficient causes; all of which is plainly false. Therefore it is necessary to admit a first efficient cause, to which everyone gives the name of God.

Although, as Hopf points out, our understanding of how that causality actually works in the physical universe is different from that of the Greeks (and Thomas Aquinas worked in a Greek concept of natural philosophy), the truth of the importance of causality is undiminished.

To determine what comes after is a major reason for differential equations, which contain three elements: the equation itself, the initial conditions and the boundary conditions.  Once we have these, we can predict the behaviour of a system.  In some cases we can do so with a “simple” equation; others require discretisation and numerical modelling.  And that leads to our second point.
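
As a simple illustration of these three elements, consider the one-dimensional heat equation for the temperature u(x,t) of a bar of length L:

{\frac{\partial u}{\partial t}}=k\,{\frac{\partial^{2}u}{\partial{x}^{2}}}

The equation itself says how the temperature evolves; an initial condition such as u(x,0)=f(x) gives the starting temperature distribution; and boundary conditions such as u(0,t)=u(L,t)=0 hold the ends at a fixed temperature.  Only with all three in hand can we predict the temperature of the bar at any later time.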

It’s interesting that Hopf speaks of “exact natural science.”  Today much of science and engineering is driven by probabilistic considerations, which in turn lead to statistical analysis.  Probability and statistics are very useful tools, but they are not a substitute for an understanding of the actual mechanisms by which things work.  The actual mechanisms (physical laws, etc.) are what cause the phenomena which we record as statistics, not the other way around.  The fact that there are variations in these should not blind us to the core reality.

The advent of computers with broad-based number crunching abilities has only inflated our overconfidence in such methods.  It is essential, however, that we understand the why of phenomena as well as the what.  We must be able both to quantify the results and to identify the correct causes of what is going on around us.  Two recent debacles illustrate this.

The first is the climate change fiasco we’ve been treated to of late.  Removing the dissimulation (as opposed to simulation) of some involved in the science, the core problem is that we do not as of yet have a model of global climate sufficiently comprehensive that we can dispense with reliance on the statistics and project what will happen with a reasonable degree of confidence.  Part of the problem is the core problem of chaos theory: minor variations in initial conditions lead to major variations in the results.  But without such a model we are bereft of a definitive “why” as much as a “what.”

The second is our financial collapse.  The models developed of the elaborate credit structure were fine as far as they went.  But ultimately they were divorced from sustainable reality because they did not take into consideration all of the factors, many of which were obvious to those with raw experience.

The issue of causality is central to our understanding of the universe.