3 Vector Integral Calculus

3–1 Vector integrals; the line integral of ∇ψ
We found in Chapter 2 that there were various ways of taking derivatives of fields. Some gave vector fields; some gave scalar fields. Although we developed many different formulas, everything in Chapter 2 could be summarized in one rule: the operators ∂/∂x, ∂/∂y, and ∂/∂z are the three components of a vector operator ∇. We would now like to get some understanding of the significance of the derivatives of fields. We will then have a better feeling for what a vector field equation means.
We have already discussed the meaning of the gradient operation (∇ on a scalar). Now we turn to the meanings of the divergence and curl operations. The interpretation of these quantities is best done in terms of certain vector integrals and equations relating such integrals. These equations cannot, unfortunately, be obtained from vector algebra by some easy substitution, so you will just have to learn them as something new. Of these integral formulas, one is practically trivial, but the other two are not. We will derive them and explain their implications. The equations we shall study are really mathematical theorems. They will be useful not only for interpreting the meaning and the content of the divergence and the curl, but also in working out general physical theories. These mathematical theorems are, for the theory of fields, what the theorem of the conservation of energy is to the mechanics of particles. General theorems like these are important for a deeper understanding of physics. You will find, though, that they are not very useful for solving problems—except in the simplest cases. It is delightful, however, that in the beginning of our subject there will be many simple problems which can be solved with the three integral formulas we are going to treat. We will see, however, as the problems get harder, that we can no longer use these simple methods.
We take up first an integral formula involving the gradient. The
relation contains a very simple idea: Since the gradient represents the
rate of change of a field quantity, if we integrate that rate of change,
we should get the total change. Suppose we have the scalar
field ψ(x,y,z). At any two points (1) and (2), the
function ψ will have the values ψ(1) and ψ(2),
respectively. [We use a convenient notation, in which (2) represents
the point $(x_2,y_2,z_2)$ and ψ(2) means the same thing
as $\psi(x_2,y_2,z_2)$.] If Γ (gamma) is any curve joining (1)
and (2), as in Fig. 3–1, the following relation is true:
Theorem 1.
$$\psi(2)-\psi(1)=\int_{(1)}^{(2)}(\nabla\psi)\cdot d\mathbf{s}\qquad\text{(along $\Gamma$)}.\tag{3.1}$$
The integral is a line integral, from (1) to (2) along the
curve Γ, of the dot product of ∇ψ—a
vector—with ds—another vector which is an infinitesimal line
element of the curve Γ (directed away from (1) and
toward (2)).
First, we should review what we mean by a line integral. Consider a scalar function f(x,y,z), and the curve Γ joining two points (1) and (2). We mark off the curve at a number of points and join these points by straight-line segments, as shown in Fig. 3–2. Each segment has the length $\Delta s_i$, where i is an index that runs 1, 2, 3, … By the line integral
$$\int_{(1)}^{(2)}f\,ds\qquad\text{(along $\Gamma$)}$$
we mean the limit of the sum $\sum_i f_i\,\Delta s_i$, where $f_i$ is the value of the function at the ith segment. The limiting value is what the sum approaches as we add more and more segments (in a sensible way, so that the largest $\Delta s_i\to 0$).
The integral in our theorem, Eq. (3.1), means the same thing, although it looks a little different. Instead of f, we have another scalar—the component of ∇ψ in the direction of Δs. If we write $(\nabla\psi)_t$ for this tangential component, it is clear that
$$(\nabla\psi)_t\,\Delta s=(\nabla\psi)\cdot\Delta\mathbf{s}.$$
The integral in Eq. (3.1) means the sum of such terms.
Now let’s see why Eq. (3.1) is true. In Chapter 2, we showed that the component of ∇ψ along a small displacement ΔR was the rate of change of ψ in the direction of ΔR. Consider the line segment $\Delta\mathbf{s}_1$ from (1) to point a in Fig. 3–2. According to our definition,
$$\Delta\psi_1=\psi(a)-\psi(1)=(\nabla\psi)_1\cdot\Delta\mathbf{s}_1.\tag{3.3}$$
Also, we have
$$\psi(b)-\psi(a)=(\nabla\psi)_2\cdot\Delta\mathbf{s}_2,\tag{3.4}$$
where, of course, $(\nabla\psi)_1$ means the gradient evaluated at the segment $\Delta\mathbf{s}_1$, and $(\nabla\psi)_2$, the gradient evaluated at $\Delta\mathbf{s}_2$. If we add Eqs. (3.3) and (3.4), we get
$$\psi(b)-\psi(1)=(\nabla\psi)_1\cdot\Delta\mathbf{s}_1+(\nabla\psi)_2\cdot\Delta\mathbf{s}_2.$$
You can see that if we keep adding such terms, we get the result
$$\psi(2)-\psi(1)=\sum_i(\nabla\psi)_i\cdot\Delta\mathbf{s}_i.$$
The left-hand side doesn’t depend on how we choose our intervals—if (1) and (2) are kept always the same—so we can take the limit of the right-hand side. We have therefore proved Eq. (3.1).
You can see from our proof that just as the equality doesn’t depend on how the points a, b, c, … are chosen, similarly it doesn’t depend on what we choose for the curve Γ to join (1) and (2). Our theorem is correct for any curve from (1) to (2).
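As a small numerical aside that is not part of the original argument, the sketch below approximates the line integral of ∇ψ along a curve by the sum $\sum_i(\nabla\psi)_i\cdot\Delta\mathbf{s}_i$ and compares it with ψ(2)−ψ(1), as Eq. (3.1) asserts. The particular field psi, the curve gamma(t), and the helper names are illustrative choices only.

```python
import numpy as np

# A sketch (not from the text): check Eq. (3.1) by approximating the line
# integral of grad(psi) along a curve with the sum  sum_i (grad psi)_i · Δs_i
# and comparing it with psi(2) - psi(1).

def psi(p):
    x, y, z = p
    return x**2 * y + np.sin(z)

def grad_psi(p):
    x, y, z = p
    return np.array([2 * x * y, x**2, np.cos(z)])

def gamma(t):
    # An arbitrary curve running from point (1) at t = 0 to point (2) at t = 1.
    return np.array([np.cos(t), t**2, 3 * t])

n = 10_000
t = np.linspace(0.0, 1.0, n + 1)
points = np.array([gamma(ti) for ti in t])

line_integral = 0.0
for a, b in zip(points[:-1], points[1:]):
    midpoint = 0.5 * (a + b)
    line_integral += grad_psi(midpoint) @ (b - a)    # (grad psi)_i · Δs_i

print(line_integral)                        # ≈ psi(2) - psi(1)
print(psi(points[-1]) - psi(points[0]))
```

The two printed numbers agree to several decimal places, and choosing a different curve between the same endpoints does not change the result, in line with the remark above.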
One remark on notation: You will see that there is no confusion if we
write, for convenience,
$$(\nabla\psi)\cdot d\mathbf{s}=\nabla\psi\cdot d\mathbf{s}.$$
With this notation, our theorem is
Theorem 1.
$$\psi(2)-\psi(1)=\int_{(1)}^{(2)}\nabla\psi\cdot d\mathbf{s}\qquad\text{(any curve from (1) to (2))}.\tag{3.8}$$
3–2 The flux of a vector field
Before we consider our next integral theorem—a theorem about the divergence—we would like to study a certain idea which has an easily understood physical significance in the case of heat flow. We have defined the vector h, which represents the heat that flows through a unit area in a unit time. Suppose that inside a block of material we have some closed surface S which encloses the volume V (Fig. 3–3). We would like to find out how much heat is flowing out of this volume. We can, of course, find it by calculating the total heat flow out of the surface S.
We write da for the area of an element of the surface. The symbol stands for a two-dimensional differential. If, for instance, the area happened to be in the xy-plane we would have $da=dx\,dy$. Later we shall have integrals over volume and for these it is convenient to consider a differential volume that is a little cube. So when we write dV we mean $dV=dx\,dy\,dz$.
Some people like to write $d^2a$ instead of da to remind themselves that it is kind of a second-order quantity. They would also write $d^3V$ instead of dV. We will use the simpler notation, and assume that you can remember that an area has two dimensions and a volume has three.
The heat flow out through the surface element da is the area times the component of h perpendicular to da. We have already defined n as a unit vector pointing outward at right angles to the surface (Fig. 3–3). The component of h that we want is
$$h_n=\mathbf{h}\cdot\mathbf{n}.$$
The heat flow out through da is then
$$\mathbf{h}\cdot\mathbf{n}\,da.\tag{3.10}$$
To get the total heat flow through any surface we sum the contributions from all the elements of the surface. In other words, we integrate (3.10) over the whole surface:
$$\text{(Total heat flow outward through }S)=\int_S\mathbf{h}\cdot\mathbf{n}\,da.$$
We are also going to call this surface integral “the flux of h through the surface.” Originally the word flux meant flow, so that the surface integral just means the flow of h through the surface. We may think: h is the “current density” of heat flow and the surface integral of it is the total heat current directed out of the surface; that is, the thermal energy per unit time (joules per second).
We would like to generalize this idea to the case where the vector does not represent the flow of anything; for instance, it might be the electric field. We can certainly still integrate the normal component of the electric field over an area if we wish. Although it is not the flow of anything, we still call it the “flux.” We say
$$\text{(Flux of }\mathbf{E}\text{ through the surface }S)=\int_S\mathbf{E}\cdot\mathbf{n}\,da.$$
We generalize the word “flux” to mean the “surface integral of the normal component” of a vector. We will also use the same definition even when the surface considered is not a closed one, as it is here.
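To make the definition concrete, here is a numerical sketch, not part of the original text, of such a surface integral: the flux of a sample field E through the upper unit hemisphere (an open surface), parameterized by spherical angles. The field, the surface, and the helper function are illustrative choices; for this particular field, E⋅n = 1 everywhere on the unit sphere, so the exact flux is the hemisphere’s area, 2π.

```python
import numpy as np

# A sketch (not from the text): approximate the flux  ∫_S E·n da  of the field
# E(x, y, z) = (x, y, z) through the upper unit hemisphere by a midpoint sum
# over the spherical angles.

def flux_through_hemisphere(E, n_theta=300, n_phi=600):
    dtheta = (np.pi / 2) / n_theta
    dphi = (2 * np.pi) / n_phi
    theta = (np.arange(n_theta) + 0.5) * dtheta      # polar angle, 0..pi/2
    phi = (np.arange(n_phi) + 0.5) * dphi            # azimuthal angle, 0..2*pi
    T, P = np.meshgrid(theta, phi, indexing="ij")

    # A point on the unit sphere and its outward unit normal coincide.
    nx, ny, nz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
    da = np.sin(T) * dtheta * dphi                   # area element on the sphere

    Ex, Ey, Ez = E(nx, ny, nz)
    return np.sum((Ex * nx + Ey * ny + Ez * nz) * da)

E = lambda x, y, z: (x, y, z)
print(flux_through_hemisphere(E))   # ≈ 6.283... = 2*pi
print(2 * np.pi)
```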
Returning to the special case of heat flow, let us take a situation in which heat is conserved. For example, imagine some material in which after an initial heating no further heat energy is generated or absorbed. Then, if there is a net heat flow out of a closed surface, the heat content of the volume inside must decrease. So, in circumstances in which heat would be conserved, we say that
$$\int_S\mathbf{h}\cdot\mathbf{n}\,da=-\frac{dQ}{dt},\tag{3.13}$$
where Q is the heat inside the surface. The heat flux out of S is equal to minus the rate of change with respect to time of the total heat Q inside of S. This interpretation is possible because we are speaking of heat flow and also because we supposed that the heat was conserved. We could not, of course, speak of the total heat inside the volume if heat were being generated there.
Now we shall point out an interesting fact about the flux of any vector. You may think of the heat flow vector if you wish, but what we say will be true for any vector field C. Imagine that we have a closed surface S that encloses the volume V. We now separate the volume into two parts by some kind of a “cut,” as in Fig. 3–4. Now we have two closed surfaces and volumes. The volume $V_1$ is enclosed in the surface $S_1$, which is made up of part of the original surface $S_a$ and of the surface of the cut, $S_{ab}$. The volume $V_2$ is enclosed by $S_2$, which is made up of the rest of the original surface $S_b$ and closed off by the cut $S_{ab}$. Now consider the following question: Suppose we calculate the flux out through surface $S_1$ and add to it the flux through surface $S_2$. Does the sum equal the flux through the whole surface that we started with? The answer is yes. The fluxes through the part of the surface $S_{ab}$ common to both $S_1$ and $S_2$ exactly cancel each other. For the flux of the vector C out of $V_1$ we can write
$$\text{Flux through }S_1=\int_{S_a}\mathbf{C}\cdot\mathbf{n}\,da+\int_{S_{ab}}\mathbf{C}\cdot\mathbf{n}_1\,da,\tag{3.14}$$
and for the flux out of $V_2$,
$$\text{Flux through }S_2=\int_{S_b}\mathbf{C}\cdot\mathbf{n}\,da+\int_{S_{ab}}\mathbf{C}\cdot\mathbf{n}_2\,da.\tag{3.15}$$
Note that in the second integral we have written $\mathbf{n}_1$ for the outward normal for $S_{ab}$ when it belongs to $S_1$, and $\mathbf{n}_2$ when it belongs to $S_2$, as shown in Fig. 3–4. Clearly, $\mathbf{n}_1=-\mathbf{n}_2$, so that
$$\int_{S_{ab}}\mathbf{C}\cdot\mathbf{n}_1\,da=-\int_{S_{ab}}\mathbf{C}\cdot\mathbf{n}_2\,da.$$
If we now add Eqs. (3.14) and (3.15), we see that the sum of the fluxes through $S_1$ and $S_2$ is just the sum of two integrals which, taken together, give the flux through the original surface $S=S_a+S_b$.
We see that the flux through the complete outer surface S can be considered as the sum of the fluxes from the two pieces into which the volume was broken. We can similarly subdivide again—say by cutting V1 into two pieces. You see that the same arguments apply. So for any way of dividing the original volume, it must be generally true that the flux through the outer surface, which is the original integral, is equal to a sum of the fluxes out of all the little interior pieces.
3–3 The flux from a cube; Gauss’ theorem
We now take the special case of a small cube¹ and find an interesting formula for the flux out of it. Consider a cube whose edges are lined up with the axes as in Fig. 3–5. Let us suppose that the coordinates of the corner nearest the origin are x, y, z. Let Δx be the length of the cube in the x-direction, Δy be the length in the y-direction, and Δz be the length in the z-direction. We wish to find the flux of a vector field C through the surface of the cube. We shall do this by making a sum of the fluxes through each of the six faces. First, consider the face marked 1 in the figure. The flux outward on this face is the negative of the x-component of C, integrated over the area of the face. This flux is
$$-\int C_x\,dy\,dz.$$
Since we are considering a small cube, we can approximate this integral by the value of $C_x$ at the center of the face—which we call the point (1)—multiplied by the area of the face, ΔyΔz:
$$\text{Flux out of }1=-C_x(1)\,\Delta y\,\Delta z.$$
Similarly, for the flux out of face 2, we write
$$\text{Flux out of }2=C_x(2)\,\Delta y\,\Delta z.$$
Now $C_x(1)$ and $C_x(2)$ are, in general, slightly different. If Δx is small enough, we can write
$$C_x(2)=C_x(1)+\frac{\partial C_x}{\partial x}\,\Delta x.$$
There are, of course, more terms, but they will involve $(\Delta x)^2$ and higher powers, and so will be negligible if we consider only the limit of small Δx. So the flux through face 2 is
$$\text{Flux out of }2=\Bigl[C_x(1)+\frac{\partial C_x}{\partial x}\,\Delta x\Bigr]\Delta y\,\Delta z.$$
Summing the fluxes for faces 1 and 2, we get
$$\text{Flux out of 1 and 2}=\frac{\partial C_x}{\partial x}\,\Delta x\,\Delta y\,\Delta z.$$
The derivative should really be evaluated at the center of face 1; that is, at $[x,\,y+(\Delta y/2),\,z+(\Delta z/2)]$. But in the limit of an infinitesimal cube, we make a negligible error if we evaluate it at the corner (x, y, z).
Applying the same reasoning to each of the other pairs of faces, we have
$$\text{Flux out of 3 and 4}=\frac{\partial C_y}{\partial y}\,\Delta x\,\Delta y\,\Delta z$$
and
$$\text{Flux out of 5 and 6}=\frac{\partial C_z}{\partial z}\,\Delta x\,\Delta y\,\Delta z.$$
The total flux through all the faces is the sum of these terms. We find that
$$\int_{\text{cube}}\mathbf{C}\cdot\mathbf{n}\,da=\left(\frac{\partial C_x}{\partial x}+\frac{\partial C_y}{\partial y}+\frac{\partial C_z}{\partial z}\right)\Delta x\,\Delta y\,\Delta z,$$
and the sum of the derivatives is just ∇⋅C. Also, ΔxΔyΔz=ΔV, the volume of the cube. So we can say that for an infinitesimal cube
$$\int_{\text{surface}}\mathbf{C}\cdot\mathbf{n}\,da=(\nabla\cdot\mathbf{C})\,\Delta V.\tag{3.17}$$
We have shown that the outward flux from the surface of an infinitesimal cube is equal to the divergence of the vector multiplied by the volume of the cube. We now see the “meaning” of the divergence of a vector. The divergence of a vector at the point P is the flux—the outgoing “flow” of C—per unit volume, in the neighborhood of P.
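The following numerical sketch, not part of the original text, checks Eq. (3.17): for an arbitrarily chosen field C, the flux out of a small cube divided by the cube’s volume approaches ∇⋅C as the cube shrinks. The field, the corner point, and the helper function are illustrative choices.

```python
import numpy as np

# A sketch (not from the text) of Eq. (3.17): flux out of a small cube,
# divided by its volume, approaches div C as the cube shrinks.

def C(x, y, z):
    return np.array([x**2 * y, y * z, np.sin(x) + z**2])

def div_C(x, y, z):
    # dCx/dx + dCy/dy + dCz/dz for the field above: 2xy + z + 2z.
    return 2 * x * y + z + 2 * z

def flux_out_of_cube(corner, d, n=40):
    """Sum C·n da over the six faces of a cube with the given corner and edge d."""
    x0, y0, z0 = corner
    u = (np.arange(n) + 0.5) * d / n                 # midpoints across a face
    U, V = np.meshgrid(u, u, indexing="ij")
    da = (d / n) ** 2
    flux = 0.0
    # For each pair of opposite faces, the near face has outward normal -e and
    # the far face +e, so only one component of C contributes on each.
    flux += np.sum(C(x0 + d, y0 + U, z0 + V)[0] - C(x0, y0 + U, z0 + V)[0]) * da
    flux += np.sum(C(x0 + U, y0 + d, z0 + V)[1] - C(x0 + U, y0, z0 + V)[1]) * da
    flux += np.sum(C(x0 + U, y0 + V, z0 + d)[2] - C(x0 + U, y0 + V, z0)[2]) * da
    return flux

corner = (0.3, 0.7, 0.2)
for d in (0.1, 0.01, 0.001):
    print(d, flux_out_of_cube(corner, d) / d**3)     # flux per unit volume
print("div C at the corner:", div_C(*corner))
# The flux per unit volume approaches div C at the corner as d -> 0.
```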
We have connected the divergence of C to the flux of C out
of each infinitesimal volume. For any finite volume we can use the
fact we proved above—that the total flux from a volume is the sum of
the fluxes out of each part. We can, that is, integrate the divergence
over the entire volume. This gives us the theorem that the integral of
the normal component of any vector over any closed surface can also be
written as the integral of the divergence of the vector over the
volume enclosed by the surface. This theorem is named after
Gauss.
Gauss’ Theorem
$$\int_S\mathbf{C}\cdot\mathbf{n}\,da=\int_V\nabla\cdot\mathbf{C}\,dV,\tag{3.18}$$
where S is any closed surface and V is the volume inside it.
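Here is a numerical check of Gauss’ theorem for a finite volume, again not part of the original text: for an arbitrarily chosen field C and a sphere, both sides of Eq. (3.18) are evaluated by simple midpoint sums and compared. The field, the radius, the center, and the grid sizes are illustrative choices.

```python
import numpy as np

# A sketch (not from the text): check  ∮_S C·n da = ∫_V div C dV  for a sphere
# of radius R centered at r0, using midpoint sums in spherical coordinates.

R = 1.0
r0 = np.array([0.5, -0.2, 0.3])

def C(x, y, z):
    return np.array([x**2, y**2, z**2])

def div_C(x, y, z):
    return 2 * x + 2 * y + 2 * z

n = 200
theta = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(2 * n) + 0.5) * 2 * np.pi / (2 * n)
dtheta, dphi = np.pi / n, 2 * np.pi / (2 * n)
T, P = np.meshgrid(theta, phi, indexing="ij")
nx, ny, nz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)  # outward normal

# Surface integral: C·n over the sphere, with da = R^2 sin(theta) dtheta dphi.
Cx, Cy, Cz = C(r0[0] + R * nx, r0[1] + R * ny, r0[2] + R * nz)
flux = np.sum((Cx * nx + Cy * ny + Cz * nz) * R**2 * np.sin(T) * dtheta * dphi)

# Volume integral: div C over the ball, with dV = r^2 sin(theta) dr dtheta dphi.
nr = 100
dr = R / nr
volume_integral = 0.0
for ri in (np.arange(nr) + 0.5) * dr:
    d = div_C(r0[0] + ri * nx, r0[1] + ri * ny, r0[2] + ri * nz)
    volume_integral += np.sum(d * ri**2 * np.sin(T) * dtheta * dphi) * dr

print(flux, volume_integral)   # the two numbers agree closely
```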
3–4 Heat conduction; the diffusion equation
Let’s consider an example of the use of this theorem, just to get familiar with it. Suppose we take again the case of heat flow in, say, a metal. Suppose we have a simple situation in which all the heat has been previously put in and the body is just cooling off. There are no sources of heat, so that heat is conserved. Then how much heat is there inside some chosen volume at any time? It must be decreasing by just the amount that flows out of the surface of the volume. If our volume is a little cube, we would write, following Eq. (3.17),
$$\text{Heat out}=\int_{\text{cube}}\mathbf{h}\cdot\mathbf{n}\,da=\nabla\cdot\mathbf{h}\,\Delta V.\tag{3.19}$$
But this must equal the rate of loss of the heat inside the cube. If q is the heat per unit volume, the heat in the cube is qΔV, and the rate of loss is
$$-\frac{\partial}{\partial t}(q\,\Delta V)=-\frac{\partial q}{\partial t}\,\Delta V.\tag{3.20}$$
Comparing (3.19) and (3.20), we see that
$$-\frac{\partial q}{\partial t}=\nabla\cdot\mathbf{h}.\tag{3.21}$$
Take careful note of the form of this equation; the form appears often in physics. It expresses a conservation law—here the conservation of heat. We have expressed the same physical fact in another way in Eq. (3.13). Here we have the differential form of a conservation equation, while Eq. (3.13) is the integral form.
We have obtained Eq. (3.21) by applying Eq. (3.13) to an infinitesimal cube. We can also go the other way. For a big volume V bounded by S, Gauss’ law says that
$$\int_S\mathbf{h}\cdot\mathbf{n}\,da=\int_V\nabla\cdot\mathbf{h}\,dV.$$
Using (3.21), the integral on the right-hand side is found to be just −dQ/dt, and again we have Eq. (3.13).
Now let’s consider a different case. Imagine that we have a block of material and that inside it there is a very tiny hole in which some chemical reaction is taking place and generating heat. Or we could imagine that there are some wires running into a tiny resistor that is being heated by an electric current. We shall suppose that the heat is generated practically at a point, and let W represent the energy liberated per second at that point. We shall suppose that in the rest of the volume heat is conserved, and that the heat generation has been going on for a long time—so that now the temperature is no longer changing anywhere. The problem is: What does the heat vector h look like at various places in the metal? How much heat flow is there at each point?
We know that if we integrate the normal component of h over a closed surface that encloses the source, we will always get W. All the heat that is being generated at the point source must flow out through the surface, since we have supposed that the flow is steady. We have the difficult problem of finding a vector field which, when integrated over any surface, always gives W. We can, however, find the field rather easily by taking a somewhat special surface. We take a sphere of radius R, centered at the source, and assume that the heat flow is radial (Fig. 3–6). Our intuition tells us that h should be radial if the block of material is large and we don’t get too close to the edges, and it should also have the same magnitude at all points on the sphere. You see that we are adding a certain amount of guesswork—usually called “physical intuition”—to our mathematics in order to find the answer.
When h is radial and spherically symmetric, the integral of the normal component of h over the area is very simple, because the normal component is just the magnitude of h and is constant. The area over which we integrate is $4\pi R^2$. We have then that
$$\int_S\mathbf{h}\cdot\mathbf{n}\,da=h\cdot4\pi R^2$$
(where h is the magnitude of h). This integral should equal W, the rate at which heat is produced at the source. We get
$$h=\frac{W}{4\pi R^2},$$
or
$$\mathbf{h}=\frac{W}{4\pi R^2}\,\mathbf{e}_r,$$
where, as usual, $\mathbf{e}_r$ represents a unit vector in the radial direction. Our result says that h is proportional to W and varies inversely as the square of the distance from the source.
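As a check that is not part of the original text, the sketch below integrates the field h = (W/4πr²)e_r over a closed surface that is not a sphere, namely a cube of side 2 centered on the source, and finds a flux of W, as the argument above requires. The value of W, the size of the cube, and the helper function are arbitrary illustrative choices.

```python
import numpy as np

# A sketch (not from the text): the field h = (W / 4*pi*r^2) e_r gives a flux
# of W through *any* closed surface enclosing the source.  We check this for
# a cube of side 2 centered on the source.

W = 7.0

def h(x, y, z):
    r2 = x**2 + y**2 + z**2
    r = np.sqrt(r2)
    factor = W / (4 * np.pi * r2)
    return factor * x / r, factor * y / r, factor * z / r

def flux_through_cube(side=2.0, n=400):
    a = side / 2
    u = (np.arange(n) + 0.5) / n * side - a       # midpoints from -a to +a
    U, V = np.meshgrid(u, u, indexing="ij")
    da = (side / n) ** 2
    total = 0.0
    # Pairs of faces perpendicular to x, y, and z; on each face only the
    # corresponding component of h contributes to h·n.
    total += np.sum(h( a, U, V)[0] - h(-a, U, V)[0]) * da
    total += np.sum(h(U,  a, V)[1] - h(U, -a, V)[1]) * da
    total += np.sum(h(U, V,  a)[2] - h(U, V, -a)[2]) * da
    return total

print(flux_through_cube())   # ≈ 7.0 = W
```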
The result we have just obtained applies to the heat flow in the vicinity of a point source of heat. Let’s now try to find the equations that hold in the most general kind of heat flow, keeping only the condition that heat is conserved. We will be dealing only with what happens at places outside of any sources or absorbers of heat.
The differential equation for the conduction of heat was derived in Chapter 2. According to Eq. (2.44),
$$\mathbf{h}=-\kappa\,\nabla T.\tag{3.25}$$
(Remember that this relationship is an approximate one, but fairly good for some materials like metals.) It is applicable, of course, only in regions of the material where there is no generation or absorption of heat. We derived above another relation, Eq. (3.21), that holds when heat is conserved. If we combine that equation with (3.25), we get
$$-\frac{\partial q}{\partial t}=\nabla\cdot\mathbf{h}=-\nabla\cdot(\kappa\,\nabla T),$$
or
$$\frac{\partial q}{\partial t}=\kappa\,\nabla\cdot\nabla T=\kappa\,\nabla^2T,\tag{3.26}$$
if κ is a constant. You remember that q is the amount of heat in a unit volume and $\nabla\cdot\nabla=\nabla^2$ is the Laplacian operator
$$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$
If we now make one more assumption we can obtain a very interesting equation. We assume that the temperature of the material is proportional to the heat content per unit volume—that is, that the material has a definite specific heat. When this assumption is valid (as it often is), we can write
$$\Delta q=c_v\,\Delta T$$
or
$$\frac{\partial q}{\partial t}=c_v\,\frac{\partial T}{\partial t}.\tag{3.27}$$
The rate of change of heat is proportional to the rate of change of temperature. The constant of proportionality $c_v$ is, here, the specific heat per unit volume of the material. Using Eq. (3.27) with (3.26), we get
$$\frac{\partial T}{\partial t}=\frac{\kappa}{c_v}\,\nabla^2T.\tag{3.28}$$
We find that the time rate of change of T—at every point—is proportional to the Laplacian of T, which is the second derivative of its spatial dependence. We have a differential equation—in x, y, z, and t—for the temperature T.
The differential equation (3.28) is called the heat diffusion equation. It is often written as
$$\frac{\partial T}{\partial t}=D\,\nabla^2T,$$
where D is called the diffusion constant, and is here equal to $\kappa/c_v$.
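The following sketch, not part of the original text, marches the diffusion equation forward in time with the simplest explicit finite-difference scheme, in one dimension for brevity. The grid, time step, and initial hot spot are arbitrary choices; the only physical input is the equation above with D = 1.

```python
import numpy as np

# A sketch (not from the text): solve  dT/dt = D * d^2T/dx^2  in one dimension
# with an explicit finite-difference scheme.  The scheme is stable as long as
# D*dt/dx^2 <= 1/2.

D = 1.0
nx, dx = 101, 0.01
dt = 0.4 * dx**2 / D                    # safely below the stability limit
T = np.zeros(nx)
T[nx // 2] = 100.0                      # an initial concentrated "hot spot"

for step in range(500):
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2   # discrete Laplacian
    lap[0] = lap[-1] = 0.0              # hold the two ends at T = 0
    T = T + D * dt * lap

# The hot spot has spread into a smooth, roughly Gaussian bump.
print(T[nx // 2], T[nx // 2 + 10], T[nx // 2 + 20])
```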
The diffusion equation appears in many physical problems—in the diffusion of gases, in the diffusion of neutrons, and in others. We have already discussed the physics of some of these phenomena in Chapter 43 of Vol. I. Now you have the complete equation that describes diffusion in the most general possible situation. At some later time we will take up ways of solving the diffusion equation to find how the temperature varies in particular cases. We turn back now to consider other theorems about vector fields.
3–5 The circulation of a vector field
We wish now to look at the curl in somewhat the same way we looked at the divergence. We obtained Gauss’ theorem by considering the integral over a surface, although it was not obvious at the beginning that we were going to be dealing with the divergence. How did we know that we were supposed to integrate over a surface in order to get the divergence? It was not at all clear that this would be the result. And so with an apparent equal lack of justification, we shall calculate something else about a vector and show that it is related to the curl. This time we calculate what is called the circulation of a vector field. If C is any vector field, we take its component along a curved line and take the integral of this component all the way around a complete loop. The integral is called the circulation of the vector field around the loop. We have already considered a line integral of ∇ψ earlier in this chapter. Now we do the same kind of thing for any vector field C.
Let Γ be any closed loop in space—imaginary, of course. An example is given in Fig. 3–7. The line integral of the tangential component of C around the loop is written as
$$\oint_\Gamma C_t\,ds=\oint_\Gamma\mathbf{C}\cdot d\mathbf{s}.$$
You should note that the integral is taken all the way around, not from one point to another as we did before. The little circle on the integral sign is to remind us that the integral is to be taken all the way around. This integral is called the circulation of the vector field around the curve Γ. The name came originally from considering the circulation of a liquid. But the name—like flux—has been extended to apply to any field even when there is no material “circulating.”
Playing the same kind of game we did with the flux, we can show that the circulation around a loop is the sum of the circulations around two partial loops. Suppose we break up our curve of Fig. 3–7 into two loops, by joining two points (1) and (2) on the original curve by some line that cuts across as shown in Fig. 3–8. There are now two loops, $\Gamma_1$ and $\Gamma_2$. $\Gamma_1$ is made up of $\Gamma_a$, which is that part of the original curve to the left of (1) and (2), plus $\Gamma_{ab}$, the “short cut.” $\Gamma_2$ is made up of the rest of the original curve plus the short cut.
The circulation around $\Gamma_1$ is the sum of an integral along $\Gamma_a$ and along $\Gamma_{ab}$. Similarly, the circulation around $\Gamma_2$ is the sum of two parts, one along $\Gamma_b$ and the other along $\Gamma_{ab}$. The integral along $\Gamma_{ab}$ will have, for the curve $\Gamma_2$, the opposite sign from what it has for $\Gamma_1$, because the direction of travel is opposite—we must take both our line integrals with the same “sense” of rotation.
Following the same kind of argument we used before, you can see that the sum of the two circulations will give just the line integral around the original curve Γ. The parts due to $\Gamma_{ab}$ cancel. The circulation around the one part plus the circulation around the second part equals the circulation about the outer line. We can continue the process of cutting the original loop into any number of smaller loops. When we add the circulations of the smaller loops, there is always a cancellation of the parts on their adjacent portions, so that the sum is equivalent to the circulation around the original single loop.
Now let us suppose that the original loop is the boundary of some surface. There are, of course, an infinite number of surfaces which all have the original loop as the boundary. Our results will not, however, depend on which surface we choose. First, we break our original loop into a number of small loops that all lie on the surface we have chosen, as in Fig. 3–9. No matter what the shape of the surface, if we choose our small loops small enough, we can assume that each of the small loops will enclose an area which is essentially flat. Also, we can choose our small loops so that each is very nearly a square. Now we can calculate the circulation around the big loop Γ by finding the circulations around all of the little squares and then taking their sum.
3–6 The circulation around a square; Stokes’ theorem
How shall we find the circulation for each little square? One question is, how is the square oriented in space? We could easily make the calculation if the square had some special orientation, for example, if it were in one of the coordinate planes. Since we have not assumed anything as yet about the orientation of the coordinate axes, we can just as well choose the axes so that the one little square we are concentrating on at the moment lies in the xy-plane, as in Fig. 3–10. If our result is expressed in vector notation, we can say that it will be the same no matter what the particular orientation of the plane.
We want now to find the circulation of the field C around our little square. It will be easy to do the line integral if we make the square small enough that the vector C doesn’t change much along any one side of the square. (The assumption is better the smaller the square, so we are really talking about infinitesimal squares.) Starting at the point (x, y)—the lower left corner of the figure—we go around in the direction indicated by the arrows. Along the first side—marked (1)—the tangential component is $C_x(1)$ and the distance is Δx. The first part of the integral is $C_x(1)\,\Delta x$. Along the second leg, we get $C_y(2)\,\Delta y$. Along the third, we get $-C_x(3)\,\Delta x$, and along the fourth, $-C_y(4)\,\Delta y$. The minus signs are required because we want the tangential component in the direction of travel. The whole line integral is then
$$\oint\mathbf{C}\cdot d\mathbf{s}=C_x(1)\,\Delta x+C_y(2)\,\Delta y-C_x(3)\,\Delta x-C_y(4)\,\Delta y.\tag{3.32}$$
Now let’s look at the first and third pieces. Together they are
$$[C_x(1)-C_x(3)]\,\Delta x.$$
You might think that to our approximation the difference is zero. That is true to the first approximation. We can be more accurate, however, and take into account the rate of change of $C_x$. If we do, we may write
$$C_x(3)=C_x(1)+\frac{\partial C_x}{\partial y}\,\Delta y.\tag{3.33}$$
If we included the next approximation, it would involve terms in $(\Delta y)^2$, but since we will ultimately think of the limit as Δy→0, such terms can be neglected. Putting (3.33) together with (3.32), we find that
$$[C_x(1)-C_x(3)]\,\Delta x=-\frac{\partial C_x}{\partial y}\,\Delta x\,\Delta y.$$
The derivative can, to our approximation, be evaluated at (x, y).
Similarly, for the other two terms in the circulation, we may write
$$C_y(2)\,\Delta y-C_y(4)\,\Delta y=\frac{\partial C_y}{\partial x}\,\Delta x\,\Delta y.$$
The circulation around our square is then
$$\left(\frac{\partial C_y}{\partial x}-\frac{\partial C_x}{\partial y}\right)\Delta x\,\Delta y,\tag{3.36}$$
which is interesting, because the two terms in the parentheses are just the z-component of the curl. Also, we note that ΔxΔy is the area of our square. So we can write our circulation (3.36) as
$$(\nabla\times\mathbf{C})_z\,\Delta a.$$
But the z-component really means the component normal to the surface element. We can, therefore, write the circulation around a differential square in an invariant vector form:
$$\oint\mathbf{C}\cdot d\mathbf{s}=(\nabla\times\mathbf{C})_n\,\Delta a=(\nabla\times\mathbf{C})\cdot\mathbf{n}\,\Delta a.$$
Our result is: the circulation of any vector C around an infinitesimal square is the component of the curl of C normal to the surface, times the area of the square.
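Here is a numerical illustration of this last result, not part of the original text: for an arbitrarily chosen field C, the circulation around a small square in the xy-plane, computed directly as a line integral over its four sides, is compared with (∇×C)_z times the square’s area. The field, the corner point, and the helper function are illustrative choices.

```python
import numpy as np

# A sketch (not from the text): compare the circulation around a small square
# in the xy-plane with (curl C)_z times the square's area.

def C(x, y):
    # Only Cx and Cy matter for a square lying in the xy-plane (z fixed).
    return np.array([x * y**2, np.sin(x) + y])

def curl_z(x, y):
    # (curl C)_z = dCy/dx - dCx/dy for the field above.
    return np.cos(x) - 2 * x * y

def circulation_around_square(x0, y0, d, n=200):
    t = (np.arange(n) + 0.5) / n * d       # midpoints along each side
    dl = d / n
    total = 0.0
    total += np.sum(C(x0 + t, y0    )[0]) * dl    # side 1: +x direction
    total += np.sum(C(x0 + d, y0 + t)[1]) * dl    # side 2: +y direction
    total -= np.sum(C(x0 + t, y0 + d)[0]) * dl    # side 3: -x direction
    total -= np.sum(C(x0,     y0 + t)[1]) * dl    # side 4: -y direction
    return total

x0, y0, d = 0.4, 0.9, 1e-3
print(circulation_around_square(x0, y0, d) / d**2)   # ≈ (curl C)_z
print(curl_z(x0, y0))
```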
The circulation around any loop Γ can now be easily related to
the curl of the vector field. We fill in the loop with any convenient
surface S, as in Fig. 3–11, and add the circulations
around a set of infinitesimal squares in this surface. The sum can be
written as an integral. Our result is a very useful theorem called
Stokes’ theorem (after Mr. Stokes).
Stokes’ Theorem
$$\oint_\Gamma\mathbf{C}\cdot d\mathbf{s}=\int_S(\nabla\times\mathbf{C})_n\,da,$$
where S is any surface bounded by Γ.
We must now speak about a convention of signs. In Fig. 3–10 the z-axis would point toward you in a “usual”—that is, “right-handed”—system of axes. When we took our line integral with a “positive” sense of rotation, we found that the circulation was equal to the z-component of ∇×C. If we had gone around the other way, we would have gotten the opposite sign. Now how shall we know, in general, what direction to choose for the positive direction of the “normal” component of ∇×C? The “positive” normal must always be related to the sense of rotation, as in Fig. 3–10. It is indicated for the general case in Fig. 3–11.
One way of remembering the relationship is by the “right-hand rule.” If you make the fingers of your right hand go around the curve Γ, with the fingertips pointed in the direction of the positive sense of ds, then your thumb points in the direction of the positive normal to the surface S.
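As a numerical check that is not part of the original text, the sketch below verifies Stokes’ theorem for a unit circle in the xy-plane traversed counterclockwise, so that by the right-hand rule the positive normal is +z. The field is an arbitrary choice; for it, both sides work out to 3π/2.

```python
import numpy as np

# A sketch (not from the text): check Stokes' theorem for the unit circle in
# the xy-plane (counterclockwise, positive normal +z) and the field below.

def C(x, y):
    return np.array([-y**3, x**3])

def curl_z(x, y):
    return 3 * x**2 + 3 * y**2          # dCy/dx - dCx/dy

# Circulation: parameterize the circle as (cos t, sin t), ds = (-sin t, cos t) dt.
n = 2000
t = (np.arange(n) + 0.5) * 2 * np.pi / n
dt = 2 * np.pi / n
Cx, Cy = C(np.cos(t), np.sin(t))
circulation = np.sum((Cx * (-np.sin(t)) + Cy * np.cos(t)) * dt)

# Surface integral of (curl C)_z over the unit disk, in polar coordinates.
nr, nphi = 400, 400
r = (np.arange(nr) + 0.5) / nr
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi
R, PHI = np.meshgrid(r, phi, indexing="ij")
surface_integral = np.sum(curl_z(R * np.cos(PHI), R * np.sin(PHI)) * R) \
    * (1 / nr) * (2 * np.pi / nphi)

print(circulation, surface_integral)    # both ≈ 3*pi/2 ≈ 4.712
```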
3–7 Curl-free and divergence-free fields
We would like, now, to consider some consequences of our new theorems. Take first the case of a vector whose curl is everywhere zero. Then Stokes’ theorem says that the circulation around any loop is zero. Now if we choose two points (1) and (2) on a closed curve (Fig. 3–12), it follows that the line integral of the tangential component from (1) to (2) is independent of which of the two possible paths is taken. We can conclude that the integral from (1) to (2) can depend only on the location of these points—that is to say, it is some function of position only. The same logic was used in Chapter 14 of Vol. I, where we proved that if the integral around a closed loop of some quantity is always zero, then that integral can be represented as the difference of a function of the position of the two ends. This fact allowed us to invent the idea of a potential. We proved, furthermore, that the vector field was the gradient of this potential function (see Eq. (14.13) of Vol. I).
It follows that any vector field whose curl is zero is equal to the gradient of some scalar function. That is, if ∇×C=0 everywhere, there is some ψ (psi) for which C=∇ψ—a useful idea. We can, if we wish, describe this special kind of vector field by means of a scalar field.
Let’s show something else. Suppose we have any scalar field ϕ (phi). If we take its gradient, ∇ϕ, the integral of this vector around any closed loop must be zero. Its line integral from point (1) to point (2) is [ϕ(2)−ϕ(1)]. If (1) and (2) are the same points, our Theorem 1, Eq. (3.8), tells us that the line integral is zero:
$$\oint_{\text{loop}}\nabla\phi\cdot d\mathbf{s}=0.$$
Using Stokes’ theorem, we can conclude that
$$\int\bigl(\nabla\times(\nabla\phi)\bigr)_n\,da=0$$
over any surface. But if the integral is zero over any surface, the integrand must be zero. So
$$\nabla\times(\nabla\phi)=\mathbf{0},\quad\text{always}.$$
We proved the same result in Section 2–7 by vector algebra.
Let’s look now at a special case in which we fill in a small loop Γ with a large surface S, as indicated in Fig. 3–13. We would like, in fact, to see what happens when the loop shrinks down to a point, so that the surface boundary disappears—the surface becomes closed. Now if the vector C is everywhere finite, the line integral around Γ must go to zero as we shrink the loop—the integral is roughly proportional to the circumference of Γ, which goes to zero. According to Stokes’ theorem, the surface integral of $(\nabla\times\mathbf{C})_n$ must also vanish. Somehow, as we close the surface we add in contributions that cancel out what was there before. So we have a new theorem:
$$\int_{\text{any closed surface}}(\nabla\times\mathbf{C})_n\,da=0.$$
Now this is interesting, because we already have a theorem about the surface integral of a vector field. Such a surface integral is equal to the volume integral of the divergence of the vector, according to Gauss’ theorem (Eq. 3.18). Gauss’ theorem, applied to ∇×C, says
$$\int_{\text{closed surface}}(\nabla\times\mathbf{C})_n\,da=\int_{\text{volume inside}}\nabla\cdot(\nabla\times\mathbf{C})\,dV.$$
So we conclude that the second integral must also be zero:
$$\int_{\text{any volume}}\nabla\cdot(\nabla\times\mathbf{C})\,dV=0,\tag{3.41}$$
and this is true for any vector field C whatever. Since Eq. (3.41) is true for any volume, it must be true that at every point in space the integrand is zero. We have
$$\nabla\cdot(\nabla\times\mathbf{C})=0,\quad\text{always}.$$
But this is the same result we got from vector algebra in Section 2–7. Now we begin to see how everything fits together.
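For readers who like to see the algebra done by machine, here is a short symbolic check, not part of the original text, of the two identities ∇×(∇ϕ)=0 and ∇⋅(∇×C)=0 for arbitrary smooth ϕ and C, written with the sympy library; the helper functions are defined only for this sketch.

```python
import sympy as sp

# A sketch (not from the text): verify  curl(grad phi) = 0  and
# div(curl C) = 0  symbolically for arbitrary smooth phi and C.

x, y, z = sp.symbols("x y z")
phi = sp.Function("phi")(x, y, z)
Cx = sp.Function("C_x")(x, y, z)
Cy = sp.Function("C_y")(x, y, z)
Cz = sp.Function("C_z")(x, y, z)

def grad(f):
    return [sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)]

def curl(F):
    Fx, Fy, Fz = F
    return [sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y)]

def div(F):
    Fx, Fy, Fz = F
    return sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)

print([sp.simplify(c) for c in curl(grad(phi))])   # [0, 0, 0]
print(sp.simplify(div(curl([Cx, Cy, Cz]))))        # 0
```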
3–8 Summary
Let us summarize what we have found about the vector calculus. These are really the salient points of Chapters 2 and 3:
- The operators ∂/∂x, ∂/∂y, and ∂/∂z can be considered as the three components of a vector operator ∇, and the formulas which result from vector algebra by treating this operator as a vector are correct:
  $$\nabla=\left(\frac{\partial}{\partial x},\,\frac{\partial}{\partial y},\,\frac{\partial}{\partial z}\right).$$
- The difference of the values of a scalar field at two points is equal to the line integral of the tangential component of the gradient of that scalar along any curve at all between the first and second points:
  $$\psi(2)-\psi(1)=\int_{(1)}^{(2)}\nabla\psi\cdot d\mathbf{s}\qquad\text{(any curve)}.$$
- The surface integral of the normal component of an arbitrary vector over a closed surface is equal to the integral of the divergence of the vector over the volume interior to the surface:
  $$\int_{\text{closed surface}}\mathbf{C}\cdot\mathbf{n}\,da=\int_{\text{volume inside}}\nabla\cdot\mathbf{C}\,dV.$$
- The line integral of the tangential component of an arbitrary vector around a closed loop is equal to the surface integral of the normal component of the curl of that vector over any surface which is bounded by the loop:
  $$\oint_{\text{boundary}}\mathbf{C}\cdot d\mathbf{s}=\int_{\text{surface}}(\nabla\times\mathbf{C})\cdot\mathbf{n}\,da.$$
¹ The following development applies equally well to any rectangular parallelepiped.