
I’ve spent a little bit of time over the last few weeks casually filling in some of the gaps left by the Cambridge tripos syllabus (much of which is excellent, but which does tend to be taught in what I am beginning to believe is a rather old-fashioned way).

In “applied” lectures in the first year, one is introduced to the curious trio Grad, Curl and Div, who perform various magical tricks which then prove to be incredibly handy for solving problems in electromagnetism and fluid mechanics (or anywhere else where there is some kind of generic ‘stuff’ flowing around in 3-dimensional space). At no point does anyone explain to the audience how the tricks are done, and this may be partly because the mechanisms are quite complicated. In this series of posts, while avoiding too much gory detail, I will summarise a few of their more spectacular illusions and hint at the reality that lies beneath (much of the surface of which is scratched in any course on “Complex Analysis”).

Trick 1: Being defined at all

I suspect I was not alone in feeling that Curl was a rather strange member of this family of derivatives. The idea of defining this weird antisymmetrised thing felt quite idiosyncratic and not much to do with the other two operators. It was also obvious how the other two operators would behave in higher dimensions, while generalising the idea of curl to 4 dimensions doesn’t seem so easy.

A more general way to define them is to use the language of differential forms. These are abstract objects which should really be thought of as “things you want to integrate”. In fact, if we are in some n-dimensional space X, for any 0 \leq p \leq n we define a p-form to be something we will integrate over a ‘p-dimensional subset’ of the space. We take the 0-forms to be just the smooth functions on X (with ‘0-dimensional subsets’ being ‘finite sets of points’ and ‘integration’ being ‘evaluation at’). If we have some co-ordinates (x_1, x_2, ..., x_n) for points in X, we can define the 1-forms over the space as simply the objects

\omega = f_1(x)dx_1 + f_2(x)dx_2 + ... + f_n(x)dx_n

where f_1,...,f_n are smooth functions of space.

Then for any curve \gamma: [0,1] \rightarrow X, the integral is defined in the natural way

\int_\gamma \omega = \int_0^1 (f_1(\gamma(t))\frac{d\gamma_1}{d t}(t) + ... + f_n(\gamma(t))\frac{d\gamma_n}{d t}(t))dt.
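If you want to see this definition in action, here is a minimal computational sketch using Python with the sympy library (entirely my own choice of tool, not anything from the text above); the 1-form -y dx + x dy and the unit-circle curve are picked purely for illustration.

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y')

# A 1-form omega = f1 dx + f2 dy on the plane, represented by its coefficients
f = [-y, x]

# A curve gamma: [0,1] -> X, here the unit circle traversed once
gamma = [sp.cos(2 * sp.pi * t), sp.sin(2 * sp.pi * t)]

# Pull omega back along gamma and integrate over [0,1], exactly as in the formula above
subs = {x: gamma[0], y: gamma[1]}
integrand = sum(fi.subs(subs) * sp.diff(gi, t) for fi, gi in zip(f, gamma))
print(sp.integrate(integrand, (t, 0, 1)))  # prints 2*pi
```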

When we go to higher dimensions, p-forms tend to pick up an antisymmetric character (as is familiar from the fact that volume forms are alternating, and when we learn how to integrate over more than one dimension we frequently multiply by a Jacobian, which is the determinant of a matrix). This motivates the definition of the (non-commutative) wedge product:

If \omega_1, \omega_2 are a p-form and a q-form respectively, then \omega = \omega_1 \wedge \omega_2 = (-1)^{pq} \omega_2 \wedge \omega_1 is a (p+q)-form, and we insist the wedge product is bilinear and associative.

This sounds a little abstract, but it can be seen to be well-defined and sensible by looking at a concrete example. Suppose we are in 2-dimensional space, and \omega_1 = p_1 dx + q_1 dy, \omega_2 = p_2 dx + q_2 dy. Then applying the above axioms gives \omega_1 \wedge \omega_2 = (p_1 q_2 - p_2 q_1)(dx \wedge dy), exactly the determinant-like expression we would expect when passing to an area. Note also that multiplying through by a smooth function (a 0-form) can itself be interpreted as wedge multiplication.
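The bilinearity-plus-antisymmetry bookkeeping in this example is easy enough to check by machine. Here is a tiny sketch (the helper name wedge_1forms_2d is my own invention, and sympy is only used to keep the coefficients symbolic):

```python
import sympy as sp

p1, q1, p2, q2 = sp.symbols('p1 q1 p2 q2')

def wedge_1forms_2d(w1, w2):
    """Wedge two 1-forms a dx + b dy in the plane, each given as a coefficient pair (a, b).
    Since dx^dx = dy^dy = 0 and dy^dx = -dx^dy, only the determinant term survives;
    the result is the coefficient of dx^dy."""
    a1, b1 = w1
    a2, b2 = w2
    return a1 * b2 - a2 * b1

print(wedge_1forms_2d((p1, q1), (p2, q2)))  # p1*q2 - p2*q1 (up to the order sympy prints the terms)
```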

Now comes the clever part (again an abstract definition that turns out to give a very well-defined concept). The exterior derivative of a p-form \omega is a (p+1)-form d\omega, where the operation d satisfies the following properties:

  • d is linear.
  • If \omega_1 is a p-form, \omega_2 any differential form, d(\omega_1 \wedge \omega_2) = (d\omega_1)\wedge \omega_2 + (-1)^p \omega_1 \wedge (d\omega_2) (d is “wedge-Leibniz”).
  • d(d\omega) \equiv 0 (or d^2=0) (d is “cohomological”).

Though d is defined in this axiomatic way, in concrete situations it is really n distinct maps, one for each increase in dimension. These maps may take on apparently different concrete characters, but they are ultimately instances of the same thing: differentiation, always with a view to later integration.
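As a sanity check on the axioms, here is a rough sketch of how one might realise d in co-ordinates. The dictionary representation of a form and the name exterior_derivative are my own choices rather than anything standard, and I only verify the “cohomological” property d^2 = 0 on a generic 0-form.

```python
import sympy as sp

# Co-ordinates on 3-dimensional space; the same code works verbatim in any dimension
X = sp.symbols('x1 x2 x3')

def exterior_derivative(form):
    """form: a dict mapping strictly increasing index tuples to sympy coefficients,
    e.g. the 1-form p1 dx1 + p2 dx2 is {(0,): p1, (1,): p2} and a 0-form f is {(): f}."""
    result = {}
    for idx, coeff in form.items():
        for j, xj in enumerate(X):
            if j in idx:
                continue  # dx_j ^ dx_j = 0, so this term contributes nothing
            # Differentiate the coefficient, prepend dx_j, then sort the indices
            # into increasing order while tracking the sign of the permutation.
            new_idx, sign = list((j,) + idx), 1
            for a in range(len(new_idx)):
                for b in range(len(new_idx) - 1 - a):
                    if new_idx[b] > new_idx[b + 1]:
                        new_idx[b], new_idx[b + 1] = new_idx[b + 1], new_idx[b]
                        sign = -sign
            key = tuple(new_idx)
            result[key] = result.get(key, 0) + sign * sp.diff(coeff, xj)
    return {k: sp.simplify(v) for k, v in result.items()}

# d(df) should vanish identically for a generic smooth 0-form f
f = sp.Function('f')(*X)
print(exterior_derivative(exterior_derivative({(): f})))  # every coefficient is 0
```

The sign-tracking sort is nothing more than the antisymmetry of the wedge product implemented by hand.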

Let us just examine the case where X is 3-dimensional space, and see what the three exterior derivatives are (you should be able to see where this is going…).

Firstly, taking the exterior derivative of a 0-form f turns out simply to give the map

d: f \mapsto \sum_i \frac{\partial f}{\partial x_i} dx_i.

In other words, it takes a scalar function and gives us a covector of partial derivatives. Up to worrying about whether the result should live in the original space or in its dual, this is precisely Grad.
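In the same sketch style as before (again with sympy, and with a concrete f chosen purely for illustration), this first map is nothing more than collecting the partial derivatives:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1 * x2 + sp.sin(x3)  # an arbitrary smooth 0-form, chosen for illustration

# df = sum_i (partial f / partial x_i) dx_i; its components are exactly Grad f
df = [sp.diff(f, xi) for xi in (x1, x2, x3)]
print(df)  # [x2, x1, cos(x3)]
```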

What now happens at the next stage? Here the ‘cohomological’ property comes into play, along with the others (since dx_i is of course the 1-form obtained by taking the exterior derivative of a “co-ordinate” function).

d(\sum_i p_i dx_i) = \sum_{i,j} \frac{\partial p_i}{\partial x_j} dx_j \wedge dx_i, which expands to

(\frac{\partial p_3}{\partial x_2} - \frac{\partial p_2}{\partial x_3})(dx_2 \wedge dx_3) + (\frac{\partial p_1}{\partial x_3} - \frac{\partial p_3}{\partial x_1})(dx_3 \wedge dx_1) + (\frac{\partial p_2}{\partial x_1} - \frac{\partial p_1}{\partial x_2})(dx_1 \wedge dx_2).

This is just the Curl we know and love. Since the spaces of 1-forms and 2-forms both have dimension 3 here, it can be identified with a vector in the space again, but we should bear in mind that it is really an object we want to integrate over an area.
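Here is another quick sympy sketch, with the 1-form below chosen purely for illustration: reading off the coefficients of d\omega in the basis dx_2 \wedge dx_3, dx_3 \wedge dx_1, dx_1 \wedge dx_2 reproduces the familiar Curl components.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# An arbitrary 1-form omega = p1 dx1 + p2 dx2 + p3 dx3, chosen purely for illustration
p1, p2, p3 = x2 * x3, x1**2, sp.sin(x1)

# The coefficients of d(omega) in the basis dx2^dx3, dx3^dx1, dx1^dx2,
# read directly off the expansion above -- exactly the components of Curl
curl = [sp.diff(p3, x2) - sp.diff(p2, x3),
        sp.diff(p1, x3) - sp.diff(p3, x1),
        sp.diff(p2, x1) - sp.diff(p1, x2)]
print(curl)  # the Curl components: 0, x2 - cos(x1), 2*x1 - x3
```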

Finally, what happens when we pass from “things to be integrated over areas” to “things to be integrated over volumes”? You’ve probably guessed correctly, but it’s worth checking anyway:

d(\sum_{i \mod 3} p_i dx_{i+1} \wedge dx_{i+2}) = \sum_{i \mod 3}\frac{\partial p_i}{\partial x_i} dx_i \wedge dx_{i+1} \wedge dx_{i+2} (since all the other terms are eliminated by the alternating property of wedge products of 1-forms, or by the cohomological property of d)

= (\frac{\partial p_1}{\partial x_1} + \frac{\partial p_2}{\partial x_2} + \frac{\partial p_3}{\partial x_3}) (dx_1 \wedge dx_2 \wedge dx_3).
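And once more in the same sketch style (the coefficients of the 2-form are again arbitrary choices of mine): the single coefficient of the resulting 3-form is the divergence.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# A 2-form p1 dx2^dx3 + p2 dx3^dx1 + p3 dx1^dx2, with coefficients chosen for illustration
p1, p2, p3 = x1 * x2, x2 * x3, x3**2

# Its exterior derivative is (div p) dx1^dx2^dx3
div = sp.diff(p1, x1) + sp.diff(p2, x2) + sp.diff(p3, x3)
print(div)  # x2 + 3*x3
```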

These results are interesting not only because we have managed to abstract, in total generality, a few things of which we previously knew only special cases (we could now easily derive the 57-dimensional analogues of the three vector calculus derivatives if we were so minded). We also now know that our earlier perspective on them was incomplete: whenever we took a curl, we were inadvertently interpreting our original object as something to be integrated over 1-dimensional sets and turning it into something to be integrated over 2-dimensional sets. The deeper implications of this, including Stokes’ theorem (the more general form of which an astute reader might be able to hazard a guess at by now), will be explored in future posts.

Two more remarks. Firstly, the notion of integration with differential forms entirely removes (or rather abstracts away) the mysterious Jacobian from the calculations: when we convert between co-ordinates (changing the basis of our p-forms), it is automatically incorporated into the recast differential form as a straightforward change-of-basis matrix. Secondly, the identities \text{curl}(\text{grad}(f)) = 0, \text{div}(\text{curl}(u)) = 0 are revealed here to be special cases of precisely what I have called the “cohomological condition” on the exterior derivative, and not actually particularly connected with the antisymmetries of the wedge product (contrary to what one might assume after proving these identities using suffix notation in the 3-dimensional case).
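To illustrate that second remark, here is one last sympy sketch (with grad, curl and div written out in the co-ordinate forms derived above): applied to completely generic smooth functions, both identities collapse to the equality of mixed partial derivatives, which is all that d^2 = 0 amounts to in this picture.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x1, x2, x3)
u = [sp.Function(name)(x1, x2, x3) for name in ('u1', 'u2', 'u3')]

def grad(g):
    return [sp.diff(g, xi) for xi in (x1, x2, x3)]

def curl(v):
    return [sp.diff(v[2], x2) - sp.diff(v[1], x3),
            sp.diff(v[0], x3) - sp.diff(v[2], x1),
            sp.diff(v[1], x1) - sp.diff(v[0], x2)]

def div(v):
    return sp.diff(v[0], x1) + sp.diff(v[1], x2) + sp.diff(v[2], x3)

# Both identities reduce to the symmetry of mixed partial derivatives
print([sp.simplify(c) for c in curl(grad(f))])  # [0, 0, 0]
print(sp.simplify(div(curl(u))))                # 0
```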

With this closing act, we shall take a break, and come back with two far more daring magic tricks after the intermission.
