
Nice proof of the Primitive Element Theorem for infinite base fields (though it does rely on having done things in a different order from Yoshida). We will assume two facts; throughout, K is any infinite field.

Fact 1: Every finite separable extension has only finitely many subextensions.

Fact 2: No finite-dimensional vector space over an infinite field is a union of finitely many of its proper subspaces.
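In case Fact 2 is unfamiliar, here is a sketch of the standard argument (my own addition, not part of the course notes):

```latex
% Sketch: a vector space $V$ over an infinite field $K$ is not a finite
% union of proper subspaces.
Suppose $V = U_1 \cup \dots \cup U_n$ with each $U_i$ a proper subspace
and $n$ minimal, so no $U_i$ is contained in the union of the others.
Pick $u \in U_1 \setminus \bigcup_{i \ge 2} U_i$ and $w \in V \setminus U_1$.
Since $K$ is infinite, the infinitely many vectors $w + t u$ ($t \in K$)
fall into finitely many $U_i$, so some $U_i$ contains both $w + t_1 u$ and
$w + t_2 u$ with $t_1 \ne t_2$. Then
$u = (t_1 - t_2)^{-1}\big((w + t_1 u) - (w + t_2 u)\big) \in U_i$,
and hence $w = (w + t_1 u) - t_1 u \in U_i$. If $i = 1$ this contradicts
$w \notin U_1$; if $i \ge 2$ it contradicts the choice of $u$.
```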

Theorem (Primitive Element Theorem): If F/K is finite and separable then it has a primitive element, i.e. some \theta such that F=K(\theta).

Proof: Let X = F \backslash (\bigcup_{K\leq M < F} M), where the union runs over the proper subextensions of F/K. By Fact 1 this is a finite union, and each such M is a proper K-subspace of F, so by Fact 2, X is nonempty. Any element \theta of X lies in no proper subextension, so K(\theta)=F, i.e. \theta is a primitive element.
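As a concrete sanity check (my own illustrative example, using sympy), \sqrt{2}+\sqrt{3} is a primitive element for \mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}: its minimal polynomial over \mathbb{Q} has degree 4, which equals [\mathbb{Q}(\sqrt{2},\sqrt{3}):\mathbb{Q}].

```python
# Verify that sqrt(2) + sqrt(3) generates Q(sqrt(2), sqrt(3)) over Q:
# its minimal polynomial should have degree 4 = [Q(sqrt 2, sqrt 3) : Q].
from sympy import sqrt, minimal_polynomial, degree, symbols

x = symbols('x')
theta = sqrt(2) + sqrt(3)
p = minimal_polynomial(theta, x)
print(p)              # x**4 - 10*x**2 + 1
print(degree(p, x))   # 4, so Q(theta) is all of Q(sqrt 2, sqrt 3)
```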

I’m back! Over the summer I’m pursuing some research in combinatorics which I feel I should shut up about until after it’s done, but I’ve also been doing a bit of reading around and mathematical sightseeing during natural breaks, at what seems like an appropriate time to be doing such things. Hopefully I’ll be able to share/record a few of the interesting ideas I’ve been looking into here.

One nice project has been returning to Penrose’s astonishing book The Road to Reality, which, in spite of his best attempts to explain the mathematics of modern physics in the opening 16 chapters, I’ve unfortunately not been mathematically dextrous enough to understand until very recently (certain seemingly innocuous ideas in Lent term of the second year of the Cambridge mathematical tripos unleash a blaze of light onto most of the modern differential geometry on which theoretical physics rests). This has reopened a few ideas in my mind that were dimly there but not properly understood. I’d also like to thank Kenyi Wang for encouraging me to get keen on certain areas of theoretical physics again. Not everything I say below is necessarily correct – I am far from an expert and am just recording my thoughts, which, if wrong, I’d be interested to have corrected.

I have been thinking about the concept of entropy, which predates the modern ideas of relativity and quantum mechanics, being a property of how the universe behaves on easily observable ‘macro’ scales. It is part of the study of statistical mechanics (the development of what used to be called “Thermodynamics”), in which a very large number of independent particles are ‘milling around’ and their macro-scale behaviour is investigated. It is a well-known observation that if you quickly release some volume of gas out of a pipe into a room, it will eventually spread from being concentrated near the opening of the pipe to fill the entire space. When this happens, there is in some sense less we can say about the micro-scale particles based on our observations. Before, we had some idea of where each individual particle was, whereas now any given particle could be anywhere in the room.

What has happened is a fairly subtle phenomenon, so we’d better give it a name before we forget it. We say that the entropy of the configuration has increased. In some sense the configuration has become ‘more random’, but in another sense it has become ‘more predictable’ and easier to describe on the macro scale. I like to see it as a loss of information, though I’ll now provide a couple more examples of this phenomenon which will hopefully refine our understanding of what has gone on a little further.

In a situation with several massive bodies and not all that much angular momentum, we observe that over time the bodies will fall together into a big clump. What has happened to the entropy during this process? By analogy with the gas example, we might guess that the entropy has decreased (in that a load of spread-out things have come together into one concentrated thing). However, the situation with gravitation present and solid bodies requires a rather different analysis. The macro structure of some collection of planets is reasonably complicated and quite specific (you can say quite a lot about the solar system), whereas the macro structure of a big clump of rock is somehow less specific and less complicated. Bits of what were individual planets could be in many places in the big rock, so we have lost information, and lost structure. In the most extreme case, the formation of a black hole, a huge mass, many times the mass of our sun, can be described with just a handful of parameters (google “Kerr geometry”), all the other information having been lost. To put it another way, there has been a huge increase in the system’s entropy.

Another fascinating example is how the sun sustains life on earth. If asked “what does the sun provide that is useful for life?” most intelligent people would probably say “energy” or “heat.” Indeed, in some sense they are correct, but in another they are totally wrong. Consider a hypothetical sun that continued to shine uniformly on all parts of the earth at all times forever into the past and future. Then the energy at all points of earth would be roughly the same at all times: indeed, we would be at thermal equilibrium. But energy is only significant when we consider energy *differences*, so uniform energy is equivalent to zero energy: it is totally useless. What we do gain from the sun is in fact the ability to keep our entropy low. The sun gives us light and ultraviolet radiation, and we return all that energy to space in the form of infra-red radiation. Since infra-red is less energetic per photon than ultraviolet, more photons are used when we dump the energy than when we receive it. These extra degrees of freedom correspond to the energy possessing a greater entropy when it leaves than when it arrives. It is this entropy difference which we exploit to maintain the complex (far from random, hence very low-entropy) structures needed to sustain life. All the energy we borrow is recycled off into the universe in a ‘less organised’ form than when we got it. I may blog in greater depth on this paragraph later, for it strikes me as a very ‘real-world’ consideration that is of huge interest.
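A rough back-of-envelope version of the photon-counting argument (the wavelengths are my own illustrative choices: visible sunlight at around 500 nm coming in, thermal infra-red at around 10 µm going out):

```python
# The same total energy E arrives as a few high-energy photons and leaves
# as many low-energy ones. Since E = N * h * c / wavelength, at fixed E
# the photon count N scales with the wavelength.
lam_in = 500e-9    # incoming sunlight, roughly visible (assumed value)
lam_out = 10e-6    # outgoing thermal infra-red (assumed, ~290 K peak)

ratio = lam_out / lam_in  # N_out / N_in for the same total energy
print(ratio)  # 20.0: roughly 20 outgoing photons per incoming photon
```

So the energy leaves spread across many more degrees of freedom than it arrived with, which is the sense in which it leaves carrying more entropy.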

Back to the theory: there are various ways to make this concept rigorous, and they generally turn out to be in some way equivalent. I’ll give a nice one. Let’s go back to our gas of N particles. We define phase space to be the 6N-dimensional space (don’t try to visualise it, except perhaps in the case N=1/3) with each point in the space representing a state of the N particles (with 6 degrees of freedom per particle to store its position in 3-space and its momentum in 3-space). Now imagine phase space being chopped up into regions which we declare to be “macroscopically indistinguishable.” For example, it might be that every configuration where the molecules are ‘fairly spread out around the room’ looks identical on a macro scale. We define the entropy of a state (which, remember, is just a point in phase space) as a function of the volume of the region it inhabits. More specifically,

S(x) = k \log V(x)

where S is the entropy, k is a very small constant (Boltzmann’s constant) and V(x) is the volume of the region in phase-space inhabited by x.
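A toy calculation in the spirit of this formula (the coarse-graining into just ‘left half of the room’/‘right half of the room’ is my own drastic simplification, and I count microstates rather than measure a continuous volume):

```python
# Boltzmann entropy S = k log V, where V is the number of microstates
# in the macrostate: here, the macrostate "n of N particles are in the
# left half of the room" contains C(N, n) microstates.
from math import comb, log

k = 1.380649e-23  # Boltzmann's constant, J/K
N = 100           # number of particles

def entropy(n):
    # entropy of the macrostate "n particles in the left half"
    return k * log(comb(N, n))

print(entropy(100))  # all gas at one end: S = k log 1 = 0.0
# the evenly spread macrostate occupies vastly more volume:
print(entropy(50) > entropy(90))  # True
```

Even with only 100 particles the evenly spread macrostate contains about 10^29 microstates, against a single microstate for ‘all the gas at one end’.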

It might be worth going back to the above examples and trying to check that the analysis in them agrees with this model.

What is going to happen if the state of the system is allowed to wander around in phase space randomly? It will meander around inside the same region for a while, until it crosses over into another region. In physical systems, the region it wanders into will almost certainly be either of comparable size or will be larger. In any case, it will almost never be much much much smaller (and therefore the logarithm of its volume multiplied by a small constant can never decrease significantly). Once N gets large, the probability of any decrease becomes so unbelievably small that it can be neglected, and we have apparently just derived the following physical principle:

(Second law of thermodynamics) The entropy of a physical system is always nondecreasing in time.
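The ‘wandering through ever-larger regions’ heuristic can be simulated with the classic Ehrenfest urn model (my own choice of toy model, not part of the argument above): N particles in a box, and at each step one particle, chosen at random, hops to the other half. Starting from the zero-entropy state, the entropy climbs towards its maximum and essentially never returns:

```python
# Ehrenfest urn model: a random walk on macrostates that illustrates
# why entropy is overwhelmingly likely to increase.
import random
from math import comb, log

random.seed(0)
N = 100       # particles split between two halves of a box
n_left = N    # start with all particles on the left: entropy 0

def S(n):
    # Boltzmann entropy (in units of k) of "n particles on the left"
    return log(comb(N, n))

history = [S(n_left)]
for _ in range(2000):
    # pick a particle uniformly at random and move it to the other half
    if random.random() < n_left / N:
        n_left -= 1
    else:
        n_left += 1
    history.append(S(n_left))

print(history[0])   # 0.0
print(history[-1])  # close to the maximum log C(100, 50), about 66.8
```

The walk quickly reaches the huge ‘roughly half on each side’ region and then fluctuates within it; a return to the tiny starting region has probability of order 2^{-N}.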

Great! That was easy. Unfortunately, there is now a huge snag. All of Newton’s laws, as well as those of relativity and quantum mechanics, are time-reversible. If you replace the time co-ordinate t with -t, and run physics backwards, it predicts basically exactly the same results. Therefore, when the state of our system was going for its wander in phase space, even though it was bound by the physical laws, it might equally well have been travelling forwards or backwards in time. If we do the same analysis as above but consider the state travelling backwards in time, we obtain the following:

(Time-reversed second law of thermodynamics) The entropy of a physical system is always nonincreasing in time.

Oh dear, so we seem to have deduced almost precisely the opposite statement. If you smell a rat in my ‘reversing time and doing the analysis’, just consider the example of the gas concentrated somewhere in the room. What my analysis says is that (assuming we don’t inject any input like ‘sucking the gas back up a pipe’), just as we would expect the gas to spread out to fill the room, the most likely way for it to have got to being concentrated, assuming the system is isolated, is from a state that was at one point uniform. OK, it is unlikely (fortunately, given our need for oxygen to be readily available) for a uniformly distributed gas to spontaneously become concentrated. However, if we are given that it has become concentrated spontaneously, it is far more likely to have been uniform in the past than in any other state. In other words, the time-reversed second law of thermodynamics is valid for this system. Indeed, it must be as valid as the standard law for any system where the only assumptions are time-reversible.

Do we therefore have to conclude that this thermodynamics is all nonsense, or that entropy doesn’t actually change all that much, because it must balance the conflicting demands of the two laws stated above? We could, except we know that in the real world entropy is always observed to increase, sometimes by huge amounts, and never observed to decrease. We would therefore love a reason to accept the second law while rejecting the time-reversed version. I would now urge the reader to pause, take a deep breath, and then think about the phase space of the universe, with its absurd number of dimensions, and have a short think about the path the universe traces in this phase space as time varies from the start to the end of the universe’s existence.

What constraints must this path satisfy? Do they affect the thermodynamics significantly?

One constraint it must satisfy is that it passes through the current state of the universe. Unfortunately this doesn’t suggest much. We therefore come up with another constraint:

Thermodynamic big bang theory: At the start of the universe (which we assume happened in the past), there was a big bang, which was an event of monumentally small entropy.

Firstly, why is this intuitive (especially given that black holes have such large entropy)? At the big bang, the particles are all more-or-less in the same place, so the volume of the corresponding region of phase space is very small: even if the spread across the 3N momentum dimensions is huge, the spread across the 3N spatial dimensions is almost nothing, which makes the volume almost nothing. This isn’t actually totally convincing, but it is a vague idea that will convince you until you think about it for a while. To me it seems there are also problems with this entire analysis failing to be Lorentz invariant (though I don’t know; maybe these are removable).

Anyway, much more convincing is that this now begins to explain why the second law of thermodynamics is the one observed in nature, while the time-reversed one is not. The universe has been set on a trajectory where it is flying through phase space out of the position of the big bang, through its current state, and off into the future. The initial condition is vital: it gives us the time-asymmetry. The reason we never see entropy decrease is that the second law is constantly being fulfilled, while the time-reversed law cannot be: in the time-reversed picture we know that we have to end up in a position of very low entropy, an assumption which radically modifies the probability space of possible trajectories to exclude those satisfying the time-reversed second law and include only those satisfying the standard second law of thermodynamics, as observed.

I don’t think this argument is quite flawless. It has a few holes, and I’d be willing to believe there are other theories as to how time asymmetry arises in the universe, but I think the ideas are interesting and deeply unexpected. I am also surprised by how useful and interesting the theory of entropy seems to be, especially its genuine relevance to accurately analysing many real-world situations. It also feels like something which could (probably does?) have useful applications in mathematics. Any comments readers have would be very appreciated. I am a total novice in this area, and would like to know more. I shall definitely be trying to attend David Tong’s lectures in Statistical Mechanics in Lent term, even if I don’t take watertight notes or do the example sheets, because I am increasingly convinced this is yet another piece of physical culture that mathematicians should try not to miss out on.

So, is the big bang responsible for my not having posted anything on this blog for about six months? It might be a bit, but thanks to the high-energy photons coming from the sun, I’m in the fortunate position of being able to keep myself at least moderately organised, and with no room-mate at the moment if all else fails I am able to increase the disorder of my bedroom in order to keep my personal entropy low.