You are currently browsing the monthly archive for January 2009.

While trying to prove some series (which should be easy to guess) were divergent, I stumbled across the following cute little result. We write with s.

Consider the integral .

Then the substitution gives this integral as equal to , so making such substitutions will just reduce this integral to solving , which is well-known.

This term I am trialling the use of a laptop for taking lecture notes in two Part IA courses, to see whether it works out more effectively than my rather messy handwritten notes. There is a good chance I might abandon this method after a few more lectures, particularly in Probability, but for the time being I thought I would keep them updated online in case anyone might find them useful (if they’ve missed a lecture or whatever…).

A health warning worth mentioning: my notes are often excessively concise, and omit or paraphrase quite a lot of what the lecturer writes. This is a style that suits me (leaving me to fill in the gaps during revision, and omitting any material I am very comfortable with), but it is worth bearing in mind if you are using them to help learn the course.

Any corrections or comments would be well-received.

Probability

Dynamics and Relativity

Update: I have actually completed both sets of notes, but do not wish to post them on the internet for public consumption (as this might undermine lecturers’ control of their teaching next year). Contact me if you would be interested and I’ll gladly email them to you.

My lack of recent activity was owing to a boat club trip to Seville, with lots of sun, rowing and walking, but little internet access. Now I am back, and during the trip the following curious pseudo-problem was posed.

Many Cambridge college boat clubs keep track of pairs of their members who have had romantic liaisons (precise definitions of which vary from club to club, but “sexual kissing” seems to be the standard benchmark), and so build up what are known as *incest charts*, which are essentially graphs where each member is a vertex and two people are joined by an edge if they have engaged in acts of ‘incest’ with one another.

First and Third had plans to put an anonymous incest chart (with the vertices left unnamed) on their website, but given the great size of the club, actually generating the graphic creates an interesting problem which isn’t exactly mathematical but clearly requires some kind of algorithmic handling.

**Problem:** Given a graph, expressed as a set of edges (any vertices of degree zero are ignored), find an algorithm for generating a graphical representation of the graph which is maximally intelligible to someone viewing the website. In particular, edges should cross as infrequently as possible and edges should be short and ideally straight.

This is a problem familiar to anyone who has done graph theory. You get halfway through a diagram and then end up having to include lots of loopy things to make your graph make sense. It seems, to the author, a difficult task. If anyone has any ideas, or knows of such an algorithm already, I would be interested to hear.
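A sketch of one standard family of solutions (my own illustration, not from the post): force-directed layout, where vertices repel one another like charged particles and edges pull their endpoints together like springs. All constants here are made up.

```python
import math
import random

def force_layout(edges, iterations=200, k=1.0, step=0.05, max_move=0.1):
    vertices = sorted({v for e in edges for v in e})
    random.seed(0)
    pos = {v: [random.random(), random.random()] for v in vertices}
    for _ in range(iterations):
        disp = {v: [0.0, 0.0] for v in vertices}
        # Repulsion between every pair of vertices: force ~ k^2 / distance.
        for i, u in enumerate(vertices):
            for v in vertices[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        # Attraction along edges: force ~ distance^2 / k.
        for u, v in edges:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        # Clamp each move (the "cooling" trick from Fruchterman-Reingold).
        for v in vertices:
            dl = math.hypot(disp[v][0], disp[v][1])
            if dl > 0:
                move = min(step * dl, max_move)
                pos[v][0] += disp[v][0] / dl * move
                pos[v][1] += disp[v][1] / dl * move
    return pos

# Example: lay out a 4-cycle; it should relax towards a rough square.
layout = force_layout([(0, 1), (1, 2), (2, 3), (3, 0)])
```

Real graph-drawing engines (Graphviz’s `neato`, for instance) refine this basic idea considerably; minimising edge crossings exactly is NP-hard in general, so heuristics of this sort are the practical answer.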

In Part IA Vectors and Matrices this theorem was proved, but the following proof (which I thought of during the lecture and seemed most natural) wasn’t lectured, or if it was it wasn’t clear to me that this was the approach taken.

**Claim:** Let $A$ be an $n \times n$ matrix with characteristic polynomial $\chi(t) = \det(A - tI)$. Then $\chi(A) = 0$.

**Proof:** We prove it in the special case where $A$ is diagonalisable.

$A$ is diagonalisable, so there exists a basis of eigenvectors $e_1, \dots, e_n$. It will suffice to show that the linear map corresponding to $\chi(A)$ sends everything to zero.

Any vector $x$ can be written $x = \sum_{i=1}^{n} x_i e_i$, so by the linearity of $\chi(A)$ it suffices to show that each component $x_i e_i$ is mapped to the zero vector. But clearly $\chi(A)\,e_i = \chi(\lambda_i)\,e_i = 0$, where $\lambda_i$ is the eigenvalue of $e_i$.

So we are done. 🙂
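As a quick numerical sanity check (my own addition, not part of the lectured proof), in the $2 \times 2$ case the characteristic polynomial is $\chi(t) = t^2 - (\operatorname{tr} A)\,t + \det A$, and one can verify directly that $\chi(A)$ is the zero matrix:

```python
# Verify Cayley-Hamilton numerically for a concrete 2x2 matrix, where
# chi(t) = t^2 - tr(A) t + det(A).

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [3.0, -1.0]]
tr = A[0][0] + A[1][1]                       # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant

A2 = mat_mul(A, A)
# chi(A) = A^2 - tr(A) A + det(A) I should be the zero matrix.
chi_A = [[A2[i][j] - tr * A[i][j] + det * (1.0 if i == j else 0.0)
          for j in range(2)] for i in range(2)]
print(chi_A)  # [[0.0, 0.0], [0.0, 0.0]]
```

The same check works for any square matrix, diagonalisable or not, which is some reassurance that the diagonalisable case is not the whole story.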

In my last post I discovered a surprising result about the distribution of the distance $|x|$ from the origin of a random point of the $n$-dimensional unit cube, namely that as the dimension of the cube grew, the distribution actually seemed to get thinner, converging to almost a spike at $\sqrt{n/3}$. I am going to attempt to generalise that result a little here, and draw some (mainly conceptual and heuristic) conclusions to inform future intuition about such problems.

So we now look at a set of independent random variables $X_1, X_2, \dots$ which all have the same distribution (say, that of $X$, which we assume to be nonconstant), and consider the distribution of $S_n = X_1 + X_2 + \cdots + X_n$.

By following a similar method to our investigation in the previous post, straightforward calculation and applications of the linearity of expectation and the mutual independence of the variables give:

- $\mathbb{E}[S_n] = n\mu$, where $\mu = \mathbb{E}[X]$;
- $\mathrm{Var}(S_n) = n\sigma^2$, where $\sigma^2 = \mathrm{Var}(X)$.
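Spelled out (this is just the standard computation, with $\mu = \mathbb{E}[X]$ and $\sigma^2 = \mathrm{Var}(X)$; independence is what makes the covariance cross-terms vanish in the second sum):

```latex
\mathbb{E}[S_n] = \sum_{i=1}^{n} \mathbb{E}[X_i] = n\mu,
\qquad
\mathrm{Var}(S_n) = \sum_{i=1}^{n} \mathrm{Var}(X_i) = n\sigma^2 .
```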

Therefore, as $n$ increases, the standard deviation $\sigma\sqrt{n}$ increases at a much slower rate than the mean $n\mu$ (provided $\mu \neq 0$). In particular, we can apply Chebyshev’s inequality and read off two quick results.

- For fixed $\epsilon > 0$, $\mathbb{P}\left(|S_n - n\mu| \geq \epsilon n\right) \leq \frac{\sigma^2}{\epsilon^2 n} \to 0$ as $n \to \infty$;
- For fixed $c > 0$, $\mathbb{P}\left(|S_n - n\mu| \geq c\sqrt{n}\right) \leq \frac{\sigma^2}{c^2}$ for every $n$.

These can be used to give quantitative estimates of the very quick drop in density of the distribution away from the mean. We shall conclude by briefly revisiting the result of the past post, and then drawing some general intuitive pointers.

In the previous problem, we set $X_i$ to be the square of a uniformly distributed variable on $[0,1]$ (so $\mu = 1/3$), and deduce that the result we require is essentially (ignoring some lower order terms) equivalent to finding a constant $c$ such that for all sufficiently large $n$

$\mathbb{P}\left(|S_n - n/3| \geq c\sqrt{n}\right) \leq \frac{1}{2}$.

But this is immediate from the second result above and the fact that $\sigma^2 = \mathrm{Var}(X_1) = \frac{1}{5} - \frac{1}{9} = \frac{4}{45}$ is a fixed constant.

**Reflection on this general phenomenon**

The intuition I can use to make sense of this result derives from the heuristic *“if you take more samples, you are more likely to get an average”*. Although in our case we are just taking a sum, this is essentially equivalent to taking an average, and it is perhaps unsurprising that the distribution in the limit is therefore actually very dense around the expected value and almost zero everywhere else. It might actually be sensible to turn this intuition into mathematics.

So set $Y_n = S_n/n$, and multiplying the past results by suitable constants gives us that

$\mathbb{E}[Y_n] = \mu, \qquad \mathrm{Var}(Y_n) = \frac{\sigma^2}{n}.$

In other words, the variance is inversely proportional to the number of samples we take before averaging, which is roughly what we would intuitively expect. So this result, however surprising it seemed at first, is really just the primary school idea that increasing the sample size increases the accuracy of a result, with everything scaled back up by a factor of $n$.
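A quick simulation (my own, with made-up sample counts, not from the original post) illustrates the concentration: take $X_i = U_i^2$ with $U_i$ uniform on $[0,1]$, and watch the mean of $S_n$ grow linearly in $n$ while the standard deviation grows only like $\sqrt{n}$.

```python
# Empirically: mean of S_n ~ n/3 grows much faster than sd(S_n) ~ sqrt(n).
import random
import statistics

random.seed(0)

def sample_S(n):
    """One sample of S_n = X_1 + ... + X_n with X_i = U_i^2, U_i ~ U[0,1]."""
    return sum(random.random() ** 2 for _ in range(n))

for n in (10, 100, 1000):
    samples = [sample_S(n) for _ in range(2000)]
    print(f"n={n:5d}  mean~{statistics.fmean(samples):8.2f}"
          f"  sd~{statistics.stdev(samples):6.2f}")
```

The printed standard deviations grow by roughly a factor of $\sqrt{10}$ per line while the means grow by a factor of $10$, which is exactly the spike-forming behaviour described above.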

This has been the (to me very surprising) result of my enquiry into the following beautiful problem, posed to the author by Prof. Ben Green. I am recording my findings primarily for my own reference, and would urge any reader not familiar with the problem to at least try to attack it themselves before reading on.

**Question:** If $n$ points $X_1, \dots, X_n$ (if you like, the coordinates of a vector $x \in [0,1]^n$) are selected randomly and independently with uniform probability from $[0,1]$, show there exists a constant $C$ such that $|x|$ lies in some interval of length $C$ with probability at least $\frac{1}{2}$, for all $n$.

My initial attempts to solve this problem were not particularly successful, since trying to relate $|x|$ and $n$ as they *both* varied over their distributions seemed tricky. My initial heuristic was along the lines of: *“The random variable $|x|$ ranges over the interval $[0, \sqrt{n}]$, so as $n$ grows the distribution must spread itself across a greater distance, hence be less at any given point under a suitable continuous transform to keep the distribution ‘appropriate’ (with mean at the correct, also growing, place). Thus while the distribution does peak near the middle, it does not seem likely that a given length neighbourhood can have probability bounded away from zero as $n \to \infty$.”*

With this heuristic in mind, the only method of approach which seemed likely to work was to take some kind of sum or integral over various disjoint intervals (their number growing with $n$) on the distribution, to get a spread-out enough estimate to make up for the lack of a probability bounded below on any given interval. However, actually going ahead and doing something like this proved to be very tricky, especially for one such as myself with little formal probabilistic training.

So I decided to leave the above heuristic aside and just see what the probability looked like around a natural-looking modal point. I achieved the following surprising result.

**Theorem 1:** For a fixed constant $C > 0$ and all $n$,

$\mathbb{P}\left(\left| |x| - \sqrt{n/3} \right| \leq C\right) \geq 1 - \frac{4}{15C^2}$.

In particular, the above question is a trivial consequence of the case $C = 1$.

Owing to the nature of the Euclidean metric, it seemed natural to set up variables as follows:

Let $X_1, \dots, X_n$ be selected randomly and uniformly from $[0,1]$, and set $R^2 = X_1^2 + \cdots + X_n^2$. A simple application of the linearity of expectation showed that $\mathbb{E}[R^2] = n/3$, and as $R^2$ seemed a much easier object to work with than $R$ I decided to take a look at $\mathrm{Var}(R^2)$, with a view to bounding $\mathbb{P}(|R^2 - n/3| \geq \lambda)$ for some constant $\lambda$ to be played with later.

This is equal to

$\mathrm{Var}(R^2) = \sum_{i=1}^{n} \mathrm{Var}(X_i^2) = n\left(\mathbb{E}[X_1^4] - \left(\mathbb{E}[X_1^2]\right)^2\right) = n\left(\frac{1}{5} - \frac{1}{9}\right),$

which is then clearly $\frac{4n}{45}$.

This is exactly the kind of quantity one can feed into Chebyshev’s inequality to get a lower bound on the probability of $R^2$ lying near its mean $n/3$. The initial heuristic relied on the ‘near uniformity’ assumption that this variance would be of magnitude just a little less than $n^2$; here I would be very pleasantly surprised.

But this implies the much lower than expected standard deviation $\sqrt{\mathrm{Var}(R^2)} = O(\sqrt{n})$. In other words, the heuristic based on assuming $|x|$ is roughly smoothly distributed was fallacious. Instead, the distribution of $|x|$ has a ‘spike’ at its expected value and is almost zero away from it.

So $\mathbb{P}\left(|R^2 - n/3| \geq \lambda\right) \leq \frac{4n}{45\lambda^2}$.

Writing $\lambda = C\sqrt{n/3}$, and noting that $|R^2 - n/3| \leq C\sqrt{n/3}$ forces $|R - \sqrt{n/3}| \leq C$, we can apply Chebyshev’s inequality to gain the lower bound $\mathbb{P}\left(\left| |x| - \sqrt{n/3} \right| \leq C\right) \geq 1 - \frac{4}{15C^2}$, which is precisely theorem 1.

So my heuristic has been torn to pieces, and, instead, as $n$ gets large, the ‘amount of space’ in the $n$-dimensional unit cube whose distance from the origin is within a certain (small) distance of $\sqrt{n/3}$ actually must be rather a lot. For example, take $C = 1$. Then $1 - \frac{4}{15C^2} = \frac{11}{15}$, so as $n$ tends to infinity the probability approaches a value of at least $\frac{11}{15}$, so more than two thirds of points in a unit cube of high dimension are (with maximum error $1$) roughly distance $\sqrt{n/3}$ from the origin (or, by symmetry, any other vertex).
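A quick check of this (my own simulation, with made-up parameters, not from the post): for $n = 1000$, estimate the probability that a uniform random point of $[0,1]^n$ lies within distance $1$ of the sphere of radius $\sqrt{n/3}$ about the origin.

```python
# Estimate P(| |x| - sqrt(n/3) | <= 1) for x uniform in [0,1]^n, n = 1000.
import math
import random

random.seed(1)

def distance_from_origin(n):
    return math.sqrt(sum(random.random() ** 2 for _ in range(n)))

n, trials = 1000, 500
target = math.sqrt(n / 3)
close = sum(abs(distance_from_origin(n) - target) <= 1 for _ in range(trials))
print(f"{close / trials:.0%} of sampled points within 1 of sqrt(n/3) ~ {target:.2f}")
```

In practice essentially all sampled points land in the window, comfortably above what Chebyshev guarantees, since the true standard deviation of $|x|$ here is only about a quarter.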

I found this result initially very surprising (and sent Prof. Green a semi-apologetic email in which I acknowledged that I could just be having an evening of insanity and the next morning discover my estimates were based on false assumptions). The basic reason the smooth expected distribution ended up being a spike was that in the variance calculation the quadratic terms cancelled out, leaving only smaller order terms. In another post in the next few days I shall try to investigate whether this phenomenon (where the quadratic parts of $\mathbb{E}[(R^2)^2]$ and $\left(\mathbb{E}[R^2]\right)^2$ are equal) can be found in any interesting families of general situations, as this now seems likely and should be investigated to inform my intuition in future.