
The following (open) question seems to hold the key to an area of combinatorics I had previously not considered:

Question: For which $n$ does there exist an $n$ by $n$ matrix with entries in $\{1,-1\}$ whose rows are pairwise orthogonal (an $n$-Hadamard matrix)?

There has been some progress towards answering this question. I am told it can be shown (and I shall probably do so in a later post if I can find the proof myself) that if an $n$-Hadamard matrix exists then $n \in \{1,2\} \cup 4\mathbb{N}$ (that is, $n=1$, $n=2$, or $n$ is a multiple of 4). It is also conjectured that an $n$-Hadamard matrix exists for every $n$ in this set, although this has not yet been proved in full.

In this post I will not really touch on any of the above results, but instead just briefly explain how these objects came to my attention, in order to shed light on the combinatorial importance of such matrices and of the idea of representing combinatorial objects by 1,-1 matrices (as opposed to matrices with entries 0 and 1, or matrices over $\mathbb{F}_2$, which I had previously found more natural). It arose during my investigations into the following problem (set, in a slightly different language, by Tim Gowers to Part IA students in the Probability course of the tripos).

Problem: Given $n \geq 2$ and a base set $\Omega$ of size $2^n$, and distinct $2^{n-1}$-subsets $A_1,A_2,...,A_m$ with the property that $|A_i \cap A_j|=2^{n-2}$ whenever $i \not= j$, what is the largest possible value $M$ of $m$?

After a little fiddling around (for about an hour), I stumbled upon an inductive construction to prove that $m=2^n-1$ was possible, which turned out to be more or less equivalent to the construction below.

Proof that $M \geq 2^n-1$:

Take $\Omega = \{1,2,...,2^n\}$ wlog. If $n=2$, the sets $\{1,2\},\{1,3\},\{1,4\}$ have pairwise intersections of size $1 = 2^{n-2}$, so $m=3=2^2-1$ is achievable.

If $n>2$ we let $t=2^{n-1}-1$ and pick distinct subsets $B_1,B_2,...,B_t$ of $\{1,2,...,2^{n-1}\}$ of size $2^{n-2}$ with pairwise intersections of size $2^{n-3}$, using the induction hypothesis. It is then clear that if we choose our $A_i$ as follows, they have the required properties:

• $A_i = B_i \cup (2^{n-1}+B_i) \ \ \ 1 \leq i \leq t$
• $A_{t+i} = B_i \cup (2^{n-1}+\{1,2,...,2^{n-1}\}\backslash B_i) \ \ \ 1 \leq i \leq t$
• $A_{2^n-1} = \{1,2,...,2^{n-1}\}$.
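For concreteness, the recursion above can be sketched in a few lines of Python (my own illustration, not from the original problem set; the function name `construct` is mine):

```python
def construct(n):
    """Return 2**n - 1 subsets of {1, ..., 2**n}, each of size 2**(n-1),
    with pairwise intersections of size 2**(n-2)."""
    if n == 2:
        return [{1, 2}, {1, 3}, {1, 4}]          # the base case above
    half = 2 ** (n - 1)
    B = construct(n - 1)                          # subsets of {1, ..., half}
    top = set(range(half + 1, 2 ** n + 1))        # the shifted copy of the base set
    A = [b | {half + x for x in b} for b in B]                # B_i ∪ (half + B_i)
    A += [b | (top - {half + x for x in b}) for b in B]       # B_i ∪ (half + complement of B_i)
    A.append(set(range(1, half + 1)))                         # {1, ..., half}
    return A

# Verify the defining properties for n = 4:
n = 4
A = construct(n)
assert len(A) == 2 ** n - 1
assert all(len(a) == 2 ** (n - 1) for a in A)
assert all(len(A[i] & A[j]) == 2 ** (n - 2)
           for i in range(len(A)) for j in range(i + 1, len(A)))
```

The three comprehensions correspond exactly to the three bullet points.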

It then seemed sensible to look for a method of proving that $M<2^n$, so it would suffice to show that $m=2^n$ gave a contradiction. After a bizarre collaborative lunch with Freddie Manners, the following method emerged:

Proof that $M < 2^n$ unless $n=3$:

Suppose $A_1,A_2,...,A_{2^n}$ satisfy the required hypotheses, and consider the matrix $C$ given by $C_{ij} = \mathbb{I}_{A_i}(j)$. Then $(C C^{T})_{ij} = |A_i \cap A_j| = 2^{n-2}(1+\delta_{ij})$, i.e. $C C^{T} = 2^{n-2}(I+J)$ where $J$ is the all-ones matrix. Since $J$ has eigenvalue $2^n$ once and $0$ with multiplicity $2^n-1$, this matrix has determinant $(2^{n-2})^{2^n}(2^n+1) = 2^{2K}(2^n+1)$ for some (very large) $K$. But this determinant must be a square, being $(\det C)^2$. So we require a solution to the Diophantine equation $2^n+1 = y^2$. Writing $2^n = (y-1)(y+1)$ and noting that the only powers of 2 differing by 2 are 2 and 4, the only solution is $n=y=3$. So for any other $n$ such sets $A_i$ cannot possibly exist.
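As a sanity check (mine, not part of the proof), one can compute the determinant of the $2^n \times 2^n$ matrix $2^{n-2}(1+\delta_{ij})$ exactly for $n=3$ and confirm both the formula and that $n=3$ really does make it a perfect square. The helper `det` below is a naive exact Gaussian elimination over the rationals:

```python
from fractions import Fraction

def det(mat):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    size, sign = len(m), 1
    for c in range(size):
        pivot = next((r for r in range(c, size) if m[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:                       # row swap flips the sign
            m[c], m[pivot] = m[pivot], m[c]
            sign = -sign
        for r in range(c + 1, size):
            f = m[r][c] / m[c][c]
            m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    result = Fraction(sign)
    for i in range(size):
        result *= m[i][i]
    return result

n = 3
size = 2 ** n
gram = [[2 ** (n - 2) * (1 + (i == j)) for j in range(size)] for i in range(size)]
expected = (2 ** (n - 2)) ** size * (2 ** n + 1)   # = 2**(2K) * (2**n + 1)
assert det(gram) == expected
assert expected == 48 ** 2                         # a perfect square, as n = 3 allows
```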

This proof is clearly weird and a little undesirable in two respects. Firstly, and most obviously, it doesn’t work for the case $n=3$, which would require a separate argument (or, as is fairly simple to do, a computer check). Secondly, and more significantly, it reduces a combinatorial problem to solving a Diophantine equation which, by coincidence, is easily solved. This has the feeling more of a bizarre accident than the reason for the result to be true. A much better reason was given to me in a supervision by Ben Green.

Proof 2: (here $n=3$ is also shown to be impossible)

Instead of taking a matrix of indicator functions we take a matrix of 1s and -1s, say $C_{ij} = 1-2\mathbb{I}_{A_i}(j)$. The reason this is sensible seems, to me, to be that we have symmetry between the statements $a \in A_i$ and $a \not\in A_i$: replacing all the sets by their complements changes nothing, and here equates to multiplication by $-1$. In this universe of 1s and -1s, any two rows of the matrix are orthogonal in the usual inner product: the dot product of rows $i$ and $j$ counts agreements minus disagreements, which is $2^n - 2|A_i \triangle A_j| = 2^n - 2 \cdot 2^{n-1} = 0$. But every row is also orthogonal to $(1,1,\ldots,1)$, since each row sums to $2^n - 2|A_i| = 0$. So we would have $2^n+1$ pairwise orthogonal (hence linearly independent) nonzero vectors in a space of dimension $2^n$. Contradiction.
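A small illustration of this (assuming the seven sets produced by the earlier recursive construction for $n=3$): mapping each set to a $\pm 1$ row and appending the all-ones row yields $2^3 = 8$ pairwise orthogonal vectors, saturating the dimension, which is exactly why an eighth set cannot exist:

```python
# The seven sets from the n = 3 construction (an assumption: these are
# the sets the recursion produces, listed explicitly for self-containment).
sets = [{1, 2, 5, 6}, {1, 3, 5, 7}, {1, 4, 5, 8}, {1, 2, 7, 8},
        {1, 3, 6, 8}, {1, 4, 6, 7}, {1, 2, 3, 4}]

# Map each set to a ±1 row via 1 - 2 * indicator.
rows = [[1 - 2 * (j in a) for j in range(1, 9)] for a in sets]
rows.append([1] * 8)  # the all-ones vector, orthogonal to every other row

# All 8 rows are pairwise orthogonal in R^8.
dots = [sum(x * y for x, y in zip(rows[i], rows[j]))
        for i in range(8) for j in range(i + 1, 8)]
assert all(d == 0 for d in dots)
```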

In fact, looking at it this way shows that the problem was really about Hadamard matrices, and that the construction for $2^n-1$ was in fact a proof of the following result:

Result 1: There exists a $2^n$-Hadamard matrix for any $n \geq 2$.
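In fact the doubling step in the recursion mirrors Sylvester's classical construction (a standard technique, not named in the post): if $H$ is an $m$-Hadamard matrix then the block matrix with rows $(H\ H)$ and $(H\ -H)$ is a $2m$-Hadamard matrix. A minimal sketch (the function name is mine):

```python
def sylvester(n):
    """Return a 2**n x 2**n Hadamard matrix as a list of lists of ±1,
    built by Sylvester's doubling from the 1 x 1 matrix (1)."""
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = sylvester(3)
m = len(H)
# Rows are pairwise orthogonal: H H^T = m I.
for i in range(m):
    for j in range(m):
        dot = sum(H[i][k] * H[j][k] for k in range(m))
        assert dot == (m if i == j else 0)
```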

Sylow’s Theorem, from group theory, states that for any finite group $G$ and prime $p$:

1. If $p^k \,\|\, |G|$ (that is, $p^k$ divides $|G|$ but $p^{k+1}$ does not) then $G$ contains a subgroup of order $p^k$.
2. Any two such subgroups are conjugate.
3. The number $n_p$ of such subgroups satisfies $n_p \equiv 1 \mod p$.
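Before the proof, a tiny sanity check of part 3 (my own illustration, not part of the argument): in $S_4$, with $|S_4| = 24 = 2^3 \cdot 3$, the Sylow 3-subgroups have order 3 and are generated by 3-cycles, and counting them directly gives $n_3 = 4 \equiv 1 \mod 3$:

```python
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p(q(i)); permutations are tuples of images of 0..3
    return tuple(p[q[i]] for i in range(4))

S4 = list(permutations(range(4)))
identity = tuple(range(4))

def order(g):
    """Order of g: smallest k >= 1 with g**k = identity."""
    k, h = 1, g
    while h != identity:
        h = compose(h, g)
        k += 1
    return k

# A subgroup of order 3 is {e, g, g^2} for an element g of order 3;
# g and g^2 generate the same subgroup, so the 8 three-cycles give 4 subgroups.
subgroups = {frozenset({identity, g, compose(g, g)})
             for g in S4 if order(g) == 3}
n_3 = len(subgroups)
assert n_3 == 4 and n_3 % 3 == 1
```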

The standard proof is to attack 1 using a clever binomial coefficient trick, and then use different arguments to tackle 2 and 3. The following proof, shown to the author by Prof. Leader of Trinity College, Cambridge, more or less proves all three parts simultaneously, and is very natural. The proof is probably slightly different from the one I was originally shown, but is hopefully identical in spirit.

Proof: For a finite group $G$ with $p \mid |G|$, let $P$ be a maximal $p$-subgroup (a subgroup of largest cardinality in which every element has order a power of $p$).

Proof of part 1: It suffices to prove that $\frac{|G|}{|P|} \not\equiv 0 \mod p$, for then $|P|$, being a power of $p$, must be the full $p$-part $p^k$ of $|G|$. But $\frac{|G|}{|P|} = \frac{|G|}{|N|} \cdot \frac{|N|}{|P|}$, where $N$ is the normaliser of $P$ in $G$. Splitting the fraction in this way allows us to assign meaning to each quantity.

Since $P$ is normal in $N$, the second quantity is simply the order of the group $N/P$. Suppose $p$ divides this order. Then, by Cauchy’s Theorem, we can find $\bar{H} \leq N/P$ of order $p$. But then the preimage of $\bar{H}$ under the quotient map $N \rightarrow N/P$ would be a $p$-subgroup of order $p|P|$, contradicting the maximality of $P$.

Now, considering the action of $G$ by conjugation on the conjugates of $P$, the Orbit-Stabiliser theorem shows that the first quantity $\frac{|G|}{|N|}$ is the size of the orbit of $P$ under this action. Let $X$ be this orbit (the set of conjugates of $P$), so we need to show that $|X| \not\equiv 0 \mod p$. In fact, we can do better and show that $|X| \equiv 1 \mod p$, which will immediately give part 3 of the theorem as a consequence of part 2.

To do this we consider $P$ acting on $X$ by conjugation, and aim to show that $\{P\}$ is the only singleton orbit (since $P$ is a $p$-group, all other orbits have sizes divisible by $p$, so this gives the required congruence). Suppose $\{gPg^{-1}\}$ were another singleton orbit (with $g \not\in N$). Then $P$ normalises $gPg^{-1}$, which implies $g^{-1}Pg$ normalises $P$, so $g^{-1}Pg \leq N$. But then $g^{-1}PgP$ would be a subgroup of $N$ whose image under the quotient map $N \rightarrow N/P$ is a nontrivial $p$-group (nontrivial because $g \not\in N$ forces $g^{-1}Pg \not= P$), making it a $p$-subgroup of order greater than that of $P$. Thus the only singleton orbit is $\{P\}$, whence $|X| \equiv 1 \mod p$.

So it remains to prove statement 2 (essentially that $\text{Syl}_p(G) = X$). Suppose $Q$ is a Sylow $p$-subgroup, and let $Q$ act on $X$ by conjugation. Since the orbits of the $p$-group $Q$ have sizes that are powers of $p$, and $|X| \equiv 1 \mod p$, there must be some singleton orbit $\{P'\}$, where $P' = gPg^{-1}$ is normalised by $Q$. Then $QP'$ is a $p$-subgroup containing $P'$ (every element of $Q$ has order a power of $p$), so by the maximality of $P'$ we have $QP' = P'$, i.e. $Q \leq P'$. Since $|Q| = p^k = |P'|$ by part 1, we must have $Q = P'$. So $Q \in X$, as required.

As remarked above, the third part is now immediate.