## Alan Turing Apology

Yesterday, in response to the petition initiated by John Graham-Cumming, the British Prime Minister Gordon Brown issued a formal apology for the prosecution of Alan Turing under the Labouchere Amendment.

2009 has been a year of deep reflection – a chance for Britain, as a nation, to commemorate the profound debts we owe to those who came before. A unique combination of anniversaries and events have stirred in us that sense of pride and gratitude which characterise the British experience. Earlier this year I stood with Presidents Sarkozy and Obama to honour the service and the sacrifice of the heroes who stormed the beaches of Normandy 65 years ago. And just last week, we marked the 70 years which have passed since the British government declared its willingness to take up arms against Fascism and declared the outbreak of World War Two. So I am both pleased and proud that, thanks to a coalition of computer scientists, historians and LGBT activists, we have this year a chance to mark and celebrate another contribution to Britain’s fight against the darkness of dictatorship; that of code-breaker Alan Turing.

Turing was a quite brilliant mathematician, most famous for his work on breaking the German Enigma codes. It is no exaggeration to say that, without his outstanding contribution, the history of World War Two could well have been very different. He truly was one of those individuals we can point to whose unique contribution helped to turn the tide of war. The debt of gratitude he is owed makes it all the more horrifying, therefore, that he was treated so inhumanely. In 1952, he was convicted of ‘gross indecency’ – in effect, tried for being gay. His sentence – and he was faced with the miserable choice of this or prison – was chemical castration by a series of injections of female hormones. He took his own life just two years later.

Thousands of people have come together to demand justice for Alan Turing and recognition of the appalling way he was treated. While Turing was dealt with under the law of the time and we can’t put the clock back, his treatment was of course utterly unfair and I am pleased to have the chance to say how deeply sorry I and we all are for what happened to him. Alan and the many thousands of other gay men who were convicted as he was convicted under homophobic laws were treated terribly. Over the years millions more lived in fear of conviction.

I am proud that those days are gone and that in the last 12 years this government has done so much to make life fairer and more equal for our LGBT community. This recognition of Alan’s status as one of Britain’s most famous victims of homophobia is another step towards equality and long overdue.

But even more than that, Alan deserves recognition for his contribution to humankind. For those of us born after 1945, into a Europe which is united, democratic and at peace, it is hard to imagine that our continent was once the theatre of mankind’s darkest hour. It is difficult to believe that in living memory, people could become so consumed by hate – by anti-Semitism, by homophobia, by xenophobia and other murderous prejudices – that the gas chambers and crematoria became a piece of the European landscape as surely as the galleries and universities and concert halls which had marked out the European civilisation for hundreds of years. It is thanks to men and women who were totally committed to fighting fascism, people like Alan Turing, that the horrors of the Holocaust and of total war are part of Europe’s history and not Europe’s present.

So on behalf of the British government, and all those who live freely thanks to Alan’s work I am very proud to say: we’re sorry, you deserved so much better.

Gordon Brown

The informal tone of the apology may surprise some readers, but my British friends confirm that this is typical New Labour style. I am very happy that Brown extended the apology to all who were mistreated under Britain’s homophobic laws. This apology is a landmark for LGBT rights. I hope that the British government will soon take a further step forward and issue a separate and full apology to all those who suffered the same way Turing did (many of whom are alive today) but didn’t happen to be war heroes.

However, there is something that is completely missing from the apology. While his work on breaking the Enigma cipher was of great importance, Turing’s monumental contributions to mathematics, computer science, and artificial intelligence deserve at least some mention. As Paul Gray of Time puts it: “But the fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine.”

Filed under Computability, News

## Alan Turing Petition

John Graham-Cumming has started an official petition for the British Prime Minister to “apologize for the prosecution of Alan Turing that led to his untimely death.” The petition already has over fifteen thousand signatures.

Alan Turing was the greatest computer scientist ever born in Britain. He laid the foundations of computing, helped break the Nazi Enigma code and told us how to tell whether a machine could think.

He was also gay. He was prosecuted for being gay, chemically castrated as a ‘cure’, and took his own life, aged 41.

The British Government should apologize to Alan Turing for his treatment and recognize that his work created much of the world we live in and saved us from Nazi Germany. And an apology would recognize the tragic consequences of prejudice that ended this man’s life and career.

All British citizens and residents (including expats) can sign the petition online. The deadline for signatures is January 20, 2010.


Filed under Computability, News

## On Not Generating Pseudorandom Primes

This post is related to the ongoing Polymath 4 project [Blog, Wiki], which looks at the problem of deterministically generating primes of a given size. Many interesting approaches have been discussed. One that I found particularly interesting was Albert Atserias’s derandomization approach based on Nisan–Wigderson pseudorandom generators. I will not explain Atserias’s idea here, but it should be clear how it inspired this post.

As the title suggests, this post is not about the problem of designing pseudorandom generators for the purpose of finding primes. This post is about the completely opposite question:

How hard is it for a pseudorandom generator to avoid producing primes?

I will show that if a pseudorandom generator fails to generate primes in reasonable quantity, then this must be partly explained by some specific types of irregularities in the distribution of primes. It is not at all surprising that such irregularities prevent some pseudorandom generators from generating primes. However, it is not immediately clear that all failures lead to such irregularities. A heuristic analysis of these irregularities suggests that it is quite difficult for a reasonable pseudorandom generator to avoid producing primes, but I also argue that these irregularities cannot explain all of the ways in which a pseudorandom generator can fail to produce primes.

The irregularities in question are of two specific types. Let $t_{n-1}\cdots t_0$ be the binary representation of some $n$-bit number $t.$ For $0 \leq i \leq n-1,$ I define $\Delta_{i,n}(t)$ and $\nabla_{i,n}(t)$ as follows:

• $\Delta_{i,n}(t)$ is the (possibly negative) excess of $n$-bit primes whose lower-order bits are $(1-t_i)t_{i-1}\cdots t_0$ versus those $n$-bit primes whose lower-order bits are $t_it_{i-1}\cdots t_0.$
• $\nabla_{i,n}(t)$ is the (possibly negative) excess of $n$-bit primes whose higher-order bits are $t_{n-1}\cdots t_{i+1}(1-t_i)$ versus those $n$-bit primes whose higher-order bits are $t_{n-1}\cdots t_{i+1}t_i.$

Thus $\nabla_{i,n}(t)$ measures the difference in the number of $n$-bit primes in two adjacent intervals of length $2^i,$ and dually $\Delta_{i,n}(t)$ measures the difference in the number of $n$-bit primes in two “adjacent” arithmetic progressions with step $2^{i+1}.$ Be warned that, contrary to some conventions, the sign of the difference matters here.
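For concreteness, here is a brute-force Python sketch of the two statistics for small $n$ (the function and variable names are my own, chosen for this illustration):

```python
# A brute-force sketch of the statistics defined above, for small n.
# The names is_prime, delta, nabla are mine, not from the post.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def delta(i, n, t):
    # Delta_{i,n}(t): excess of n-bit primes congruent to t with bit i
    # flipped (mod 2^{i+1}) over those congruent to t (mod 2^{i+1}).
    mod = 1 << (i + 1)
    def count(r):
        return sum(1 for p in range(1 << n) if is_prime(p) and p % mod == r % mod)
    return count(t ^ (1 << i)) - count(t)

def nabla(i, n, t):
    # Nabla_{i,n}(t): excess of primes in the length-2^i interval whose
    # high-order bits are those of t with bit i flipped, over the
    # interval whose high-order bits agree with t down to bit i.
    def count(s):
        return sum(1 for p in range(s << i, (s + 1) << i) if is_prime(p))
    return count((t >> i) ^ 1) - count(t >> i)

n, t = 8, 0b10110011
print([delta(i, n, t) for i in range(n)])
print([nabla(i, n, t) for i in range(n)])
```

For instance, `delta(0, n, t)` with `t` even is just the number of odd $n$-bit primes minus the number of even ones, in line with the sign convention above.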

Irregularity Lemma. Let $T$ be a random variable that takes values in the set $\{0,1,\dots,2^{n}-1\}$ of $n$-bit integers. Suppose that $T$ is rather bad at generating primes; more precisely, suppose that

$\mathbf{P}[T \in \mathrm{Prime}] \leq \mathbf{P}[U \in \mathrm{Prime}] - \varepsilon$

for some $\varepsilon > 0,$ where $U$ is uniformly distributed in $\{0,1,\dots,2^{n}-1\}.$ Then

• $\mathbf{E}[\Delta_{i,n}(T)] \geq 2^{n-i}\varepsilon/n$ must hold for some $0 \leq i \leq n-1.$
• $\mathbf{E}[\nabla_{j,n}(T)] \geq 2^j\varepsilon/n$ must hold for some $0 \leq j \leq n-1.$

Filed under Number Theory, Randomness

## Dr. Erdős, or: How I Learned to Stop Worrying and Love the Apocalypse!

(From xkcd.com)

Filed under Humor

## The Right Theorem

This post is about a simple and useful Fact:

The lattice of ideals of a Boolean algebra is a complete Heyting algebra.

The Fact is easily deduced from the Stone Representation Theorem. First, observe that the closed subsets of the Stone space $\mathcal{S}(\mathbb{B})$ of a Boolean algebra $\mathbb{B}$ are in one-to-one correspondence with the ideals of $\mathbb{B}$ via the map that sends an ideal $I$ to the set of all ultrafilters that, as subsets of $\mathbb{B},$ are contained in the complement of $I.$ This correspondence is order reversing, so we take complements to obtain an order preserving map from the ideals of $\mathbb{B}$ onto the open subsets of $\mathcal{S}(\mathbb{B}).$ The lattice of open sets of $\mathcal{S}(\mathbb{B})$ (or of any topological space) is a complete Heyting algebra.

While simple enough, this proof is not elementary and it implicitly relies on the Boolean Prime Ideal Theorem. Moreover, establishing the Fact is a key step in the proof of the Stone Representation Theorem. Of course, the Fact itself cannot be found in Marshall Stone’s The Theory of Representations for Boolean Algebras [Trans. Amer. Math. Soc. 40 (1936), no. 1, 37–111] since complete Heyting algebras were essentially unknown at the time, but a proof can be gathered from the results in Chapter II. In fact, it is surprisingly difficult to find a standalone version of the Fact in the literature. (Theorem 46 and the following remarks in Garrett Birkhoff’s Lattice-Ordered Groups [Ann. of Math. (2) 43 (1942), 298–331] are perhaps the earliest close call.)

The implicit use of the Boolean Prime Ideal Theorem in the proof of the Fact outlined above is only to show that the Stone space $\mathcal{S}(\mathbb{B})$ has points. In Pointless Topology, the Fact is indistinguishable from the Stone Representation Theorem. I did not try to find the earliest occurrence of the “Pointless Stone Representation Theorem” in the literature (which is very likely in one of André Joyal’s unpublished notes), but it is discussed in Peter Johnstone’s essay The Point of Pointless Topology [Bull. Amer. Math. Soc. (N.S.) 8 (1983), no. 1, 41–53]. From the perspective of Pointless Topology, the Fact is just a statement of the Stone duality between the category of Stone spaces and the category of Boolean algebras. This perspective elegantly removes the reliance on the Boolean Prime Ideal Theorem, but it does not address the problem of elementarity.

The elementary proof sheds a completely different light on the Fact. In order to show that a complete lattice is a complete Heyting algebra, it is necessary and sufficient to show that it satisfies the infinite distributive law

$x \wedge (\bigvee_{y \in A} y) = \bigvee_{y \in A} (x \wedge y),$

where $A$ is an arbitrary set of lattice elements. It is easy enough to show that the lattice of ideals of a Boolean algebra satisfies this. To carry out the proof, it is natural to first prove the finite distributive law

$x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$

and then lift to the infinite case. This division is a manifestation of a general property of abstract algebras, which leads to a purely algebraic generalization of the Fact.

The ideals of a Boolean algebra are mildly disguised congruence relations on the Boolean algebra. The lattice of congruence relations of any algebra (hence, the lattice of ideals of a Boolean algebra) is always a complete algebraic lattice. Such a lattice is always meet-continuous, which means that

$x \wedge (\bigvee_{y \in D} y) = \bigvee_{y \in D} (x \wedge y)$

whenever $D$ is a directed set of lattice elements. Therefore, the lattice of congruence relations of an algebra satisfies the infinite distributive law of the previous paragraph if and only if it satisfies the finite distributive law. To summarize, the Fact is equivalent to the elementary statement that the lattice of ideals (equivalently, the lattice of congruence relations) of a Boolean algebra is distributive.
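The summary above can be machine-checked on a small example. The following Python sketch (the encoding is my own) enumerates the ideals of the Boolean algebra of subsets of a three-element set, with elements encoded as bitmasks, and verifies the finite distributive law in their lattice; since every ideal of a finite Boolean algebra is principal, exactly 8 ideals turn up.

```python
# Machine-check of the finite distributive law in the lattice of ideals
# of a small Boolean algebra: the powerset of {0,1,2}, elements encoded
# as bitmasks 0..7 with & as meet and | as join.

ELEMENTS = range(8)

def is_ideal(S):
    # an ideal contains bottom, is downward closed, and is closed under joins
    if 0 not in S:
        return False
    for a in S:
        for b in ELEMENTS:
            if b & a == b and b not in S:   # b <= a but b is missing
                return False
        for b in S:
            if (a | b) not in S:            # the join a v b is missing
                return False
    return True

ideals = [frozenset(a for a in ELEMENTS if mask >> a & 1)
          for mask in range(1 << 8)]
ideals = [S for S in ideals if is_ideal(S)]

def join(I, J):
    # the smallest ideal containing I and J: close under joins and down-sets
    S = set(I | J)
    changed = True
    while changed:
        changed = False
        for a in list(S):
            for b in list(S):
                if (a | b) not in S:
                    S.add(a | b); changed = True
            for b in ELEMENTS:
                if b & a == b and b not in S:
                    S.add(b); changed = True
    return frozenset(S)

# meet of ideals is plain intersection; check x ^ (y v z) = (x ^ y) v (x ^ z)
assert len(ideals) == 8
assert all((X & join(Y, Z)) == join(X & Y, X & Z)
           for X in ideals for Y in ideals for Z in ideals)
print("finite distributive law holds in the lattice of ideals")
```

The meet of two ideals is their intersection, which is again an ideal, so no closure step is needed on the meet side; only the join requires the closure loop.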

The elementary proof reveals that the close connection between Boolean algebras and Heyting algebras has little or nothing to do with the Fact itself. Indeed, the lattice of congruence relations of any algebra is a complete Heyting algebra exactly when it is distributive. On the other hand, the elementary proof does not reveal any of the implicit topological structure. The two views of the Fact are at opposite ends of the spectrum. Are the two perspectives fundamentally different or is there a broader point of view that reconciles both?

Filed under Algebra, Logic, Orderings

## Mathematical Discourse

Charles Wells has just released an “interactive” draft of his Handbook of Mathematical Discourse. Leafing through the Handbook, I found a wealth of interesting comments and suggestions regarding the use of certain terms and idioms in the classroom. The following excerpt from the Handbook’s preface does a reasonable job of describing the contents.

This Handbook is a report on mathematical discourse. Mathematical discourse as the phrase is used here refers to what mathematicians and mathematics students say and write

• to communicate mathematical reasoning,
• to describe their own behavior when doing mathematics, and
• to describe their attitudes towards various aspects of mathematics.

The emphasis is on the discourse encountered in post-calculus mathematics courses taken by math majors and first year math graduate students in the USA.

The formatting leaves a little to be desired, but I am sure that the Handbook will be a wonderful teaching companion when it reaches its final state. I also recommend checking out abstractmath.org as well as Gyre & Gimble.

Filed under News, Teaching

## Real-World Concentration of Measure

Some of my recent work has led me to think about the concentration of measure phenomenon. The standard example for concentration of measure is that most of the surface area of a (high dimensional) planet is close to the equator. This led me to try to think of real-world examples of the phenomenon. There are plenty of realistic situations that exhibit the phenomenon, but I’m looking for more striking examples. More precisely, I would like an example where people have noticed the “unusually high” concentration and have sought or proposed explanations when there really isn’t much to explain.

The most obvious example would be the fact that most of the world’s population lives around the equator. However, this has a completely different and valid explanation (as witnessed by the concentration of population in Canada) related to weather and similar issues. Population density doesn’t appear to be nicely distributed along meridians either, mostly because humans tend to live on land rather than water. Searching for examples in terms of population density therefore seems like a dead end.

Concentration of measure also occurs in graphs. On the vertices of the unit $n$-cube, concentration of measure says that most vertices are nearly half-way between $(0,\dots,0)$ and its antipodal point $(1,\dots,1)$. (Indeed, the central limit theorem predicts that most vertices have equally many $0$’s and $1$’s with an error proportional to $1/\sqrt{n}$.) One real-world graph is the so-called social web of human connections. The small world phenomenon says that any two people are at most 6 steps apart in the social web. Could this be a manifestation of the concentration of measure phenomenon?

I’m not sure. I always thought that the numbers for the small world phenomenon didn’t add up. (Though that strongly depends on which numbers you decide to use.) Looking at the small world phenomenon as a manifestation of concentration of measure seems to give enough wiggle room for the numbers to fit together, but this is just circumstantial evidence. Is there any concrete evidence that supports this point of view?
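As an aside, the hypercube concentration mentioned above is easy to test numerically. Here is a quick Python sketch (the sampling parameters are arbitrary choices of mine):

```python
import random

# Empirically check that the Hamming weight of a uniformly random vertex
# of {0,1}^n concentrates in a window of width a few sqrt(n) around n/2.

def weight_in_window(n, trials=5000, c=3.0, seed=0):
    rng = random.Random(seed)
    half_width = c * (n ** 0.5) / 2   # c standard deviations of Binomial(n, 1/2)
    hits = sum(1 for _ in range(trials)
               if abs(bin(rng.getrandbits(n)).count("1") - n / 2) <= half_width)
    return hits / trials

for n in (16, 64, 256):
    print(n, weight_in_window(n))
```

The fraction landing inside the $n/2 \pm \tfrac{c}{2}\sqrt{n}$ window stays close to $1$ as $n$ grows, even though the window shrinks relative to $n$, which is exactly the concentration phenomenon at work.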