
The Imitation Game: Part 3

In The Imitation Game: Part 2 we looked at Alan Turing’s work doing codebreaking at Bletchley Park during World War 2. In this final instalment of blog posts about the film The Imitation Game, we look at the work Turing did during the final years of his life on pattern formation. This is based on a talk given by Professor Jamie Davies from the University of Edinburgh at an event called The Maths of the Imitation Game at the Filmhouse cinema.

You might be fooled into thinking that the film doesn’t touch on Turing’s work after Bletchley at all, but you would be wrong. The reference is subtle, but it is there from the very first scene. The opening part of the film shows Turing in his house, clearing up mess from a burglary. On a table we see a pine cone; on the walls we see pictures of strange spirals of dots and curious pictures of starfish. These pictures are again present in the final, very moving, scenes of the film, when Turing is at home suffering from the effects of his enforced chemical castration. But what do these images tell us about the work that Turing did?

Count the number of spirals on a pinecone (first four shown here). You often find a Fibonacci number.

Count the number of petals in a daisy, or the number of spirals in a pinecone, or the spirals in the head of a sunflower. You will very often get a Fibonacci number: one that appears in the sequence 1,1,2,3,5,8,13,21,34,… In this sequence, you get the next number by adding up the previous two.
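
If you like to see rules written out explicitly, here is a quick sketch in Python (my own illustration, not anything of Turing’s) of how each term is built by adding the previous two:

    # Build the Fibonacci sequence: each new term is the sum of the previous two.
    def fibonacci(n):
        terms = [1, 1]
        while len(terms) < n:
            terms.append(terms[-1] + terms[-2])
        return terms[:n]

    print(fibonacci(9))   # [1, 1, 2, 3, 5, 8, 13, 21, 34]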

Turing claimed that the ubiquity of Fibonacci numbers in plant forms was not a coincidence, and wanted to show that such patterns could arise as the result of chemical reactions in the cells of the plant. In this view he was influenced by the book On Growth and Form by the Scottish mathematical biologist D’Arcy Wentworth Thompson.

In 1952 Turing wrote the paper “The chemical basis of morphogenesis”, a ground-breaking work explaining how local chemical reactions could result in large-scale patterns. Fibonacci numbers in pine cones, stripes on a tiger, tentacle patterns on a starfish and hexagons in a fly’s eye were all predicted to be manifestations of the same type of chemical reaction. Today Turing’s idea is known as the reaction-diffusion model, and it is used in many areas of science beyond animal biology.

Haeckel’s picture of sea anemones and their tentacles, from his Kunstformen der Natur. If you look closely in the film you can spot this picture on the wall of Turing’s house.

The idea is that patterns are formed by the interplay between two types of chemical: one that activates growth (called the activator) and one that inhibits growth (called the inhibitor). Production of the activator stimulates further production of the activator, but also stimulates creation of the inhibitor. Imagine a ring of cells around an embryo. A particular cell might grow ever-so-slightly faster than its neighbours, resulting in more of the activator being produced and thus even further growth. At the same time, the cell is producing the inhibitor chemical, meaning that the growth of cells adjacent to it is stunted. However, cells that are far enough away are not affected by the inhibitor, and so will grow as normal. This results in a sort of wave-like pattern around the embryo, of particular cells growing at regular intervals. And this is exactly what causes the regular arms of a starfish or the tentacles of a sea-anemone.
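
To make the mechanism concrete, here is a rough, untuned simulation sketch of an activator and an inhibitor on a ring of 100 cells. The particular equations (a Gierer–Meinhardt-style model) and the parameter values are my own illustrative choices rather than Turing’s originals, but the behaviour is the one described above: evenly spaced peaks of activator emerge around the ring.

    # Activator-inhibitor dynamics on a ring of cells (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    n_cells = 100
    a = 2.0 + 0.01 * rng.standard_normal(n_cells)  # activator, nudged off equilibrium
    h = 2.0 * np.ones(n_cells)                     # inhibitor
    D_a, D_h = 1.0, 50.0                           # the inhibitor spreads 50x faster
    dt = 0.002

    def laplacian(u):
        # Diffusion on a ring: each cell exchanges with its two neighbours.
        return np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

    for _ in range(100_000):
        da = a * a / h - a + D_a * laplacian(a)    # self-activation minus decay
        dh = a * a - 2.0 * h + D_h * laplacian(h)  # made by the activator, decays faster
        a, h = a + dt * da, h + dt * dh

    peaks = np.where((a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > a.mean()))[0]
    print("activator peaks (proto-tentacles) at cells:", peaks)

The crucial ingredient is that the inhibitor diffuses much faster than the activator: set D_h equal to D_a in the sketch and no peaks form at all.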

Turing was not only the first to come up with these ideas, but also the first to test them using computer simulations. Working at the University of Manchester in 1951, he had access to the world’s first commercially available general-purpose digital computer, the Ferranti Mark 1, and he immediately started programming it to explore the consequences of his equations. Solved by hand, his equations would never have been tractable; using a computer, he could see that they were a great model for the physical phenomena he was trying to explain.

Sadly Turing’s work was far ahead of his time. It was too difficult for biologists to understand (even though he made a conscious effort to explain things well!) and was largely ignored by the scientific community for fifty years. It was only in the late 1990s that Turing’s paper was rediscovered and the work picked up again; today reaction-diffusion models are considered fundamental to the study of pattern formation. Prof. Jamie Davies’ current research in Edinburgh is testing Turing’s ideas by ‘programming’ real living cells to make patterns on command, and his lab has been able to replicate all the predictions made by Turing.

To conclude, I think it is a shame that the film portrays Turing as an incredibly autistic and single-minded person, giving all his love to machines and algorithms instead of real people. In reality he was a warm and friendly individual who had strong friendships and relationships, and, moreover, he did some of his most important work on living things rather than cold machines.

I hope my series of blog posts has highlighted the amazing research that Turing did, which revolutionised computer science, mathematics and biology, not to mention the pivotal role he played in breaking Enigma during World War 2. Many thanks go to John Longley, Tom Leinster and Jamie Davies for providing the material on which these posts have been based, and to the Filmhouse cinema in Edinburgh for hosting our event explaining the Maths of the Imitation Game.


The Imitation Game: Part 2

In The Imitation Game: Part 1 we looked at the early work of Alan Turing about universal computing machines, the limits of what computers can do, and whether computers could ever successfully imitate human brains. In this post we look at Turing’s work doing codebreaking at Bletchley Park during the Second World War, and the similarities and differences with codebreaking today. The material in this post is based on a talk given by Dr Tom Leinster from the School of Mathematics at the University of Edinburgh, as part of an event called The Maths of the Imitation Game at the Filmhouse cinema.

Bletchley Park in Buckinghamshire was the site of the Allied codebreakers in World War 2.

Bletchley Park, otherwise known as Station X, was a mansion in Milton Keynes that was used during World War 2 as the base for the Government Code and Cypher School (GC&CS). (Earlier this year it was reopened as a museum, and I’m told it’s very good!) One of its main purposes was to decipher German messages, which were being encrypted using a device called the Enigma machine. It was thought that Enigma was unbreakable, and so it might have been were it not for Turing’s brilliance and subtle mistakes made by the German operators.

In a simple substitution cipher, each letter of the alphabet is replaced by another letter of the alphabet. For example, if we had S->F, H->U, E->R and P->Y, the word SHEEP would be encoded as FURRY. Such a code is easily breakable using frequency analysis: if E is the most common letter in the English language and R is the most common letter in the message, it’s likely that E has been encoded as R. The Enigma machine is much cleverer than that.
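
Here is that example as a tiny Python sketch, along with the frequency count a codebreaker would run on a longer intercepted message (only the four substitutions above are filled in; a real key would cover all 26 letters):

    from collections import Counter

    key = {"S": "F", "H": "U", "E": "R", "P": "Y"}   # partial key, as in the example

    def encode(word):
        return "".join(key.get(letter, letter) for letter in word)

    print(encode("SHEEP"))                    # -> FURRY

    # Frequency analysis: count letters in the intercepted text. If R dominates,
    # it probably stands for E, the most common letter in English.
    ciphertext = "FURRY FURRY FURRY"          # stand-in for a long intercepted message
    print(Counter(c for c in ciphertext if c.isalpha()).most_common(3))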

A 3-rotor Enigma machine with parts labelled.

In a standard Enigma machine there are 3 rotors. Each rotor has each of the 26 letters of the alphabet inscribed around it, and each is set to an arbitrary position at the beginning of the day. When the operator types a letter on the keyboard, the signal from that key is sent through wiring to the first rotor, which encodes it as a new letter. It is then sent through wiring to the second rotor, which changes it again, and finally it is sent to the third rotor, which changes it again. The signal then goes around some fixed wiring (called a reflector) and then returns through each of the rotors, finally lighting up a new letter on the lampboard, which the operator writes down. Here’s the clever bit: after each new letter is entered, the rotors turn. Therefore, if you had typed S the first time and got an F, you could type S again and get a Z. The same letter is encoded differently every time it is pressed: the position of a letter within the message becomes part of the encoding.
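
To see how the stepping rotor changes the encoding with every key press, here is a drastically simplified single-rotor toy in Python. It is not a faithful Enigma (the wirings here are random, and a real machine had three rotors and a plugboard), but it reproduces two key behaviours: the same letter comes out differently depending on where it sits in the message, and encoding a message twice from the same starting position gives back the original, which is exactly how the receiving operator decoded a message on a real Enigma.

    import random

    random.seed(1)
    N = 26
    A = ord("A")

    rotor = list(range(N)); random.shuffle(rotor)    # a made-up rotor wiring
    rotor_inv = [rotor.index(i) for i in range(N)]   # wiring for the return journey
    pairs = list(range(N)); random.shuffle(pairs)
    reflector = [0] * N
    for x, y in zip(pairs[0::2], pairs[1::2]):       # the reflector pairs letters up
        reflector[x], reflector[y] = y, x

    def encode(message):
        out, offset = [], 0
        for ch in message:
            offset = (offset + 1) % N                        # the rotor steps on every key press
            p = (ord(ch) - A + offset) % N
            p = (rotor[p] - offset) % N                      # forward through the rotor
            p = reflector[p]                                 # bounce off the reflector
            p = (rotor_inv[(p + offset) % N] - offset) % N   # back through the rotor
            out.append(chr(p + A))
        return "".join(out)

    print(encode("SSSS"))                  # the same letter need not encode the same way twice
    print(encode(encode("ATTACKATDAWN")))  # encoding twice from the same start decodes it

One more property falls out of the reflector: the signal can never come back out of the contact it went in on, so no letter is ever encoded as itself. That harmless-looking quirk turns out to matter enormously below.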

The machine used 3 rotors, which the operator chose from a set of 5. This gives 5x4x3=60 options already for the initial setup. Then, each rotor can start out in any one of 26 different positions, giving 26x26x26=17,576 options. So far, this is only about a million combinations. It sounds like a lot, but it is at a level where, with a little ingenuity, you could simply brute-force all the possibilities. To make Enigma unbreakable a final layer of encryption was added: the plugboard. This used cables to pair up letters. For example, if E was paired with Q, then whenever E was typed the machine would treat it as a Q before sending it through the rotors. With this additional scrambling, the number of possible settings of an Enigma machine was over 159 million million million. There was no way the codebreakers could ever hope to check all possible settings in a reasonable amount of time. And, to make matters worse, the Germans changed their settings each day, meaning that there were only 24 hours to figure out each code before everything changed.
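
For the curious, the arithmetic can be checked directly. A quick sketch (assuming the standard ten plugboard cables pairing up twenty of the letters):

    from math import factorial

    rotor_orders = 5 * 4 * 3      # choose and order 3 rotors out of 5
    rotor_positions = 26 ** 3     # each rotor starts in one of 26 positions

    # Ways to pair 20 of the 26 letters into 10 unordered plugboard pairs:
    plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

    print(rotor_orders * rotor_positions)              # 1,054,560: about a million
    print(rotor_orders * rotor_positions * plugboard)  # about 1.59 x 10^20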

What Turing did to crack Enigma was to build a machine capable of doing logical calculations that would eliminate a vast number of the possible settings. This machine was called the bombe, and it built on earlier work by Polish cryptanalysts. Essentially it could try different rotor settings in turn and look for logical contradictions that would show a setting to be impossible. (For an analogy, think of doing a sudoku puzzle: you might postulate that a 6 goes in a box, but if that would result in two 3s in another row, which can’t happen, then it can’t be a 6.) Such contradictions, the simplest of which is sketched in code after the list, might include:

  • Deciding that a letter was encoded as itself. This was impossible due to the way the signals through Enigma were sent round the reflector.
  • Having an asymmetry in the plugboard. If the B is connected to the N, then the N must also be connected to the B.
  • Having a letter in the plugboard connected to its neighbour. Operators were told not to do this.
  • Having plugboard settings that were used the previous day. Operators had to change all the settings every day.
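
Here is a toy version of the simplest of these eliminations, the first one in the list: because no letter is ever encoded as itself, a guessed bit of plaintext (a ‘crib’, such as the ever-present weather report) cannot line up with the ciphertext at any position where one of its letters coincides with the ciphertext letter above it. The bombe applied this style of reasoning, on a vastly larger scale, to rotor and plugboard settings.

    def possible_positions(ciphertext, crib):
        positions = []
        for start in range(len(ciphertext) - len(crib) + 1):
            window = ciphertext[start:start + len(crib)]
            # Contradiction: some letter would have been encoded as itself.
            if all(c != p for c, p in zip(window, crib)):
                positions.append(start)
        return positions

    intercept = "QWRTZWETXBERIMHTK"   # made up for illustration
    print(possible_positions(intercept, "WETTER"))   # only a few alignments survive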

There were many more contradictions like these. There were also sloppy practices among the operators. They would often not set the rotors to a truly random initial position, but would use their names (e.g. “BOB”) or would simply turn the rotors a few places from the day before. There were also common phrases in messages, e.g. the word “ein” appeared in 90% of messages, and “Heil Hitler” also appeared often. Wikipedia has a fascinating description of the methods used to crack Enigma, as well as the methods the Germans used to make Enigma even harder to crack.

Nowadays, encryption of information is done using mathematical algorithms rather than mechanical machines. The most commonly used is called RSA, and relies on the difficulty of factorising large numbers into primes. Just as in Turing’s time, codebreakers rely on having a large amount of information to help them look for patterns, and on computers to do the decryption. The big difference between then and now, according to Tom Leinster, is that during the war the government was spying on the Nazis; today they are spying on us.
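
Here is the RSA idea in miniature, with comically small primes (real keys use primes hundreds of digits long, and a real implementation needs padding and much more besides):

    p, q = 61, 53                  # the secret primes
    n = p * q                      # 3233: published for all to see
    phi = (p - 1) * (q - 1)        # 3120: only computable if you can factorise n
    e = 17                         # public exponent
    d = pow(e, -1, phi)            # private exponent (Python 3.8+): e*d = 1 (mod phi)

    message = 65
    ciphertext = pow(message, e, n)   # anyone can encrypt using (n, e)
    print(pow(ciphertext, d, n))      # only the holder of d can decrypt -> 65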

Edward Snowden, who revealed details about the mass surveillance carried out by the NSA and GCHQ.

Earlier this year it was revealed that the UK’s Government Communications Headquarters (GCHQ) and the USA’s National Security Agency (NSA) had been systematically monitoring all of our emails, phone calls, texts, web browsing and bank transactions. Their goal was to “collect all of the signals, all of the time”, regardless of whether or not anybody had done anything suspicious. And, just as in the film, the codebreakers often had complete autonomy over the information they collected, with even the highest in command being unaware of what they were doing.

Just as in the film, this codebreaking was only possible because of errors in the encryption of the data. The information leaked by Snowden showed that the NSA had inserted a secret back door into the world’s most widely used cryptosystem, allowing it to break the encryption.

Alan Turing was a homosexual at a time when homosexuality was illegal, and his conviction and subsequent chemical castration were what led to his suicide in 1954. Today it is legal in the UK for two men to have sex, and even to get married. This change in our law has come about because of campaigning and activism, but it is always dangerous to be an activist, speaking out for something that is considered against the law. It can easily be argued that such campaigning is even more dangerous today, with the government carrying out mass surveillance of the entire population.

I shall end this post with the question asked by Tom Leinster in a piece he wrote for New Scientist in April: is it ethical for mathematicians to work for government intelligence agencies like GCHQ?

I look forward to hearing your comments!

Read The Imitation Game: Part 3 about Turing’s work in biology and pattern-formation.

The Imitation Game: Part 1

Today we had a wonderful event at the Filmhouse cinema in Edinburgh talking about The Maths of the Imitation Game. This is the film which tells the story of mathematician Alan Turing and his work codebreaking at Bletchley Park during the Second World War.

I’m going to admit up front: I liked the film. I’ve met other people who’ve been foaming at the mouth with anger over inaccuracies in the film, both with the historical aspects and also with how Turing was portrayed. I agree that the film isn’t perfect – they had to take a lot of liberties with how things were presented in order to make the story appeal to a mass audience, and events certainly did not happen the way the film depicts. I do take issue with the film’s insinuation that Turing assisted Soviet spying, and also with the suggestion that he would have told classified secrets to a police officer. But I also think the film gets a lot of things right: it tries its best to explain key aspects of the story, and it is wonderfully acted, especially by Cumberbatch as Turing.

If nothing else, the film is a great platform to start discussions about the important (theoretical) work that Turing did in his life and the repercussions of that work today. This is what our event at the Filmhouse was all about. We had three speakers talking about different aspects of his work, and I’ll summarise those discussions here. I’ll break it down into three blog posts as there is a lot to say about each one.

First up, Dr John Longley from the School of Informatics talked about Turing’s early work on computability. In our lives we are used to having different machines for different tasks: a toaster that warms your bread, a dishwasher to wash your plates and a blender to mash your food. We wouldn’t expect a single machine to perform all these different tasks. Yet, walk through to your study and you’ll find a machine on which you can do many things: play solitaire, manage your accounts, watch videos, search for prime numbers. Nowadays we aren’t at all surprised that a computer can do so much, but in Turing’s day such a machine was inconceivable. In fact, it was Turing who proposed that such a machine could exist. He had the idea of a ‘universal machine’ which could emulate the output of any other given machine.

Every machine takes some sort of input, performs a certain task and then gives an output. The toaster’s input is your bread, which is heated according to the settings you’ve chosen, and its output is some toasted bread. The Enigma machine takes as input the key presses of letters, it performs an encryption, and it outputs letters forming a coded message. For Turing’s universal machine, the input is a sequence of symbols encoding both the instructions for some other, simulated machine and the input to that machine. The output is then the answer that the simulated machine would have given if it had been run on that input. This idea that the input may itself comprise a set of instructions was groundbreaking. Today we simply call this kind of input ‘software’.
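
Here is a minimal sketch of the universal-machine idea in Python. The run function below is one fixed program, yet both the description of a machine (its table of rules) and that machine’s input tape arrive as data; hand it a different table and the very same run function behaves like a different machine. The example table is a trivial machine that flips every bit on its tape.

    def run(table, tape, state="start", head=0, max_steps=1000):
        tape = dict(enumerate(tape))                     # a sparse, unbounded tape
        for _ in range(max_steps):
            if state == "halt":
                return "".join(tape[i] for i in sorted(tape))
            symbol = tape.get(head, "_")                 # "_" marks a blank cell
            write, move, state = table[(state, symbol)]  # look up the rule to apply
            tape[head] = write
            head += 1 if move == "R" else -1
        raise RuntimeError("gave up: the machine may never halt")

    flip_bits = {                                        # the 'software'
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run(flip_bits, "10110"))   # -> 01001_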

Turing invented the universal machine in order to answer a long-standing problem in mathematics, posed by David Hilbert in 1928: can the answer to every mathematical question be determined mechanically? When he was only 23 years old, Turing showed that the answer to this question is no, and he did so by coming up with something called the halting problem.

Turing’s bombe: will it output the correct Enigma settings, or could it keep searching forever? This is the halting problem.

This simply asks: given a particular input to a machine, is it possible to determine whether the machine will ever halt and give an output? For example, in the film the machine (called “Christopher”) that Turing has invented is checking through possible settings of Enigma and everyone is waiting for it to tell them what the correct answer is. Will it ever stop, or will it keep whirring away for ever, stuck in some logical loop that prevents it from finishing? If you pull the plug, how can you be sure that in the next minute it wouldn’t have told you the answer? Turing proved that there is no single algorithm that can decide whether a given arbitrary program will halt for a given input. The halting problem is ‘undecidable’.
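
The heart of the proof is a lovely piece of self-reference, and it fits in a few lines. Suppose, for the sake of contradiction, that somebody hands us a function halts(program, data) that always answers correctly (the stub below is just a stand-in for that impossible function):

    def halts(program, data):
        raise NotImplementedError("no correct implementation can exist")

    def troublemaker(program):
        if halts(program, program):   # "if you predict that I halt..."
            while True:               # "...then I loop forever;"
                pass
        return "done"                 # "...if you predict that I loop, then I halt."

    # What should halts(troublemaker, troublemaker) answer? If it says True,
    # troublemaker loops forever; if it says False, troublemaker halts. Either
    # way the answer is wrong, so no such halts() can ever be written.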

The conclusion from his work is that there are some questions in mathematics that are unknowable – there is no computer program that will ever tell us whether they are true or false. However, this raises the question: is the human mind the same as a computer? Or could a brain do computations, do mathematics, that a machine could not? If so, what makes a brain different from a computer?

This question fascinated Turing and led him to invent The Imitation Game, which gives the film its title. Rather than asking the question “Can machines think?”, which is very abstract and hard to pin down, he instead asked whether it was possible for a machine to imitate a human so well that another human could not tell that it was really a machine. In a Turing test, a human interrogator puts a series of questions to a hidden player. Turing said that if the interrogator decides as often as not that the hidden player is a real person, then the machine has passed the test. Despite Turing predicting that computers would be built to pass the test by the year 2000, it was only in June this year (2014) that a computer was said to have passed it. This was a program masquerading as a 13-year-old Ukrainian boy, which fooled 33% of a panel of judges over the course of 5-minute conversations.

What do you think? Could computers come to imitate humans one day? Is it just a case of having more computing power, or is it possible that brains can do things that computers fundamentally cannot?

Many thanks to John Longley for the material on which this blog post is based.

Read The Imitation Game: Part 2 about Turing’s role in codebreaking during World War 2 and moral questions about codebreaking today…