A while ago I gave a talk at IAP in their wonderful amphitheatre (slides at the bottom). The colloquium series at IAP is a very serious affair with lots of top scientists so I felt pressure to try and say something interesting (Jim Peebles gave a lovely talk the following week on the future of cosmology to celebrate the 75th anniversary of IAP).
I have been thinking a lot about the future recently, giving an AIMS public talk on the issues facing society in general due to the growth of Big Data and machine learning which you can watch here. I think this is really interesting and important, so I decided that should be the topic of my IAP talk.
The only problem was, I had no idea what to say! Machine learning, as it is currently applied in astronomy and cosmology, is fairly straightforward. Usually one has a classification problem and some training data. You pick some features that you think capture the important properties of your data, you pick an algorithm (SVM, LDA, neural networks etc…), you train your algorithm on your training data, apply the result to your data and then write your paper. Typically you would see how things change with different training data sets or algorithms, and you end up with a paper something like this one which we did on supernova classification. Not very stimulating for a general audience.
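To make the workflow concrete, here is a minimal sketch of that pipeline using scikit-learn, with synthetic features standing in for real survey data (the feature names and toy labels are invented for illustration, not taken from the supernova paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical features, e.g. peak brightness and decline rate per object.
n = 200
features = rng.normal(size=(n, 2))
labels = (features[:, 0] + features[:, 1] > 0).astype(int)  # toy classes

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf")               # pick an algorithm...
clf.fit(X_train, y_train)             # ...train it on the training data...
accuracy = clf.score(X_test, y_test)  # ...then apply it to held-out data.
print(f"held-out accuracy: {accuracy:.2f}")
```

Swapping `SVC` for another classifier, or regenerating the features, is exactly the "try different algorithms and training sets" loop described above.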
So I decided to look forward 20 years and ask the question, will computers and/or robots ever do “real” science? As you may imagine, this turned out to be quite a controversial topic and the most vocal people were certainly against the idea, but that is probably not surprising.
It is very human and alluring to think that we are special, and that what our best scientists or artists do is somehow unique. Yet much of science is based on the refutation of these kinds of ideas: the Copernican Revolution, Darwinian evolution and the scientific method are essentially all based on the rejection of the notion that you, I, we, or anyone in particular, are fundamentally special.
The Copernican revolution rejects the idea that we live at a special place or time in the universe. Darwinian evolution rejects the idea that we are fundamentally different from the animals. The Scientific Method rejects knowledge that cannot be reproduced anywhere or anytime by anybody. And yet, many physicists fervently believe that some aspects of what humans do will never be done by “computers”. Often it is “creativity” that comes up as the first candidate for a purely human activity.
But never is a dangerous term. Things change. A lot. It is worth remembering that the word “computer” goes back to the 1600s and simply meant someone who computes. In the late 1800s the word was typically used in astronomy to denote someone (often a woman) who would do tedious calculations. Now the idea of a human emulating a digital computer seems strange (a tangent we could follow down the mechanical turking avenue, but won’t!), but it illustrates how non-intuitive change can be over long timescales.
There has also been remarkable progress towards computer/robotic science and automated reasoning. The robotic scientist ADAM is able to implement the full scientific method, albeit in a well-defined search space, and produced what is probably the first non-human contribution to knowledge. You can see a video of ADAM in action here.
In preparing for my talk I came across a couple of very interesting online videos that are relevant to this question. The first is a talk by Gregory Chaitin that I actually saw in person at the Perimeter Institute on the search for the perfect language. The second is a talk by Douglas Hofstadter (of Gödel, Escher, Bach fame) on analogy as the core of cognition.
So how would a computer do “great” science? One way would be to have a complete encoding or feature space for concepts or ideas. This is close to Leibniz’s idea of the Characteristica Universalis. Then a computer algorithm could simply apply some clever search algorithms to find the concepts or ideas that best fit the observable data. To do so, it would need to be able to compute the implications of a given idea. For example, given an action, it would compute the observable implications, compare with available data, compute a likelihood, and then jump to a new theory.
This is hard to imagine, but there has been remarkable progress in automated theorem proving software. I can (sort of!) imagine a robotic scientist that proposes theories through some encoding of the space of relevant concepts, derives consequences using allowed logical operations until it produces something that can be compared with data, computes the likelihood of the theory given the data, and then adapts the theory based on this outcome.
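The propose–compare–adapt loop above can be caricatured in a few lines of code. This is a toy illustration only, nothing like a real theorem prover: the “space of theories” is just a hand-written list of candidate laws y = f(x), and the machine picks whichever one makes some noisy observations most likely under an assumed Gaussian noise model (all names and the noise scale are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y_obs = x**2 + rng.normal(scale=0.05, size=x.size)  # data secretly generated by x^2

# Hypothetical encoding of the concept space: a list of candidate theories.
theories = {
    "linear":    lambda x: x,
    "quadratic": lambda x: x**2,
    "sinusoid":  lambda x: np.sin(x),
}

def log_likelihood(pred, obs, sigma=0.05):
    # Gaussian log-likelihood of the observations given the theory's predictions.
    return -0.5 * np.sum(((obs - pred) / sigma) ** 2)

scores = {name: log_likelihood(f(x), y_obs) for name, f in theories.items()}
best = max(scores, key=scores.get)
print(f"best theory: {best}")  # the quadratic law wins on this data
```

The hard part, of course, is the step this sketch dodges entirely: generating genuinely new candidate theories rather than ranking a fixed menu, which is exactly the “cleverly parametrise the space of ideas” problem.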
Perhaps this seems implausible and perhaps it is. It critically relies on the idea that one can cleverly parametrise the space of ideas, which might be impossible. But a great deal of research today is fairly algorithmic and I suspect that at a minimum the “bottom 50%” least-creative research could be done by computers within 10-20 years. Peter Norvig has a very interesting discussion of related issues in response to Noam Chomsky here.
It is worth remembering that scientific papers are supposed to be as logical and clear as possible. They should follow infallible logic, starting with axioms, deriving propositions, comparing with data and drawing clear conclusions. In writing a paper, humans attempt to emulate a digital ideal, interspersed with simple pictures and creative prose to excite and provide insight for their fellow analogue colleagues. A computer proving a theorem has no need of simple pictures or creative prose. If anyone is going to write a good logical paper, I think it is going to be a computer, with no need to fudge the results, fake the data or publish before it is ready for fear of perishing.
If you are interested you can see the slides of my talk here:
Updated 14 November 2013.