Mark G. Jackson

I was born and raised in beautiful Portland, Oregon. I did my undergraduate degree at Duke University, then completed my Ph.D. in theoretical physics at Columbia University under Brian Greene. My research in superstring theory and cosmology continued with postdoctoral positions at the Fermi National Accelerator Laboratory, the Lorentz Institute for Theoretical Physics, the Paris Centre for Cosmological Physics, the Institut d’Astrophysique de Paris, and my current position at AIMS. I just founded the world’s first physics fundraising agency, Fiat Physica, Latin for “Let Physics Be Done.”

When not doing physics I also love traveling, improv comedy, music by Ennio Morricone, the smell of rain, opera, daydreaming on public transportation, reading novels with plot twists, and Indian food. I can speak 1.5 languages (including English).

What inspired you to study science, and astrophysics?

Newton once said, “If I have seen further than others it is because I have stood on the shoulders of giants.” This is often quoted as an example of how modest he was, but actually it was the opposite: Newton was making fun of his hunchback rival Robert Hooke. Yes, even brilliant scientists can be jerks sometimes.

But the spirit of the quote is true. Each generation learns the knowledge available at the time, then questions it, teases it, pulls it, pushes it, squishes it until cracks form. We are then required to produce an answer capable of explaining everything known before while also withstanding the new problems. What seems obvious to us now was once a revolutionary insight by a single person, and what seems impossible to us now will one day be laughed at.

I love that science gives us this connection with previous generations, and something to pass on to future generations. It’s like one of those “Generation Quilts” in which each thread is a little bit of knowledge. And if you’re lucky enough to add a few threads of your own, they will be there for all time.

Do you have a role model in science?

Linus Pauling. The only person to have won two unshared Nobel Prizes (Chemistry and Peace). And a fellow Oregonian.

A quote that inspires you?

“We are all of us in the gutter, but some of us are looking at the stars.” — Oscar Wilde

Research interests

I completed my doctorate in physics researching whether superstring theory can naturally produce the three large spatial dimensions that we observe. Since then I have turned my attention to other topics including cosmic superstrings, signatures of new physics in the cosmic microwave background, holography and spin-statistics.

Favourite reference papers:

I always loved the article by Gross and Mende about stationary phase solutions to the scattering amplitudes. When you study the equations of particle interactions they are often very complicated, especially at high energies, sometimes requiring powerful computers to estimate the answer. But what this article finds is that superstring interactions can get *simpler* at high energies. In fact they can become so simple they can be solved with pencil and paper, and there is a very satisfying physical picture of how the strings behave during the interaction. I loved the way it turned the usual lore on its head.


Kai Staats


I am an entrepreneur, inventor, writer, and filmmaker. In 1999 I co-founded Terra Soft Solutions, developer of Yellow Dog Linux, and served as its CEO for ten years. I helped to shape Linux for the Power architecture in the high-performance computing space.

Terra Soft systems were used to process images from the Mars rovers at NASA JPL, to conduct real-time sonar imaging on board U.S. Navy submarines, to train both military and commercial pilots for Boeing, and to conduct bioinformatics research at DoE labs. In 2009 I sold the largest Sony PS3 cluster in the world to the U.S. Air Force Research Lab in Rome, New York.

I am now the Principal Manager of Over the Sun, LLC, a research and film production firm. Internal to OTS, my associates and I investigate product development opportunities, some of which lead to real-world implementation while others provide experience and knowledge for future internal and client endeavours.

Building upon my ten years of experience working with leading scientists and researchers across the U.S., I have returned to my passion for science as a storyteller, capturing the curiosity, passion, and drive of those who work a lifetime to better understand the inner workings of the universe around them.

What inspired you to study science, and astrophysics?

An insatiable, child-like desire to know how it all works.

Do you have a role model in science?

My high school physics professor Dan Heim, and my lifelong mentor and friend Ron Spomer, a renowned wildlife conservationist, writer and photographer.

A quote that inspires you?

“It takes a village to raise a child,” because it reinforces the reality that no amount of social networking, no advanced gadget or supercomputer will ever replace our intrinsic need for human parents, peers, mentors, and associates. As our population grows (out of control), it will be this fundamental parameter of healthy child rearing that will give our species the best chance for long-term survival, both here on Earth and as we head to the stars.

Research interests

With the formation of Over the Sun, LLC, I have since 2009 researched portable personal computing, international communication systems, and large-scale renewable energy generation and storage.

Now, in my Master’s work at AIMS, I am developing a model for scalable, bio-regenerative systems for long-duration human space travel and colonization of other planets such as Mars. This immense area of study will be narrowed, with initial consideration of three areas: resource generation / allocation (O2, food production, caloric intake, electrical power, etc.); social networks over the evolution of growing colonies (drawing correlations to Elman Service’s Bands, Tribes, Chiefdoms, and States); or the genetic viability of multi-generational human travel to distant stars.

Fortunately, the data for all three of these is readily available from a small network of researchers I have built prior to starting this project, and from public resources such as NASA’s web archives. My work will be the integration of this data into a scalable model which attempts to showcase periods of increasing or decreasing efficiency, improvements, and total system breakdowns.

Favourite reference papers:

To be perfectly honest, I have been reading mostly lay-person summaries of full research projects without investing in one particular journal. If I am allowed to update this particular point, I will do so in a few months’ time and have an answer worthy of a proper student (I hope :).


AIMS Seminar: Dr Ignacy Sawicki

This week we had a stimulating seminar by Iggy Sawicki, our new postdoc, who packed out our cosy seminar room with a talk entitled “Testing dark energy as a function of scale”. You can see the slides of his talk below, admire his relaxed delivery style, and read his humorous take on life and cosmology here.


Machine learning and the future of science

A while ago I gave a talk at IAP in their wonderful amphitheatre (slides at the bottom). The colloquium series at IAP is a very serious affair with lots of top scientists, so I felt pressure to try and say something interesting (Jim Peebles gave a lovely talk the following week on the future of cosmology to celebrate the 75th anniversary of IAP).

I have been thinking a lot about the future recently, giving an AIMS public talk on the issues facing society in general due to the growth of Big Data and machine learning, which you can watch here. I think this is really interesting and important, so I decided that should be the topic of my IAP talk.

The only problem was, I had no idea what to say! Machine learning, as it is currently applied in astronomy and cosmology, is fairly straightforward. Usually one has a classification problem and some training data. You pick some features that you think capture the important properties of your data, you pick an algorithm (SVM, LDA, neural networks, etc.), you train your algorithm on your training data, apply the result to the rest of your data and then write your paper. Typically you would see how things change with different training data sets or algorithms, and you end up with a paper something like this one which we did on supernova classification. Not very stimulating for a general audience.
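
To make that recipe concrete, here is a minimal sketch of such a classification pipeline in Python with scikit-learn. The features and labels are random placeholders (standing in for, say, light-curve summary statistics and spectroscopically confirmed types), not anything from our actual supernova paper.

```python
# Minimal sketch of the classify-with-training-data workflow described above.
# The feature matrix and labels are synthetic placeholders, not real survey data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # hypothetical features for 1000 objects
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical "true" classes

# Split into a labelled training set and the "survey" objects to be classified.
X_train, X_survey, y_train, y_survey = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = SVC(kernel="rbf", C=1.0)     # pick an algorithm (here a support vector machine)
clf.fit(X_train, y_train)          # train it on the training data
y_pred = clf.predict(X_survey)     # apply the result to the rest of the data

print(classification_report(y_survey, y_pred))
```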

So I decided to look forward 20 years and ask the question, will computers and/or robots ever do “real” science? As you may imagine, this turned out to be quite a controversial topic and the most vocal people were certainly against the idea, but that is probably not surprising.

It is very human and alluring to think that we are special, and that what our best scientists or artists do is somehow unique. Yet much of science is based on the refutation of these kinds of ideas: the Copernican Revolution, Darwinian evolution and the scientific method are essentially all based on the rejection of the notion that you, I, we, or anyone in particular, are fundamentally special.

The Copernican Revolution rejects the idea that we live at a special place or time in the universe. Darwinian evolution rejects the idea that we are fundamentally different from the animals. The scientific method rejects knowledge that cannot be reproduced anywhere or anytime by anybody. And yet, many physicists fervently believe that some aspects of what humans do will never be done by “computers”. Often it is “creativity” that comes up as the first candidate for a purely human activity.

But never is a dangerous term. Things change. A lot. It is worth remembering that the word “computer” goes back to the 1600s and simply meant someone who computes. In the late 1800s the word was typically used in astronomy to denote someone (often a woman) who would do tedious calculations. Now the idea of a human emulating a digital computer is strange (a tangent we could follow down the mechanical turking avenue, but won’t!), but it illustrates how non-intuitive change can be over long timescales.

There has also been remarkable progress towards computer/robotic science and automated reasoning. The robotic scientist ADAM is able to implement the full scientific method, albeit in a well-defined search space, and produced what is probably the first non-human contribution to knowledge. You can see a video of ADAM in action here.

In preparing for my talk I came across a couple of very interesting online videos that are relevant to this question. The first is a talk by Gregory Chaitin, which I actually saw in person at the Perimeter Institute, on the search for the perfect language. The second is a talk by Douglas Hofstadter (of Gödel, Escher, Bach fame) on analogy as the core of cognition.

So how would a computer do “great” science? One way would be to have a complete encoding or feature space for concepts or ideas. This is close to Leibniz’s idea of the Characteristica Universalis. A computer algorithm could then apply some clever search algorithms to find the concepts or ideas that fit the observable data best. To do so, it would need to be able to compute the implications of a given idea. For example, given an action, it would compute the observable implications, compare them with the available data, compute a likelihood, and then jump to a new theory.

This is hard to imagine, but there has been remarkable progress in automated theorem-proving software. I can (sort of!) imagine a robotic scientist that proposes theories through some encoding of the space of relevant concepts, derives their consequences using allowed logical operations until it produces something that can be compared with data, computes the likelihood of the theory given the data, and then adapts the theory based on this outcome.
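
Purely as a toy illustration of that propose / predict / compare / adapt loop, here is a sketch in which the “theories” are just polynomials with adjustable coefficients and the “data” are synthetic. Everything here is a stand-in; a real system would have to search a vastly richer space of concepts.

```python
# Toy version of the loop: propose a theory, compute its predictions,
# compare with data via a likelihood, and keep jumping to better theories.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.05
data = 2.0 * x**2 - x + rng.normal(scale=sigma, size=x.size)  # hidden "truth" plus noise

def log_likelihood(coeffs):
    """Gaussian log-likelihood of a polynomial 'theory' given the data."""
    prediction = np.polyval(coeffs, x)
    return -0.5 * np.sum(((data - prediction) / sigma) ** 2)

theory = np.zeros(3)              # start from the trivial quadratic theory
best = log_likelihood(theory)
for _ in range(20000):
    proposal = theory + rng.normal(scale=0.05, size=theory.size)  # adapt the theory
    ll = log_likelihood(proposal)
    if ll > best:                 # greedily jump to any theory that explains the data better
        theory, best = proposal, ll

print("recovered coefficients:", np.round(theory, 2))  # close to the true [2, -1, 0]
```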

Perhaps this seems implausible, and perhaps it is. It critically relies on the idea that one can cleverly parametrise the space of ideas, which might be impossible. But a great deal of research today is fairly algorithmic, and I suspect that at a minimum the “bottom 50%” least-creative research could be done by computers within 10-20 years. Peter Norvig has a very interesting discussion of related issues in response to Noam Chomsky here.

It is worth remembering that scientific papers are supposed to be as logical and clear as possible. They should follow infallible logic, starting with axioms, deriving propositions, comparing with data and drawing clear conclusions. In writing a paper, humans attempt to emulate a digital ideal, interspersed with simple pictures and creative prose to excite and provide insight for their fellow analogue colleagues. A computer proving a theorem has no need of simple pictures or creative prose. If anyone is going to write a good logical paper, I think it is going to be a computer, with no need to fudge the results, fake the data or publish before it is ready for fear of perishing.

If you are interested you can see the slides of my talk here:

Updated 14 November 2013.


Dr Ignacy Sawicki

I grew up in Poland in the final years of communism, moving to the UK for high school in the final years of Margaret Thatcher. I followed up a physics degree at Cambridge with a short digression into finance. Changing my mind yet again, I travelled to the US to study for a PhD at the University of Chicago under the supervision of Sean Carroll and Wayne Hu, where I worked on models of modifications of gravity which could account for the late-time acceleration of the universe. Following postdocs at New York University and the University of Heidelberg in Germany, I now find myself standing at the boundary of the Atlantic and Indian Oceans, testing which way the wind is blowing.

What inspired you to study science, and astrophysics?

I think that I suffer from an existential need to categorise the world around me, order it in some way and put it in boxes. Science provided a scheme. Physics appealed because the number of labels was much smaller than elsewhere. It’s all billiard balls or the harmonic oscillator. To think that we can understand the universe in this way seemed audacious and still seems unbelievable.

Do you have a role model in science?

Unfortunately, only slow-roll models are allowed by the latest data.

A quote that inspires you?

“It is dangerous to be right in matters in which the authorities are wrong.” — Voltaire

Research interests

Dark energy and theories of gravity. I think a lot about scalar fields in the context of general relativity, their health and purpose. And I worry whether we will ever be able to unambiguously test these ideas.

Favourite reference papers:

Sad to say, increasingly Facebook.


Discovery of a new supernova with SALT

We are happy to report our first confirmed discovery of a supernova for the international Dark Energy Survey (DES) using the Southern African Large Telescope (SALT). DES, which is based in Chile, has just started science operations and hopes to discover thousands of supernovae over the next five years, including many of the Type Ia’s that were the basis of the discovery of the acceleration of the Universe that led to the 2011 Nobel Prize in Physics.

Automated computer algorithms scan each night of observations from the Blanco telescope at the Cerro Tololo Inter-American Observatory in search of things that were not there before. When they find something new they trigger an alert. Unfortunately the great majority of these alerts are false alarms, and in addition there are many interlopers: asteroids, variable stars and active galactic nuclei all mimic the signature of a supernova. Confirming whether a candidate that triggered an alert is a real supernova requires a large telescope such as SALT to take the object’s “fingerprint” and identify it conclusively.

The first fingerprinting with SALT of a DES candidate happened last week. DES13C1feu, as it was prosaically named, was a supernova that exploded about 780 million years ago in a galaxy far, far away. The news of the dying star traveled across the vast emptiness of space until the photons ended up in the Blanco telescope, triggering an alert in the computers of the DES team and causing a cascade of events that culminated in SALT quickly fingerprinting the candidate supernova.

Figure 1 is an animated image of the supernova as it appears in one of the spiral arms of the host galaxy. As is often the case, the supernova is approximately as bright as the entire host galaxy (which is why we can see them at vast distances).


News of a cosmic death 780 Million years ago: before and after animation of the supernova as it appears in one of the spiral arms of the host galaxy. (Credit M. Smith, S. Crawford, E. Kasai, DES team)


SALT spectrum of the supernova (black) compared to the best-fitting template (in red) showing that it is a Type Ic supernova at redshift 0.059 (Credit M. Smith, S. Crawford, E. Kasai)


The “fingerprinting” process proceeds by taking a spectrum, which splits the light into its constituent wavelengths. After calibration the resulting curve is compared to a database of known supernovae, allowing it to be identified and aged. The spectrum, along with the best-fitting template from the known-object database, is shown in Figure 2.
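
For the curious, the bare logic of such a template comparison can be sketched in a few lines of Python. Real classifications are done with dedicated tools and far more careful calibration; the function below, and its inputs, are purely illustrative.

```python
# Illustrative sketch of spectral "fingerprinting": redshift each template,
# resample it onto the observed wavelength grid, and keep the combination of
# template and redshift with the smallest chi-squared.
import numpy as np

def best_match(obs_wave, obs_flux, obs_err, templates, redshifts):
    """templates maps a label (e.g. 'Ia +5d') to (rest_wavelength, rest_flux) arrays."""
    best = (None, None, np.inf)
    for label, (rest_wave, rest_flux) in templates.items():
        for z in redshifts:
            shifted = rest_wave * (1.0 + z)                  # move template to trial redshift
            model = np.interp(obs_wave, shifted, rest_flux)  # resample onto observed grid
            # Fit the overall amplitude analytically before computing chi-squared.
            scale = np.sum(model * obs_flux / obs_err**2) / np.sum(model**2 / obs_err**2)
            chi2 = np.sum(((obs_flux - scale * model) / obs_err) ** 2)
            if chi2 < best[2]:
                best = (label, z, chi2)
    return best  # (best-fitting label, best-fitting redshift, chi-squared)
```

Because the amplitude is fitted analytically, only the template label and the trial redshift need to be searched over, which is how a single comparison can return both a type and an approximate redshift, as in Figure 2.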

We look forward to identifying and typing many more DES candidates with SALT!

Team members in this project include Mat Smith (previously an AIMS postdoc), Steve Crawford, Eli Kasai, Roy Maartens and Patrice Okouma. You can read the official announcement of the supernova discovery here.

UPDATE November 10 2013: We have confirmed our first Type Ia supernova with SALT, at a redshift of z=0.15.


The super-compressible Cosmic Microwave Background

One of the most striking features of the Cosmic Microwave Background (CMB) is that it is incredibly compressible from an information-content point of view. The Planck satellite produced maps with of order a billion pixels, whose information could be compressed almost perfectly into a power spectrum of order one thousand real numbers.

The Planck power spectrum

This already is a massive compression. But in addition, most of this information can be compressed further into just six parameters of the standard model, yielding a total compression of about one billion to one. This is both remarkable and annoying, because we want to be surprised and find things that we can’t explain. And if there are things we can’t explain, we want clear signals of them in the data, not just vague hints of their existence.

Anyway, to illustrate just how efficient the compression is, I took the binned WMAP 9 TT power spectrum data – I refused to use the Planck power spectra because they are available only in a FITS file (which is like keeping a fire extinguisher in a safe) – and did some symbolic regressions with the cool Eureqa tool to try to find some relatively simple analytic functions to fit the data. After some reasonably extensive searching involving a few million generations with multiple restarts I was able to get some “reasonable” fits. One was:

$D_{\ell} = 627.95 + 5.16\,\ell + 4248.78 \exp\!\left(-(0.0092\,\ell - 1.92)^2\right) - 0.004\,\ell^2 - 0.64\,\ell \cos\!\left(\cos(\cos(5.16\,\ell)) - \sqrt{\ell}\right)$

where $D_{\ell}$ is the usual set of Legendre polynomial coefficients of the angular two-point correlation function, scaled by $\ell(\ell + 1)$.

This contains eight parameters, and it isn’t often you see \cos(\cos(\cos( . ))) being used. The fit is shown against the WMAP data below.

Initial symbolic fit to the WMAP data

This wasn’t a very exhaustive search, but it illustrates how non-trivial it is to fit the CMB power spectrum beautifully with just six free parameters, especially when you consider that those six parameters are filtered through the Einstein equations, thermodynamics, the Boltzmann equation and about fourteen billion years of slow cooling and massaging.

As an aside, it seems to me like a good sign if a theory matches the data much better than any simple analytic formulae or parametrizations. When I was looking at the fit I was initially a little surprised to see the jagged appearance of the symbolic fit, shown by the blue line in the first figure. This, I realised, was because it was just drawing straight lines between the function values at the ells of the 50 or so binned WMAP data points. So instead of plotting the fit against the WMAP central ell values only, I plotted a zoom of it against all the relevant ells and wow… suddenly that cos(cos(cos( ))) really pops out…

Zoom of the first symbolic fit showing the high-frequency oscillations.
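
If you want to see those oscillations yourself, it only takes a few lines to evaluate the quoted formula at every multipole. The sketch below assumes the coefficients exactly as written above and picks an arbitrary stretch of ells.

```python
# Evaluate the first symbolic fit at consecutive multipoles. The coefficients
# are those quoted above; the range of ells chosen is arbitrary.
import numpy as np

def D_ell_fit(ell):
    ell = np.asarray(ell, dtype=float)
    return (627.95 + 5.16 * ell
            + 4248.78 * np.exp(-(0.0092 * ell - 1.92) ** 2)
            - 0.004 * ell ** 2
            - 0.64 * ell * np.cos(np.cos(np.cos(5.16 * ell)) - np.sqrt(ell)))

# Neighbouring multipoles jitter because cos(cos(cos(5.16*ell))) changes almost
# randomly from one integer ell to the next; the ~50 binned points sample this
# far too sparsely to show it.
print(np.round(D_ell_fit(np.arange(800, 811)), 1))
```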

This is actually rather salutary and shows the potential dangers of binning data before fitting to models. Binning isn’t model-independent, because it opens up a large amount of high-frequency model phase-space which would actually not be a good fit to the full dataset. Rerunning the symbolic regression with the full 1100 or so unbinned WMAP data points instead gave the following good fit:

$D_{\ell} = 3861.82 + 3.02\,\ell \sin(1.88 - 0.015\,\ell) - 2.39\,\ell - 2304.25 \sin(1.88 - 0.015\,\ell) - 740.99 \sin(1.88 - 0.015\,\ell) \cos(5.29 + 0.006\,\ell)$

which has none of the offending high-frequency terms, at the expense of 11 free parameters; it is shown below. None of these symbolic fits is particularly amazing, which illustrates the elegant minimalism of the theoretical predictions, especially when you consider that the theoretical model also fits the polarization spectra (TE and EE) with the same parameters. Now if only we understood the dark matter and dark energy that go into these predictions!

Fit to all the WMAP data which no longer has the high-frequency oscillations.