The network takeover
Nature Physics 8, 14–16 (2012) doi:10.1038/nphys2188
Published online 22 December 2011
Reductionism, as a paradigm, is expired, and complexity, as a field, is tired. Data-based mathematical models of complex systems are offering a fresh perspective, rapidly developing into a new discipline: network science.
Reports of the death of reductionism are greatly exaggerated. It is so ingrained in our thinking that if one day some magical force should make us all forget it, we would promptly have to reinvent it. The real worry is not with reductionism, which, as a paradigm and tool, remains rather useful. It is necessary, but no longer sufficient. Set against better ideas now emerging, it has become a burden.
“You never want a serious crisis to go to waste,” Rahm Emanuel, at that time Obama's chief of staff, famously proclaimed in November 2008, at the height of the financial meltdown. Indeed, forced by an imminent need to go beyond reductionism, a new network-based paradigm is emerging that is taking science by storm. It relies on datasets that are inherently incomplete and noisy. It builds on a set of sharp tools, developed during the past decade, that have proved just as useful in search engines as in cell biology. It is making a real impact from science to industry. Along the way, it offers a fresh approach to a century-old problem: complexity.
A better understanding of the pieces cannot solve the difficulties that many research fields currently face, from cell biology to software design. There is no 'cancer gene'. A typical cancer patient has mutations in a few dozen of about 300 genes, an elusive combinatorial problem whose complexity is increasingly a worry to the medical community. No single regulation can legislate away the economic malady that is slowly eating at our wealth. It is the web of diverging financial and political interests that makes policy so difficult to implement. Consciousness cannot be reduced to a single neuron. It is an emergent property that engages billions of synapses. In fact, the more we know about the workings of individual genes, banks or neurons, the less we understand the system as a whole. Consequently, an increasing number of the big questions of contemporary science are rooted in the same problem: we hit the limits of reductionism. No need to mount a defence of it. Instead, we need to tackle the real question in front of us: complexity.
The complexity argument is by no means new. It has re-emerged repeatedly during the past decades. The fact that it is still fresh underlines the lack of progress achieved so far. It also stays with us for good reason: complexity research is a thorny undertaking. First, its goals are easily confusing to the outsider. What does it aim to address — the origins of social order, biological complexity or economic interconnectedness? Second, decades of research on complexity were driven by big, sweeping theoretical ideas, inspired by toy models and differential equations that ultimately failed to deliver. Think synergetics and its slave modes; think chaos theory, ultimately telling us more about unpredictability than how to predict nonlinear systems; think self-organized criticality, a sweeping collection of scaling ideas squeezed into a sand pile; think fractals, hailed once as the source of all answers to the problems of pattern formation. We learned a lot, but achieved little: our tools failed to keep up with the shifting challenges that complex systems pose. Third, there is a looming methodological question: what should a theory of complexity deliver? A new Maxwellian formula, condensing into a set of elegant equations every ill that science faces today? Or a new uncertainty principle, encoding what we can and what we can't do in complex systems? Finally, who owns the science of complexity? Physics? Engineering? Biology, mathematics, computer science? All of the above? Anyone?
These questions have resisted answers for decades. Yet something has changed in the past few years. The driving force behind this change can be condensed into a single word: data. Fuelled by cheap sensors and high-throughput technologies, the data explosion that we witness today, from social media to cell biology, is offering unparalleled opportunities to document the inner workings of many complex systems. Microarray and proteomic tools offer us the simultaneous activity of all human genes and proteins; mobile-phone records capture the communication and mobility patterns of whole countries [1]; import–export and stock data condense economic activity into easily accessible databases [2]. As scientists sift through these mountains of data, we are witnessing an increasing awareness that if we are to tackle complexity, the tools to do so are being born right now, in front of our eyes. The field that has benefited most from this data windfall is often called network theory, and it is fundamentally reshaping our approach to complexity.