Mastering chaos: calculating probability in complex systems

Wednesday, March 21, 2018

Entropy-based generating Markov partitions for complex systems

Chaos 28, 033611 (2018)

Nicolás Rubido¹, Celso Grebogi², and Murilo S. Baptista²

¹Instituto de Física de Facultad de Ciencias (IFFC), Universidad de la República (UdelaR), Iguá 4225, Montevideo, Uruguay

²Institute for Complex Systems and Mathematical Biology (ICSMB), King's College, University of Aberdeen (UoA), AB24 3UE Aberdeen, United Kingdom

Source: Nicolás Rubido


Finding the correct encoding for a generic dynamical system's trajectory is a complicated task: the symbolic sequence needs to preserve the invariant properties of the system's trajectory. In theory, the problem is solved once a Generating Markov Partition (GMP) is obtained, but a GMP is only defined when the stable and unstable manifolds are known with infinite precision and for all times. However, these manifolds usually form highly convoluted Euclidean sets, are a priori unknown, and, as in any real-world experiment, measurements are made with finite resolution and over a finite time-span. The task becomes even harder if the system is a network of interacting dynamical units, namely, a high-dimensional complex system. Here, we tackle this task and solve it by defining a method to approximately construct GMPs for any complex system's finite-resolution and finite-time trajectory. We critically test our method on networks of coupled maps, encoding their trajectories into symbolic sequences. We show that these sequences are optimal because they minimise both the loss of information and the addition of spurious information. Consequently, our method allows us to approximately calculate the invariant probability measures of complex systems from the observed data. Thus, we can efficiently define complexity measures that are applicable to a wide range of complex phenomena, such as the characterisation of brain activity from electroencephalogram signals measured at different brain regions or the characterisation of climate variability from temperature anomalies measured at different Earth regions.
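The core idea of encoding a trajectory into a symbolic sequence that preserves the invariant measure can be illustrated on the simplest possible case: a minimal sketch (not the paper's GMP construction for networks) using the single fully chaotic logistic map, for which the binary partition at x = 1/2 is known to be generating. All names and parameters below are illustrative:

```python
import numpy as np
from collections import Counter

# Fully chaotic logistic map, x -> 4 x (1 - x). For this map the
# binary partition at x = 1/2 is known to be generating, so the
# symbolic encoding below preserves the invariant measure.
def logistic(x):
    return 4.0 * x * (1.0 - x)

# A finite-time, finite-resolution trajectory, as in any experiment.
n_steps = 50_000
x = 0.3
traj = np.empty(n_steps)
for i in range(n_steps):
    x = logistic(x)
    traj[i] = x

# Encode: symbol 0 if x < 1/2, symbol 1 otherwise.
symbols = (traj >= 0.5).astype(int)

# Estimate the probabilities of length-3 symbol blocks (words); for
# this generating partition, each of the 8 words has measure 1/8.
L = 3
words = Counter(tuple(symbols[i:i + L]) for i in range(len(symbols) - L))
total = sum(words.values())
probs = {w: c / total for w, c in words.items()}
for w in sorted(probs):
    print(w, round(probs[w], 3))
```

The estimated word probabilities all come out close to 1/8, which is what a generating partition guarantees; a badly chosen partition would distort these frequencies and hence any entropy computed from them.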

The use of measures from Information Theory for the analysis of complex systems requires the estimation of probabilities. In practice, these probabilities need to be derived from finite data-sets, such as electroencephalogram (EEG) signals coming from different brain regions, electrocardiogram (EKG) signals coming from the heart, or temperature anomalies coming from different Earth regions. The complex systems in these cases are, respectively, the brain, the heart, and the Earth's climate, all of them systems composed of many dynamically interacting components. The main reason for using information-theoretic measures to analyse complex systems is that they help to better understand and predict the systems' behaviour and functioning. However, calculating probabilities from observed data is never straightforward; in particular, until now we have lacked practical ways to define them without losing useful information (or adding meaningless information) in the process. In order to minimise these spurious additions or losses, we propose here a method to derive these probabilities optimally. Our method performs an entropy-based encoding of the measured signals, transforming them into easy-to-handle symbolic sequences that contain most of the relevant information about the system's dynamics. Consequently, we can compute Information Theory measures, or any other spatio-temporal average, whenever we analyse a complex system.
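As a rough sketch of what an entropy-based encoding means in practice (a deliberately simplified stand-in, not the paper's full method), one can scan candidate partition thresholds for a measured signal and keep the one whose symbolic sequence has maximal block entropy, i.e. the binarisation that discards the least information. The signal below is a hypothetical random walk standing in for, e.g., an EEG channel:

```python
import numpy as np
from collections import Counter

def block_entropy(seq, L=2):
    # Shannon entropy (in bits) of the length-L words of a symbol sequence.
    words = Counter(tuple(seq[i:i + L]) for i in range(len(seq) - L))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

# Hypothetical "measured signal": a random walk standing in for an
# EEG channel or a temperature-anomaly time series.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(20_000))

# Entropy-based binarisation: among candidate thresholds, keep the one
# whose symbolic sequence has the largest block entropy, i.e. the
# encoding that loses the least information about the signal.
candidates = np.quantile(signal, np.linspace(0.05, 0.95, 19))
best_t = max(candidates, key=lambda t: block_entropy((signal >= t).astype(int)))
symbols = (signal >= best_t).astype(int)
print("threshold:", round(float(best_t), 3),
      "block entropy:", round(block_entropy(symbols), 3))
```

Once the signal is reduced to such a symbolic sequence, word frequencies give the probabilities that feed any information-theoretic measure (entropy rates, mutual information between channels, and so on), which is the role the optimal encoding plays in the method described above.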