Emergent behaviour
February 23rd, 2012 under Computers, Distributed, rengolin, Science.

There is a lot of attention to emergent behaviour nowadays (e.g. here, here, here and here), but it’s still on the outskirts of science and computing.

Science

For millennia, science has isolated each single behaviour of a system (or system of systems) to study it in detail, then joined them together to grasp the bigger picture. The problem is that this approximation can only be done with simple systems, such as the ones studied by Aristotle, Newton and Ampère. Every time scientists approached the edges of their theories (including those three), they just left the rest as an exercise to the reader.

Newton foresaw relativity and the possible lack of continuity in space and time, but he did nothing to address it. Fair enough: his work was much more important to science than venturing through the unknown, and it would have seemed almost mystical of him to try (although he was an alchemist). But more and more, scientific progress seems to be blocked by chaos theory, where you either unwind the knots or go back to alchemy.

Chaos theory has existed for more than a century, but only recently has it been applied to anything outside differential equations. The hyper-sensitivity to initial conditions is clear in differential systems, but other systems have a less visible, though no less important, sensitivity. We just don’t see it well enough, since most other systems are not as well formulated as differential equations (thanks to Newton and Leibniz).
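To see how quickly that sensitivity bites, here is a minimal, purely illustrative sketch in Python (mine, not from any of the references above) using the logistic map, a textbook chaotic system: two trajectories that start a billionth apart disagree completely within a few dozen steps.

    # Sensitivity to initial conditions in the logistic map x' = r*x*(1-x).
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-10   # nearly identical starting points
    for step in range(1, 61):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.2e}")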

Neurology and the quest for artificial intelligence have raised a strong interest in chaos theory and fractal systems. The development of neural networks has shown that groups and networks also have a fundamental chaotic nature but, more importantly, that it’s only through the chaotic nature of those systems that you can get a good amount of information out of them. Quantum mechanics had the same evolution, with Heisenberg and Schrödinger kicking the ball first on the oddities of the universe and on how important the lack of knowledge about a system is for extracting information from it (think of Schrödinger’s cat).

A network with direct and fixed thresholds doesn’t learn. Particles with known positions and velocities don’t compute. N-body systems with definite trajectories don’t exist.

The genetic code has some similarities to these models. Living beings have far more junk than genes in their chromosomes (junk reaches 98% of the human genome), but changes in the junk parts can often lead to unviable creatures. If junk within genes (introns) gets modified, the actual code (exons) can be spliced differently, leading to a completely new, dysfunctional protein. Or, if you add start sequences (TATA boxes) to a non-coding region, some of it will be transcribed into whatever protein it can make, creating rubbish within cells, consuming resources or eventually killing the host.

But most of the non-coding DNA is also highly susceptible to change, and that’s probably its most important function: it is adapted to the specific mutation rates of our planet, and it is our defence mechanism against such mutations. For billions of years, the living beings on Earth have been tuning that code. Each of us carries a super-computer that can choose, by design, the best ratios for a given scenario within a few generations, and create a whole new species or keep the current one adapted, depending on what’s more beneficial.

But not everyone is that patient…

Programming

Sadly, in my profession, chaos plays an important part, too.

As programs grow old and programmers move on, a good part of the code becomes stale, creating dependencies that are hard to find and harder to fix. In that sense, programs are pretty much like the genetic code: the amount of junk increases over time, and that gives the program resistance against changes. The main problem in computing, which is not as clear in genetics, is that the code that stays behind is normally the code no one wants to touch and is, thus, the ugliest and most problematic.

DNA transcription machinery doesn’t care where the genes are; it finds a start sequence and gets on with its life. Programmers, we believe, have free will, and that gives them the right to choose where to apply a change. They can either work around the problem, making the code even uglier, or they can go ahead and try to fix the problem itself.

Non-programmers would quickly state that only lazy programmers would do the former, but more experienced ones will admit to having done so on numerous occasions, for different reasons. Good programmers do it because fixing the real problem would be so painful to so many other systems that it’s best left alone, with that part to be replaced in the future (only it never is). Bad programmers are not just lazy; some of them really believe it’s the right thing to do (I have met many like this), and that adds some more chaos into the game.

It’s not uncommon to start fixing a small problem, get more than half-way through and hit a small glitch in a separate system. A glitch that you quickly identify as a design flaw, so you, as any good programmer would, re-design it and implement the new design, which is already much bigger than the fix itself. All tests pass except one, which shows you another glitch, raised by your new design. This can go on indefinitely.

Some changes are better done in packs, all together, to make sure all designs are consistent and the program behaves as it should, not necessarily as the tests say it would. But that’s not only too big for one person at one time; it’s practically impossible when other people are changing the program under your feet, releasing customer versions and changing the design themselves. There is a point where a refactoring is not only hard, but also a bad design choice.

And that’s when code becomes introns, seldom to be removed.

Networks

The power of networks is rising, though more slowly than expected. For decades, people have known about synergy, chaos and emergent behaviour, but only recently, with the quality and amount of information on global social interaction, have those topics risen back into the big picture.

Twitter, Facebook and the like have raised many questions about human behaviour, and a lot of research has been done to address those questions and, to a certain extent, answer them. Psychologists and social scientists have known for centuries that social interaction is greater than the sum of its parts, but now we have the tools and the data to prove it once and for all.

Computing clusters have been applied to most of the hard scientific problems for half a century (weather prediction, earthquake simulation, proofs by exhaustion in graph theory). They also took on a commercial side with MapReduce and similar mechanisms that have popularised distributed architectures, but that’s only the beginning.
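For those who have not met it, the core of the MapReduce idea is tiny. Below is a toy, single-process word count, just to show the shape of the computation; real frameworks scatter the map and reduce phases across a cluster, and nothing here mimics any particular framework’s API.

    from collections import defaultdict

    def map_phase(documents):
        # emit (word, 1) pairs; in a real framework this runs on many machines
        for doc in documents:
            for word in doc.split():
                yield word, 1

    def reduce_phase(pairs):
        # merge the pairs by key; also distributed in a real framework
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    print(reduce_phase(map_phase(["the cat sat", "the cat ran"])))
    # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}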

In the distributed systems of today, emergent behaviour is treated as a bug that has to be eradicated. In the exact science of computing, locks and updates have to occur in the precise order they were programmed to, to yield the exact result one is expecting. No more, no less.

But in keeping our systems clear of emergent behaviour, we invariably introduce emergent behaviour into our code. Multiple checks on locks and variables, different design choices for components that have to work together, and the expectation of precise results or nothing at all make the number of lines of code grow exponentially. And, since all that has to run fast, even more lines and design choices are added to avoid one extra operation inside a very busy loop.
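One familiar shape of those multiple checks is double-checked initialisation, sketched below with made-up names: the extra unlocked test exists purely to keep the busy path away from the lock. (In languages with weak memory models this exact pattern is famously subtle, which rather proves the point.)

    import threading

    _resource = None
    _lock = threading.Lock()

    def expensive_setup():
        # hypothetical costly initialisation, standing in for real work
        return {"ready": True}

    def get_resource():
        global _resource
        if _resource is None:             # first check, no lock: the fast path
            with _lock:
                if _resource is None:     # second check, now under the lock
                    _resource = expensive_setup()
        return _resource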

While all this is justifiable, it’s not sustainable. In the long run (think decades), the code will be replaced or the product will be discontinued, but there is a limit to how many lines a program can receive without losing some others. And the cost of refactoring increases with the lifetime of a product. This is why old products don’t get many updates: not because they’re good enough already, but because it’s impossible to add new features without breaking a lot of existing ones.

Distant future

As much as I like emergent behaviour, I can’t begin to fathom how to harness that power. Stochastic computing is one way, and has been done with a certain level of success (here and here), but it’s far from easy to build a general logic behind it.

Unlike Turing machines, emergent behaviour comes from multiple sources, dressed in multiple disguises and producing far too much variety in results to be accounted for by one theory. It’s similar to string theory, where there are several variations of it, but only one M-theory, the one that joins them all together. The problem is, nobody knows what this M-theory looks like. Well, they barely know what the different versions of string theory look like, anyway.

In that sense, emergent behaviour is even further than string theory from being understood in its entirety. But I strongly believe that this is one way out of the conundrum we live in today, where adding more features makes it harder to add more features (like accelerating a mass already at relativistic speeds).

With stochastic computing there is no need for locks, since all that matters is the probability of an outcome, and precise values do not make sense. There is also no need for N×M combinations of modules and special checks, since the power is not in the computations themselves, but in the meta-computation, done by the state of the network rather than by its specific components.
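As a taster, here is a toy sketch of the classic stochastic-computing trick (my own illustration, not taken from the work linked above): encode a number in [0, 1] as the density of 1s in a random bitstream, and multiplication collapses into a bitwise AND. Accuracy degrades gracefully with stream length instead of failing outright.

    import random

    def encode(p, n=10_000):
        # a bitstream whose density of 1s approximates the probability p
        return [1 if random.random() < p else 0 for _ in range(n)]

    def decode(bits):
        return sum(bits) / len(bits)

    a, b = encode(0.5), encode(0.4)
    product = [x & y for x, y in zip(a, b)]   # AND of independent streams
    print(decode(product))   # roughly 0.2 (= 0.5 * 0.4), within sampling noise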

But that, I’m afraid, I won’t see in my lifetime.


In the future…
February 17th, 2012 under Corporate, Life, Politics, rengolin, World.

In the future, people will be able to project three-dimensional films using holograms. These holograms could be placed among us, rather than on a stage, to give us a much better sense of reality and emotion than is possible in a theatre or cinema.

When this technique becomes commonplace, it’ll be possible to use it in the classroom. Actors would re-enact events in history, and children would be able to live the moment, rather than just listen to stories. Teachers, then, will have a much more fundamental role in teaching: they will comment on what’s happening, rather than merely serve as narrators.

Holographic teaching has numerous advantages. Seeing the streets of London on fire in 1666, running for your life, is much more vivid than chalk traces on a blackboard. Seeing Jews suffering in German camps, being a Jew in a German camp (minus the physical harm, of course), gives us a much better tool for avoiding this in the future, and for never doing it to other people.

In the future, children will be able to live the credit crunch, the Syrian civil war, and the way the international community helped, and provoked, several conflicts in the Middle East and Africa. How people in the poorest parts of the world live without clean water or food, and how their parents die of unimaginable diseases, leaving them with the responsibility of raising a family by the age of four.

Children won’t be listeners any more; they’ll live the moment, feel the pain, and learn that this is not acceptable, under any circumstances, for any living being: humans, animals, aliens.

However, you don’t have to wait for that glorious future to fix society. If things continue as they are, it is very likely that this future will never come to pass. If there is one constant in human history, it is the force of self-destruction. The more humans there are (we passed the 7-billion barrier long ago), the stronger this force is.

There are several ways any of us can help save the world. The single most important thing you can do is teach your children that ruthless, selfish behaviour is not acceptable, that the ends don’t justify the means, and that people deserve the freedom to live and think on their own. Other things involve going to the most affected areas to help revamp those cultures (not just bring food and water), helping re-structure their governments (on their own terms) and working with your own government to stop invasive manoeuvres and the destruction of third parties for one’s own benefit.

A simple start is to help Avaaz. They do most of the bureaucracy, they go into the countries, they empower people, they turn rogue legislation around and, more importantly, they warn you before it’s too late.

Signing up to their mailing list will give you a much better view of the world. You don’t have to donate money to help; just signing the petitions, showing you care, is already a good start. The best part is that they will always ask you what the next step should be: how much effort they should spend on this or that, and how much (and which) technology they should develop to help their – our – cause.

I’ve been following Avaaz for a few years now, probably since its foundation, and I have to say that they have surpassed my expectations not only in what they could do for the world, but also in clarity, openness and use of technology and resources. They’re not a charity, they’re an activist group, and a very good one at that. If you were looking for something to support to help change the world, Avaaz is a great start.


 

