Anarchy and Science |
| July 16th, 2012 under Life, Politics, rengolin, Science, World. [ Comments: none ]
If the world needed more proof that rational thinking is off the menu where humans are concerned, we now have a so-called anarchist group attacking science. Bombs, shootings and sabotage, with one single goal: to stop science from destroying our lives once and for all.
If you didn’t get it, you’re not alone. I’m still trying to understand the whole issue, but the more I read, the more I’m sure it’s just humanity reaching record levels of stupidity. Again.
First of all, the actions don’t make sense in the realms of anarchy. For ages, anarchism has marched under a non-violent banner. The anarchist is not tame, but a pacifist. Anarchists fight for freedom from everything, mainly violence and oppression. Since every state, no matter who controls it, is oppressive, anarchists fight the very existence of any central form of coercion.
Bakunin once wrote:
“But the people will feel no better if the stick with which they are being beaten is labeled ‘the people’s stick’.” (Statism and Anarchy)
This clearly includes governments that base their legitimacy on the people, such as democracies. For an anarchist, a democracy is as bad as a dictatorship: even in its purest form, it imposes the will of the average citizen onto the majority of the population. (If you thought it was the other way around, you clearly don’t understand democracy!)
In essence, anarchy is about a long and non-violent migration towards the total lack of central government, leaving the people (organised in local communities) to decide what’s best for themselves. Whether that works on a global level, I don’t know. But two key phrases pop out: non-violence and lack of central power.
In Peter Kropotkin’s own words:
Anarchism is a world-concept based upon a mechanical explanation of all phenomena, embracing the whole of Nature–that is, including in it the life of human societies and their economic, political, and moral problems. Its method of investigation is that of the exact natural sciences, by which every scientific conclusion must be verified. Its aim is to construct a synthetic philosophy comprehending in one generalization all the phenomena of Nature–and therefore also the life of societies (…) [source]
Thus anarchy, like science, is the art of finding the best answer by an iterative and non-violent method, without centralised powers dictating what the answer should be: answers are found by experimentation and verification, and everyone should arrive at the same conclusions.
Science has no central power and lends no support to any government or controlling body. There isn’t, nor has there ever been, any scientist or organization in the world that can dictate what scientists believe or can prove. The scientific method is the most democratic method of all: anyone can repeat the same experiments and reach the same results, otherwise the hypothesis is plain wrong, and there is nothing anyone can do to force it to be true.
Science has been used by governments to impose lifestyles, borders and general ignorance, yes. Science has been used to develop unfathomably powerful bombs, yes. And it has been used over and over again to control and dominate countries and continents, yes. But that was never the doing of science, but of governments. Every major charge laid against science is, actually, a charge against people. Describing how science has made our lives better would be boring and redundant.
If some scientists are idiots, it doesn’t mean the whole of science is. If governments abuse power, and science provides that power, it doesn’t mean science is to blame, but governments. If some bishops should burn in hell, it doesn’t mean religion is to blame, but what people make of it. The climate change fiasco, the criticisms of the US national health programme and the whole “God particle” fuss among religious people have shown that people are still completely ignorant and prejudiced when evaluating external information.
Pen and paper have been much more harmful to the world than science, and over a much longer period. Pride and honour have wiped out entire civilizations for millennia, well before science was so embedded in our culture. Barons, kings and presidents don’t need science to destroy our lives; it just happens to be available.
So, science and anarchy have two major points in common: non-violence and the lack of centralised government. Why on Earth would an anarchist group gratuitously attack scientists? Because they are not anarchists, they are just idiots. I truly hope this is an isolated incident. If anarchists of the world lose their minds like these ones, the only hope for humanity (in the long term) will be lost, and there will be no return.
Copy cat |
| April 30th, 2012 under Physics, rengolin, Stories. [ Comments: 1 ]
Shaun was yet another physicist, working for yet another western country on yet another doomsday machine. Even though it was long since the last world war, governments still had excuses to spend exorbitant amounts of money on secret projects that would never be used, just for the sake of the argument. It never matters what you do in a war, but what the size of your gun is compared to the rest, and in that, his country was second to none. Not that anybody cared any more, or that anybody even knew of it, since his country had never gone into a proper war in its history, but well, with these things, you can never be too sure, can you?
But I digress. Shaun, yes, the physicist. He had been working on his own project for nearly a decade now, and had re-used old pieces of the LHC in a much miniaturized version, of course, but in essence it was capable of creating elementary particles and at the same time entangling them. After the initial explosion, instead of losing the created particles into oblivion (what would be the point in entangling them in the first place, uh?), he actually converged the entangled particles back into atomic form. The idea was to create a clone army, or sub-atomic bombs, or whatever could be done to put fear in other countries. You know how scientists are attached to science fiction, and Shaun was no exception.
In the beginning he wasn’t very successful, and it took him nearly 5 years to produce a pair of atoms with their quarks and gluons entangled on the other side. While you could easily make atoms entangle in normal lab conditions using lasers, the moment you turned your machines off they would go back to their natural state. But in this case, the effects were much more lasting. In recent years, he had managed to create whole molecules that were virtually the same, stable for months, even years. Copy cats.
But what he didn’t expect (who would?) was that his experiments were also touching the adjacent m-branes of parallel universes. It had been hypothesised in the past that some forces, like gravity, could leak into adjacent universes, and though that wasn’t widely accepted, it was very hard to prove wrong. The problem is, until then, nobody had reached energy densities intense enough to have a noticeable effect on the parallel universes. Shaun did.
If the parallel universe had been, like ours, sparsely populated, with only a handful of pseudo-sapient species, he’d probably have hit empty space. But the universe he found was nothing ordinary. In fact, Shaun’s own experiments had, over the years, created a special condition in which the aforementioned universe became aware of our own. Let me explain. The entanglement of particles did not always work, as I said earlier, and the less it worked (ie. the less matter in this universe), the more leaked into the adjacent universe.
A door to your own room
On a lovely spring evening, such as today, with daffodils and tulips blossoming and the warm spells finally arriving, Shaun would normally be working. 30 storeys below ground. He would see none of that, or care, for that matter. His new molecules (DNA this time) were working at an alarming rate. He had managed to duplicate an entire gene the week before, and his team was now running loads of tests on the results. It required a lot of energy to create enough molecules to run all the tests, but his lab had an unlimited supply of everything.
With all his team elsewhere, Shaun was busy trying to expand his technique to achieve the whole sequence of a virus. That made the machine run at wild energy levels (quite a few PeV), and the whole thing destabilized for a moment, and stopped. Fearing he had made the surrounding city go dark, he checked all the energy inputs, and they were all fine. Trying to measure a few currents here and there, Shaun looked for his multimeter and, oddly, it was on the workbench, not where he’d left it. Not surprising: somebody must have used it and not stored it properly, it happens. With his multimeter in hand, he started checking all the currents, and they all looked fine, apart from the 17th onwards, where the polarity was reversed.
That was odd. Seriously odd. As if his machine was actually providing energy back to the power plant, only that was impossible (it was no fusion chamber!). Without a clue, Shaun went back to his desk, left the multimeter by the lamp and reclined his chair, staring into the infinite. The infinite, in this case, was his shelf rack. Everything was blurred, but a remarkably familiar yellow blur caught his attention, and his eyes focused for a moment, and clear as day (though it was never day in his lab), there was his multimeter. Exactly where he’d left it, with the dangling red wire over the black one.
He looked back at the table, and sure enough, his multimeter was there, too. Obviously, that one was someone else’s, but just to be sure, he got his own and started comparing them, finding the same imperfections, the same burnt mark, the same cuts. His head was not working any more; he went back to where he had found the other multimeter and started looking around for clues. It could very easily be a prank, but his head was not thinking. It was in discovery mode.
Obsessive as he was, he started noticing differences in that part of the room, compared to what it usually was. Almost as if the room was displaced in time, with that part a few hours, maybe days, behind. And he started putting things in their proper places, tidying up as a mechanical task to help him think. When he was satisfied with the place, he turned around and jumped so high backwards that he hit his head on a red pipe hanging from the ceiling. It was Shaun, looking back at himself, smiling.
“Hello”, said the other Shaun. “…”. “Yes, I see, you’re in a bit of a shock. That’s understandable, I um, let me help you with the concept.” Shaun said nothing.
“See, you are a very interesting specimen. We’ve been monitoring your experiment ever since we detected the leakage from your universe to ours. Generally, we wouldn’t ourselves believe in multiple universes, but as things were clearly leaking from your universe, we had no other alternative.” Shaun was still speechless. “As you probably have guessed by now, this part of the room is in our universe. Actually, the working part of your experiment has been inside our universe for quite some time. More specifically, ever since it started working…”.
“Hey!” Shaun opened his mouth for the first time. “You can’t possibly say that you guys did all the work!” – without even knowing who they were, but that was too big an insult to let pass. “Oh, no, you got me wrong, Shaun. No, you’re absolutely right, you did everything. We just provided our universe to you.” Shaun was speechless again.
“Understand, we’re at a somewhat different level of technology than you. In some cases, much more advanced, in others, much less,” the other Shaun continued after a pause, probing for any offence he might have caused. “In practical matters, we’re much more advanced. Our universe has been extremely kind to us. We have a very dense population throughout our known universe; it’s actually hard to get to know all the cultures yourself, we just don’t live long enough. The fact that your universe has been leaking energy has boosted our physics so much that we managed to halve the energy consumption of all our technology and, at the same time, more than double our energy production levels!” Shaun would not let that one pass… “Lucky you, we have nothing of the sort…”
“I know! Very well indeed! And it’s in that respect that you guys are so much more advanced than us. Your theoretical physics is so advanced, your mathematics so robust, that they make our feeble attempts to model our universe a pre-school matter.” – “Ha!” said Shaun, “our mathematics is broken, Gödel proved it and Turing proved it again. Our theoretical physics is still fighting over string theory and the alternatives, and we’re getting nowhere fast!”
“On the contrary, Shaun. Your universe is limited, so your mathematics can only reach so far. Your theoretical physics is considering things that we never imagined possible. Our universe is lame next to yours; the challenges you face are the most delicious delicatessen for our theoretical physicists. There is an entire community, the fastest growing of all time, just to consume the material you guys generated three centuries (of your time) ago!”
The other Shaun was breathless, smiling from ear to ear with a face like a dog waiting for you to throw the stick. There was a deep silence for a few moments. Shaun was afraid that someone would come through the door and he would have to explain everything, and he was not sure he could, actually. He was still holding the last tool he had meant to put somewhere safe. He looked at it and considered that the tool was not actually in his own universe, but somewhere else. Yet, it was there, in the same room.
“So,” – a pause – “how come you are… me?” “Well, I’m not you, obviously, I’m just represented as you in this piece of our universe. I wouldn’t fit this room otherwise.” “Oh, I get it,” lied Shaun. The other Shaun continued: “You see, your studies have allowed us to extrapolate your idea and re-create your own universe inside our own. This room is just the connection point; if you go through that door” – and he pointed to an old door that led to the emergency exit – “you will continue inside our version of your universe.” “Wait a minute, how much of our world have you replicated?” “World? No, not just Earth, everything.” – a long pause, with wide open eyes. After a blink: “you mean, galaxies?” “Yes, yes, all of them. Your universe is quite compact for all it has to offer, and we were at first intrigued by that, but then we understood that the constraints you have were necessary and, well, an important feature for generating such high-quality theoretical physics. And we decided to lend an unused part of our universe so you could not only teach us by broadcasting your knowledge, but also run tests in our own universe. Most of your experiments are now part of our day-to-day life, from vehicles to communication devices to life-saving machines. You, Shaun, have made our lives so much better; it was the least we could do.”
“Is there anyone living in this version of our universe? I mean, human … hum … clones?” “No, no. We thought that would be improper. We do try to live in it, just out of curiosity, actually. There are some holiday packages to travel the wonderful places your universe has to offer. It’s nothing we don’t see in our own, but you know, travel agencies will always find an excuse to take your money, right?” and he finished that sentence with a grin and almost a wink. His human traits were very good, almost as if he had been observing for far too long, which made Shaun feel a little bit uneasy…
“Actually…” – the other Shaun continued – “maybe you could help us fix a few things on this side of the universe. Make things a bit more suited to the people from our side, what do you think?” With the rest of the team deep in tests, it’d be weeks before they would even consider going back to the main lab, and nobody else would dare to enter, after the several claims (in the private circle that knew him) that his lab would produce a black hole that would consume the Earth and everything else.
Shaun decided to go in, at least to explore the very convincing copy of his own world. Going up the emergency exit, he took the lift all the way to the top, as expected. Outside, as expected, the early rays of the spring sun cast long shadows on the trees and buildings. The nearby cattle farm was empty, though. When the other Shaun noticed Shaun’s curiosity, he added: “Ah, yes, you see, we decided not to include mammals, as they could eventually evolve into sapient beings and we’d be altering the history of our own universe. We didn’t want to do that!” Shaun thought it was sensible.
For several days, Shaun listened to all the complaints about his own universe and how it would fit their physiology. Animals were turned green to photosynthesise, trees would reproduce in multiple ways at the same time, genetic combination of more than one pair of chromosomes was allowed, as was normal in this new universe, and many of the landscapes were altered to fit the gigantic stature of most of its inhabitants. Some parts were left untouched, or the travel agencies would lose a huge market, and some were shortened and simplified for the less elaborate, but still pseudo-sentient, species.
Shaun was feeling very well, like a demi-god, changing landscapes and evolution at his own whim, much like Slartibartfast. How fortunate he was, the only human – correction – the only being in his universe (as far as he knew) to play with a toy universe himself.
After meeting the leaders of the populations of the alter-universe, receiving gifts and commendations (and a few kisses from the lasses), it was time to return to his own universe. Shaun felt a bit tired, but after drinking a bit of their energy beverage, he blasted back to alter-Earth in his new hyper-vehicle, to his own alter-lab. There, only alter-Shaun was waiting to say goodbye. A handshake and a wink were enough to mean “I’ll be back, and thanks for all the fish”, which Shaun took as a warm gift, rather than a creepy resemblance.
But as soon as Shaun stepped back into his own universe, he noticed some things were out of place. After being in an alter-universe for so long, it was only natural to misplace normal concepts, but some things were not normal at all, like a 10-metre-high corridor leading from his side of the room. Normally, it’d be no more than 2 metres, and there was a very good reason for that: humans are not that tall!
He ran through it to find a huge door to a huge lift. In the lift were a few people still discussing what had happened. “It was definitely not that big! We must have shrunk!” said one, “No, that’s not possible, that’s Hollywoodian at best!” said the sceptic. Shaun took the lift up to the ground level, and ran to the farm nearby, fearing the worst.
And the worst had happened. The cows were green, and the houses huge. Being a bad theoretical physicist himself, and not being able to count on the alter-physicists for theoretical matters, Shaun hadn’t taken into account that his machine was a duplication machine of entangled particles. That means, for the layman, that whatever happens to one invariably happens to the other, no matter in what part of the universe, or in this case the multiverse, they are.
That, thought Shaun, would take a bit more than a few days to fix… but he knew how, and he was looking forward to fixing it himself!
Emergent behaviour |
| February 23rd, 2012 under Computers, Distributed, rengolin, Science. [ Comments: 1 ]
There is a lot of attention to emergent behaviour nowadays (ex. here, here, here and here), but it’s still on the outskirts of science and computing.
For millennia, science has isolated each single behaviour of a system (or system of systems) to study it in detail, then joined them together to grasp the bigger picture. The problem is that this approximation can only be done with simple systems, such as the ones studied by Aristotle, Newton and Ampère. Every time scientists approached the edges of their theories (including those three), they just left the rest as an exercise to the reader.
Newton foresaw relativity and the possible lack of continuity in space and time, but he did nothing to address them. Fair enough, his work was much more important to science than venturing through the unknowns of his day; it would have been almost mystical of him to try (although he was an alchemist). But more and more, scientific progress seems to be blocked by chaos theory, where you either unwind the knots or go back to alchemy.
Chaos theory has existed for more than a century, but it is only recently that it has been applied to anything outside differential equations. The hyper-sensitivity to initial conditions is clear in differential systems, but other systems have a less visible, though no less important, sensitivity. We just don’t see it well enough, since most other systems are not as well formulated as differential equations (thanks to Newton and Leibniz).
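A one-line system is enough to see that hyper-sensitivity. The logistic map below is a textbook toy example (my illustration, not something from this post): two starting points differing in the tenth decimal place end up in completely unrelated states after a few dozen iterations.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), chaotic for r = 4.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.3)
b = logistic(0.3 + 1e-10)  # perturb the 10th decimal place

# The tiny perturbation roughly doubles each step, so after 50 steps
# the two trajectories bear no resemblance to each other.
print(abs(a - b))
```

The same code with `steps=5` would show the two values still agreeing to many decimal places, which is exactly why short-term prediction works while long-term prediction doesn’t.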
Neurology and the quest for artificial intelligence have raised strong interest in chaos theory and fractal systems. The development of neural networks has shown that groups and networks also have a fundamentally chaotic nature but, more importantly, that it’s only through the chaotic nature of those systems that you can get a good amount of information out of them. Quantum mechanics had the same evolution, with Heisenberg and Schrödinger kicking the ball first on the oddities of the universe and on how important the lack of knowledge of a system is to extracting information from it (think of Schrödinger’s cat).
A network with direct and fixed thresholds doesn’t learn. Particles with known positions and velocities don’t compute. N-body systems with definite trajectories don’t exist.
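To make the first claim concrete, here is a minimal perceptron sketch (my illustration, not from this post): the network learns the AND function precisely because its weights and threshold are allowed to move during training. Freeze them, and its behaviour is fixed forever.

```python
# A minimal perceptron: learning happens only because the weights and
# the bias (effectively the threshold) are free to move.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # with fixed w and b, err never shrinks
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn the AND function (linearly separable, so the perceptron converges)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # [0, 0, 1] pattern for AND: [0, 0, 0, 1]
```

Delete the three update lines and the loop runs just the same, but `w` and `b` never change: a network with fixed thresholds, and it has learned nothing.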
The genetic code has some similarities to these models. Living beings have far more junk than genes in their chromosomes (the junk reaching 98% of the human genome), but changes in the junk parts can often lead to invalid creatures. If junk within genes (introns) gets modified, the actual code (exons) can be split differently, leading to a completely new, dysfunctional protein. Or, if you add start sequences (TATA boxes) to non-coding regions, some of them will be transcribed into whatever protein they can make, creating rubbish within cells, consuming resources or eventually killing the host.
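A grossly simplified sketch (ignoring real splicing machinery entirely) shows why a change in the “wrong” place is so destructive: DNA is read in codons of three bases, so a single inserted base shifts the reading frame and scrambles every codon downstream of the change.

```python
def codons(seq):
    """Split a DNA sequence into codons (groups of three bases)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

gene = "ATGGCTTGCTAA"      # reads as ATG-GCT-TGC-TAA
mutated = "ATGAGCTTGCTAA"  # one base ('A') inserted after the start codon

print(codons(gene))     # ['ATG', 'GCT', 'TGC', 'TAA']
print(codons(mutated))  # ['ATG', 'AGC', 'TTG', 'CTA'] -- frame shifted
```

One base changed; every codon after it now encodes something else, which is the in-silico version of the “completely new, dysfunctional protein” above.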
But most of the non-coding DNA is also highly susceptible to changes, and that’s probably its most important function: it is adapted to the specific mutation rates of our planet and to our defence mechanisms against such mutations. For billions of years, the living beings on Earth have been tuning that code. Each of us carries a super-computer that can choose, by design, the best ratios for a given scenario within a few generations, and create a whole new species or keep the current one adapted, depending on what’s more beneficial.
But not everyone is that patient…
Sadly, in my profession, chaos plays an important part, too.
As programs grow old and programmers move on, a good part of the code becomes stale, creating dependencies that are hard to find and harder to fix. In that sense, programs are pretty much like the genetic code: the amount of junk increases over time, and that gives the program resistance against changes. The main problem in computing, one that is not present in genetics, is that the code that stays behind is normally the code that no one wants to touch, thus the ugliest and most problematic.
DNA transcriptors don’t care where the genes are; they find a start sequence and go on with their lives. Programmers, we believe, have free will, and that gives them the right to choose where to apply a change. They can either work around the problem, making the code even uglier, or they can go on and try to fix the problem at its root.
Non-programmers would quickly state that only lazy programmers would do the former, but more experienced ones will admit to having done so on numerous occasions, for different reasons. Good programmers do it because fixing the real problem is so painful to so many other systems that it’s best left alone, with that part to be replaced in the future (only it never will be). Bad programmers are not just lazy; some of them really believe it’s the right thing to do (I’ve met many like this), and that adds some more chaos into the game.
It’s not uncommon to try to fix a small problem, get more than half-way through and hit a small glitch in a separate system. A glitch that you quickly identify as being wrongly designed, so you, as any good programmer would, re-design it and implement the new design, which is already much bigger than the fix itself. All tests pass, except one, which shows you another glitch, raised by your new design. This can go on indefinitely.
Some changes are better done in packs, all together, to make sure all the designs are consistent and the program behaves as it should, not necessarily as the tests say it would. But that’s not only too big for one person at one time; it’s practically impossible when other people are changing the program under your feet, releasing customer versions and changing the design themselves. There is a point where a refactoring is not only hard, but also a bad design choice.
And that’s when code becomes introns, and is seldom removed.
The power of networks is rising, though slower than expected. For decades, people have known about synergy, chaos and emergent behaviour, but it was only recently, with the quality and amount of information on global social interaction, that those topics came back into the main picture.
Twitter, Facebook and the like have raised many questions about human behaviour, and a lot of research has been done to address those questions and, to a certain extent, answer them. Psychologists and social scientists have known for centuries that social interaction is greater than the sum of its parts, but now we have the tools and the data to prove it once and for all.
Computing clusters have been applied to most of the hard scientific problems for half a century (weather prediction, earthquake simulation, exhaustion proofs in graph theory). They have also taken on a commercial side with MapReduce and similar mechanisms that have popularised distributed architectures, but that’s only the beginning.
In distributed systems today, emergent behaviour is treated as a bug that has to be eradicated. In the exact science of computing, locks and updates have to occur in the precise order they were programmed to, to yield the exact result one is expecting. No more, no less.
But to keep our systems free of emergent behaviour, we invariably introduce emergent behaviour into our code. Multiple checks on locks and variables, different design choices for components that have to work together and the expectation of precise results or nothing make the number of lines of code grow exponentially. And, since all that has to run fast, even more lines and design choices are added to avoid one extra operation inside a very busy loop.
While all this is justifiable, it’s not sustainable. In the long run (think decades), the code will be replaced or the product will be discontinued, but there is a limit to how many lines a program can receive without losing some others. And the cost of refactoring increases with the lifetime of a product. This is why old products don’t get many updates: not because they’re good enough already, but because it’s impossible to add new features without breaking a lot of others.
As much as I like emergent behaviour, I can’t begin to fathom how to harness that power. Stochastic computing is one way, and it has been done with a certain level of success here and here, but it’s far from easy to create a general logic behind it.
Unlike Turing machines, emergent behaviour comes from multiple sources, dressed in multiple disguises and producing far too much variety in results to be accounted for by one theory. It’s similar to string theory, where there are several variations of it, but only one M-theory, the one that joins them all together. The problem is, nobody knows what this M-theory looks like. Well, they barely know what the different versions of string theory look like, anyway.
In that sense, emergent theory is even further than string theory from being understood in its entirety. But I strongly believe that this is one way out of the conundrum we live in today, where adding more features makes it harder to add more features (like mass at relativistic speeds).
With stochastic computing there is no need for locks, since all that matters is the probability of an outcome, and precise values do not make sense. There is also no need for NxM combinations of modules and special checks, since the power is not in the computations themselves, but in the meta-computation, done by the state of the network rather than by its specific components.
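As a flavour of what that looks like, here is a tiny sketch of classic stochastic computing (my illustration, not from this post): numbers in [0, 1] are encoded as random bitstreams where the probability of a 1 is the value, and a single AND gate, with no locks and no ordering guarantees, multiplies two such numbers.

```python
import random

def bitstream(p, n, rng):
    """Encode p in [0, 1] as n random bits with P(bit = 1) = p."""
    return [rng.random() < p for _ in range(n)]

def decode(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000
a = bitstream(0.5, n, rng)
b = bitstream(0.4, n, rng)

# An AND gate on independent streams multiplies the encoded values:
# P(x and y) = P(x) * P(y) = 0.5 * 0.4 = 0.2
product = decode([x and y for x, y in zip(a, b)])
print(product)  # close to 0.20
```

No individual bit matters and no ordering matters; only the statistics of the whole stream carry the result, which is exactly the "meta-computation" trade-off described above: cheap, lock-free logic in exchange for probabilistic precision.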
But that, I’m afraid, I won’t see in my lifetime.
Privacy on Modern Societies |
| November 21st, 2011 under Life, Politics, rengolin, Science, World. [ Comments: none ]
The concept of privacy is born from the antagonism between individuality and the desire to belong to a group. The instinctive drive to form groups – for protection, mating and warmth – is much older than the human race itself. It’s an instinct of almost every animal, and a successful characteristic of many plants and fungi. Individuality itself comes from pride and greed, two characteristics more specific to higher animals (such as felines, canines and primates).
Pack animals, like zebras, benefit a lot from being indistinguishable from each other (this is why they have stripes). Other animals, such as most felines, have leaders and a succession line (much like royalty, but favouring physical strength). However, even in hierarchical species, the group is just the group, and they’re fine with it. Even in primates, you seldom see identification of one’s work, or specific concerns with privacy. You can see them mimic privacy (if you beat them when they do something you wouldn’t do in public), but that’s Pavlovian conditioning more than anything else.
However, group behaviour’s strengths and benefits, if applied to the human race, are quickly dismissed as communism.
There was a lot of group psychology in Marx’s political views (and a lot of Marxism in Pavlov’s ideas); hence, on the capitalist side of the world, there was a strong rejection of any conditioning of the people imposed by the state or by any sufficiently strong body.
The individual entrepreneurship of modern capitalism (as opposed to the original, binary, model of Adam Smith and co.), born during the colonisation of America (no rules, no government), was revamped by fears of communism during the Cold War, Cuba and now China.
As with any faith, the belief that individuality is the landmark of the human race brought its own problems.
First, individuality goes against most of the other values we have as humans. My right to fart in a crowded bus goes against the respect I should have for others. My right to eat my pudding goes against the compassion that should make me spend the same amount on a meal for an impoverished child. My right to press the toothpaste tube in the middle goes against my love for my wife.
Putting individuality above other important human values, such as respect, compassion and love, makes it a lot harder to live in societies. And given that we are now passing 7 billion people, it’ll be a lot harder to be alone. But faith has no boundaries, nor logic. People were raised believing their individuality is more important than anything else, and they will die for it.
Biting the hand…
But life has its ways of being ironic, and deeply satisfying at that, for the bystanders. Extremely capitalist countries (like the UK and US) figured out long ago that such freedom cannot be. There is no society based on individuality (the two are antagonistic, after all). Worse still, a society purely based on individuality is a society without government, one whose people have the right to do whatever they please. For such a society to thrive, people would have to choose the right thing to do more often than not. That apolitical society has a name: anarchy. I don’t believe any government would like that!
To control people without telling them they’re being controlled, you have to resort to subversive techniques, extensively described in Orwell’s 1984. For centuries, both sides of the Atlantic have resorted to such measures, but today, no country is more Orwellian than the US.
Countries in Latin America or the old USSR are failed nations (in the eyes of the American government), where people know how bad it is and, well, live with it.
West European countries have, to a certain extent, succeeded in creating a more stable, if somewhat socialist, government. People still have their own liberties, but the government is strong and has its strong hand (NHS, public schools, social security, etc.). While they could do much better on many things, people know the failures and, well, live with them.
But the US is a special case. The critical elements in the country's history, aggressive capitalism (internal and external), individualism and greed, are biting the hand that fed them. For decades now, the government has been tightening its grip on people's freedoms, while loosening it on major industries such as media, software, pharmaceuticals and weapons. After all, the breakdowns, from the crash of 1929 through Enron and the Internet bubble to the housing market and the current financial crisis, are signs that capitalism still has a lot to go wrong if unrestrained.
And still, the government gives more power to those same companies every year. The social reform Obama promised is yet to be seen, and his technology-savvy campaign turned into a technology-illiterate government, failing to understand basic concepts of day-to-day life that most Americans have known for ages. And since the US has such power over the world's economy, it is spreading its chaos to Europe, as it did with Latin America for centuries (ever since the Monroe Doctrine).
Recent court battles in the EU over copyright infringement, the three-strikes laws (rushed in by puppy Sarkozy even before the US) and all the prosecutions across Europe regarding software (Microsoft) and stupid hardware patents (Samsung vs. Apple) show that stupidity has taken over the world, for good.
After the recording industry successfully convinced underpaid musicians that they were being robbed by piracy, after the creation of the Digital Millennium Copyright Act, legalising things such as DRM (and criminalising the right to privacy), and after crippling its own patent system with useless patents (giving birth to a whole new industry, called patent trolls), the US government is now outdoing itself with the Stop Online Piracy Act, an idiocy that goes beyond any boundary of stupidity any human being has ever crossed.
The US government has consistently and strongly reminded us, the rest of the world, of countries like China, where people don't have the right to freely access the internet (due to the Great Firewall of China), and of how much better the freedom given by capitalist countries is. That freedom, ultimately linked to individuality and the greed to make more money than your peers, is what makes American capitalism thrive. But every action, every argument, has been destroying this dream for more than a decade already.
Of course, as with any decent Orwellian government, they don't tell you your freedoms are being displaced. And the people who do say so, like Richard Stallman, are tagged as crazy lunatics, in spite of what GNU has done for society over the last 30 years. Anyway, the government's argument is that it is, actually, promoting freedom: the freedom for companies to make pornographic profits at the expense of the population's freedom.
We, the people
But the people are not fooled. Recent movements to occupy Wall Street and the increasing mentions in the alternative media (blogs, independent media channels, etc.) that capitalism is failing are clear indications that the nation's mindset is changing.
A recent survey has shown that 75% of Americans disagree with the outrageous fines (or any fine at all) for copyright infringement. Actually, most of them are knowingly infringing copyright themselves.
So, how did this happen? How did a nation that valued its individuality and community become a nation of filthy pirates that don't give a dime about other people's property? Well, nothing has actually happened. To the people, I mean. But two things have, indeed, happened to the government.
First, the notions of property, individuality and respect, which were never meant to be taken individually, are now showing their colours. Second, the greed in which people were bred made them value their own individuality so much that other people's profit is not as important as their own comfort. While this is the driving factor behind the population's fight against the failed patent and copyright system (a fight that I do support), it's for the wrong reasons.
My view is that the patent system, copyright, the media industry, the firewall of China, etc. all fail at a basic level of respect. Not only individualism, but also the sense of society and community. Respect is by far more important than individualism or community. It's a concept that, when applied correctly, can produce communities that do respect your right to individuality and privacy, at the same time that it stops abuse short of damaging others.
Respect is not perfect, nor equal for everyone. There are always those that abuse the system, and people will get hurt, or killed, before the community can do anything about it. But isn't that true of every kind of community? Do you really believe that SOPA will stop piracy more than it will harm loyal customers? Did DRM? Did the DMCA? Did the Terrorism Act really stop more terrorists than it locked up regular air travellers?
All those solutions were direct infringements of privacy, of the right to defend yourself (e.g. Guantanamo Bay and patent trolls), of the right to share and give away (DRM), of the right to use your property where and how it's meant to be used (DRM). Now, the US is also losing the right to use the Internet. And don't think that this is staying within their borders… it's most definitely not!
Expect Cameron and Sarkozy to be adhering to that idea sooner than the Americans do…
Science vs. Business |
| July 30th, 2011 under Computers, Corporate, OSS, Politics, rengolin, Science. [ Comments: none ]
Since the end of the dark ages, and the emergence of modern capitalism, science has been connected to business, in one way or another.
During my academic life and later (when I moved to business), I saw the battle between those that would only do pure science (with government funding) and those that would mainly do business science (with private money). There were only a few in between the two groups, and most of them argued that it was possible to use private money to promote and develop science.
For years I believed that it was possible, and in my book, the title of this post wouldn't make sense. But as I dove into the business side, each step taking me closer to business research than before, I realised that there is no such thing as business science. Profit, such a fundamental aspect of capitalism, makes it so.
Good mathematicians copy, great mathematicians steal. The three biggest revolutions in computing during the last three decades were the PC, Open Source and Apple.
The PC revolution was started by IBM (with open platforms and standard components) but it was really driven by Bill Gates and Microsoft, and that's what generated most of his fortune. However, it was a great business idea, not a great scientific one, since Bill Gates copied from a company the size of a government: IBM. His business model's return on investment was instantaneous and gigantic.
Apple, on the other hand, never made much money (not as much as IBM or Microsoft) until recently, with the iPhone and iPad. That is, I believe, because Steve Jobs copied from a visionary, Douglas Engelbart, rather than from a business model. His return on investment took decades, and he took one step at a time.
However, even copying from a true scientist, he had to have a business model. It was impossible for him to open the platform (as MS did), because that was where all the value was located: Apple's graphical interface (with the first Macs), the mouse, etc. (all blatantly copied from Engelbart). They couldn't control the quality of the software for their platform (they still can't today on the AppStore), so they opted for doing everything themselves. That was the business model getting in the way of a true revolution.
To this day, Apple tries to make the coolest system on the planet, only to fall short because of the business model. The draconian methods Microsoft used on competitors, Apple uses on its customers. Honestly, I don't know which is worse.
On the other hand, Open Source was born as the real business-free deal. But its success has nothing to do with science, nor with the business-freeness. Most companies that profit from open source do so by exploiting the benefits and putting little back. There isn't any other way to turn open source into profit, since profit is basically gaining more than what you spend.
This is not all bad. The most successful open source systems (such as Apache, MySQL, Hadoop, GCC, LLVM, etc.) are so because big companies (like Intel, Apple, Yahoo!) put a lot of effort into them. Managing the private changes is a big pain, especially if more than one company is a major contributor, but it's more profitable than putting everything into the open. Getting the balance right is what boosts, or breaks, those companies.
The same rules also apply to other sciences, like physics. The United States is governed by big companies (oil, weapons, pharma, media) and not by its own government (which is only a puppet for the big companies). There, science is mostly applied to those fields.
Nuclear physics was only developed at such a fast pace because of the bomb. Lasers, nuclear fusion and carbon nanotubes are mostly done with military funding, or via the government, for military purposes. Computer science (both hardware and software) is mainly done in the big companies and with a business background, so again, not real science.
Only the EU, a less business-oriented government (but still, not that much less), could spend a gigantic amount of money on the LHC at CERN to search for a mere boson. I still don't understand what the commercial applicability of finding the Higgs boson is, or why the EU has agreed to spend such money on it. I'm not yet ready to accept that it was all in the name of science…
But while physics has clear military and power-related objectives, computing, or rather, social computing, has little to no impact. Radar technologies, heavy-load simulations and prediction networks receive strong budgets from governments (especially the US and Russia), while other topics, such as how to make the world a better place with technology, have little or no space in either business- or government-sponsored research.
That is why, in my humble opinion, technology has yet to flourish. Computers today create more problems than they solve. Operating systems make our lives harder than they should, office tools are not intuitive enough for everyone to use, compilers always fall short of doing a great job, and the human interface is still dominated by the mouse, invented by Engelbart himself in the '60s.
Not to mention the rampant race to keep Moore's law going (in both cycles and profit) at the cost of everything else, most notably the environment. Chip companies want to sell more and more, obsoleting last year's chip and sending it to the landfills, as there is no efficient recycling technology yet for chips and circuits.
Unsolved questions of the last century
Like Fermat's theorems, computer scientists had loads of ideas last century, at the dawn of the computing era, that are still unsolved. Problems that everybody tries to solve the wrong way, as if they were going to make that person famous, or rich. The most important problems, as I see them, are:
- Computer-human interaction: How to develop an efficient interface between humans and computers, removing all barriers to communication and easing the development of effective systems.
- Artificial Intelligence: As in real intelligence, not mimicking animal behaviour, not solving subsets of problems. Solutions based on emergent behaviour, probabilistic networks and automata.
- Parallel Computation: Natural brains are parallel in nature, yet computers are serial. Even parallel computers nowadays (multi-core) are only parallel up to a point, beyond which they go back to being serial. The serial barriers must be broken; we need to scrap the theory so far and think again. We need to ask ourselves: "what happens when I'm at the speed of light and I look into the mirror?"
- Environmentally friendly computing: Most components on chips and boards are not recyclable, and yet they're replaced every year. Does the hardware really need to be more advanced, or is the software getting dumber and dumber, driving the hardware complexity up? Can we use the same hardware with smarter software? Is the hardware smart enough to last a decade? Was it really meant to last that long?
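The parallel-computation point above can be made concrete with Amdahl's law: however many cores you add, the serial fraction of a program caps the achievable speedup. A minimal sketch (the 5% serial fraction is an illustrative assumption, not a measurement):

```python
def amdahl_speedup(serial_fraction, cores):
    """Maximum speedup for a program with the given serial fraction
    running on `cores` processors (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 5% serial code, piling on cores hits a wall fast:
for cores in (2, 16, 1024):
    print(cores, round(amdahl_speedup(0.05, cores), 2))
```

With a 5% serial fraction, 1024 cores buy you less than a 20x speedup, which is exactly the "serial barrier" complained about above.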
All those questions are, in a nutshell, of a scientific nature. If you take the business approach, you'll end up with a simple answer to all of them: it's not worth the trouble. It is impossible, in the short and medium term, to profit from any of those routes. Some of them won't generate profit even in the long term.
That's why there is no advance in that area. Scientists that study such topics are alone and most of the time trying to make money out of it (thus going the wrong way and missing the bull's eye). One of the gurus in AI at the University of Cambridge is a physicist, and his company doesn't do anything new in AI, but exploits a little old-school data mining to generate profit.
They do generate profit, of course, but does it help to develop the field of computer science? Does it help tailor technology to better ourselves? To make the world a better place? I think not.
Computer Science vs Software Engineering |
| January 13th, 2011 under Corporate, rengolin, Science, Technology. [ Comments: none ]
The difference between science and engineering is pretty obvious. Physics is science, mechanics is engineering. Mathematics is (ahem) science, and building bridges is engineering. Right?
Well, after several years in science and far too much time in software engineering that I was hoping to tell my kids about when they grow up, it seems that people's beliefs about the difference, if there is any, are much more exacerbated than their own logic seems to imply.
General beliefs that science is more abstract fall apart really quickly when you compare maths to physics. There are many areas of maths (statistics, for example) that are much more realistic and real-world than many parts of physics (like string theory and a good part of cosmology). Nevertheless, most scientists will turn their noses up at anything that resembles engineering.
From different points of view (biology, chemistry, physics and maths), I could see that there isn't a consensus on what people really consider a less elaborate task, not even among the same groups of scientists. But when faced with a rejection by one of their colleagues, the rest usually agree on it. I came to the conclusion that the psychology of belonging to a group was more important than personal beliefs or preferences. One would expect that from young schoolgirls, not from professors and graduate students. But regardless of the group behaviour, there still is that feeling that tasks such as engineering (whatever that is) are mundane, mechanical and of less benefit to the greater good than science.
On the other side of the table, the real world, there are people doing real work. It generally consists of less thinking, more acting and getting things done. You tend to use tables and calculators rather than white boards and dialogue, your decisions are much more based on gut feelings and experience than over-zealously examining every single corner case and the result of your work is generally more compact and useful to the every-day person.
From that perspective, (what we're calling) engineers have a good deal of prejudice towards (what we're calling) scientists. For instance, the book Real World Haskell is a great pun from people with one foot on each side of this battle (but leaning towards the more abstract end of it). In the commercial world, you don't have time to analyse every single detail; you have a deadline, so do what you can with it and buy insurance for the rest.
Engineers also produce better results than scientists. Their programs are better structured, more robust and more efficient. Their bridges, rockets, gadgets and medicines are far more tested, bullet-proofed and safe than any scientist could ever hope to achieve. It is a misconception that software engineers have the same experience as an academic with the same time coding, just as it is a misconception that engineers could as easily develop prototypes that would revolutionise their industry.
But even in engineering, there are tasks and tasks. Even while loathing scientists, those engineers that perform a more elaborate task (such as massive bridges, ultra-resistant synthetic materials, operating systems) consider themselves above the mundane crowd of lesser engineers (building 2-bed flats in the outskirts of Slough). So, even here, the more abstract, less fundamental jobs are held in higher regard than the ones more essential and critical to society.
Is it true, then, that the more abstract and less mundane a task is, the better?
Since the first thoughts on general-purpose computing, there has been this separation between the intangible generic abstraction and the mundane mechanical real-world machine. Leibniz developed the binary numeral system, compared the human brain to a machine and even had some ideas on how to build one, someday, but he ended up creating some general-purpose multipliers (following Pascal's design for the adder).
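As an aside, Leibniz's binary arithmetic is exactly what every adder circuit still computes today. A ripple-carry addition can be sketched with nothing but the two logic operations involved (XOR for the digit sum, AND for the carry); a purely illustrative sketch:

```python
def binary_add(a, b):
    """Add two non-negative integers using only bitwise logic,
    mimicking a ripple-carry adder: sum = XOR, carry = AND shifted left."""
    while b:
        carry = (a & b) << 1  # positions where a carry is generated
        a = a ^ b             # digit-wise sum, ignoring carries
        b = carry             # feed the carries back in on the next pass
    return a

print(bin(binary_add(0b1011, 0b0110)))  # 11 + 6 = 17, i.e. 0b10001
```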
Leibniz would have thrived in the 21st century. Lots of people in the 20th with the same mindset (such as Alan Turing) did so much more, mainly because of the availability of modern building techniques (perfected over centuries by engineers). Babbage is another example: he developed his difference engine for years and, when he failed (more through arrogance than anything else), his analytical engine (far more elegant and abstract) took his entire soul for another decade. When he realised he couldn't build it in that century, he perfected his first design (reducing its size threefold) and made a great specialist machine… for engineers.
Mathematicians and physicists had to do horrible things (such as astrology and alchemy) to keep their pockets full and, in their spare time, do a bit of real science. But in this century this is less important. Nowadays, even if you’re not a climate scientist, you can get a good budget for very little real applicability (check NASA’s funded projects, for example). The number of people working in string theory or trying to prove the Riemann hypothesis is a clear demonstration of that.
But computing is still not there yet. We’re still doing astrology and alchemy for a living and hoping to learn the more profound implications of computing on our spare time. Well, some of us at least. And that comes to my point…
There is no computer science… yet
The beginning of science was marked by philosophy and dialogue. 2000 years later, mankind was still doing alchemy and trying to prove the Sun was the centre of the solar system (and failing). Only 200 years after that did people really start doing real science, cleansing themselves of private funding and focusing on real science. But computer science is far from it…
Most computer science courses I've seen teach a few algorithms, an object-oriented language (such as Java) and a few courses on current technologies (such as databases, web development and concurrency). Very few of them really teach about Turing machines, group theory, complex systems, other forms of formal logic and alternatives to the current models. Moreover, the number of people doing real science in computing (judging by what appears on arXiv or news aggregation sites such as Ars Technica or Slashdot) is probably less than the number of people working on string theory or wanting a one-way trip to Mars.
So, what do PhDs do in computer science? Well, novel techniques on some old-school algorithms are always a good choice, but the recent favourites have been breaking the security of the banking system or re-writing the same application we all already have, but for the cloud. Even the more interesting dissertations, like memory models in concurrent systems or energy-efficient gate designs, are commercial applications at most.
After all, PhDs can earn a lot more money in industry than by remaining at the universities, and doing your PhD towards some commercial application can guarantee you a more senior starting position in such companies than something completely abstract. So, now, to be honestly blunt, we are all doing alchemy.
Still, that's not to say that there aren't interesting jobs in software engineering. I'm lucky to be able to work with compilers (especially because it also involves the amazing LLVM), and there are other jobs in the industry that are as interesting as mine. But all of them are just the higher engineering, the less mundane rocket science (which has nothing of science). All in all, though, software engineering is a very boring job.
You cannot code freely, ignore the temporary bugs, or ask the user to be nice and provide a controlled input pattern. You need a massive test infrastructure, quality control, standards (which are always tedious) and well-documented interfaces. All that gets in the way of real innovation; it makes any attempt at innovation in a real company a mere exercise in futility and a mild source of fun.
This is not exclusive to the software industry, of course. In the pharmaceutical industry there is very little innovation. They do develop new drugs, but using the same old methods. They do need to get new, more powerful medicines out of the door quickly, but the massive amount of tests and regulation they have to follow is overwhelming (which is why they avoid doing it right as much as possible, so don't trust them!). Nevertheless, there are very interesting positions in that industry as well.
Good question. People are afraid of going outside their area of expertise; they feel exposed and ridiculed, and quickly retreat to their comfort zone. The best thing that can happen to a scientist, in my opinion, is to be proven wrong. For me, there is nothing worse than being wrong and not knowing it. Not many people are like that, and the fear of failure is what keeps the industry (all of them) in the real world, with real concerns (this is good, actually).
So, as long as the industry drives innovation in computing, there will be no computer science. As long as the most gifted software engineers are mere employees in the big corporations, they won't try, to avoid failure, as that could cost them their jobs. I've been to a few companies, and heard about many others, that have a real innovation centre, computer laboratory or research department, and there isn't a single one of them that is actually bold enough to change computing at its core.
That is something IBM, Lucent and Bell Labs did in the past, but probably don't do any more these days. It is a good twist of irony, but the company that gets closest to software science today is Microsoft, at its campus in Cambridge. What happened to those great software teams of the '70s? Could those companies really afford real science, or were they just betting their petty cash in case someone got lucky?
I can't answer those questions, nor whether it'll ever be possible to have real science in the software industry. But I do plead with all software people to think about this when they teach at university. Please, teach those kids how to think, defy the current models, challenge the universality of the Turing machine, create a new mathematics and prove Gödel wrong. I know you won't try (out of hubris and self-respect), but they will, and they will fail, and after so many failures, something new can come up and make the difference.
There is nothing worse than being wrong and not knowing it…
Inefficient Machines |
| September 20th, 2010 under Biology, Computers, rengolin, World. [ Comments: none ]
In most of the computers today you have the same basic structure: computing hardware, composed of millions of transistors, getting data from the surroundings (normally registers) and putting values back (into other registers), and data storage. Of course, you can have multiple computing units (integer, floating point, vector, etc.) and multiple layers of data storage (registers, caches, main memory, disk, network, etc.), but it all boils down to these two basic components.
Between them you have the communication channels, which are responsible for carrying the information back and forth. In most machines, the further you are from the central processing unit, the slower the channel. So, satellite links will be slower than network cables, which will be slower than PCIx, the CPU bus, etc. But, in a way, as the whole objective of the computer is to transform data, you must have access to all data storage in the system to have a useful computer.
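The hierarchy just described spans many orders of magnitude. The sketch below uses rough, illustrative latency figures (assumptions for the sake of the argument, not measurements) to show how steep the gradient from register to satellite link really is:

```python
# Rough, illustrative access latencies per storage layer, in nanoseconds.
# These are order-of-magnitude assumptions, not measured values.
LATENCY_NS = {
    "register": 0.5,
    "L1 cache": 1,
    "main memory": 100,
    "local disk (SSD)": 100_000,
    "network": 10_000_000,
    "satellite link": 500_000_000,
}

def slowdown(layer):
    """How many times slower a layer is than a register access."""
    return LATENCY_NS[layer] / LATENCY_NS["register"]

for layer in LATENCY_NS:
    print(f"{layer:18s} {slowdown(layer):>14,.0f}x")
```

With these figures, a satellite link is about a billion times slower than a register, which is why the layout of the channels matters as much as the hardware at either end.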
Imagine a machine where you don't have access to all the data available, but you still depend on that data to do useful computation. What happens is that you have to infer what the data you needed was, or get it from a different, indirect path, converted into subjective ideas and low-quality patterns, which then have to be analysed and matched with previous patterns, with almost-random results coming from such poor analysis.
This machine, as a whole, is not so useful. A lot less useful than a simple calculator or a laptop, you might think, and I'd agree. But that machine also has another twist. The data that cannot be accessed has a way of changing how the CPU behaves in unpredictable ways. It can increase the number of transistors, change the width of the communication channels, completely remove or add new peripherals, and so on.
This machine has, in fact, two completely separate execution modes. The short term mode, executed within the inner layer, in which the CPU takes decisions based on its inherent hardware and the information that is far beyond the outer layer, and the long term mode, executed in the outer layer, which can be influenced by the information beyond (plus a few random processes) but never (this is the important bit, never), by the inner layer.
The outer layer
This outer layer changes data by itself; it doesn't need the CPU for anything. The data is, itself, the processing unit. The way external processes act on this layer is what makes it change, on a very (very) slow time scale, especially when compared to the inner layer's. The inner layer is, in essence, at the mercy of the outer layer.
This machine we're talking about, sometimes called the ultimate machine, has absolutely nothing of ultimate about it. We can build computers that can easily access the outer layers of data, change them, or even erase them for good, as easily as they do with the data in the inner layer.
We, today, can build machines much better designed than this infamous machine. When comparing designs, our current computers have a much more elaborate, precise and analytical design; we just need more time to get it to perfection, but it is my opinion that we're already far beyond (in design matters) that of life.
Living creatures have brains (the CPU and the inner memory) and bodies (all the other communication channels and peripherals to the world beyond), and they have genes, the long-term storage that defines how all the rest is assembled and how it behaves. But living creatures, contrary to Lamarck's beliefs, cannot change their own genes at will. Not yet.
The day humans start changing their own genes (and that's not too far away), we'll have perfected the design, and only then will we be able to call it the ultimate machine. Only then will the design have been perfected, and the machine could, then, evolve.
Writing your own genes would be like giving an application the right to re-write the whole operating system. You rarely see that in a computer system, but that's only because we're limited to creating designs similar to ourselves. This is why all CPUs are sequential (even when they're parallel): because our educational model is sequential (to cope with mass education). This is why our machines haven't self-mended from the beginning: because we don't.
Self-healing is a complex (and dangerous) subject for us because we don't have first-hand experience with it, but given the freedom we have when creating machines, it's a complete lack of imagination not to do so. It is a complete waste of time to model intelligent systems as if they were humans, to create artificial life with simple neighbouring rules, and to think that an automaton is only a program that runs alone.
The intelligent design concept was coined by people that understand very little of design and even less about intelligence. The design of life is utterly poor. It wastes too much energy, it provides very little control over the process, it has too many variables and too little real gain in each process.
It is true that, from a hardware point of view, our designs are very bad compared to nature's. Chlorophyll is much more efficient than a solar cell, spider webs are much stronger than steel, and so on. But the overall design, how the process works and how it gets selected, is just horrible.
If there were creators for our universe, it had to be a good bunch of engineers with no management at all, creating machines at random just because it was cool. There was no central planning, no project, just ad-hoc features emerging and lots of easter eggs. If that's the image people want to have of a God, so be it. Long live the Agile God, a bunch of nerdy engineers playing with toys.
But design would be the last word I’d use for it…
| February 8th, 2010 under InfoSec, Life, Politics, rengolin, Science. [ Comments: none ]
A long time ago I read an article about some dangerous psychological studies in the '70s. It's funny to think that, at that time, things we wouldn't even consider doing today were acceptable.
Can you imagine yourself with a periscope counting the seconds some truck drivers take to piss in a public toilet? Or pretending to rape a girl and risk getting shot (especially in the US)? It’s not just ethically incorrect, it’s dangerous!
Recently, I read an article about some students monitoring 350 million mobile calls just to figure out if the callee would call back. Not only would that have been nonsense in the '70s, but people would have exploded in rage, as it'd have been just enough to prove all the conspiracy theories of the time (not to mention the Cold War).
This is not the first research project using "anonymised" data from carriers or websites, nor will it be the last. I myself proposed something similar to Yahoo! when I worked there, to get the trends and act on the average (rather than tag individuals), and I see now that it's becoming acceptable to allow research groups to openly read entire databases that before were considered private.
I don't particularly dislike this type of research, especially when it's done by universities, but a slight feeling of paranoia creeps up my spine sometimes. I guess that's one of the issues dividing people into two very distinct groups: those that completely ignore privacy for the sake of comfort, and those that ignore comfort for the sake of privacy.
I am in between the two groups, but I can’t say I’m exactly average. I think I’m an extremist on both sides. I don’t mind storing my private emails on Google but I disable all Facebook add-ons and restrict access to all my personal data. I pay everything on the internet with my credit-card but I’ll refuse to the end of my days to use the biometric passport or iris recognition at airports.
There is no logic to it, really; it’s just the kind of thing you stick with. It is true that governments have more power to dig through your data when they want to, while Amazon will probably only ever have my credit card number. But it’s also true that no government in the world can dig through everyone’s data all the time, so it’s pretty improbable that someone is monitoring how many times I cross the Heathrow border.
In the end, only one thing stands out as logical in the whole scene: in recent years, the government losing the banking details of everyone in the country was far more likely than some hacker breaking into Amazon to get my credit card number. Maybe that’s what’s keeping me from accepting IDs and biometric passports… or maybe I never will…
2010 – Year of what? |
| January 29th, 2010 under Computers, Life, OSS, Physics, rengolin, Unix/Linux, World. [ Comments: 2 ]
Ever since 1995 I’ve heard the same phrase, and since 2000 I’ve stopped listening. 1995 was already the year of Linux for me, so why bother?
But this year is different, and Linux is not the only revolution in town… By the end of last year, the first tera-electronvolt collisions were recorded at the LHC, getting us closer to seeing (or not) the infamous Higgs boson. Now, the NIF reports a massive 700 kilojoules delivered in a 10-billionth of a second laser shot which, if things continue on schedule, could lead us to fusion power!!
The human race is about to finally put the full stop on the Standard Model and achieve fusion power by the end of this year; who cares about Linux?!
Well, for one thing, Linux is running all the clusters used to compute and maintain all those facilities. So, if it were up to Microsoft, we’d still be in the stone age…
UPDATE: More news on fusion…
Logic and a bit of luck |
| January 17th, 2010 under Fun, Life, rengolin, Science. [ Comments: 3 ]
Most game-changing scientific discoveries involved a lot of logic and critical thinking, but also a bit of luck. Like most scientists, I don’t believe in luck, so the definition of luck here is being the right person in the right place at the right time. Like most (good) scientists, I don’t believe; I state, hypothesise, prove, refute, so the definition of belief here is also obvious.
My point is that evolution wouldn’t have been formulated if Darwin hadn’t sailed with the Beagle, genetics wouldn’t be so solid if Mendel hadn’t believed the contrary so fiercely, Planck wouldn’t have found the quantum if there hadn’t been a major argument about the black-body spectrum, and Einstein would have won the Nobel prize for something else entirely if he hadn’t been so troubled by God playing dice.
My story today starts in a similar way, but in a much more mundane problem… I lost my keys.
There is nothing I hate more than losing my keys, especially on the 25th of December when we’re hitting the road on the 27th. I lost all my keys: car, house, even my USB key. These modern car keys are not easy to replicate; I’d have to buy the whole set again. And losing your front door key is not the kind of thing you let pass with a simple copy: you have to change the whole lock, especially when you’re going away for a week.
Well, after despair came fear. After fear, despair again. We searched the whole house: inside, outside and in between. Nothing. Brute force wasn’t helping, but that didn’t stop me from trying it again once in a while, just in case. In between the desperate brute-force moments, we decided to be logical about the situation and think, rather than just search, for the answer.
First point: we had spare keys for both the car and the house, so at least we could still travel and come back home. My worry was, in fact, what we would find when we came back… If I had lost my keys outside, or had left them hanging from the front door’s keyhole (it’s happened more than once), it would be just too easy for someone to clean out the house while we were away.
So we retraced every place we went and everything we did. By logic, I couldn’t have lost them in the city or anywhere I would have gone by car. Nor could I have lost them inside the car, so at least we knew they’d be either inside the house or around it (including the keyhole, unfortunately). I almost cancelled our trip because of the keyhole possibility, but Renata, very logically, convinced me that nothing we had done could have caused me to leave them there. It was very, very unlikely. So we went…
However unlikely, it still bugged me the whole week, and I felt a twinge of panic when we got home. But to my comfort, the house was exactly as we had left it. That was, in a twisted way, another indication that the keys had not been left in the keyhole. They had to be inside the house. I went back to work, still using the spare keys, but always thinking about it, wondering where they were. Sometimes, just in case, I’d imagine that I would look somewhere and see the keys there, and be very surprised I hadn’t seen them before. That feeling never came.
This week I decided enough was enough. I had to get on with my life, change the front door locks and buy the very expensive key set from the car’s manufacturer. I put a to-do in my mobile: “call toyota, landlord wrt keys”. It was then that luck struck, with impeccable logic. I felt like Darwin finding the platypus, or Mendel smashing peas.
I looked at our bag of snow jackets, hermetically sealed until next winter (Cambridge has only one chance of snow each year, and this time it came before Christmas), and thought: “If the keys are in there, we’ll only find out next winter.” Simple logic led me to conclude that it would be much cheaper to re-open the impossible-to-close hermetically-sealed bag now and not find the keys than to wait until next winter and discover I had spent thousands of pounds for nothing. The risk assessment was positive, and that led me to the next piece of information, which closed the gap: it was snowing before Christmas! They had to be there!
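That risk assessment boils down to a quick expected-cost comparison. A minimal sketch in Python, with every number made up purely for illustration (neither the probability nor the costs appear in the post):

```python
# Expected-cost comparison: open the sealed bag now vs. wait until
# next winter. All figures below are illustrative assumptions.

p_keys_in_bag = 0.5        # it snowed before Christmas, so plausible
cost_open_bag = 5          # effort of re-sealing the bag, in "pounds"
cost_replace_keys = 1000   # new car key set plus changing the locks

# If I open the bag now, I always pay the re-sealing effort, and I
# replace the keys only if they turn out not to be there.
open_now = cost_open_bag + (1 - p_keys_in_bag) * cost_replace_keys

# If I wait until next winter, I replace the keys regardless.
wait = cost_replace_keys

assert open_now < wait  # opening the bag is the cheaper bet
```

Even with a much smaller probability of the keys being in the bag, the comparison comes out the same way, because the cost of opening the bag is tiny next to the cost of replacing everything.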
I opened the bag and patted my jacket. Nothing. But the logic was impeccable; I couldn’t be wrong. I put the jacket on and trusted logic over my own despair. I gently slid my hands into the pockets, as I always do. The pockets are deep, and I felt nothing at first, but that didn’t shake my trust in logic. Spock would have laughed at me if it had; it’s that serious, a Vulcan could actually laugh. It was not faith or belief; it was the ultimate trust that scientists place in logic, above all feelings, common sense and general knowledge, that kept me going until I finally felt something…