Copy cat |
| April 30th, 2012 under Physics, rengolin, Stories. [ Comments: 1 ]
Shaun was yet another physicist, working for yet another western country on yet another doomsday machine. Even this far from the last world war, governments still had excuses to spend exorbitant amounts of money on secret projects that would never be used, just for the sake of the argument. It never matters what you do in a war, but how big your gun is compared to everyone else's, and in that, his country was second to none. Not that anybody cared any more, or that anybody even knew, since his country had never gone into a proper war in its history, but with these things you can never be too sure, can you?
But I digress. Shaun, yes, the physicist. He had been working on his own project for nearly a decade and had re-used old pieces of the LHC in a much more miniaturized version, of course, but in essence it was capable of creating elementary particles and entangling them at the same time. After the initial explosion, instead of losing the created particles into oblivion (what would be the point of entangling them in the first place, huh?), he actually converged the entangled particles back into atomic form. The idea was to create a clone army, or sub-atomic bombs, or whatever else could be done to put fear into other countries. You know how attached scientists are to science fiction, and Shaun was no exception.
In the beginning he wasn't very successful, and it took him nearly five years to produce a pair of atoms with their quarks and gluons entangled. While you could easily entangle atoms in normal lab conditions using lasers, the moment you turned your machines off they would go back to their natural state. But in this case the effects were much more lasting. In recent years he had managed to create whole molecules that were virtually the same, stable for months, even years. Copy cats.
But what he didn't expect (who would?) was that his experiments were also touching the adjacent m-branes of parallel universes. It had been hypothesised in the past that some forces, like gravity, could leak into adjacent universes, and though that wasn't widely accepted, it was very hard to prove wrong. The problem is, until today, nobody had reached energy densities intense enough to have a noticeable effect on the parallel universes. Shaun did.
If the parallel universe was, like ours, sparsely populated, with only a handful of pseudo-sapient species, he'd probably have hit empty space. But the universe he found was nothing ordinary. In fact, Shaun's own experiments had, over the years, created a special condition in which the aforementioned universe became aware of our own. Let me explain. His entanglement of particles did not always work, as I said earlier, and the less it worked (i.e. less matter in this universe), the more it leaked into the adjacent universe.
A door to your own room
On a lovely evening of spring, such as today, with daffodils and tulips blossoming and the warm spells finally arriving, Shaun would normally be working. Thirty storeys below ground, he would see none of that, or care, for that matter. His new molecules (DNA this time) were working at an alarming rate. He had managed to duplicate an entire gene the week before, and his team was now running loads of tests on the results. It required a lot of energy to create enough molecules to run all the tests, but his lab had an unlimited supply of everything.
With all his team elsewhere, Shaun was busy trying to expand his technique to the whole sequence of a virus. That made the machine run at wild energy levels (quite a few PeV), and the whole thing destabilized for a moment and stopped. Fearing he had made the surrounding city go dark, he checked all the energy inputs, and they were all fine. Trying to measure a few currents here and there, Shaun looked for his multimeter and, oddly, it was on the workbench, not where he'd left it. Not surprising: somebody must have used it and not stored it properly, it happens. With the multimeter in hand, he started checking all the currents, and they all looked fine apart from the 17th onwards, where the polarity was reversed.
That was odd. Seriously odd. As if his machine was actually feeding energy back to the power plant, only that was impossible (it was no fusion chamber!). Without a clue, Shaun went back to his desk, left the multimeter by the lamp and reclined his chair, staring into the infinite. The infinite, in this case, was his shelf rack. Everything was blurred, but a remarkably familiar yellow blur caught his attention. His eyes focused for a moment and, clear as day (though it was never day in his lab), there was his multimeter. Exactly where he'd left it, with the dangling red wire over the black one.
He looked back at the table, and sure enough, his multimeter was there too. Obviously that one was someone else's, but just to be sure, he got his own and started comparing them, finding the same imperfections, the same burnt mark, the same cuts. His head was not working any more. He went back to where he had found the other multimeter and started looking around for clues. It could very easily be a prank, but his head was not thinking. It was in discovery mode.
Obsessive as he was, he started noticing differences in that part of the room, compared to what it usually was. It was almost as if the room was displaced in time, that part a few hours, maybe days, behind. He started putting things back in their places, tidying up as a mechanical task to help him think. When he was satisfied with the place, he turned around and jumped so high backwards that he hit his head on a red pipe hanging from the ceiling. It was Shaun, looking back at himself, smiling.
“Hello”, said the other Shaun. “…”. “Yes, I see, you’re in a bit of a shock. That’s understandable, I um, let me help you with the concept.” Shaun said nothing.
"See, you are a very interesting specimen. We've been monitoring your experiment ever since we detected the leakage from your universe into ours. Ordinarily we wouldn't believe in multiple universes ourselves, but as things were clearly leaking from your universe, we had no alternative." Shaun was still speechless. "As you have probably guessed by now, this part of the room is in our universe. Actually, the working part of your experiment has been inside our universe for quite some time. More specifically, ever since it started working…"
"Hey!" Shaun opened his mouth for the first time. "You can't possibly say that you guys did all the work!" – without even knowing who they were, but that was too big an insult to let pass. "Oh no, you got me wrong, Shaun. No, you're absolutely right, you did everything. We just provided our universe to you." Shaun was speechless again.
"Understand, we're at a somewhat different level of technology from you. In some respects much more advanced, in others much less," the other Shaun continued after a pause, probing for any offence he might have caused. "In practical matters, we're much more advanced. Our universe has been extremely kind to us. We have a very dense population throughout our known universe; it's actually hard to get to know all the cultures yourself, we just don't live long enough. The fact that your universe has been leaking energy has boosted our physics so much that we managed to halve the energy consumption of all our technology and, at the same time, more than double our energy production!" Shaun would not let that one pass… "Lucky you, we have nothing of the sort…"
"I know! Very well indeed! And it's in that respect that you guys are so much more advanced than us. Your theoretical physics is so advanced, your mathematics so robust, that they make our feeble attempts at modelling our universe look like a pre-school matter." – "Ha!" said Shaun, "our mathematics is broken, Gödel proved it and Turing re-proved it. Our theoretical physics is still fighting over string theory and the alternatives, and we're getting nowhere fast!"
"On the contrary, Shaun. Your universe is limited, so your mathematics can only reach so far. Your theoretical physics is considering things that we never imagined possible. Our universe is lame next to yours; the challenges that you face are the most delicious delicatessen for our theoretical physicists. There is an entire community, the fastest growing of all time, just to consume the material you guys generated three centuries (of your time) ago!"
The other Shaun was breathless, smiling from ear to ear like a dog waiting for you to throw the stick. There was a deep silence for a few moments. Shaun was afraid that someone would come through the door and he would have to explain everything, and he was not sure he could, actually. He was still holding the last tool he had been about to put somewhere safe. He looked at it and considered that the tool was not actually in his own universe, but somewhere else. Yet, there it was, in the same room.
"So," – a pause – "how come you are… me?" "Well, I'm not you, obviously, I'm just represented as you in this piece of our universe. I wouldn't fit this room otherwise." "Oh, I get it," lied Shaun. The other Shaun continued: "You see, your studies have allowed us to extrapolate your idea and re-create your own universe inside our own. This room is just the connection point; if you go through that door" – and he pointed to an old door that led to the emergency exit – "you will continue inside our version of your universe." "Wait a minute, how much of our world have you replicated?" "World? No, not just Earth, everything." A long pause, with wide open eyes. After a blink: "You mean, galaxies?" "Yes, yes, all of them. Your universe is quite compact for all it has to offer, and at first we were intrigued by that, but then we understood that the constraints you have were necessary, and indeed an important feature in generating such high-quality theoretical physics. And we decided to lend an unused part of our universe so you could not only teach us by broadcasting your knowledge, but also run tests in our own universe. Most of your experiments are now part of our day-to-day life, from vehicles to communication devices to life-saving machines. You, Shaun, have made our lives so much better; it was the least we could do."
"Is there anyone living in this version of our universe? I mean, human… hum… clones?" "No, no. We thought that would be improper. We do try to live in it, just out of curiosity, actually. There are some holiday packages to travel the wonderful places your universe has to offer. It's nothing we don't see in our own, but you know, travel agencies will always find an excuse to take your money, right?" He finished that sentence with a grin and almost a wink. His human mannerisms were very good, almost as if he had been observing for far too long, which made Shaun feel a little uneasy…
"Actually…" – the other Shaun continued – "maybe you could help us fix a few things on this side of the universe. Make things a bit more suited to the people from our side, what do you think?" With the rest of the team deep in tests, it would be weeks before they would even consider going back to the main lab, and nobody else would dare enter, after the several claims (in the private circle that knew him) that his lab would produce a black hole that would consume Earth and everything else.
Shaun decided to go in, at least to explore the very convincing copy of his own world. Going up the emergency exit, he took the lift all the way to the top, as expected. Outside, as expected, the early rays of the spring sun cast long shadows on the trees and buildings. The nearby cattle farm was empty, though. When the other Shaun noticed Shaun's curiosity, he added, "Ah, yes, you see, we decided not to include mammals, as they could eventually evolve into sapient beings and we'd be altering the history of our own universe. We didn't want to do that!" Shaun thought it was sensible.
For several days, Shaun listened to all the complaints about his own universe and how it would fit their physiology. Animals were turned green to photosynthesise, trees were made to reproduce in multiple ways at the same time, genetic combination of more than one pair of chromosomes was allowed, as was normal in this new universe, and many of the landscapes were altered to fit the gigantic stature of most of its inhabitants. Some parts were left untouched, or the travel agencies would lose a huge market, and some were shortened and simplified for the less elaborate, but still pseudo-sentient, species.
Shaun was feeling very well, like a demi-god, changing landscapes and evolution at his own whim, much like Slartibartfast. How fortunate he was, the only human – correction – the only being in his universe (as far as he knew) to play with a toy universe of his own.
After meeting the leaders of the populations of the alter-universe, receiving gifts and commendations (and a few kisses from the lasses), it was time to return to his own universe. Shaun felt a bit tired, but after drinking a bit of their energy beverage, he blasted back to alter-Earth in his new hyper-vehicle, to his own alter-lab. There, only alter-Shaun was waiting to say goodbye. A handshake and a wink were enough to mean "I'll be back, and thanks for all the fish", which Shaun took as a warm gesture rather than a creepy resemblance.
But as soon as Shaun stepped back into his own universe, he noticed some things were out of place. After being in an alter-universe for so long, it was only natural to misplace normal concepts, but some things were not normal at all, like a 10-metre-high corridor leading from his side of the room. Normally it'd be no more than two metres, and there was a very good reason for that: humans are not that tall!
He ran through it to find a huge door to a huge lift. In the lift were a few people still discussing what had happened. “It was definitely not that big! We must have shrunk!” said one, “No, that’s not possible, that’s Hollywoodian at best!” said the sceptic. Shaun took the lift up to the ground level, and ran to the farm nearby, fearing the worst.
And the worst had happened. The cows were green, and the houses huge. Being a bad theoretical physicist himself, and not being able to count on the alter-physicists for theoretical matters, Shaun hadn't taken into account that his machine was a duplication machine of entangled particles. That means, for the lay reader, that whatever happens to one invariably happens to the other, no matter where in the universe – or, in this case, the multiverse – they are.
That, thought Shaun, would take a bit more than a few days to fix… but he knew how, and he was looking forward to fixing it himself!
2010 – Year of what? |
| January 29th, 2010 under Computers, Life, OSS, Physics, rengolin, Unix/Linux, World. [ Comments: 2 ]
Ever since 1995 I have heard the same phrase, and ever since 2000 I have stopped listening. It was already the year of Linux in '95 for me, so why bother?
But this year is different, and Linux is not the only revolution in town… By the end of last year, the first tera-electronvolt collisions were recorded at the LHC, getting closer to seeing (or not) the infamous Higgs boson. Now the NIF reports a massive 700 kilojoules delivered in a laser pulse lasting a 10-billionth of a second (taken at face value, that is roughly 7×10^15 watts of peak power) which, if it stays on schedule, could lead us to fusion power!!
The human race is about to finally put the full stop on the standard model and achieve fusion power by the end of this year – who cares about Linux?!
Well, for one thing, Linux is running all the clusters being used to compute and maintain all those facilities. So, if it were for Microsoft, we’d still be in the stone age…
UPDATE: More news on cold fusion…
Phasers anyone? |
| November 21st, 2009 under Fun, Physics, rengolin. [ Comments: none ]
Star Trek seems a long way off, and yet a few news items have made the headlines, exposing achievements that might bring us closer to Roddenberry's universe.
Some researchers have just found anti-matter in an unusual place: lightning! It might be easier to produce a warp core than we originally thought. Given, of course, that subspace exists and can be reached by a matter/anti-matter reaction.
Other researchers, from the University of California, have just found a way to create a medical tricorder. That, for me, is the best achievement so far. Not to mention time travel, teleportation, quantum computers and faster-than-light communication, all already achieved since the series was created.
Finally, the University of Canada just made the first phaser. Though it's still only set to stun…
But I have to say that I’m a bit worried. The Temporal Prime Directive might be needed a bit sooner than the 29th century…
Ad infinitum |
| February 12th, 2009 under Algorithms, Devel, Life, OSS, Physics, rengolin, World. [ Comments: none ]
Quality is fundamental in any job, and software is no exception. Although fairly good software is relatively easy to do, really good software is an art that few can truly reach.
While in some places you see a complete lack of understanding of the minimal standards of software development, in others you see them in excess. Neither is any good. In the end, as we all know, the only thing that prevails is common sense. Quality management, all sorts of tests and refactoring are fundamental to agile development, but being agile doesn't mean being time-proof.
One might argue that, if you keep on refactoring your code, one day it'll be perfect. That if you have unit tests, regression tests and usability tests (all of them also constantly refactored), you won't be able to revive old bugs. That if you have a team always testing your programs, building a huge infrastructure to assure everything is user-proof, users will never get a product they can't handle. It won't happen.
It's like general relativity: the more speed you gain, the heavier you become, and the harder it gets to gain more speed. Unlike physics, though, there is a clear ceiling to your growth curve, from which you fall rather than stabilize. It's the moment when you have to let go, take what you've learned and start all over again, probably making the same mistakes and certainly making new ones.
It’s all about cost analysis. It’s not just money, it’s also about time, passion, hobbies. It’s about what you’re going to show your children when they grow up. You don’t have much time (they grow pretty fast!), so you need to be efficient.
Being efficient is quite different from achieving the best possible quality, and being efficient locally can also be very deceiving. Hacking your way through every problem, unworried about the near future, is one way of screwing things up pretty badly, but being agile can lead you to the same places, just over prettier roads.
When the team is bigger than one person, you can't possibly know everything that is going on. You trust other people's judgement, you understand things differently and you end up assuming too much about some things. Those little things add up to the amount of tests and refactoring you have to run for each and every little change, and your system will indubitably grind to a halt.
For some, time is money. For me, it's much more than that. I won't have time to do everything I want, so I'd better choose wisely, putting the correct weights on the things I love or must do. We're not alone, nor is everything we do just for ourselves, so it's pretty clear that we all want our things to last.
Time, for software, is not a trivial concept. Some good software never even gets a chance, while some really bad things are still massively used. Take the OS/2 vs. Windows case. But some good software (or algorithms or protocols) has also proven much more valuable and stable than anyone ever predicted. Take IPv4 networking and the Unix operating system (in new clothes nowadays) as examples.
We desperately need to move to IPv6, but there's a big fear. Some people have been advocating for decades that Unix is long deprecated, and still it's by far the best operating system available today. Is it really necessary to deprecate Unix? Is hardware really ready to take the best out of a micro-kernel written in a functional programming language?
For how long does a piece of software live, then?
It depends on so many things that it’s impossible to answer that question, but there are some general rules:
- Is it well written enough to be easily enhanced at users' request? AND
- Is it stable enough not to drive people away with constant errors? AND
- Does it really make a difference in people's lives? AND
- Are people constantly being reminded that your software exists (both intentionally and unintentionally)? AND
- Isn’t there something else much better? AND
- Is the community big enough to make migration difficult?
If you answered no to two or more questions, be sure to review your strategy; you might already be losing users.
There is another path where you might find your answers:
- Is the code so bad that no one (not even its creator) understands it anymore? OR
- Is the dependency chain so unbearably complicated and recursive that it fails (or works) sporadically? OR
- Has the creator left the company/group and won't bother to answer your emails? OR
- Are you relying on closed-source/proprietary libraries, programs or operating systems? OR
- Does your library or operating system have no support anymore?
If you answered yes to two or more questions, be sure to review your strategy; you might already be on a one-way dead-end.
One thing is for sure: the only thing that is really unlimited is stupidity. Some things are infinite, but limited. Take a sphere: you can walk along a great circle until the end of all universes and you won't reach an edge, but the sphere is limited in radius, and thus in size. Things are, ultimately, limited in the number of dimensions in which they're unlimited.
Stupidity is unlimitedly unlimited. If the universe really has 10 dimensions, stupidity has 11. Or more. The only thing that will endure, when the last creature on the last planet of the last galaxy is alive, is its own stupidity. It will probably have the chance to propagate itself and the universe for another age, but it won't.
In software, then, bugs are bound to happen, bad design will play its part, and there will be a time when you have to let your software rest in peace. Use your time in a more creative way, because for you there is no infinite time or resources. Your children (and other people's children too) will grow up quickly and deprecate you.
Calliper, chalks and the axe! |
| September 10th, 2008 under Algorithms, Devel, Physics, rengolin. [ Comments: none ]
Years ago, when I was still studying physics at university in São Paulo, a biochemist friend stated one of the biggest truths about physics: a physicist is one who measures with a calliper, marks with chalk and cuts with an axe!
I didn't get it until I had gone through some of the courses that teach how to use the available mathematical tools: extrapolate to the most infamous case, then expand in a series, take the first term and prove the theorem. If you get to the second term, you're doing fine physics (but floating-point precision will screw it up anyway).
Only recently have I learnt that some scientists are really doing a lot by going in the opposite direction. While most molecular dynamics simulations are going down to the quantum level, taking ages to get to an averagely reasonable result (by quantum standards), some labs are actually beating them in speed and quality of results by focusing on software optimizations rather than going berserk on the physical model.
It's not like the infamous Russian pen (which is a hoax, by the way); it's just the usual over-engineering we see when people are trying to impress the rest of the world. The Russians themselves can do some pretty dumb simplifications, like the cucumber picker, or over-engineering, like the Screw Drive, that in the end created more problems than they solved.
Very clearly, in software development the situation can be just as bad. The complexity of over-designed interfaces or over-engineered libraries can render a project useless in a matter of months. Working around them would grow the big ball of mud, and re-writing from scratch would take a long time, not to mention introduce more bugs than it fixes.
Things that I’ve recently seen as over-engineering were:
- Immutable objects (as arguments or in polymorphic lists): when you build some objects, feed them into polymorphic immutable lists (when composing a bigger object, for instance) and later need to change them, you have to copy, change and write back.
This is not only annoying but utterly ineffective when the list is big (and thousands of objects need to be copied back and forth). The way out is the bridge pattern: create several read-write implementations of your objects and lists and whatever else you have, but that also adds a lot of code complexity and maintenance.
My view of the matter is: protect your stuff from other people, not from yourself. As in “Library Consistent” or “Package-wise Consistent”.
- Abuse of "standard algorithms": OK, one of the important concepts in software quality is the use of standards. I've written about it myself, over and over. But, like water, using no standards will kill your project just as surely as abusing them.
So, if you create a std::set, which gives you the power of O(log N) searches, why on earth would you use std::find_if(begin(), end(), MyComparator()), which gives you a linear search? Worse still, that find was run before each and every insert! Inserting N elements into a std::set costs O(N log N), but the "standard fail-safe assurance" of a linear search before every insertion was turning the whole thing into O(N²). And for what? To assure that no duplicate entries were ever inserted into the set, which is precisely the guarantee the container already provides by itself.
All in all, the programmer was only trying to follow the same pattern over the entire code base. A noble cause, indeed.
Now, I'm still deciding which is worse: over-engineering or under-engineering… Funny, though, both have very similar effects on our lives…
Silly project of the week: molecule dynamics |
| July 9th, 2008 under Algorithms, Devel, Physics, rengolin. [ Comments: 1 ]
This week's project is a molecular dynamics simulation. Don't get too excited: it's not using any state-of-the-art algorithms, nor is it assembling 3-dimensional structures of complex proteins. I began with a simple carbon chain using only Coulomb's law in a spring-mass system.
The molecule I'm using is the simple 2D carbon chain pictured on the project page.
The drawing program is quite simple and won't work for most molecules, but for simple 2-dimensional molecules (max. of 3 connections per atom) it kind of works.
Later on, with the program running, each atom "pushes" all the others electrically and the springs "pull" them back. A good way to solve that is to write the equation of motion m · d²x/dt² = q1 · q2 / x² − k · x (where x is a vector) and integrate it numerically using Runge-Kutta.
But this is my first OpenGL program, so I decided to go easy on the model and actually see it pseudo-working with an iterative simulation following the same equation above. This picture is a frame after a few iterations.
Quoting its page: “As this simulation is not using any differential solution, the forces grow and grow until the atom becomes unstable and break apart. Some Runge-Kutta is required to push the realism further.”
The webpage of the fully-functional prototype is HERE.
Book: Flat and Curved Space Times |
| May 8th, 2008 under Books, Physics, rengolin. [ Comments: none ]
The first time I read this book was during my special relativity course at university. I couldn't understand a thing the teacher said (probably because his explanations were always: "you won't be able to understand this") and I needed to replace the 35% grade I got in the first exam to complete the course.
Well, hopeless as I was, I headed to the library in search of a magical book (other classmates were just as helpless) and found this one. The magic of it is that, instead of forcing the Lorentz transformations down your throat first and then explaining the basic principles of relativity, it simply shows the topology of the space and assumes that the speed of light is constant (pretty much the same path Einstein took in the first place).
So, the first chapter has no equations whatsoever, only graphics of light waves going back and forth, from which he derives the light cones automagically: what happens to the "world" at high speeds and how it affects our sense of reality. It goes on through all the kinematic principles using only Newton's equations and gamma. The Lorentz transformations only appear in the fourth chapter.
After that, not only could I understand relativity as a whole, but I also got a 90% grade in the final exam! It's an old book (1988), but time has no meaning for a very good book, especially on a subject that hasn't changed much in the last few decades.
I recommend it to physics wannabes as well as lay people with little background in maths, and if your teacher is as hopeless as mine was, ignore him and read this book.
Click here for the US version.
Serial thinking |
| March 11th, 2008 under Algorithms, Computers, Devel, Fun, Physics, rengolin. [ Comments: 2 ]
I wonder why the human race is so tied up with serial thinking… We are so limited that even when we think in parallel, each parallel line is serial!
Take the universe. Every single particle in the universe knows all the rules (not many) that it needs to follow. On their own, the rules are dumb: you have mass and charge, and you can move freely around empty space. But join several particles together and they form a complex atom with many more rules (combined from the first ones) which, combined again, form molecules, which form macro-molecules, which form cells, which form organs, which form organisms, which form societies, etc. Each level makes an exponential leap in the number of rules from the previous one.
Then the stupid humanoid looks at reality and says: "That's too complex, I'll do one thing at a time". That's complete rubbish! His zillions of cells are each doing zillions of different things, his brain is interconnecting everything at the same time, and that's the only reason he can breathe, wee and whistle at the same time.
Now take machines. Industrialization revolutionized the world by putting one thing after another; Alan Turing revolutionized the world again by putting one cell after another on the Turing tape. Today's processors can only think of one thing after another because of that.
Today you have multi-core processors doing different things, but each one is still doing things serially (Intel's HyperThreading is inefficiently working in serial). Vector processors like graphics cards, and big machines like the old Crays, were doing exactly the same thing over a list of different values, and quantum computers will do the same operation over an entangled bunch of qubits (which is quite impressive), but still, all of it is serial thinking!
Optimization of code is reducing the number of serial steps; parallelization of code is putting smaller sets of serial instructions to work at the same time; even message passing is serial on each node, and the same goes for functional programming and asynchronous communication. Everything is serial at some point.
Trying to map today’s programming languages or machines to work at the holographic level (such as the universe) is not only difficult, it’s impossible. The Turing machine is serial by concept, so everything built on top of it will be serial at one point. There must be a new concept of holographic (or fractal) machine, where each part knows all rules but only with volume you can create meaningful results, where code is not done by organizing the high-level rules but by creating a dynamic for the simple rules that will lead to the expected result.
Such a holographic machine would have a few very simple “machine instructions” like “weight of photon is 0x000” or “charge of electron is 1.60217646 × 10^-19”, and time would define the dynamics. Functions would be pre-defined arrangements of basic rules that must be stable, otherwise they’d blow up (like too many protons in a nucleus); but they wouldn’t blow up the universe (as in throwing exceptions), they would blow up the group itself, which would break into lots of smaller groups, down to the indivisible particle.
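A cheap way to get a feel for “every part knows the same tiny rule, meaning only emerges with volume” is a cellular automaton. The sketch below is of course not a holographic machine, just a one-dimensional toy: each cell looks only at itself and its two neighbours and applies Rule 110 (which, fittingly, is known to be Turing-complete), yet interesting structure appears only when many cells act at once.

```python
# Rule 110: the rule number's bits, indexed by the 3-bit neighbourhood
# (left, centre, right), give the next state of each cell.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start with a single "particle" and let the dynamics do the work.
row = [0] * 31 + [1]
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

No cell is in charge, and there is no program in the usual sense: the “code” is just the choice of rule plus the initial arrangement, exactly the kind of dynamics-first programming the paragraph above imagines.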
The operating system of such a machine should take care of the smaller groups and try to keep the groups as big as possible by rearranging them in a stable manner, pretty much as a god would do to its universe when it goes crazy. Programs running on this operating system would be able to use God’s power (the GodOS libraries) to manipulate the groups at their own discretion, creating higher beings able to interact, think and create new things… maybe another machine… maybe another machine able to answer the ultimate question of Life, the Universe and Everything.
I know letting the machine live would be the proper way of doing it, but that could take a few billion years, and I’ll be quite tired after engineering the machine and its OS; I’ll just want to get the job done quickly after that…
There is a big fuss about NP-complete problems, those that (as far as we know) can’t be solved in a reasonable (polynomial) time. The classic example is the travelling salesman problem, where a salesman has to visit each of a number of cities. Which path visits all of them over the smallest total distance? With 3 or 4 cities it’s quite simple, but with lots, say 300, it becomes impossible for normal (serial) computers to solve exactly.
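To see why the serial approach gives up, here is a brute-force sketch in Python: it simply tries every tour. For four cities that is three orderings; for 300 cities it is 299! of them, which no serial machine will ever finish.

```python
from itertools import permutations
from math import dist, factorial

def shortest_tour(cities):
    """Brute force: check every ordering of cities after a fixed start."""
    start, *rest = range(len(cities))
    best = None
    for perm in permutations(rest):
        tour = (start, *perm, start)
        length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

# Four cities on the corners of the unit square: the best tour is the
# square itself, length 4.0.
length, tour = shortest_tour([(0, 0), (0, 1), (1, 1), (1, 0)])
print(length, tour)

# For 300 cities the same loop would run factorial(299) times, a number
# with over 600 digits -- longer than the age of the universe allows.
print(len(str(factorial(299))))
```

Every known exact algorithm does better than this naive loop, but only by degrees; none escapes the underlying explosion.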
Another rather fancy problem is the Steiner tree problem: given some points, connect them all using the shortest total length of string. This is as hard as the problem above and can take forever (longer than the age of the universe) for relatively small sets of points; but if you use water and soap, the problem is solved almost instantly.
Of course, soap films cannot calculate the last digit of pi, but because every part of the film knows a small list of basic rules (surface tension, set by the soap molecules and the opposite charges between atoms), every particle of the machine works together at the same time, and the result is reached because the dynamics of the system have their least energy (the shortest total length of string) in that state.
It’s true that today’s computers are very efficient at a wide range of problems (thanks to Turing proving which classes of problems his tape could solve), but there are some they can’t handle, given that we only have a few billion years of universe left to spare. Such problems could be solved if there were a holographic machine.
More or less what I said was put into practice here. Thanks, André, for the link; this video is great!
How close is nano-computing? |
| October 25th, 2007 under Computers, Nano Tech, Physics, rengolin. [ Comments: 1 ]
In September, Sunny Bains wrote Why Nano still macro? and I’ve been thinking about it every once in a while since.
Recently, a study at the University of California showed how to create a demodulator using nanotubes. There have already been advances in memory devices (such as this and that) and also in batteries, but all of them, as Sunny notes, try to build small structures following the design of big things.
Quantum computation nowadays has exactly the same problem: quantum effects in a classic assembly, big, clumsy and very expensive. If it took a quantum effect (the transistor) to make classical computation cheap and available, what will it take to make quantum computers cheap? A superstring effect? Something messing around with the Calabi–Yau shape of the six additional dimensions?
Anyway, back to nanotech: building a nano-battery is cool, but using ATP as the primary source of energy would be much cooler! Using the available nano-gears and nanotubes to make a machine is also cool, but creating a single 2,3 Turing machine (recently proven to be universal) would be way better!
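To show just how little machinery such a processor needs, here is a toy tape machine in Python. This is not the 2,3 machine itself, just a two-rule unary incrementer I made up for illustration, but the simulator underneath it is the whole of what a Turing machine is: a tape, a head, a state, and a rule table.

```python
def run(tape, rules, state="scan", pos=0, halt="halt"):
    """Run a Turing machine until it reaches the halting state."""
    tape = dict(enumerate(tape))  # sparse tape, blank symbol is 0
    while state != halt:
        symbol = tape.get(pos, 0)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    return [tape[i] for i in sorted(tape)]

# Unary increment: skip the 1s, append one more, halt. Two rules.
# (state, symbol) -> (write, move, next state)
rules = {
    ("scan", 1): (1, +1, "scan"),
    ("scan", 0): (1, +1, "halt"),
}
print(run([1, 1, 1], rules))  # [1, 1, 1, 1]: three becomes four
```

The 2,3 machine has only two states and three symbols, barely more than this, and is still universal; that is the scale of “processor” a nano-machine would need.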
Once you have an extremely simple processor like that, a nano-modem, some storage and ATP as food, you can do whatever you want, for as long as you like, inside any living being on Earth. Add a few gears to make a propeller and you’re mobile! ;)
Of course it’s not that simple, but most of the time, saying that something is viable means exactly the same as saying that it’s classic, as in boring and clumsy and expensive and brute force… well, you get the idea…