Build your own Flower Robot!
May 29th, 2007 under Fun, rvincoletto, Technology.


Now you can build your own robot! Carnegie Mellon University’s Robotics Institute, in the USA, has released its recipes for building robots at home.

Using TeRK (Telepresence Robot Kit), you can find all the pieces you need, and even adapt other parts to build your own robot.

Right now, they have 4 recipes:

Qwerkbot Classic (The Qwerkbot Classic is the simplest mobile robot that you can build using a Qwerk processor. Utilizing the holes in the Qwerk enclosure as mount points for two motors and a caster, the Qwerkbot recipe literally turns your Qwerk into a robot.)

Qwerkbot+ (The Qwerkbot+ adds a pan-tilt head to allow independent motion of the camera and robot base. This version is somewhat more challenging to build than the Qwerkbot Classic.)

AC Power (The AC Power Adapter allows you to power a Qwerk from an ordinary AC wall outlet.)

Flower (The Flower is a stationary robot with seven degrees of freedom. Once you have built the Flower, you can use TeRK’s Robot Universal Remote and Flower Power software to program its movements. You can program your Flower to rise or wilt and program the motions of its petals. Because the Flower is equipped with IR sensors on three of its petals, it can track objects moving in front of it. It can even catch a lightweight ball.)

While all the other bots are for beginners, the Flower is considerably more complex, and you can spend 10 hours building it.

But, how cute is that!

Flower Robot

They also provide software for controlling your TeRK robot, like the Flower Power application to program your Flower Robot.

Flower Power Software

The robot’s secret is its internal electronic controller, the Qwerk: a Linux-based microcomputer that controls all the cameras, USB devices, motors and sensors. The robot’s software is open source, and you can program it in virtually any computer language.

Oh, yes! There is bad news… For now, they sell the kit only in the US. By the way, the total cost of parts for the Flower Robot is $725.00.

The desktop I want…
May 20th, 2007 under rengolin, Technology, Thoughts.

Nobody is happy with their computer, because new stuff comes out every day. Every time I want more, I stop and think that the next day I’ll feel the same way. But what if I could have the dream desktop? What would it be?

To define it, I started with the things I really hate:

  • Cables
  • Fan noise
  • High resolutions used only to squeeze more windows onto the screen
  • Uncomfortable chairs
  • The mouse

Then I thought of a powerful, silent computer: definitely multi-core, and thin… really thin. Monitors all around, no cables at all, and a very comfortable seat (not a chair) is essential. The mouse is useless, and the keyboard should be as slim as possible and almost silent.


Cables are the worst thing about a desktop. Notebooks are much better in that respect, but they are still far too uncomfortable to use, always looking down. Wireless power is getting closer to reality but is still far from commercial application; unfortunately, it is also the only true way of getting rid of all the cables.

If you had a wireless power transmitter in your wall, every part of the desktop (box, monitors, speakers, keyboard) could benefit. Even exempting only the power cords from the wireless rule would still leave half of a desktop’s cables in place.

The rest is very simple: wi-fi, bluetooth and infrared can take care of all the data, from keyboards and speakers to monitors and 3D virtual goggles.

Thin as a notebook

Notebooks are quite good at holding everything in a very narrow space, but there are two main problems with the standard approach, and one derives from the other: they are always underpowered, because it’s hard to keep powerful CPUs cool and quiet.

Having such a desktop but settling for a Pentium M would be a shame. Multi-processor / multi-core machines are standard nowadays, so fewer than two processors is not an option; I would say something around 16 cores, either 4 quad-cores or 8 dual-cores, it doesn’t matter. This also helps keep the machine cooler, because most of the time you’ll be using fewer than 16 cores, so the heat will be spread out and more easily dissipated.

Also, fans are too noisy. You can get quieter fans at lower RPMs, but they are huge and won’t fit in a very slim desktop. One solution is water cooling, but that would add more tubing and conflict with rule number one. Another is liquefied gas (such as nitrogen) in a closed cycle, but to keep things thin it would have to be very well planned.

Monitors? Mouse?

On monitors: I have had the opportunity to work with multiple monitors and liked it, but their placement on your desk relative to the keyboard is never perfect, and you end up using one monitor much more than the other.

Also, very high resolutions (1600×1280 or higher) are pointless. They give you the impression that you can fit more things on your screen, but you end up maximizing your browser and console window instead of fitting in more consoles and browsers. Multiple desktops on Unix solve almost every problem but one very specific one: the quick glance.

For development, it would be good to have multiple source listings open that you could search and browse separately but view together with the other sources. For design, having all the tools on one screen and the image on the other is great, as is rendering on one screen while still editing on the other (something multiple desktops don’t solve if the rendering runs full-screen).

For games, too, it’s very important to be able to look at things without clicking the bloody hat-switch, waiting ages to pan to the point you want, and finding your aircraft (or car) completely off track when you look back.

The best thing would be 3D virtual goggles with movement detection, plus one additional setting: whenever you move your head, the desktop moves the other way. On Unix you can have two resolutions, one for the real thing (the monitor) and another for the desktop, which can be much higher. This is especially good when you have virtual goggles and can look around your desktop.
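As a side note, the two-resolutions trick mentioned above is available in X11 through the Virtual display setting: the desktop can be larger than the physical mode, and the view pans across it. A minimal sketch for the Screen section of /etc/X11/xorg.conf (the identifier and resolutions are just examples):

```
Section "Screen"
    Identifier "Screen0"
    SubSection "Display"
        Modes   "1280x1024"      # the real monitor resolution
        Virtual 2560 2048        # the (larger) desktop resolution
    EndSubSection
EndSection
```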

It also fits games and parallel coding perfectly, as well as design. You can zoom in and out, and focus the window you’re looking at (like focus-follows-mouse, but for your eyes); and if there is eye-movement detection as well, you can truly abandon the mouse forever!

The keyboard

The keyboard is essential, for it’s where your hands will be 100% of the time (we have just retired the mouse). Good support for your wrists and elbows is required, but the noise and the touch are essential too. Some notebook keyboards have very good response and noise levels, but they’re too compact. A wider keyboard, with the keys closer together and a bit less noise, would be perfect.

A support arm for the keyboard is also a good idea. I hate forcing my arm to match the keyboard’s position, especially if I want to settle into a very comfortable seat. This support should be a strong telescopic arm, with all the degrees of freedom you might need, and it should fit on your seat without being a pain when you want to get up for a tea.

Some may say that in such a futuristic computer the keyboard is also redundant, since one could use voice commands to type, but I find that very annoying. I refuse to talk to a computer that won’t truly understand me. The day it really knows what I’m talking about and acts intelligently on what I say, I may start talking to it; before that, a keyboard is all I need.

Comfy seat

The seat (not chair) is one of the most important parts of the whole desktop. Unlike a notebook, the desktop won’t move much, so you should set it up somewhere really decent. Because you’re wearing virtual goggles, have no mouse, and the keyboard is on a telescopic arm, you can freely lie down in whatever position suits you best.

The seat should be made of real leather, the room temperature should be controlled (otherwise you’ll sweat to death), and there should be speakers near your ears, beside the headrest. Support for your feet and legs is also very important, and all of this should be controllable in software from your computer.

Very good algorithms have been developed to produce surround sound from only two speakers, and with two high-quality speakers you don’t even need a subwoofer; that’s more than enough for really good sound quality.

Of course, the chair must be wireless as well!

Document to understand
May 16th, 2007 under Devel, rengolin, Technology, Thoughts.

You don’t always get the opportunity to write fresh new code. Sometimes you have to face a huge codebase, be it one big monolithic program or thousands of small scripts and programs; either way, it will definitely be a nightmare. So, how do you avoid the stress and get through it with the fewest scratches possible?

It all depends on what you can do with the code…

If the previous programmers were nice to you, they will have prepared test cases, documentation, doxygen comments, inline comments, a wiki with all the steps to compile, test and use the software, and a plethora of resolved (and explained) problem cases. I have yet to see that happen…

As Kernighan argues in ;login: (April 2006), writing tests, not code, should be the programmer’s primary task. With existing code, documentation should be the primary task instead of direct changes to the program.

If the program is already documented, your first task should be to read the docs and the code side by side; but odds are you won’t understand them properly. There’s nothing wrong with you: programmers tend to document what they don’t understand and fail to document what they do. But if you start changing the old docs, you might end up screwing everything up, so here’s a quick tip:

  1. Copy the code to a temp area so you can play with it at will
  2. Compile the code in your mind: read it, work out where every variable comes from, trace backwards and check, and add a comment before each important line saying what it does
  3. Group lines into domains and give each group a bigger comment. This is especially important for balls of mud where the whole code sits in a single function
  4. Check against the old docs and see whether what’s written is actually what’s happening. Documentation tends to go out of date very quickly

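As a minimal sketch of steps 1 and 2 (all file and directory names here are made-up examples), the point is to annotate a throwaway copy, never the original:

```shell
#!/bin/sh
set -e

# A stand-in for the legacy code: a hypothetical one-line "ball of mud"
mkdir -p legacy-app
printf 'tar czf backup.tgz data && rm -rf data\n' > legacy-app/nightly.sh

# Step 1: copy the code to a scratch area so you can play at will
rm -rf scratch
mkdir scratch
cp -r legacy-app scratch/

# Step 2: record what you work out as a comment before each important line
{
  echo "# Archive the day's data, then clear it for the next run"
  cat scratch/legacy-app/nightly.sh
} > scratch/legacy-app/nightly.annotated.sh
```

The annotated copy is what you later check against the old docs (step 4); the original stays untouched until you actually understand it.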

One side effect of documenting is that eventually you’ll understand the code better than the original coders did. Let me explain: the original coders had one objective in mind, fixing the bug. Most of the time, and especially with balls of mud, the fixes tend to be dirty, written as temporary, and run forever. The original coder usually didn’t know how that bugfix fitted into the whole; he or she just knew it worked.

You, on the other hand, know the whole in a way no other coder has known it before, because you have documented it from start to finish and you know what’s at stake in every change.

But that power has a problem: the day you stop documenting, you become the “old coder”, stop understanding the code as a whole, and start writing bad bugfixes yourself! So be diligent, be patient, and above all stay loyal to your greater purpose: to code better.

Back to the bug fixes… Once you find them, you’ll notice that most of them are useless or redundant, or that many can be replaced by a very small, simple shell script. Once you’ve learnt all the steps of a program, you can visualize its flow, every available shortcut and optimization becomes clear, and you now have the power to change it.

But after all those changes, a good chunk of the documentation becomes useless, especially the parts you made obsolete yourself. So why bother? I’ll tell you why: if you hadn’t documented in the first place, you wouldn’t have been able to optimize that deeply; and throwing away most of your past documentation will (or should) encourage you to document again, at a higher level. Doing so, you’ll see things you couldn’t see before (even when you held the whole system in your mind), because the system wasn’t organized. That’s the next step!

Every time you document everything and hold the whole system in your mind, you see all the available optimizations; this forces you to throw away old code and docs and redo them, which in turn brings you back to the same point, one step further.

The pure act of documenting allows you to optimize. Throwing away old docs allows you to go one step further.

Documents in tree

That said, simply throwing away the old docs might not be a good idea, because people will not understand why you optimized things that way and will probably go back to the original code once you’re gone. Keep all the documents, organize them in a tree, and show only the high-level docs at first; whoever wants to know more can go deeper into the tree to find out how things used to be and why you made them the way they are.

It will also help you understand your own optimizations, and change them in the future when they no longer make sense.

Wasting time

Some will argue that you’re wasting time documenting when you could understand the code just by looking at it (use the source, Luke), but that just isn’t true. People, like computers, have a short-term and a long-term memory, and most codebases won’t fit entirely in your short-term memory (if they do, get a better job).

Navigating your long-term memory is expensive and will switch contexts in your short-term memory, so you won’t be able to make connections properly. But if you have everything written down in an organized structure, it’ll be much easier for your brain (or any machine) to reason about the code.

It’s the same as testing: you may spend half your project’s time writing test cases, but that could (and probably will) save you hundreds of hours (and probably your job) if something goes wrong, and something very likely will.

Finally, writing test cases, documentation and proper code is not only part of your job; it’s part of your greater purpose in life: to code better. No job in the world is worth writing bad code for; it’s like lying to keep the job… just don’t do it.

Middle Earth: Proxy
May 8th, 2007 under Distributed, rengolin, Technology.

When updating the nodes I have to download the same packages several times (N times for N nodes), so a good idea is to have a proxy that downloads each package once and lets all the nodes fetch the local copy. For that we have good old squid.

On the Master node:

$ sudo apt-get install squid

Then edit the config file (/etc/squid/squid.conf on Debian). It’s rather huge, but search for acl localhost and add the lines below:

acl cluster src 192.168.0.0/24
http_access allow cluster

assuming your cluster is on the 192.168.0.0/24 subnet (adjust the address to match your own).

Now, on each node (including the Master), set the environment variables (in .bashrc):

export http_proxy="http://master-node:3128/"
export ftp_proxy="http://master-node:3128/"
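One caveat: variables exported in .bashrc are typically not seen by commands run through sudo, which resets the environment by default. As a sketch of an alternative, assuming a Debian-style apt (the file name below is just an example), you can point apt at the proxy directly:

```
// /etc/apt/apt.conf.d/01proxy -- example file name
Acquire::http::Proxy "http://master-node:3128/";
Acquire::ftp::Proxy "http://master-node:3128/";
```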

It’s also a good idea to increase the maximum cached object size from the default 4 MB to, say, 400 MB, because the idea is to cache deb packages, not webpages. You can also limit the global size of the cache (to 1 GB, say), so that old packages get evicted.

# Per object: only cache files between 64 KB and 400 MB
maximum_object_size 409600 KB
minimum_object_size 64 KB
# Global: a 1000 MB cache under /var/spool/squid
# (16 and 256 are the first- and second-level directory counts)
cache_dir ufs /var/spool/squid 1000 16 256

Restart squid and you’re ready to go:

$ sudo /etc/init.d/squid restart


