“computering” – a different presentation style

Keynote is nice, but it’s not well suited for technical presentations. Live coding, on the other hand, is hard and often fails horribly because of minor typos and the like.

Some months ago I visited the Python Usergroup Montreal and saw an awesome live coding session from Rory Geoghegan. He was typing incredibly fast and 100% correct! Watching him, at first I was baffled, then I was amazed, and then I got suspicious…

Rory used Keyboard Cat to replay his code!

The “computering” gem

If you are into programming screencasts like me, you should definitely watch the PeepCode Play by Play episode with Aaron Patterson and Corey Haines. In that episode they have a pairing session on some Ruby code. At some point they drift off and start computering with dinosaur-hands:

(Find all animated GIFs at the PeepCode Blog)

Based on Keyboard Cat and inspired by the dinosaur-hands, I created computering. With the computering command line application you can pretend to type amazingly fast and 100% accurately, even in dinosaur-hands-mode!

computering is just a couple of days old, but you can already build interactive command line presentations with it using a simple Ruby-DSL. I used it for our last Ruby Usergroup Meeting and held a talk called OMG Rails4!
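The actual computering DSL may look different (check the project’s README); purely as an illustration of the Keyboard Cat trick, here is a self-contained sketch in plain Ruby: the presentation is a prepared script, and every “keypress” just emits the next few characters, so any random mashing looks like flawless typing.

```ruby
# NOT the computering API -- just a sketch of the keyboard-cat idea:
# replay a prepared script one small chunk per keypress.
SCRIPT = <<~CODE
  class Talk
    def omg_rails4!
      puts "so shiny!"
    end
  end
CODE

# emit roughly three characters per "keypress"
SCRIPT.scan(/.{1,3}/m).each do |chunk|
  # $stdin.getc  # in a real presentation, block here until any key is hit
  print chunk
end
```

Since the chunks are just the original string sliced up, replaying them always reproduces the script exactly, no matter how sloppily you hammer the keyboard.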


just don’t do it – the art of being lazy

One thing I find myself doing a lot is skipping supposedly required programming tasks. From what you can find on the Interwebs, only a lazy programmer is a good programmer!

This does not mean that you should be doing nothing though… it’s all about getting things done instead of getting stuck in the details!

don’t test your code

I am a huge fan of BDD and Test First. But it’s hard to write all those tests when you are lazy. That’s why people tend to write happy path tests only.
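To make that concrete, here is a tiny made-up Ruby example of a happy-path-only test: the expected case is covered and looks green, while the edge case nobody was in the mood for goes untested.

```ruby
# a tiny method with an untested edge case
def average(numbers)
  numbers.inject(0.0, :+) / numbers.size
end

# happy path test: everything looks fine
raise "happy path broken" unless average([1, 2, 3]) == 2.0

# the lazy part: nobody wrote a test for the empty array --
# it silently returns NaN instead of failing loudly
average([]).nan? # => true
```

Whether that missing test matters depends on the code: in a throwaway prototype, probably not; in billing code, very much so.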

I think there are also a lot of areas where writing tests does not even make sense, e.g. when you are creating prototypes, learning new frameworks or whole languages, or trying to sketch out an object model in code. In those cases I don’t skip tests altogether, but I defer them to the point where I am confident about what I am going to create.

When you look at large Rails applications, they usually have several layers of tests: unit, functional, integration etc. Unfortunately, every layer of tests adds a layer of complexity. I have seen a lot of projects where people write Capybara specs or Cucumber features that try to cover the whole application logic! Those test suites are often brittle and slow, and maintenance is a huge PITA. I’d rather remove those tests and stick with unit tests that might not cover all integration points, but are easy to maintain, easy to understand, and fast.

Use full stack tests only for mission critical features. If you have a good monitoring and deployment process in place, fixing bugs can be cheap. What’s worse, slow and fragile tests, or no tests at all?

don’t refactor your code

I have done a lot of refactoring, especially in large legacy systems, sometimes with no tests as a safety net. Refactoring is not a value in itself. If a piece of software does its job, why would you even think about refactoring it?

Clean code is something you want in every software project, because clean code promises to be easier to change and maintain. So it’s not actually about clean code, but about changeable code. Refactoring code just to make it beautiful is waste; instead, strive to make it more modular and, in this regard, easier to change in the future.

I have set up some strict rules for when I refactor a piece of code:

  • I am changing it anyways in order to add a feature or fix a bug.
  • I am totally pissed off by the way it is implemented!
  • I am sitting on a plane and have nothing else to do…

don’t optimize your app

I don’t know how much time I have wasted in discussions with other programmers, claiming that this or that part of an application needs some optimization. Most of the time it’s even worse: people implement optimized software right away… This thing, also known as Premature Optimization (or, as I call it, Premature Ejaculation), is most often the source of the No. 2 reason why I refactor code (I’m pissed off).

Optimized code tends to be more complex or less readable, and in 99% of all implementations there is a simple, clean, readable and actually faster way to implement it anyway.
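A made-up illustration of the pattern: the “hand-optimized” loop below is longer and easier to get wrong, and it buys nothing measurable over the idiomatic one-liner until a profiler says otherwise.

```ruby
# "optimized": manual while loop with explicit index bookkeeping
def sum_optimized(numbers)
  total = 0
  i = 0
  while i < numbers.length
    total += numbers[i]
    i += 1
  end
  total
end

# simple and readable -- and fast enough until proven otherwise
def sum_simple(numbers)
  numbers.inject(0, :+)
end

numbers = (1..1000).to_a
sum_optimized(numbers) == sum_simple(numbers) # => true
```

If the simple version ever does show up in a profile as a hotspot, that is the moment to optimize it, not before.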

Especially the complexity of optimized code, or the complexity that frameworks introduce in order to be super fast, comes with a big downside: decreased productivity! It feels like I have spent a whole developer-year just fixing bugs related to the Rails Asset Pipeline, and I am not the only one.

don’t think out of the box

It’s a good thing when you are confident with your programming language, know every shortcut of the IDE you use, run a database that you deeply understand and are able to tweak, and your operating system has been up for half a year because you know which buttons to push to keep it running!

There is no value in introducing new technologies to your application stack until you have a real need for them. Stability and know-how are often underestimated! I think it makes absolute sense to squeeze the last drop of performance out of your existing stack; keep it as long as you can!

It’s a different thing for programmers though. Playing with new frameworks, databases and operating systems, and coding in different types of IDEs, is key to expanding your horizon. Learning a new programming language each year is one of those things that keep you up to date and let you think about coding problems from different perspectives. But do you want all those languages, frameworks and databases in your application stack? Is it worth introducing heterogeneous systems to your company’s infrastructure?

don’t automate

There are tons of tasks in your daily workflow that could be automated. Automation is a key factor in terms of productivity. If you want to have fast and sound processes in your business, automation is probably the way to go.

On the other hand, I think it’s fine to be doing manual tasks! It’s always a tradeoff between the value that the manual task provides and the cost of creating an automated tool, plus its maintenance and support.

Is it worth setting up a whole CI infrastructure, when all you have is just a single app to test? Well, it depends… As a lazy developer, I would look for services that do the heavy lifting for me.


Create something, put some sugar and cream on top, ship it!

OSS licensing

There are so many different license models out there in the open-source community. I don’t know what all the fuss is about… I hate legal stuff, just ship it and ask for permission later!

Nevertheless, I have a license for my open source projects on GitHub as well, and it’s just there to say FU to everyone who actually cares about licensing. It’s called THE BEER-WARE LICENSE:

 * ----------------------------------------------------------------------------
 * "THE BEER-WARE LICENSE" (Revision 42):
 * <[email protected]> wrote this file. As long as you retain this notice you
 * can do whatever you want with this stuff. If we meet some day, and you think
 * this stuff is worth it, you can buy me a beer in return Poul-Henning Kamp
 * ----------------------------------------------------------------------------


I modified it a little to come closer to my own needs (markdown + more beers!):

## License
"THE (extended) BEER-WARE LICENSE" (Revision 42.0815): [phoet](mailto:[email protected]) contributed to this project.

As long as you retain this notice you can do whatever you want with this stuff.
If we meet some day, and you think this stuff is worth it, you can buy me some beers in return.


Pulling strings on Raspberry Pi

I got my Raspberry PI Starter Kit this week. The kit is very nice, as it comes packaged with everything you need to get your PI up and running ASAP. Just plug in the SD card, connect the Ethernet cable and the USB power supply, and you’re ready to go!

Booting Up

After the PI has booted from the provided image on the SD card, it is accessible via SSH:

ssh [email protected]

It’s a good idea to copy your SSH public key to the machine, so that you do not have to type in the passphrase every time:

ssh [email protected] "mkdir -p .ssh"
scp ~/.ssh/id_dsa.pub [email protected]:.ssh/authorized_keys

If the provided Raspberry PI distro is outdated, you can update it with the built-in admin tool:

sudo raspi-config 
=> select update, exit
sudo reboot

Bootstrap for Puppet

As an experiment, I wanted to provision my PI with Puppet. The PI comes packaged with Python, but not Ruby, so you need to install Ruby in order to run Puppet.
There are several Ruby versions available as Debian packages, but I wanted to try out rbenv just for fun. This is only an option if you have plenty of time, because building Ruby from source takes roughly 2 hours…

# update aptitude
sudo apt-get update -y
# install some basics like git and support for ruby to compile and puppet to work
sudo apt-get install build-essential zlib1g-dev libssl-dev libreadline-dev git-core curl libyaml-dev -y

# clone the rbenv repo and setup the env
git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.profile
echo 'eval "$(rbenv init -)"' >> ~/.profile
exec $SHELL -l

# add the ruby-build plugin and install a ruby 1.9.3
git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
rbenv install 1.9.3-p327
# set the ruby as the global ruby version
rbenv global 1.9.3-p327
rbenv rehash
# check if it's working fine
ruby -v

# add the rbenv sudo plugin
git clone git://github.com/dcarley/rbenv-sudo.git ~/.rbenv/plugins/rbenv-sudo

With a running Ruby, installing Puppet is just a matter of minutes, as it is distributed as a Ruby gem. It’s a good idea, though, to disable RDoc and RI documentation generation to speed up the installation:

# puppet needs its own user and group
sudo useradd --comment "Puppet" --no-create-home --system --shell /bin/false puppet
# disable documentation generation for gems
echo "gem: --no-ri --no-rdoc" > ~/.gemrc
# install the gem
gem install puppet
# make puppet executables available through rbenv
rbenv rehash
# check if facter is installed properly
facter

If facter complains about a problem with the fqdn fact, just add a real hostname to /etc/hosts:

sudo vi /etc/hosts
=> add: raspberrypi.nofail.de raspberrypi

First Puppet run

After that, Puppet should run without warnings and you can start writing your first manifest:

mkdir /home/pi/puppet
cd /home/pi/puppet
vi base.pp

Add this to base.pp to install VIM on the Raspberry PI:

package { "vim":
  ensure => installed,
}

This manifest can be applied with rbenv sudo:

rbenv sudo puppet apply base.pp

Et voilà, VIM is installed on your PI.


Working with the tools on my Mac is cool, so sharing a directory on the PI is a nice way of editing locally and running stuff on the server. Starting from a basic Samba installation, this is what needs to be done on the PI:

# install samba
sudo apt-get install samba samba-common-bin
# update config
sudo vi /etc/samba/smb.conf

In /etc/samba/smb.conf, uncomment the line security = user and add the following section:

[Puppet]
  comment = Puppet share
  path = /home/pi/puppet
  writeable = yes
  guest ok = no

Once that is done, make the shared directory writable, add a Samba user and restart the service:

# make shared dir writable
chmod 777 /home/pi/puppet
# add the user for access over smb
sudo smbpasswd -a pi
# restart smb
sudo /etc/init.d/samba restart

Using ⌘+k on the Mac connects to a remote server. Enter smb://[email protected] and connect to the Puppet share. You can now access your files via open /Volumes/Puppet/.


In order to access the PI over the internet, subscribing to a free dynamic DNS service like NO-IP is a nice way to get a static hostname for name resolution.
NO-IP has some tools available for download that help you keep the changing IP up to date. Since the provided binaries did not work on the PI, the NO-IP client needs to be compiled from source.

On my AirPort Extreme I enabled port forwarding so that I can route services to the PI.


If you want the PI to connect to your wireless network, just buy yourself a mini WiFi USB adapter. It’s pretty easy to get the PI up and running on your local WiFi: just enable the network device in /etc/network/interfaces. Mine looks like this:

# /etc/network/interfaces
auto wlan0
iface wlan0 inet dhcp
wpa-conf /etc/wpa.config

auto eth0
iface eth0 inet dhcp

and add the wifi connection settings to /etc/wpa.config:

# /etc/wpa.config (ssid and psk below are placeholders)
network={
  ssid="your-ssid"
  psk="your-passphrase"
  pairwise=CCMP TKIP
  group=CCMP TKIP
}

Raspberry PI with cream and sugar!

Bringing Usergroups on Ruby

I have been one of the organizers of the local Ruby Usergroup in Hamburg for over two years now. We meet on a regular basis, every second Wednesday of the month. Sometimes we meet in a bar to grab some beers, sometimes we have full-fledged community events with sponsors and high quality talks.

All this is currently managed using the On Ruby platform, which was created to bring the events close to the Ruby devs. It also eases the process of creating events, submitting talks, finding locations and publishing all that to Twitter, Google+ and mailing lists.

Some months ago, the project was forked by Cologne.rb, and they built their own custom version for the Ruby Usergroup in Cologne. After that, some more usergroups wanted to use the tool, so we merged the Cologne.rb codebase and created a whitelabel version of the application, which is customizable in many regards. It also comes with a mobile version for iPhone and Android.

If you are interested in joining Hamburg, Cologne, Bremen, Saarland and Karlsruhe, just follow the guide for creating your own OnRuby Usergroup!