On Reading Code

2012-08-02 15:14

“If you don’t have time to read, you don’t have the time or the tools to write.” –Stephen King


Quite a while ago GeeCon published the videotaped talk by Kevlin Henney on "Cool Code". This keynote is great to watch for everyone who loves to read code - not the kind you encounter in real-world enterprise systems, but the kind that truly teaches you lessons:

GeeCON 2012: Kevlin Henney - Cool Code from GeeCON Conference on Vimeo.


Teddy in Poznan

2012-05-27 20:03
Some images taken in Poznan after GeeCon - a big thanks to Dawid for giving advice on where to go for sightseeing, exhibitions and going out.

The tour started close to the river Warta - it being a sunny day, it seemed like a perfect fit to just walk through the city, starting along the river and heading towards the cathedral:


   


After that Poznan Citadel was a great place to spend lunch time - sitting somewhere green and shady:




The afternoon was dedicated to discovering the city centre, several local churches and the national gallery:


    

GeeCon - Testing hell and how to fix it

2012-05-26 08:08
The last regular talk I went to was on testing hell at Atlassian – in particular on the JIRA project. What happened to JIRA might actually be familiar to developers who have to deal with huge legacy projects that predate the JUnit and dependency injection era: over time their test base grew into a monster that was hard to maintain and didn't help at all with making developers confident at check-in time that they would not break anything.

On top of 13k unit tests they had accumulated 4k functional tests and several hundred Selenium user interface tests, spread across 65 Maven modules depending on 554 dependencies – quite some technology mix from old to new, ranging across different libraries for solving the same task. They used 60+ remote agents for testing, including AWS instances orchestrated by a Bamboo installation, had different plans for every supported version branch, and tested in parallel.

Most expensive were platform tests, executed every two to four weeks before each release – those tested JIRA with differing CPU configurations, JVMs, browsers, databases and deployment containers. Other builds were triggered on commit, by dependencies, or nightly.

The problem was that builds would take 15 minutes for unit tests, one hour for functional tests and several hours for all the rest – that meant developers got feedback only after they were already home, essentially blocking other developers' work. For unit tests that resulted in fix turnaround times of several hours, for integration tests several days. Development slowed down, developers became afraid of commits, it became difficult to release – in summary, morale went down.

Their problems: even tiny changes caused test avalanches. As tests were usually red, no one would really care. Developers would not run tests because of the effort involved and got feedback only after leaving work.
Some obvious mistakes:

Tests were separate from the code they tested – in their case in a separate Maven module. So on every commit the whole suite had to run. Also, back when the code was developed, dependency injection was only just starting to catch on, which meant the code was entangled, closely coupled and hard to test in isolation. There were opaque fixtures hard-coded in XML configuration files that captured application scope but had to be maintained in the tests.

Their strategy to better testing:

  • Introduce less fragile UI tests based on the page object pattern to depend less on the actual layout and more on the functionality behind it (see the sketch after this list).
  • They put test fixtures into the test code by introducing REST APIs for modification and by adding backdoors that are only open in the test environment.
  • Flickering tests were put into quarantine and either fixed quickly or deleted – if no one fixes them, they are probably useless anyway.
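
The page object idea is easiest to see in code. Below is a minimal sketch using Selenium WebDriver – the class names and element ids (LoginPage, DashboardPage, "username", "login-button") are made up for illustration, not Atlassian's actual code:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // A page object wraps one page of the UI: tests talk to this class instead of
    // to element locators, so a layout change only touches this one place.
    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Returns the page object for the page the user lands on next.
        public DashboardPage loginAs(String user, String password) {
            driver.findElement(By.id("username")).sendKeys(user);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login-button")).click();
            return new DashboardPage(driver);
        }
    }

    // Minimal stub for the page the login leads to.
    class DashboardPage {
        DashboardPage(WebDriver driver) { /* locate dashboard elements here */ }
    }

A UI test then only calls new LoginPage(driver).loginAs("admin", "secret") and asserts against the returned DashboardPage, without knowing any element ids.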


After those simple measures they started splitting the software into multiple real modules to limit the scope of development and raise the responsibility of development teams. That comes with the advantage of having tests close to the real code, but at the cost of a more complex CI hierarchy. However, in well organised software, commits in such a project hierarchy turned out to go into the leaves only – which lessened the number of builds quite a bit.

There is a trade-off between speed and control: modularizing means you no longer have everything in one workspace; in turn it means faster development for most of your tasks. For large refactorings no one will stop you from putting all the code into one IDEA workspace.

The goal for Atlassian was to turn the pyramid of tests upside down: have the bulk of the tests as fast unit tests, fewer REST/HTML tests and even fewer Selenium tests. The philosophy was to only provide a REST test if there is no way at all to cover the same function in a unit test.

In terms of speeding up execution they started batching tests against one instance to avoid installation time, merged tests, used in-process databases, and mocked IO and web servers where possible. Putting in more hardware also helps, as does avoiding sleeps in tests.
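
As a concrete illustration of the in-process database idea (the talk did not name the tool Atlassian uses, so this is just one common way to do it): an in-memory H2 database starts in milliseconds inside the test JVM, so there is no installation or teardown cost. The table and test names below are hypothetical:

    import static org.junit.Assert.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.Test;

    public class IssueDaoTest {

        // In-memory database living only inside this JVM; DB_CLOSE_DELAY keeps it
        // alive between connections for the duration of the test run.
        private static final String URL = "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1";

        @Test
        public void storesAndReadsAnIssue() throws Exception {
            try (Connection conn = DriverManager.getConnection(URL)) {
                Statement stmt = conn.createStatement();
                stmt.execute("CREATE TABLE issues (id INT PRIMARY KEY, summary VARCHAR(255))");
                stmt.execute("INSERT INTO issues VALUES (1, 'Login page broken')");

                ResultSet rs = stmt.executeQuery("SELECT summary FROM issues WHERE id = 1");
                rs.next();
                assertEquals("Login page broken", rs.getString("summary"));
            }
        }
    }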

In terms of splitting code: in addition to splitting by responsibility, that can also be done by maturity – keeping what is evolving quickly close together until it stabilises.

The day finished with a really inspiring keynote by Kevlin Henney on Cool Code – showing several pieces of software that were either miserably failing or incredibly cool. His intention when reading code is to extend a coder's vocabulary when it comes to programming. That's why even the obfuscated C code contest makes for an interesting read, as it tells you things about language features you might otherwise never have learned about. One very important conclusion from his talk: “If you don't have the time to read, you have neither the time nor the tools to write.” Though made by Stephen King about literature, this statement might just as well apply to software – after all, to some extent what we produce is a kind of art, a kind of literature in its own right.

GeeCon - Solr at Allegro

2012-05-25 08:07
One talk particularly interesting to me was on Allegro's (the Polish eBay) Solr usage. In terms of numbers: they have 20 million offers in Poland and another 10 million active offers in partnering countries. In addition, their index contains 50 million inactive offers in Poland and 40 million closed offers outside that country. They serve 8 million updates a day, that is roughly 100 updates a second. Those are related to the start and end of bidding phases, buy-now actions, cancelled bids and the bids themselves.

They handle 105 million requests per day; at peak time in the evening that is 3.5k requests per second. Of those, 75% are answered in less than 5ms and 90% in less than 20ms.

To achieve that performance they are using Solr. Coming from a database-based system and going via a proprietary search product, they are now happy users of Solr, with much better support from both the community and contractors than with their previous paid-for solution.

The speakers went into some detail on how they solved particular technical issues: they decided to go for an external data feeder to avoid putting the database itself under too much load even when just indexing the updates. On updates they have to reconstruct the whole document, as an update in Solr right now means deleting the old document and indexing the new one. In addition, commits are pretty expensive, so they ended up delaying commits for as long as the SLA would allow (one minute) and committing in batches.
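
A rough sketch of what such delayed, batched commits can look like from the feeder side with SolrJ – the commitWithin parameter tells Solr it may wait up to the given time before committing. The core URL and field names are hypothetical; the talk did not show Allegro's actual code:

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class OfferFeeder {

        private final SolrServer solr = new HttpSolrServer("http://localhost:8983/solr/offers");

        // Solr has no partial updates, so the whole offer document is rebuilt and
        // re-added; commitWithin=60000 ms lets Solr batch commits up to the
        // one-minute SLA instead of paying for a commit on every single update.
        public void index(String id, String title, boolean active) throws Exception {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", id);
            doc.addField("title", title);
            doc.addField("active", active);
            solr.add(doc, 60000);
        }
    }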

They tried to shard indexes by category – that did not work particularly well, as with their user behaviour it resulted in too many cross-shard requests. Index size was an issue for them, so they reduced the amount of data indexed and stored in Solr to the absolute minimum – everything else was outsourced to a key-value store (in their case MongoDB).

Caching proved to be the component that needed the most tweaks. They put a Varnish in front (Solr speaks XML over HTTP, which is simple enough to find caches for) – in combination with the index update delay they had in place they could tune eviction times, resulting in cache hit rates of about 30 to 40 percent. When it comes to internal caches: high eviction and low hit rates are a problem. Watch the Solr admin console for statistics. Are there too many unique objects in your index? Are caches too small? Are there too many unique queries? They ended up binding users to Solr backends by making the routing sticky via the user's cookie – as users tend to drill down on the same data set over and over again, in their case that raised hit rates substantially. When tuning filter queries: keep them as independent as possible – don't use many unique combinations of the same filters over and over again; instead filter individually to make better use of the cache.
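
In SolrJ terms, the filter-query advice boils down to the difference sketched below (field names invented for illustration): each independent fq gets its own filterCache entry that other queries can reuse, while a combined expression is cached as one unique entry per combination:

    import org.apache.solr.client.solrj.SolrQuery;

    public class OfferSearch {

        // Good: two independent filters, each cached and reusable on its own.
        public SolrQuery independentFilters() {
            SolrQuery q = new SolrQuery("laptop");
            q.addFilterQuery("category:notebooks");
            q.addFilterQuery("state:active");
            return q;
        }

        // Bad: one combined filter, cached as a unique entry per combination,
        // which drags the filterCache hit rate down.
        public SolrQuery combinedFilter() {
            SolrQuery q = new SolrQuery("laptop");
            q.addFilterQuery("category:notebooks AND state:active");
            return q;
        }
    }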

For them Solr proved to be a stable, efficient, flexible system that is easy to monitor, maintain and change; it ran without failure for the first eight months, with the whole architecture designed and prototyped (at near production quality) by one developer in six months.

Currently the system is running on 10 Solr slaves (+ power backup) compared to 25 nodes before. A full index build takes 4 hours, bottlenecked at the feeder; potentially that could be pushed down to one hour. Updates of course flow in continuously.

GeeCon - managing remote projects

2012-05-24 08:05
In his talk on visibility in distributed teams Pawel Wrzeszcz motivated why working remotely might be beneficial both for employees (less commute time, more family time) and for employers (hiring worldwide instead of locally, getting more talent in). He then went into more detail on some best practices that worked for his company as well as for himself.

When it comes to managing your energy the trick mainly is to find the right balance between isolating work from private life (by having a separate area in your home and a daily routine with fixed start and end times) and integrating work into your daily life and loving what you do: the more boring your job is, the less likely you are to succeed when working remotely.

There are three aspects to working remotely successfully: a) Distributed meetings – essentially: minimize them. Have more one-on-one meetings to clear up any questions. Have technology support you where necessary (Skype is nice for calls with up to ten people; they also tried Google Hangouts, TeamSpeak and others – take what works for you and your colleagues). b) For group decisions use online brainstorming tools. A wiki will do, so will Google Docs. There's fancier stuff should you need it. Asynchronous brainstorming can work. c) Learn to value asynchronous communication channels – avoid mail; wikis, issue trackers etc. are much better suited for longer, documentation-like communication.

Essentially what will happen is that issues within your organisation are revealed much more easily than when working on-site.

GeeCon - failing software projects fast and rapidly

2012-05-23 08:04
My second day started with a talk on how to fail projects fast and rapidly. There are a few tricks to do that, relating to different aspects of your project. Let's take a look at each of them in turn.

The first measures to take to fail a project are really organisational:

  • Refer to developers as resources – that will demotivate them and express that they are replaceable instead of being valuable human beings.
  • Schedule meetings often and make everyone attend. However cancel them on short notice, do not show up yourself or come unprepared.
  • Make daily standups really long – 45min at least. Or better yet: Schedule weekly team meetings at a table, but cancel them as often as you can.
  • Always demand Minutes of Meeting after the meeting. (Hint: Yes, they are good to cover your ass, however if you have to do that, your organisation is screwed anyway.)
  • Plans are nothing, planning is everything – however planning should be done by the most experienced. Estimation does not have to happen collectively (that only leads to the team feeling like they promised something); rather have estimates be made by the most experienced manager.
  • Control all the details, assign your resources to tasks and do not let them self-organise.


When it comes to demotivating developers there are a few more things beyond the obvious criticising in public that will help destroy your team culture:

  • Don't invest in tooling – the biggest screen, fastest computer, most comfortable office really should be reserved for those doing the hard work, namely managers.
  • Make working off-site impossible or really hard: Avoid having laptops for people, avoid setting up workable VPN solutions, do not open any ssh ports into your organisation.
  • Demand working overtime. People will become tired, they'll sacrifice family and hobbies, guess how long they will remain happy coders.
  • Blindly deploy coding standards across the whole company and have those agreed upon in a committee. We all know how effective committee-driven design (thanks to Pieter Hintjens for that term) is. Also demand 100% JUnit test coverage, forbid test-driven development and forbid pair programming.
  • And of course check quality and performance as the very last thing during the development cycle. While at that avoid frequent deployments, do not let developers onto production machines – not even with read only access. Don't do small releases, let alone continuous deployment.
  • As a manager, when rolling out changes: forget about those retrospectives and incremental change. Roll out big changes all at once.
  • As a team lead accept broken builds, don't stop the line to fix a build – rather have one guy fix it while others continue to add new features.


When it comes to architecture there are a few certain ways to project death that you can follow to kill development:

  • Enforce framework usage across all projects in your company. Do the same for editors, development frameworks, databases etc. Instead of using the right tool for the job standardise the way development is done.
  • Employ a bunch of ivory tower architects that communicate with UML and Slide-ware only.
  • Remember: We are building complex systems. Complex systems need complex design. Have that design decided upon by a committee.
  • Communication should be system agnostic and standardised – why not use SOAP's XML over HTTP?
  • Use Singletons – they'll give you tightly coupled systems with a decent amount of global state (see the sketch after this list).
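
For readers who have been lucky enough never to meet one, a minimal (hypothetical) example of why that last bullet works so well:

    // The classic singleton: one global, mutable instance reachable from anywhere.
    // Everything that touches it is coupled to it and hard to test in isolation.
    public final class Configuration {

        private static final Configuration INSTANCE = new Configuration();

        private String databaseUrl = "jdbc:mysql://prod-db/app"; // global mutable state

        private Configuration() {}

        public static Configuration getInstance() {
            return INSTANCE;
        }

        public String getDatabaseUrl() { return databaseUrl; }

        public void setDatabaseUrl(String url) { databaseUrl = url; }
    }

    // Callers everywhere do Configuration.getInstance().getDatabaseUrl(),
    // so there is no seam left to swap in a test double.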


When it comes to development we can also make life for developers very hard:

  • Don't establish best practices and patterns – there is no need to learn from past failure.
  • We need no definition of done – everyone knows when something is done and what in particular that really means, right?
  • We need no common language – in particular not between developers and business analysts.
  • Don't use version control – or rely on Clear Case.
  • Don't do continuous integration.
  • Have no collective code ownership – instead have each module modified by exactly one developer and forbid others to contribute. That leaves us with a nice bus factor of 1.
  • Don't do pair programming to spread the knowledge. See above.
  • Don't do refactoring – rather get it right from the start.
  • Don't do non-functional requirements – something like “must cope with high load” is enough of a specification. Also put any testing at the end of the development process, do lots of manual testing (after all, machines cannot judge quality as well as humans can, right?), and postpone all difficult pieces to the end – with a bit of luck they get dropped anyway. Also test evenly – there is no need to test more important or more complex pieces more heavily than others.

Disclaimer for those who do not understand irony: the speaker, Thomas Sundberg, is very much into the Agile Manifesto, agile principles and XP values. The fun part of irony is that you can turn around the meaning of most of what is written above and get some good advice on not failing your projects.

GeeCon - TDD and its influence on software design

2012-05-22 08:04
The second talk I went to on the first day was on the influence of TDD on software design. Keith Braithwaite did a really great job of first introducing the concept of cyclomatic complexity and then showing, using Hudson as well as many other open source Java projects as examples, that the average cyclomatic complexity in all those projects is actually pretty close to one and, when plotted over all methods, pretty much follows a power-law distribution. Comparing the properties of the specific distributions of cyclomatic complexity across projects, he found that the less steep the curve is – that is, the more balanced the distribution and the fewer really complex pieces there are in the code – the more likely developers are to be happy with the current state of the code. Not only that: the distribution would also be transformed into something more balanced after refactorings.
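
For readers unfamiliar with the metric, a small made-up example of how cyclomatic complexity is counted – one plus the number of decision points (if, for, while, case, &&, ||, ?:) in a method:

    public class ShippingCost {

        // Decision points: if (1), || (2), if (3), for (4)  ->  complexity = 4 + 1 = 5
        public int cost(int weight, boolean express, int[] surcharges) {
            int cost = 0;
            if (weight > 20 || express) {
                cost += 10;
            }
            if (weight > 50) {
                cost += 25;
            }
            for (int surcharge : surcharges) {
                cost += surcharge;
            }
            return cost;
        }
    }

A method with no branches at all has a complexity of one – which, according to the talk, is where the bulk of methods in the analysed projects sit.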

Looking at a selection of open source projects, he then analyzed what the alpha (the exponent) of the cyclomatic-complexity distribution is for projects that have no tests at all, projects that have tests, and projects that were developed according to TDD. It turns out the latter were the ones with the most balanced alpha.

GeeCon - Randomized testing

2012-05-21 08:02
I arrived late, during lunch time on Thursday, for GeeCon – however just in time to listen to one of the most interesting talks when it comes to testing. Did you ever have the issue of writing code that runs well in your development environment but crashes as soon as it's rolled out at customers, only to find out that their locale setting was causing the issues? Ever had to deal with random test failures because, against better advice, your tests depended on an execution order that is almost guaranteed to be different on new JVM releases?

The Lucene community has encountered many similar issues. In effect they are faced with having to test a huge number of different configuration combinations in order to make sure that their software runs in all client setups. In recent months they developed an approach called randomised testing to tackle this problem: essentially, on each run “random tests” are executed multiple times, each time with a slightly different configuration and input, in a different environment (e.g. locale settings, time zones, JVMs, operating systems). Each of these configurations is pseudo-random – however, on test failure the framework reveals the seed that was used to initialise the pseudo-random number generator and thus allows you to reproduce the failure deterministically.
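
The mechanism is simple enough to hand-roll, which is the best way to see what the framework automates. The sketch below is not the Lucene/Carrot Search framework itself, just the core idea – pick a seed per run, print it, and drive all randomness from it so that a failure can be replayed:

    import static org.junit.Assert.assertEquals;

    import java.util.Random;

    import org.junit.Test;

    public class RandomizedInputTest {

        @Test
        public void reversingTwiceIsTheIdentity() {
            // New seed on every run, but printed so a failure can be reproduced
            // deterministically by hard-coding that seed here.
            long seed = System.nanoTime();
            System.out.println("test seed: " + seed);
            Random random = new Random(seed);

            for (int i = 0; i < 1000; i++) {
                String input = randomAscii(random, 1 + random.nextInt(50));
                String roundTripped = new StringBuilder(input).reverse().reverse().toString();
                assertEquals("seed was " + seed, input, roundTripped);
            }
        }

        private static String randomAscii(Random random, int length) {
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
                sb.append((char) (32 + random.nextInt(95))); // printable ASCII
            }
            return sb.toString();
        }
    }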

The idea itself is not new: published in a paper by Ntafos and used in fuzzers to identify security holes in applications, this kind of technique is pretty well known. However, applying it to writing tests is a new idea used at Lucene.

The advantage is clear: with every new run of the test suite you gain confidence that your code is actually robust against any kind of user input. The downside of course is that you will discover all sorts of issues and bugs, not only in your code but also in the JVM itself. If your library is being used in all sorts of different setups, however, fixing these issues upfront is crucial to avoid users being surprised that it does not work well in their setup. Make sure to fix these failures quickly though – developers tend to ignore flickering tests over time. Adding randomness – and thereby essentially increasing the number of tests in your test suite – will increase the amount of effort you have to invest in fixing broken code.

Dawid Weiss gave a great overview of how random tests can be used to harden a code base. He introduced the test framework written at Carrot Search that isolates the random test features: it comes with a RandomizedRunner implementation that can be used to substitute JUnit's own runner. It is capable of enforcing test isolation by tracking spawned threads that might be leaking out of tests. In addition it provides utilities, for instance for creating random strings, locales and numbers, as well as annotations to denote how often a test should run and when it should run (always vs. nightly).

So when you have tests with random input – how do you check for correctness? The most obvious approach is to check the exact output where that is possible. When testing a sorting method, no matter what the implementation and the input are, the output should always be sorted, which is easy enough to check. Checking against a simpler, but maybe in practice more expensive, reference algorithm is also an option.
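
A hypothetical sketch of checking a sorting method against a trusted reference implementation on random input; mySort() stands in for the code under test:

    import static org.junit.Assert.assertArrayEquals;

    import java.util.Arrays;
    import java.util.Random;

    import org.junit.Test;

    public class SortAgainstReferenceTest {

        @Test
        public void sortsLikeTheJdkSort() {
            long seed = System.nanoTime();
            System.out.println("test seed: " + seed);
            Random random = new Random(seed);

            for (int run = 0; run < 100; run++) {
                int[] input = new int[random.nextInt(1000)];
                for (int i = 0; i < input.length; i++) {
                    input[i] = random.nextInt();
                }

                int[] expected = input.clone();
                Arrays.sort(expected); // trusted reference result

                assertArrayEquals("seed was " + seed, expected, mySort(input.clone()));
            }
        }

        // Stand-in for the real sorting implementation under test.
        private static int[] mySort(int[] values) {
            Arrays.sort(values);
            return values;
        }
    }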

A second approach is to do sanity checks: Math.abs(), for instance, should always return a non-negative value (random input would in fact quickly surface the infamous edge case that Math.abs(Integer.MIN_VALUE) is negative). The third approach is to do no checking at all in some cases. Why would that help? You'd be surprised how many failures and exceptions you get just by using your API in unexpected ways or feeding your program unexpected input. This kind of behaviour checking does not need any assertions.

Note: Really loved the APad/ iMiga that Dawid used to give his talk! Been such a long time since I last played with my own Amiga...

GeeCon 2012 - part 1

2012-05-20 11:02
Devoxx, Java Posse, QCon, GOTO Con, an uncountable number of local Java User Groups – aren't there enough conferences on just Java, that weird programming language that “makes developers stupid by letting them type too much boilerplate” (Keith Braithwaite)? I spent Thursday and Friday last week in Poznan at a conference called GeeCon – their main focus is on anything Java, including TDD, agile and testability. It's all community organised – switching between Poznan and Krakow on a yearly basis, backed by the two corresponding Java User Groups, with a clear focus on good speakers and interesting content. Really well done; I wish they could have fit more talks into each of the days: five tracks in parallel left one with just around four regular talks plus keynotes each day. That does make for a very humane start and end time – but it feels like there's so much going on in parallel that you most likely miss some of the particularly interesting content. Looking forward to the videos!

One note: if you are ever invited as a speaker to GeeCon: do accept! It's really well organised, with an incredibly friendly atmosphere and a really tasty speakers' dinner. One thing that caught me by surprise this morning: my room was all paid for, even though I stayed longer and had offered to cover the additional nights myself - thanks guys, you rock!

Watch this space for more details on the talks in the coming days.