Devoxx – Day one – Java, Performance and Devops

2010-12-15 21:22
In his keynote Mark Reinhold provided some information on the very interesting features to be included in the Java 7 release. Generics will be easier to declare thanks to the diamond operator. The nested try-finally constructs that are currently needed to safely close resources will no longer be necessary – there will be the option of implementing a Closeable interface with a close() method that gets called whenever objects of that type go out of scope. That way resources can be freed automatically. Though different in concept, it still reminds me a lot of the functionality typically provided by destructors in C++.
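Both features are easiest to see in code. A minimal sketch of the diamond operator and the automatic resource handling as it ended up in Java 7 (the file name is just a placeholder):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Java7FeaturesSketch {
        public static void main(String[] args) throws IOException {
            // Diamond operator: the generic type arguments on the right-hand side are inferred.
            Map<String, List<String>> talksPerTrack = new HashMap<>();
            List<String> noSqlTalks = new ArrayList<>();
            talksPerTrack.put("NoSQL", noSqlTalks);

            // try-with-resources: the reader is closed automatically when the block is left,
            // replacing the nested try-finally blocks needed before Java 7.
            try (BufferedReader reader = new BufferedReader(new FileReader("schedule.txt"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }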

The support for lambda expressions and direct method references, which would greatly help reduce the clutter caused by nested inner classes, has been postponed to a later Java release. Though it took four years to come up with the Java 7 release, its new features are rather limited. The current roadmap looks very much release-date driven: the intention seems to be to get developers focussed on a limited set of reachable features and to finally get the release out into the hands of users.
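To illustrate the kind of clutter this is about, here is a sketch contrasting the anonymous-inner-class style that is necessary today with the lambda and method-reference syntax that eventually shipped in Java 8:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class LambdaClutterSketch {
        public static void main(String[] args) {
            List<String> talks = new ArrayList<>(Arrays.asList("HBase", "Cassandra", "Lombok"));

            // Pre-lambda: a full anonymous inner class just to pass a comparison function.
            Collections.sort(talks, new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return a.length() - b.length();
                }
            });

            // The same thing with a lambda and a method reference, as shipped in Java 8.
            talks.sort((a, b) -> a.length() - b.length());
            talks.sort(Comparator.comparingInt(String::length));

            System.out.println(talks);
        }
    }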

The speaker stated that Oracle remains committed to Java development – first and foremost because Oracle is a heavy Java user itself, but also in order to generate revenue indirectly (through selling support and consulting for Java related products) and directly (through Java support), and to reduce internal development cost and Java friction.

Though Oracle has a JVM implementation of its own (JRockit), development of HotSpot will be continued – mostly because a larger number of developers are familiar with HotSpot. However, the monitoring and diagnosis tooling that was superior in JRockit is supposed to be ported to HotSpot.

In the core Java track I also went to the talk on Java performance analysis by Joshua Bloch. He did a good job bringing the topic of performance analysis of complex systems to software developers. In ancient times it was quite easy to estimate a piece of code's performance through static code analysis. Looking at the expression if (condition && secondCondition), it is still commonly considered faster to use "&&" over "&". However, on current CPU architectures that make heavy use of instruction pipelines, whether this statement is still true depends heavily on the branch prediction heuristics. Dirtying the pipeline with a mispredicted branch may well be more expensive than doing the extra evaluation. The general message: the performance of your code in a real-world system depends on the hardware it runs on, the operating system as well as the exact VM version used. Estimating performance based on static analysis alone is no longer possible.
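A contrived sketch of the two variants (the helper conditions are made up; the second condition must be side-effect free and safe to evaluate unconditionally for & to be a legal substitute):

    public class ShortCircuitSketch {

        // Short-circuit: secondCondition() is only evaluated when firstCondition() holds --
        // at the price of an additional conditional branch the CPU has to predict.
        static boolean withShortCircuit(int x) {
            return firstCondition(x) && secondCondition(x);
        }

        // Non-short-circuit: both conditions are always evaluated, which may avoid a
        // hard-to-predict branch. Whether that is cheaper depends on the hardware,
        // the JIT compiler and how expensive the second condition really is.
        static boolean withoutShortCircuit(int x) {
            return firstCondition(x) & secondCondition(x);
        }

        static boolean firstCondition(int x)  { return (x & 1) == 0; }
        static boolean secondCondition(int x) { return x % 3 == 0; }
    }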

However, even when doing benchmarks one may well reach false conclusions. It is common knowledge that a benchmark on a VM needs to be run multiple times – VM warm-up phases are well known to developers, so the common performance pattern for one specific function usually looks like this:

However, even when repeating the test on the same machine multiple times, the values seen after warm-up may still be skewed substantially. The only remedy against false conclusions is to do several VM runs, average over those runs (or report the median, which is less susceptible to outliers) and provide error bars for each averaged result. When comparing two different implementations, the only way to reliably tell which one is better is a statistical significance test. Consider the diagram below: leaving the error bars out, the left implementation seems clearly better than the right one. However, taking into account how widely spread the performance numbers are and adding error bars to the entries, this is no longer the case – the two runs are no longer statistically significantly different.
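A small sketch of the bookkeeping this implies (the measurement values are invented; each entry is the post-warm-up average of one separate JVM invocation):

    import java.util.Arrays;

    public class BenchmarkStatsSketch {

        /** Mean over the measured runs. */
        static double mean(double[] runs) {
            return Arrays.stream(runs).average().orElse(Double.NaN);
        }

        /** Sample standard deviation, used here to derive simple error bars. */
        static double stdDev(double[] runs) {
            double m = mean(runs);
            double sumSq = 0;
            for (double r : runs) {
                sumSq += (r - m) * (r - m);
            }
            return Math.sqrt(sumSq / (runs.length - 1));
        }

        public static void main(String[] args) {
            // Hypothetical per-run averages in milliseconds, one value per JVM invocation.
            double[] vmRuns = {102.3, 98.7, 110.5, 101.2, 99.8};
            double m = mean(vmRuns);
            double err = stdDev(vmRuns) / Math.sqrt(vmRuns.length); // standard error of the mean
            System.out.printf("%.1f ms +/- %.1f ms%n", m, err);
        }
    }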

Teddy in Antwerp

2010-12-12 21:30
While at Devoxx, Teddy went into the city, taking a few pictures of the Grote Markt, the harbour as well as the main train station.

Apache Lunch Devoxx

2010-12-11 21:30
On Twitter I suggested hosting an Apache dinner during Devoxx. Matthias Wessendorf of Apache MyFaces was so kind as to take up the discussion and carry it over to the Apache community mailing list. It quickly turned out that there was quite some interest, with several members and committers attending Devoxx. We scheduled the meetup for lunch time on the Friday after the conference.
I pinged a few Apache-related people I knew would attend the conference (being a speaker and a committer at some Apache project almost certainly resulted in getting a ping). Steven Noels kindly made a reservation at a restaurant close by and announced time and geo coordinates on party.apache.org. Although several speakers had already left that very morning, we turned out to be eleven people – including Stephen Colebourne, Matthias Wessendorf, Steven Noels and Martijn Dashorst of the Apache Wicket project. It was great meeting all of you – and being able to put some faces to names :)

Devoxx – Day three

2010-12-10 21:28
The panel discussion on the future of Java was driven by questions submitted and voted on by visitors, covering the current state and future of Java. The general take-aways for me included the clear statement that the TCK will never be made available to the ASF, and the promise of Oracle to continue supporting the Java community and to remain active in the JCP.

There was some discussion on whether coming Java versions should be allowed to break backwards compatibility. One advantage would be the removal of several Java puzzlers, making it easier for Joe Java to write code without knowing too much about potential inconsistencies. According to Joshua Bloch the language is no longer well suited to the average programmer who simply wants to get his tasks done in a consistent and easy-to-use language: it has become too complicated over the course of the years and is in dire need of simplification.

Having seen his presentation at Berlin Buzzwords and silently followed the project's progress online, I skipped parts of the Elasticsearch presentation. Instead I went to the presentation on the Ghost-^wBoilerplate Busters from Project Lombok. It has always struck me as odd that in a typical Java project there is so much code that can be generated automatically by Eclipse – such as getters/setters, equals/hashCode, delegation of methods and more. I never really understood why it is possible to generate all that code from Eclipse but not at compile time. Project Lombok comes to the rescue here: as a compile-time dependency it provides several annotations that are automatically turned into the corresponding code on the fly. It includes support for getter/setter generation, handling of closable resources (even with the current stable version of Java), generation of thread-safe lazy initialisation of member variables, automatic implementation of the composition-over-inheritance pattern and much more.
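A hedged sketch of what that looks like in practice, assuming the annotation names of recent Lombok versions (@Getter, @Setter, @EqualsAndHashCode, @Cleanup and @Delegate) and an invented Attendee class:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;

    import lombok.Cleanup;
    import lombok.EqualsAndHashCode;
    import lombok.Getter;
    import lombok.Setter;
    import lombok.experimental.Delegate;

    @EqualsAndHashCode
    public class Attendee {
        @Getter @Setter private String name;          // getter and setter generated at compile time
        @Getter @Setter private String affiliation;

        // Exposes the list's methods on Attendee itself: composition over inheritance
        // without hand-written forwarding methods.
        @Delegate private final List<String> attendedTalks = new ArrayList<String>();

        void readBadge(String path) throws IOException {
            // Closed automatically at the end of the scope -- resource handling that works
            // even on the current stable Java version.
            @Cleanup InputStream in = new FileInputStream(path);
            in.read();
        }
    }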

The library can be used from within Eclipse, with Maven, Ant, Ivy, and on Google App Engine. One of the developers in charge of IntelliJ who was in the audience announced that the library will be supported by the next version of IntelliJ as well.

Devoxx – Day two – HBase

2010-12-09 21:25
Devoxx featured several interesting case studies of how HBase and Hadoop can be used to scale data analysis back ends as well as data serving front ends.

Twitter

Dmitry Ryaboy from Twitter explained how to scale high-load, large-data systems using Cassandra. Looking at the sheer amount of tweets generated each day, it becomes obvious that a site like this cannot be run on a system like MySQL alone.

Twitter has released several of their internal tools under a free software license for others to re-use – some of them rather straightforward, others more involved. At Twitter each tweet is annotated with a user_id, a time stamp (it is acceptable if that is skewed by a few minutes) as well as a unique tweet_id. To generate the latter they built a library called Snowflake. Though rather simple, the algorithm even works in a cross-data-centre set-up: the first bits are composed of the current time stamp, the following bits encode the data centre, and after that there is room for a counter. The resulting tweet_ids are globally ordered by time and distinct across data centres without the need for global synchronisation.
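A bare-bones sketch of that id layout (the exact bit widths and the custom epoch are assumptions for illustration; overflow handling within a single millisecond is omitted):

    /**
     * Snowflake-style id: timestamp in the high bits, a data-centre/worker id in the
     * middle, a per-millisecond sequence at the end (assumed 41/10/12 bit split).
     */
    public class SnowflakeSketch {
        private static final long CUSTOM_EPOCH = 1288834974657L; // arbitrary reference point (ms)

        private final long workerId;       // identifies data centre + machine, 10 bits
        private long lastTimestamp = -1L;
        private long sequence = 0L;        // 12 bits, reset every millisecond

        public SnowflakeSketch(long workerId) {
            this.workerId = workerId & 0x3FF;
        }

        public synchronized long nextId() {
            long now = System.currentTimeMillis();
            if (now == lastTimestamp) {
                sequence = (sequence + 1) & 0xFFF;   // stay within the same millisecond
            } else {
                sequence = 0;
                lastTimestamp = now;
            }
            // Time-ordered globally, unique per worker, no cross data-centre coordination needed.
            return ((now - CUSTOM_EPOCH) << 22) | (workerId << 12) | sequence;
        }
    }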

With Gizzard, Twitter released a rather general sharding framework that is used internally to run distributed versions of Lucene, MySQL as well as Redis (the latter to be introduced for caching tweet timelines due to its explicit support for lists as value data structures, which are not available in memcached).

FlockDB was released for large-scale social graph storage and analysis, Rainbird for time series analysis (though with OpenTSDB there is something comparable available for HBase), and Haplocheirus for message vector caching (currently based on memcached, soon to be migrated to Redis for its richer data structures). The queries available through the front-end are rather limited, making it easy to provide pre-computed, optimised versions in the back-end. As with the caching problem, a trade-off between the hit rate on the pool of pre-computed items and the storage cost can be made based on the observed query distribution.

In the back-end of Twitter, various statistical and data mining analyses are run on top of Hadoop and HBase – to compute potentially interesting followers for users, to extract potentially interesting products and the like.
The final take-home message: go from your requirements to the final solution. In the space of storage systems there is no such thing as a silver bullet. Instead you have to carefully evaluate the features and properties of each solution as your data and load increase.

Facebook

When implementing Facebook Messaging (a new feature that was announced this week), Facebook decided to go for HBase instead of Cassandra. The requirements of the feature included massive scale, long-tail write access to the database (which more or less ruled out MySQL and comparable solutions) and a need for strict ordering of messages (which ruled out any eventually consistent system).

A team of 15 developers (including operations and front-end) worked on the system for a year before it was finally released. The feature supports integration of Facebook messaging, IM, SMS and mail into one single system, making it possible to group all messages by conversation no matter which channel was used to send the message originally. That way each user's inbox turns into a social inbox.

Adobe

Cosmin Lehene presented four use cases of Hadoop at Adobe. The first one dealt with creating and evaluating profiles for the Adobe Media Player. Users are associated with a vector describing what genres the media they consume belongs to. These vectors are then used to generate recommendations for additional content in order to increase consumption. Adobe built a clustering system that interfaces Mahout's canopy and k-means implementations with their HBase back-end for user grouping. Thanks Cosmin for including that information in your presentation!

A second use case focussed on finding out more about the usage of Flash on the internet. Using Google to search for Flash content was no good, as only the first 2000 results could be viewed, resulting in a highly skewed sample. Instead they used a combination of Nutch for retrieving the content and HBase for storage. The analysis was done with respect to various features of Flash movies, such as frame rates, and revealed a large gap between the perceived typical usage and the actual usage of Flash on the internet.

The third use case involved the analysis of images and usage patterns on the in-browser edition of Photoshop at Photoshop.com. The fourth use case dealt with scaling the infrastructure that powers businesscatalyst – a turn-key online business platform including analytics, campaigning and more. When it was purchased by Adobe the system was very successful business-wise, however its infrastructure was by no means able to keep up with the load it had to accommodate. Changing to an HBase-based back-end led to better performance and faster report generation.

Devoxx – Day two – Hadoop and HBase

2010-12-08 21:24
In his session on the current state of Hadoop, Tom White went into a little more detail not only on the features of the latest release and the roadmap for upcoming releases (including Kerberos-based security, append support, a warm standby namenode and others).
He also gave a very interesting view of the current Hadoop ecosystem. More and more projects are being created that either extend Hadoop or are built on top of it. Several of these are run as projects at the Apache Software Foundation, while some are available outside of Apache only. Using graphviz he created a graph of projects depending on or extending Hadoop and derived a rough classification of these projects from it.

As is to be expected, HDFS and Map/Reduce form the very basis of this ecosystem. Right next to them sits ZooKeeper, a distributed coordination and locking service.

Storage systems extending the capabilities of HDFS include HBase, which adds random read/write as well as realtime access to the otherwise batch-oriented distributed file system. With Pig, Hive and Cascading, three projects make it easier to formulate complex queries for Hadoop. Among the three, Pig is mainly focussed on expressing data filtering and processing, with SQL support being added over time as well. Hive came from the need to formulate SQL queries against Hadoop clusters. Cascading goes a slightly different way, providing a Java API for easier query formulation. The new kid on the block is Plume, a project initiated by Ted Dunning with the goal of providing a Map/Reduce abstraction layer inspired by Google's FlumeJava publication.
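To see what these projects abstract away, here is what even a simple aggregation – the canonical word count – looks like when written directly against the newer org.apache.hadoop.mapreduce API:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    /** Plain Map/Reduce word count -- the kind of code Pig, Hive and Cascading hide. */
    public class WordCount {

        public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer tokens = new StringTokenizer(value.toString());
                while (tokens.hasMoreTokens()) {
                    word.set(tokens.nextToken());
                    context.write(word, ONE);      // emit (word, 1) for every token
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }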

There are several projects for data import into HDFS. Sqoop can be used for interfacing with relational databases. Chukwa and Flume deal with feeding log data into the file system. For general coordination and workflow orchestration there is Oozie, originally developed at Yahoo!, as well as the support for workflow definitions in Cascading.

When storing data in Hadoop it is a common requirement to find a compact, structured representation for it. Though human readable, XML files are not very compact. However, when using a binary format, schema evolution commonly becomes a problem: adding, renaming or deleting fields in most cases forces an upgrade of all code interacting with the data as well as a re-formatting of already stored data. With Thrift, Avro and Protocol Buffers there are three options available for storing data in a compact, structured binary format. All three come with support for schema evolution, not only by letting readers deal with missing data but also by providing a means to map old fields to new ones and vice versa.
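As an example of what that schema evolution support looks like, here is a hedged sketch using Avro's generic API (the record name and fields are invented): a reader schema that adds a field with a default value can still decode data written with the old schema.

    import java.io.ByteArrayOutputStream;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;

    public class SchemaEvolutionSketch {
        public static void main(String[] args) throws Exception {
            Schema writer = new Schema.Parser().parse(
                  "{\"type\":\"record\",\"name\":\"Tweet\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"long\"},"
                + "{\"name\":\"text\",\"type\":\"string\"}]}");
            // The reader schema adds a field with a default value -- old data stays readable.
            Schema reader = new Schema.Parser().parse(
                  "{\"type\":\"record\",\"name\":\"Tweet\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"long\"},"
                + "{\"name\":\"text\",\"type\":\"string\"},"
                + "{\"name\":\"lang\",\"type\":\"string\",\"default\":\"en\"}]}");

            // Serialise a record with the old schema.
            GenericRecord old = new GenericData.Record(writer);
            old.put("id", 42L);
            old.put("text", "hello devoxx");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(writer).write(old, enc);
            enc.flush();

            // Deserialise with the new schema: the missing field is filled with its default.
            BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
            GenericRecord evolved =
                new GenericDatumReader<GenericRecord>(writer, reader).read(null, dec);
            System.out.println(evolved.get("lang")); // prints the default: en
        }
    }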

Devoxx – Day two – Caching

2010-12-07 21:22
Day two started with a really good talk on caching architectures by Greg Luck. He first motivated why caching works: even with SSDs being available now, there is still a huge performance gap between RAM access times and having to go to disk. The issue is even worse in systems that are architected in a distributed way, making frequent calls to remote systems.

When sizing systems for typical load, what is often forgotten is that there is no such thing as typical load. The load distribution observed over one day for a service used mainly in one time zone has the shape of an elephant: most queries are issued around lunch time (the head of the elephant) with another, smaller peak during the afternoon. This pattern repeats when looking at the weekly distribution, and again at the yearly distribution. At the peak time of the peak day of the year your load may be several orders of magnitude higher than the average load.

Although query volume may be high in most applications that reach for caching, the queries usually exhibit a power-law distribution: a few queries are issued very frequently while many queries are seen rather seldom. This pattern allows for high cache hit rates, reducing load substantially even during very busy times.
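A toy simulation of that effect (all numbers invented): Zipf-distributed queries hitting an LRU cache that holds only one percent of the distinct queries still produce a surprisingly high hit rate.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;

    public class CacheHitRateSketch {

        public static void main(String[] args) {
            final int distinctQueries = 100000;
            final int cacheSize = 1000;        // only 1% of the distinct queries fit into the cache
            final long requests = 1000000;

            // LRU cache built on LinkedHashMap's access order.
            Map<Integer, Boolean> cache = new LinkedHashMap<Integer, Boolean>(cacheSize, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, Boolean> eldest) {
                    return size() > cacheSize;
                }
            };

            // Cumulative probabilities of a Zipf distribution over query ranks.
            double[] cumulative = new double[distinctQueries];
            double norm = 0;
            for (int rank = 1; rank <= distinctQueries; rank++) {
                norm += 1.0 / rank;
            }
            double running = 0;
            for (int rank = 1; rank <= distinctQueries; rank++) {
                running += (1.0 / rank) / norm;
                cumulative[rank - 1] = running;
            }

            Random random = new Random(42);
            long hits = 0;
            for (long i = 0; i < requests; i++) {
                int query = sample(cumulative, random.nextDouble());
                if (cache.get(query) != null) {
                    hits++;                         // get() also refreshes the LRU position
                } else {
                    cache.put(query, Boolean.TRUE); // simulated backend call, then cache the result
                }
            }
            System.out.printf("hit rate: %.1f%%%n", 100.0 * hits / requests);
        }

        /** Map a uniform random number to a query rank via the cumulative distribution. */
        private static int sample(double[] cumulative, double u) {
            int idx = Arrays.binarySearch(cumulative, u);
            if (idx < 0) {
                idx = -idx - 1;  // insertion point: first bucket whose cumulative value exceeds u
            }
            return Math.min(idx, cumulative.length - 1);
        }
    }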

The speaker went into some more detail concerning different architectures. Usually projects start with one cache located directly on the frontend server. When scaling horizontally, adding more and more frontends leads to an ever increasing load on the database during the lifetime of each cached item. The first remedy usually employed is to link the different caches to each other to increase cache hit rates. The problem here are updates racing to the various caches when the same query is issued to the backend by more than one frontend. The usual next step is to go for a distributed remote cache such as memcached. Of course this has the drawback of requiring a network call for each cache access, slowing down response times by several milliseconds. Another problem with distributed caching systems is a theorem well known to people building distributed NoSQL databases: CAP says that you can get only two of the three desired properties consistency, availability and partition tolerance. Ehcache with a Terracotta back-end lets you configure where your priority lies.

Devoxx – University – Cassandra, HBase

2010-12-06 21:20
During the morning session FIXME Ellison gave an introduction to the distributed NoSQL database Cassandra. Being generally based on Amazon's Dynamo paper, the key-value store distributes key/value pairs according to a consistent hashing scheme. Nodes can be added dynamically, making the system well suited for elastic scaling. In contrast to Dynamo, Cassandra can be tuned for the required consistency level. The system is built for storing moderately sized key/value pairs; large blobs of data should not be stored in it. A recent addition to Cassandra is the integration with Hadoop's Map/Reduce implementation for data processing. In addition, Cassandra comes with a Thrift interface as well as higher-level interfaces such as Hector.
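The consistent hashing idea at the core of such Dynamo-style systems fits into a few lines. The following is a deliberately simplified sketch (node names, virtual node count and the MD5 choice are illustrative; real systems add replication and much more):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.SortedMap;
    import java.util.TreeMap;

    /** Keys and nodes are hashed onto the same ring; a key is owned by the next node clockwise. */
    public class HashRingSketch {
        private final TreeMap<Long, String> ring = new TreeMap<>();

        void addNode(String node, int virtualNodes) {
            // Virtual nodes smooth out the distribution when nodes join or leave.
            for (int i = 0; i < virtualNodes; i++) {
                ring.put(hash(node + "#" + i), node);
            }
        }

        void removeNode(String node, int virtualNodes) {
            for (int i = 0; i < virtualNodes; i++) {
                ring.remove(hash(node + "#" + i));
            }
        }

        String nodeFor(String key) {
            if (ring.isEmpty()) throw new IllegalStateException("no nodes");
            SortedMap<Long, String> tail = ring.tailMap(hash(key));
            return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
        }

        private static long hash(String s) {
            try {
                byte[] digest = MessageDigest.getInstance("MD5")
                        .digest(s.getBytes(StandardCharsets.UTF_8));
                long h = 0;
                for (int i = 0; i < 8; i++) {
                    h = (h << 8) | (digest[i] & 0xFF);
                }
                return h;
            } catch (NoSuchAlgorithmException e) {
                throw new IllegalStateException(e);
            }
        }

        public static void main(String[] args) {
            HashRingSketch ring = new HashRingSketch();
            ring.addNode("cassandra-1", 16);
            ring.addNode("cassandra-2", 16);
            ring.addNode("cassandra-3", 16);   // adding a node only remaps a fraction of the keys
            System.out.println(ring.nodeFor("user:1234"));
        }
    }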

In the afternoon Michael Stack and Jonathan Gray gave an overview of HBase, covering basic installation and going into more detail concerning the APIs. Most interesting to me were the HBase architecture and optimisation details. The system is inspired by Google's Bigtable paper. It uses Apache HDFS as its storage back-end, inheriting the failure resilience of HDFS, and uses ZooKeeper for coordination and meta-data storage. But fear not: ZooKeeper comes packaged with HBase, so in case you have not yet set up your own ZooKeeper installation, HBase will do that for you on installation.

HBase is split into a master server holding meta-data and region servers storing the actual data. When storing data, HBase optimises for data locality. This is done in two ways: on write, the first copy usually goes to the local disk of the client, so even when the data exceeds the size of one block and remote copies get replicated to different nodes, at least the first copy is stored on a single machine. During a compaction phase that is scheduled regularly (usually every 24h; jitter can be added to lower the load on the cluster) data is re-written for an optimal layout. When running Map/Reduce jobs against HBase this data locality can easily be exploited – HBase comes with its own input and output formats for Hadoop jobs.

HBase comes not only with Map/Reduce integration, it also publishes a Thrift interface and a REST interface and can be queried from the HBase shell.
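For completeness, a minimal sketch of the Java client API as it looks in current HBase versions (table name, column family and keys are made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    /** Minimal put/get against an assumed 'messages' table with a 'content' column family. */
    public class HBaseClientSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("messages"))) {

                // Row keys, column families, qualifiers and values are plain byte arrays in HBase.
                Put put = new Put(Bytes.toBytes("user42-2010-12-09"));
                put.addColumn(Bytes.toBytes("content"), Bytes.toBytes("body"),
                              Bytes.toBytes("hello from devoxx"));
                table.put(put);

                Result result = table.get(new Get(Bytes.toBytes("user42-2010-12-09")));
                byte[] body = result.getValue(Bytes.toBytes("content"), Bytes.toBytes("body"));
                System.out.println(Bytes.toString(body));
            }
        }
    }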

Devoxx University – Productive programmer, HBase

2010-12-04 21:17
The first day at Devoxx featured several tutorials – most interesting to me was the one on the productive programmer. The speaker is also the author of the equally named O'Reilly book. The book was the result of the observation that developers today are more and more IDE-bound and no longer able to use the command line effectively. The result are developers who are unnecessarily slow when creating software. The goal was to bring the usage patterns of productive software development to juniors who grew up in a GUI-only environment. However, half-way through the book it became apparent that a book on command-line wizardry alone is barely interesting at all, so the focus was shifted and now includes more general productivity patterns.
The first goal is to accelerate development – mostly by avoiding time-consuming usage patterns (minimise mouse usage) and by automating repetitive tasks (computers are good at doing dull, repetitive tasks – that's what they were made for).
The second goal is increasing focus. The main ingredient here is switching off anything that disturbs the development flow: no more pop-ups, no more mail notifications, no more flashing side windows. If you have ever thought "so late already?" when your colleagues were heading out for lunch, then you know what is meant by being in the flow. It takes up to 20 minutes to get into this mode – but just a fraction of a second to be thrown out of it. With developers being significantly more productive in this state, it makes sense to reduce the risk of being thrown out.
The third goal was about canonicality, the fourth one about automation.
During the morning I hopped on and off the Hadoop talk as well – the tutorial was great for getting into the system, and Tom White also explained several of the most common advanced patterns. Of course there was not that much new material if you sort of know the system already :)

Devoxx Antwerp

2010-12-03 21:16
With 3000 attendees, Devoxx is the largest Java community conference world-wide. Each year in autumn it takes place in Antwerp, Belgium, in recent years in the Metropolis cinema. This year the conference tickets were sold out long before the doors opened.
The focus of the presentations is mainly on enterprise Java, featuring talks by the famous Joshua Bloch, Mark Reinhold and others on new features of the upcoming JDK release as well as intricacies of the Java programming language itself.
This year, for the first time, the scope was extended to include a whole track on NoSQL databases. The track was organised by Steven Noels and featured fantastic presentations on HBase use cases as well as easily accessible introductions to the concepts and usage of Hadoop.
To me it was interesting to observe which talks people would go to. In contrast to many other conferences, the NoSQL/cloud-computing presentations here were less well attended than I would have expected. One reason might be that, especially on conference day two, they had to compete with popular topics such as the Java Puzzlers, the live Java Posse session and others. However, when talking to other attendees there seemed to be a clear gap between the two communities, probably caused by a mixture of:

  • there being very different problems to solve in the enterprise world vs. the free-software, requirements- and scalability-driven NoSQL community. Although even comparably small companies in Germany (compared to the Googles and Yahoo!s of this world) are already facing scaling issues, these problems are not yet that pervasive in the Java community as a whole. To me this was rather important to learn: coming from a machine learning background, now working for a search provider and being involved with Mahout, Lucene and Hadoop, scalability and growth in data have always been among the major drivers of the projects I have worked on so far.
  • developers in the regular enterprise world, even when faced with growing amounts of data, often not being able to freely select the technologies used to implement a project. In contrast to startups and lean software teams, there still seem to be quite a few teams that are not only told what to implement but also how to implement it, unnecessarily restricting the tools available to solve a given problem.

One final factor that drives developers to adopt NoSQL and cloud-computing technologies is the realisation that the system needs to be optimised as a whole – thinking outside the box of fixed APIs and module development units. To that end the DevOps movement was especially interesting to me: only by getting the knowledge largely hidden in operations teams into development, and mixing it with the skills of software developers, can one arrive at truly elastic and adaptable systems.