Video is up - Josh Devins on Apache Hadoop at Nokia

2011-02-21 21:19

FOSDEM - HBase at Facebook Messaging

2011-02-17 20:17

Nicolas Spiegelberg gave an awesome introduction not only to the architecture that powers Facebook messaging but also to the design decisions behind their use of Apache HBase as a storage backend. Disclaimer: HBase is used for message storage only; attachments are served by Haystack, a different backend.

The reasons to go for HBase include its strong consistency model, support for automatic failover, load balancing of shards, support for compression, atomic read-modify-write support (see the sketch below) and inherent Map/Reduce support.
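To make the read-modify-write point concrete, here is a minimal sketch against the HBase client API of that era. The "messages" table, the column names and the update logic are made up for illustration; this is not Facebook's schema:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class AtomicUpdateSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // "messages" table and all column names are hypothetical
            HTable table = new HTable(conf, "messages");

            byte[] row = Bytes.toBytes("user42");
            byte[] family = Bytes.toBytes("meta");

            // atomic counter update - no read-then-write race on the client
            long unread = table.incrementColumnValue(
                    row, family, Bytes.toBytes("unread_count"), 1L);

            // conditional update: the Put is only applied if the current
            // value of meta:state still equals "active"
            Put put = new Put(row);
            put.add(family, Bytes.toBytes("state"), Bytes.toBytes("archived"));
            boolean applied = table.checkAndPut(
                    row, family, Bytes.toBytes("state"),
                    Bytes.toBytes("active"), put);

            System.out.println("unread=" + unread + ", archived=" + applied);
            table.close();
        }
    }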

When going from MySQL to HBase some technological problems had to be solved: coming from MySQL, basically all data was normalised, so in an ideal world migration would have involved one large join to port all the data over to HBase. As this is not feasible in a production environment, all data was instead loaded into an intermediary HBase table, joined via Map/Reduce (sketched below) and imported into the target HBase instance. The whole setup was run as a dark launch, fed with parallel live traffic for performance optimisation and measurement.
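Not Facebook's actual code, but to illustrate the shape of such a reduce-side join: all fragments belonging to one logical message arrive at the same reducer, which merges them into a single denormalised row of the target table. The "column=value" tagging convention and the "d" column family are invented for this sketch:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableReducer;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.Text;

    // Reduce side of the join: mappers emit (messageId, "column=value")
    // pairs from the normalised source tables; the reducer merges all
    // fragments for one id into a single Put against the target table.
    public class JoinReducer
            extends TableReducer<Text, Text, ImmutableBytesWritable> {
        @Override
        protected void reduce(Text messageId, Iterable<Text> fragments,
                Context ctx) throws IOException, InterruptedException {
            Put put = new Put(Bytes.toBytes(messageId.toString()));
            for (Text fragment : fragments) {
                String[] kv = fragment.toString().split("=", 2);
                // "d" is a hypothetical column family in the target table
                put.add(Bytes.toBytes("d"), Bytes.toBytes(kv[0]),
                        Bytes.toBytes(kv[1]));
            }
            ctx.write(null, put); // TableOutputFormat ignores the key
        }
    }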

The goal was zero data loss in HBase, which meant using the Apache Hadoop append branch of HDFS (see the sketch below). They re-designed the HBase master in the process to avoid having a single point of failure; backup masters are coordinated via ZooKeeper. Lots of bug fixes went back from Facebook's engineers to the HBase code base. In addition, for stability reasons, rolling restarts for upgrades were added, along with performance improvements and consistency checks.
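The zero-data-loss requirement is what pulled in the append branch: only with a working sync on HDFS can HBase guarantee that write-ahead-log edits acknowledged to clients survive a region server crash. A small sketch of the mechanism at the HDFS level, assuming the 0.20-append line of Hadoop; the path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WalSyncSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // on the append branch this flag enables durable sync/append,
            // which is what HBase's write-ahead log relies on
            conf.setBoolean("dfs.support.append", true);

            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out = fs.create(new Path("/tmp/wal-demo"));
            out.write("log edit".getBytes("UTF-8"));
            out.sync(); // block until the data has reached the datanodes
            out.close();
        }
    }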

Facebook had lots of love for the Apache HBase community and its willingness to work together with the Facebook team on better stability and performance. Work on improvements was shared between the teams in an amazingly open and inclusive model of development.


One additional hint: FOSDEM videos of all talks, including this one, have been put online in the meantime.

Slides of yesterday's Apache Hadoop Get Together

2011-01-28 21:40
This time with a little less than 24 hours of delay: the usual summary of the Apache Hadoop Get Together, impatiently awaited by some. The meetup took place at the Zanox event campus. The room was well filled with some forty attendees from various companies, with experience ranging from interested beginners to seasoned Hadoop users.

Slides of all presentations:



The first presentation was given by Josh Devins from Nokia in Berlin, who works closely with the Ovi Maps team. He started with a general overview of the cluster setup as well as some information on the machines Nokia runs Hadoop on. Currently Hadoop is used mostly to process log data and aggregate information from it. For that task Scribe is used for log collection, and standard Ganglia and Nagios for monitoring and graphing. When starting to process and aggregate log data, the main challenge is a mixture of transforming the logs into a reasonably consistent format, cleaning them of noisy data and in some cases initiating the storage of further information from various services. Nokia is a heavy - and happy - user of Pig, though they are looking into Hive for making data accessible to business analysts, who are usually more familiar with SQL-like languages.

As an example, the results of a few simple jobs analysing location based searches were shown: looking at where in the greater Berlin area searches for "Ikea" were issued, at least Ikea Tempelhof and Ikea Spandau were easy to make out (a toy version of such an aggregation is sketched below). In a more serious use case, similar information could be used to automatically detect traffic jams. Currently Nokia is only scratching the surface of all the information that could possibly be extracted, so there is quite some interesting work ahead.
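Nokia drive such analyses with Pig, but the underlying idea is easy to see in plain Map/Reduce: counting query terms per coarse geographic cell is essentially word count with a composite key. The log format and field positions below are invented for illustration:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class GeoSearchCount {

        public static class GeoCellMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);

            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                // hypothetical log format: timestamp \t lat \t lon \t query
                String[] f = line.toString().split("\t");
                if (f.length < 4) return; // drop noisy, incomplete lines
                // truncating the coordinates yields a coarse grid cell
                String cell = f[1].substring(0, Math.min(5, f[1].length())) + ","
                        + f[2].substring(0, Math.min(5, f[2].length()));
                ctx.write(new Text(cell + "\t" + f[3].toLowerCase()), ONE);
            }
        }

        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> counts,
                    Context ctx) throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable c : counts) sum += c.get();
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "geo search count");
            job.setJarByClass(GeoSearchCount.class);
            job.setMapperClass(GeoCellMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }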


In the second presentation Simon Willnauer gave a deep-dive introduction to the various stunning performance improvements in Lucene 4 - the not yet released, not backwards compatible trunk version of Apache Lucene. For more flexible indexing, column stride fields have been integrated. With the introduction of an automaton implementation, fuzzy query performance could be improved significantly, reducing complexity from O(n) to O(log n) (see the sketch below). In addition Simon had a great surprise to share with the audience: he proudly announced that Ted Dunning (you know, that guy who is active on nearly every Hadoop mailing list and shares a lot of in-depth theoretical knowledge backed by proven practical experience) and Doug Cutting (founder of Lucene, Hadoop and many other Apache projects) are going to be keynote speakers at Berlin Buzzwords.
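To give an impression of what the automaton-backed fuzzy matching looks like from the caller's side, here is a minimal sketch against the Lucene 4 API as it eventually shipped; the index path and field name are placeholders:

    import java.io.File;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.FuzzyQuery;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class FuzzySearchSketch {
        public static void main(String[] args) throws Exception {
            Directory dir = FSDirectory.open(new File("/path/to/index"));
            IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(dir));

            // matches all terms within two edits of "hadop", e.g. "hadoop";
            // Lucene 4 compiles this into a Levenshtein automaton and
            // intersects it with the term dictionary instead of brute-force
            // scanning every term as Lucene 3 did
            FuzzyQuery query = new FuzzyQuery(new Term("title", "hadop"), 2);

            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }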


In the third presentation Paolo Negri shared some insights into how Wooga's Ruby on Rails/MySQL based system was scaled. Disclaimer: Redis played a major role in upping performance.

Videos will be published as soon as they are processed - thanks again to Cloudera for supporting the event by sponsoring video taping.

Apache Hadoop Get Together Berlin - January 2011

2010-12-28 16:31
This is to announce the next Apache Hadoop Get Together, sponsored by Cloudera and Zanox, which will take place at the Zanox Event Campus in Berlin.

When: January 27th 2011, 6 p.m.

Where: zanox Event Campus (please note the changed event location)


As always there will be slots of 30 minutes each for talks on your Hadoop topic. After each talk there will be plenty of time for discussion. After the event we will head over to a bar for some beer and something to eat.

Talks scheduled so far:

Simon Willnauer: "Lucene 4 - Revisiting problems for speed"

Abstract: This talk presents a brief case study of long-standing problems in Lucene and how they have been approached to gain sizable performance improvements. Each of the presented problems comes with a brief introduction, the implemented solution and the resulting performance improvements. This talk might be interesting even for non-Lucene folks.

Josh Devins: "Title: Hadoop at Nokia"
Abstract: In this talk, Josh will outline some of the ways in which Nokia is using Hadoop. We will start by having a quick look at the practical side of getting started with Hadoop and outline cluster hardware as well as configuration and management with tools like Puppet. Next we'll dive head first into how Hadoop and its ecosystem are being utilized on a daily basis to perform business analytics, drive machine learning and help build data-driven products. We will also touch on how we go about collecting metrics from dozens of applications distributed in multiple data centers around the world. An open Q&A session will follow.

Paolo Negri: "The order of magnitude challenge: from 100K daily users to 1M "
Abstract: "Social games backends share many aspects of normal web applications, but exasperate scaling problems, follow this talk to see how we evolved and brought a plain ruby on rails app to sustain 5000 reqs/sec, moved part of our data from sql to nosql to reach 5 millions queries per minute and see what we learned from this experience."

Please do indicate on Upcoming or Xing if you are coming so we can plan capacity more reliably.

A big thank you goes to zanox for providing the venue for our event free of charge, as well as to Cloudera for sponsoring the video taping of the presentations.

Looking forward to seeing you in Berlin,
Isabel

Apache Hadoop - Trainings by Cloudera in Berlin

2010-12-22 23:53
Cloudera is offering training both for administrators and for developers early next year in Berlin. If you are getting started with Apache Hadoop, this might be a great option to get your developers and operations people up to speed with the framework. If you are a regular at the local Apache Hadoop Get Together, a discount code should have been sent to you by mail.