JAX: Pigs, snakes and deaths by 1k cuts

2013-05-20 20:32
In his talk on performance problems Rainer Schuppe gave a great introduction to
which kinds of performance problems can be observed in production and how to
best root-cause them.


Simply put, performance issues usually arise due to a difference in either data
volume, concurrency levels or resource usage between the dev, qa and production
environments. The tooling to uncover and explain them is pretty well known:
Starting with looking at logfiles, ARM tools, using aspects, bytecode
instrumentation, sampling, watching JMX statistics, and PMI tools.


All of these tools have their own unique advantages and disadvantages. With
logs you get the most freedom, however you have to know what to log at
development time. In addition logging is I/O heavy, so doing too much can slow
down the application itself. In a common distributed system logs need to be
aggregated somehow. A simple example of what can go wrong is cascading
exceptions spilled to disk that cause machines to run out of disk space one
after the other. When relying on logging make sure to keep transaction
contexts, in particular transaction ids, across machines and services to
correlate outages. In terms of tool support, look at Scribe, Splunk and Flume.
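
To keep transaction ids attached to every log line, something like SLF4J's MDC
helps; a minimal sketch (the txId key and the logger setup are assumptions, and
the id would also have to travel to downstream services, e.g. in a request
header, to be correlated across machines):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    import java.util.UUID;

    public class OrderService {
        private static final Logger LOG = LoggerFactory.getLogger(OrderService.class);

        public void handleRequest(String incomingTxId) {
            // Reuse the transaction id handed over by the upstream service,
            // or start a new one at the edge of the system.
            String txId = incomingTxId != null ? incomingTxId : UUID.randomUUID().toString();
            MDC.put("txId", txId);
            try {
                // With a log pattern containing %X{txId} every line carries the id.
                LOG.info("processing order");
            } finally {
                // Don't leak the id into unrelated requests handled by this thread.
                MDC.remove("txId");
            }
        }
    }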


A tool often used for tracking down performance issues in development is the
well known profiler. Usually it creates lots of very detailed data. However it
is most valuable in development - in production profiling a complete server
stack produces way too much load and data to be feasible. In addition there's
again usually no transaction context available for correlation.


A third way of watching applications do their work is to watch via JMX. This
capability is built in for any Java application, in particular for servlet
containers. Again there is no transaction context. Unless you take care of it
there won't be any historic data.
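
A minimal sketch of reading a couple of the default platform MBeans from inside
the process - note these are point-in-time values, so there is no history
unless you store the samples somewhere yourself:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    public class JmxSnapshot {
        public static void main(String[] args) {
            // Default platform MBeans - available in every Java process without extra setup.
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            System.out.println("heap used:    " + memory.getHeapMemoryUsage().getUsed());
            System.out.println("live threads: " + threads.getThreadCount());
            System.out.println("peak threads: " + threads.getPeakThreadCount());
        }
    }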


When it comes to diagnosing problems, you are essentially left with fixing
either the "it does not work" case or the "it is slow" case.


For the "it is slow" case there are a few incarnations:


  • It was always slow, we got used to it.

  • It gets slow over time.

  • It gets slower exponentially.

  • It suddenly gets slow.

  • There is a spontaneous crash.




In the case of "it does not work" you are left with the following observations:


  • Sudden outages.

  • Always flaky.

  • Sporadic error messages.

  • Silent death.

  • Increasing error rates.

  • Misleading error messages.




In the end you will always be spinning in a cycle of looking at symptoms,
eliminating non-causes, identifying suspects, confirming and eliminating them
by comparing to normal behaviour. If not done with that, lather, rinse, repeat.
When it comes to causes for errors and slowness you will usually run into one
of the following: bad coding practices, too much load, missing backends,
resource conflicts, memory and resource leakage, as well as hardware and
networking issues.


Some symptoms you may observe include foreseeable lock-ups (it's always slow
after four hours, so we just reboot automatically before that), consistent
slowness, sporadic errors (it always happens after a certain request came in),
getting slower and slower (most likely leaking resources), sudden chaos (e.g.
someone pulling the plug or someone removing a hard disk), and high utilisation
of resources.

Linear memory leak



In case of a linear memory leak, the application usually runs into an OOM
eventually, getting ever slower before that due to GC pressure. Reasons could
be linear structures being filled but never emptied. What you observe are
growing heap utilisation and growing GC times. In order to find such leakage
make sure to turn on verbose GC logging and take heap dumps to find the leak.
One challenge though: It may be hard to find the leakage if the problem is not
one large object, but many, many small ones that lead to a death by 1000 cuts,
bleeding the application to death.
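
A minimal sketch of what such a leak looks like in code - a linear structure
that is filled but never emptied - with, in the comment, the kind of HotSpot
flags that turn on verbose GC logging and a heap dump on OOM:

    import java.util.ArrayList;
    import java.util.List;

    // Started e.g. with: java -verbose:gc -XX:+HeapDumpOnOutOfMemoryError LeakyCache
    public class LeakyCache {
        // Entries are added on every request but never evicted - heap utilisation
        // and GC times grow until the JVM finally hits an OutOfMemoryError.
        private static final List<byte[]> CACHE = new ArrayList<byte[]>();

        public static void main(String[] args) {
            while (true) {
                CACHE.add(new byte[1024]);   // many small objects: death by 1000 cuts
            }
        }
    }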


In development and testing you will do heap comparisons. Keep in mind that
taking a heap dump causes the JVM to stop. You can use common profilers to look
at the heap dump. There are variants that help with automatic leak detection.


A variant is the pig in a python issue where sudden unusually large objects
cause the application to be overloaded.


Resource leaks and conflicts


Another common problem is leaking resources other than memory - not closing
file handles can be one incarnation. Those problems cause slowness over time;
they may lead to the heap growing over time - usually that is not the most
visible problem though. If instance tracking does not help here, your last
resort should be doing code audits.


In case of conflicting resource usage you usually face code that was developed
with overly cautious locking and data integrity constraints. The way to go is
thread dumps to uncover threads in blocked and waiting states.
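
Thread dumps can be taken with jstack or, programmatically, via the threading
MBean; a small sketch that lists threads currently blocked or waiting and the
lock they are stuck on:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class BlockedThreadReport {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            // true, true: include locked monitors and synchronizers in the dump.
            for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                Thread.State state = info.getThreadState();
                if (state == Thread.State.BLOCKED || state == Thread.State.WAITING) {
                    System.out.println(info.getThreadName() + " is " + state
                            + " on " + info.getLockName());
                }
            }
        }
    }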


Bad coding practices


When it comes to bad coding practices what is usually seen is code in endless
loops (easy to see in thread dumps) and cpu bound computations where no result
caching is done. Also layeritis with too much (de-)serialisation can be a
problem. In addition there is a general "the ORM will save us all" problem that
may lead to massive SQL statements, or to using the wrong data fetch strategy.
When it comes to caching - if caches are too large, access times of course grow
as well. There could be never-ending retry loops and ever-blocking networking
calls. Also people tend to catch exceptions but not do anything about them
other than adding a little #fixme annotation to the code.
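
To make the result caching point concrete, a rough sketch of memoising a cpu
bound computation instead of redoing it for every request - the unbounded map
is for illustration only, a real cache needs a size limit for the reasons given
above:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class FibonacciService {
        // Cache results of the expensive computation instead of recomputing them.
        private final ConcurrentMap<Integer, Long> cache = new ConcurrentHashMap<Integer, Long>();

        public long fibonacci(int n) {
            Long cached = cache.get(n);
            if (cached != null) {
                return cached;
            }
            long result = computeSlowly(n);
            cache.putIfAbsent(n, result);
            return result;
        }

        private long computeSlowly(int n) {
            // Deliberately naive - stands in for any cpu bound piece of work.
            return n <= 1 ? n : computeSlowly(n - 1) + computeSlowly(n - 2);
        }
    }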


When it comes to locking you might run into dead-/live-lock problems. There
could be chokepoints (resources that all threads need for each processing
chain). In a thread dump you will typically see lots of wait instead of block
time.


In addition there could be internal and external bottlenecks. In particular
keep those in mind when dealing with databases.


The goal should be to find an optimum for your application between too many too
small requests that waste resources getting dispatched, and one huge request
that everyone else is waiting for.

JAX: Java HPC by Norman Maurer

2013-05-19 20:31
For slides see also: Speakerdeck: High performance networking on the JVM


Norman started his talk clarifying what he means by high scale: Anything above
1000 concurrent connections is considered high scale in his talk, anything
below 100 concurrent connections is fine to be handled with threads and blocking
IO. Before tuning anything, make sure to measure if you have any problem at
all: Readability should always go before optimisation.


He gave a few pointers as to where to look for optimisations: Get started by
studying the socket options - TCP_NODELAY as well as the send and receive
buffer sizes are most interesting. When under GC pressure (check the GC logs
to figure out if you are) make sure to minimise allocation and deallocation of
objects. In order to do that consider making objects static and final where
possible. Make sure to use CMS or G1 for garbage collection in order to
maximise throughput. Size the areas in the JVM heap according to your access
patterns. The goal should always be to minimise the chance of running into a
stop-the-world garbage collection.
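
The socket options map directly to setters on java.net.Socket; a small sketch
(the buffer sizes, host and port are placeholders, and the GC flags in the
comment are only examples to be tuned for your own workload):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Example GC related start parameters (pick one, size the heap for your load):
    //   java -XX:+UseConcMarkSweepGC ...     or     java -XX:+UseG1GC ...
    public class TunedClient {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket();
            // Disable Nagle's algorithm so small writes are not delayed.
            socket.setTcpNoDelay(true);
            // Send and receive buffer sizes are the other options worth experimenting with.
            socket.setSendBufferSize(64 * 1024);
            socket.setReceiveBufferSize(64 * 1024);
            socket.connect(new InetSocketAddress("example.com", 80));
            // ... use the socket, then close it ...
            socket.close();
        }
    }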


When it comes to using buffers you have the choice of using direct or heap
buffers. While the former are expensive to create, the latter come with the
cost of being zero'ed out. Often people start buffer pooling, potentially
initialising the pool in a lazy manner. In order to avoid memory fragmentation
in the Java heap, it can be a good idea to create the buffer at startup time
and re-use it later on.
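
A minimal sketch of the allocate-at-startup idea - a single static buffer here
for brevity; in practice you would keep one per IO thread or pool them:

    import java.nio.ByteBuffer;

    public class BufferReuse {
        // Allocated once at startup: direct buffers are expensive to create, and
        // allocating up front avoids fragmenting memory later on.
        private static final ByteBuffer IO_BUFFER = ByteBuffer.allocateDirect(64 * 1024);

        public static ByteBuffer ioBuffer() {
            // Reset position and limit so the same buffer can be reused for the next IO.
            IO_BUFFER.clear();
            return IO_BUFFER;
        }
    }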


In particular when parsing structured messages, as they are common in
protocols, it usually makes sense to use gathering writes and scattering reads
to minimise the number of system calls for reading and writing. Also try to
buffer more if you want to minimise system calls. Use slice and duplicate to
create views on your buffers to avoid memory copies. Use a file channel when
copying files without modifications.
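
A rough sketch of those techniques with plain NIO (host name and file path are
placeholders):

    import java.io.FileInputStream;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class NioTricks {
        public static void main(String[] args) throws Exception {
            SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));

            // slice()/duplicate() create views on a buffer without copying memory;
            // 'view' can be handed to a parser without touching body's position.
            ByteBuffer body = ByteBuffer.wrap("payload".getBytes("US-ASCII"));
            ByteBuffer view = body.duplicate();

            // Gathering write: header and body go out in a single system call.
            ByteBuffer header = ByteBuffer.wrap("HEAD ".getBytes("US-ASCII"));
            channel.write(new ByteBuffer[] { header, body });

            // Copying a file to the wire without modification: let the kernel do it.
            FileChannel file = new FileInputStream("/tmp/data.bin").getChannel();
            file.transferTo(0, file.size(), channel);

            file.close();
            channel.close();
        }
    }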


Make sure you do not block - think of DNS servers being unavailable or slow as
an example.


As a parting note, make sure to define and document your threading model. It
may ease development to know that some objects will always only be used in a
single threaded context. It usually helps to reduce context switches as well as
to keep data in the same thread to avoid having to use synchronisation and
volatile.


Also make a conscious decision about which protocol you would like to use for
transport - in addition to TCP there's also UDP, UDT and SCTP. Use pipelining
in order to parallelise.


JAX: Hadoop overview by Bernd Fondermann

2013-05-18 20:29


After breakfast was over the first day started with a talk by Bernd on the
Hadoop ecosystem. He did a good job selecting the most important and
interesting projects related to storing data in HDFS and processing it with Map
Reduce. After the usual "what is Hadoop", "what does the general architecture
look like", "what will change with YARN" Bernd gave a nice overview of which
publications each of the relevant projects rely on:




  • HDFS is mainly based on the paper on GFS.

  • Map Reduce comes with its own publication.

  • The big table paper mainly inspired Cassandra (to some extent), HBase,
    Accumulo and Hypertable.

  • Protocol Buffers inspired Avro and Thrift, and is available as free
    software itself.

  • Dremel (the storage side of things) inspired Parquet.

  • The query language side of Dremel inspired Drill and Impala.

  • Power Drill might inspire Drill.

  • Pregel (a graph database) inspired Giraph.

  • Percolator provided some inspiration to HBase.

  • Dynamo by Amazon kicked off Cassandra and others.

  • Chubby inspired Zookeeper, both are based on Paxos.

  • On top of Map Reduce today there are tons of higher level languages,
    starting with Sawzall inside of Google, continuing with Pig and Hive at
    Apache, by now joined by languages like Cascading, Cascalog, Scalding and
    many more.

  • There are many other interesting publications (Megastore, Spanner, F1 to
    name just a few) for which there is no free implementation yet. In addition
    with Storm, Hana and Haystack there are implementations lacking canonical
    publications.



After this really broad clarification of names and terms used, Bernd went into
some more detail on how Zookeeper is being used for defining the namenode in
Hadoop 2, and how high availability and federation work for namenodes. In
addition he gave a clear explanation of how block reports work on cluster
bootup. The remainder of the talk was reserved for giving an intro to HBase,
Giraph and Drill.

BigDataCon

2013-05-17 20:29


Together with Uwe Schindler I had published a series of articles on Apache
Lucene at Software and Support Media's Java Mag several years ago. Earlier this
year S&S kindly invited me to their BigDataCon - co-located with JAX - to give
a talk of my choosing that at least touches upon Lucene.


Thinking back and forth about what topic to cover what came to my mind was to
give a talk on how easy it is to do text classification with Mahout when
relying on Apache Lucene for text analysis, tokenisation and token filtering.
All classes essentially are in place to integrate Lucene Analyzers with Mahout
vector generation - needed e.g. as a pre-processing step for classification or
text clustering.
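
The actual sandbox code is linked below; just to illustrate the idea, a rough
sketch of the integration, assuming Lucene 4.x and Mahout's hashing feature
encoders (versions, field name and vector size are assumptions):

    import java.io.StringReader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
    import org.apache.lucene.util.Version;
    import org.apache.mahout.math.RandomAccessSparseVector;
    import org.apache.mahout.math.Vector;
    import org.apache.mahout.vectorizer.encoders.StaticWordValueEncoder;

    public class LuceneToMahout {
        public static Vector vectorize(String text) throws Exception {
            Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43);
            StaticWordValueEncoder encoder = new StaticWordValueEncoder("body");
            Vector vector = new RandomAccessSparseVector(100000);

            // Lucene does the analysis, tokenisation and token filtering ...
            TokenStream stream = analyzer.tokenStream("body", new StringReader(text));
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                // ... and Mahout's feature encoder hashes each token into the vector.
                encoder.addToVector(term.toString(), 1.0, vector);
            }
            stream.end();
            stream.close();
            return vector;
        }
    }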


Feel free to check out some of my sandbox code over at github:
http://github.org/MaineC/sofia


After attending the conference I can only recommend everyone interested in Java
programming and able to understand German to buy a ticket for the conference.
It's really well executed, great selection of talks (though the sponsored
keynotes usually aren't particularly interesting), tasty meals, interesting
people to chat with.

Hadoop Summit Amsterdam

2013-05-16 20:27


About a month ago I attended the first European Hadoop Summit, organised by
Hortonworks in Amsterdam. The two day conference brought together both vendors
and users of Apache Hadoop for talks, exhibition and after conference beer
drinking.


Russel Jurney kindly asked me to chair the Hadoop applied track during
Apache Con EU. As a result I had a good excuse to attend the event. Overall
there were at least three times as many submissions as could reasonably be
accepted. Accordingly accepting proposals was pretty hard.


Though some of the Apache community aspect was missing at Hadoop summit it was
interesting nevertheless to see who is active in this space both as users as
well as vendors.


If you check out the talks on Youtube make sure to not miss the two sessions by
Ted Dunning as well as the talk on handling logging data by Twitter.

ApacheConNA: Misc

2013-05-15 20:26


In his talk on Spdy Mathew Steele explained how he implemented the spdy protocol
as an Apache httpd module - working around most of the safety measures and
design decisions in the current httpd version. Essentially to get httpd to
support the protocol all you need now is mod_spdy plus a modified version of
mod_ssl.


The keynote on the last day was given by the Puppet founder. Some interesting
points to take away from that:




  • Though hard in the beginning (and halfway through, and after years) it
    is important to learn to give up control: It usually is much more productive
    and leads to better results to encourage people to do something than to be
    restrictive about it. A single developer only has so much bandwidth - by
    farming tasks out to others - and giving them full control - you substantially
    increase your throughput without having to put in more energy.

  • Be transparent - it's ok to have commercial goals with your project. Just
    make sure that the community knows about it and is not surprised to learn about
    it.

  • Be nice - not many succeed at this, not many are truly able to ignore
    religion (vi vs. emacs). This also means being welcoming to newbies, hustling
    at conferences, and engaging the community as opposed to announcing changes.




Overall good advice for those starting or working on an OSS project and seeking
to increase visibility and reach.

If you want to learn more on what other talks were given at ApacheCon NA or want to follow up in more detail on some of the talks described here check out the slides archive online.

ApacheConNA: Hadoop metrics

2013-05-14 20:25


Have you ever measured the general behaviour of your Hadoop jobs? Have you
sized your cluster accordingly? Do you know whether your workload really is IO
bound or CPU bound? Legend has it no one except Allen Wittenauer over at
LinkedIn, formerly Y!, ever did this analysis for his clusters.


Steve Watt gave a pitch for actually going out into your datacenter, measuring
what is going on there and adjusting the deployment accordingly: In small
clusters it may make sense to rely on raided disks instead of additional
storage nodes to guarantee "replication levels". When going out to vendors to
buy hardware don't rely on paper calculations only: Standard servers in Hadoop
clusters are 1 or 2U. This is quite unlike the beefy boxes being sold otherwise.


Figure out what reference architecture is being used by partners, run your
standard workloads, adjust the configuration. If you want to benchmark your
hardware and system configuration, run the 10TB Terasort. Make sure to
capture data during all your runs - have Ganglia or SAR running, watch out for
interesting behaviour in IO rates, cpu utilisation and network traffic. The
goal is to get the cpu busy, not to wait for network or disk.


After the instrumentation and trial run look for over- and underprovisioning,
adjust, lather, rinse, repeat.


Also make sure to talk to the datacenter people: There are floor space, power
and cooling constraints to keep in mind. Don't let the whole datacenter go down
because your cpu intensive job is drawing more power than the DC was designed
for. There are also power constraints per floor tile due to cooling issues -
those should dictate the design.


Take a close look at the disks you deploy: SATA vs. SAS can make a 40%
performance difference at a 20% cost difference. Also the number of cores per
machine dictates the number of disks needed to spread random read accesses
across. As a rule of thumb - in a 2U machine today there should be at least
twelve large form factor disks.


When it comes to controllers the goal should be to get a dedicated lane to each
disk; save one controller if price is an issue. Trade off compute power against
power consumption.


Designing your network keep in mind that one switch going down means that one
rack will be gone. This may be a non-issue in a Y! size cluster, in your
smaller scale world it might be worth the money investing in a second switch
though: Having 20 nodes go black isn't a lot of fun if you cannot farm out the
work and re-replication to other nodes and racks. Also make sure to have enough
ports in rack switches for the machines you are planning to provision.


Avoid playing the ops whack-a-mole game by having one large cluster in the
organisation rather than many different ones where possible. Multi-tenancy in
Hadoop is still immature though.


If you want to play with future deployments - watch out for HP currently
packing 270 servers into the space where today there are just two, via system
on a chip designs.


ApacheConNA: Monitoring httpd and Tomcat

2013-05-13 20:23


Monitoring - a task generally neglected - or overdone - during development.
But still vital enough to wake up people from well earned sleep at night when
done wrong. Rainer Jung provided some valuable insights on how to monitor
Apache httpd and Tomcat.


Of course failure detection, alarms and notifications are all part of good
monitoring. However so are the avoidance of false positives and the collection
and visualisation of metrics in advance, to help with capacity planning and to
uncover irregular behaviour.


In general the standard pieces being monitored are load, cache utilisation,
memory, garbage collection and response times. What we do not see from all that
are times spent waiting for the backend, looping in code, blocked threads.


When it comes to monitoring Java - JMX is pretty much the standard choice. Data
is grouped in management beans (MBeans). Each Java process has default beans,
on top there are beans provided by Tomcat, on top there may be application
specific ones.


For remote access, there are Java clients that know the protocol - the server
must be configured though to accept their connection. Keep in mind to open the
firewall in between as well if there is any. Well known clients include
JVisualVM (nice for interactive inspection), jmxterm as a command line client.


The only issue: Most MBeans encode source code structure, where what you really
need is change rates. In general those are easy to infer though.
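
A small client side sketch of turning such an absolute counter into a rate,
assuming remote JMX access is enabled on port 9999 and a Tomcat connector named
"http-bio-8080" - both are assumptions to adjust to your own setup:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class RequestRate {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://tomcat.example.com:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // The GlobalRequestProcessor MBean only exposes the absolute request count ...
            ObjectName name = new ObjectName(
                    "Catalina:type=GlobalRequestProcessor,name=\"http-bio-8080\"");
            long before = ((Number) connection.getAttribute(name, "requestCount")).longValue();
            Thread.sleep(60000);
            long after = ((Number) connection.getAttribute(name, "requestCount")).longValue();

            // ... the change rate is the figure actually worth watching.
            System.out.println("requests per minute: " + (after - before));
            connector.close();
        }
    }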


On the server side for Tomcat there is the JMXProxy in the Tomcat manager that
exposes MBeans. In addition there is Jolokia (including JSON serialisation) or
the option to roll your own.


So what kind of information is in MBeans:




  • OS - load, process cpu time, physical memory, global OS level
    stats. As an example: Here dividing cpu time by elapsed time gives you the
    average cpu concurrency.


  • Runtime MBean gives uptime.

  • Threading MBean gives information on count, max available threads etc

  • Class Loading MBean should get stable unless you are using dynamic
    languages or have enabled class unloading for JSPs in Tomcat.

  • Compilation contains HotSpot compiler information.

  • Memory contains information on all regions thrown in one pot. If you need
    more fine grained information look out for the Memory Pool and GC MBeans.




As for Tomcat specific things:




  • Threadpool (for each connector) has information on size, number of busy
    threads.

  • GlobalRequestProcessor has request counts, processing times, max time, bytes
    received/sent, and error count (those errors that Tomcat notices, that is).

  • RequestProcessor exists once per thread, it shows if a request is
    currently running and for how long. Nice to see if there are long running
    requests.

  • DataSource provides information on Tomcat provided database connections.




Per Webapp there are a couple of more MBeans:




  • ManagerMBean has information on session management - e.g. session
    counter since start, login rate, active sessions, expired sessions, max active
    sessions since restart (a reset of this counter is possible), number of
    rejected sessions, average alive time, processing time it took to clean up
    sessions, and create and expire rates for the last 100 sessions.

  • ServletMBean contains request count, accumulated processing time.

  • JspMBean (together with activated loading/unloading policy) has
    information on unload and reload stats and provides the max number of loaded
    jsps.




For httpd the goals with monitoring are pretty similar. The only difference is
the protocol used - in this case provided by the status module. As an
alternative use the scoreboard connections.


You will find information on




  • restart time, uptime

  • server load

  • total number of accesses and traffic

  • idle workers and number of requests currently processed

  • cpu usage - though that is only accurate when all children are stopped
    which in production isn't particularly likely.




The lines that indicate what the threads are doing contain states like waiting,
request read and send reply - more information is documented online.


When monitoring make sure to monitor not only production but also your stress
tests to make meaningful comparisons.


ApacheConNA: On Security

2013-05-12 20:22


During the security talk at ApacheCon a topic commonly glossed over by
developers was covered in quite some detail: With software being developed that
is deployed rather widely online (over 50% of all websites are powered
by the Apache webserver) naturally security issues are of large concern.


Currently there are eight trustworthy people on the foundation-wide security
response team, subscribed to security@apache.org. The team was started by
William A. Rowe when he found a vulnerability in httpd. The general work mode -
as opposed to the otherwise "all things open" way of doing things at Apache -
is to keep the issues found private until fixed and publicise them widely
afterwards.


So when running Apache software on your servers - how do you learn about
security issues? There is no such thing as a priority list for specific
vendors. The only way to get an inside scoop is to join the respective
project's PMC list - that is to get active yourself.


So what is being found? 90% of all security issues are found by security
researchers. The remaining 10% are usually published accidentally - e.g. by
users submitting the issue through the regular public bug tracker of the
respective project.


In Tomcat currently no issue has been disclosed without letting the project
know. httpd still is the prime target - even for security researchers who are
in favour of a full disclosure policy - the PMC cannot do a lot here other than
fix issues quickly (usually within 24 hours).


As a general rule of thumb: Keep your average release cycle time in mind - how
long will it take to get fixes into people's hands? Communicate transparently
which version will get security fixes - and which won't.


As for static analysis tools - many of those are written for web apps and as
such not very helpful for a container. What is highly dangerous in a web app
may just be the thing the container has to do to provide features to web apps.
As for Tomcat, they have had good experiences with Findbugs - most other tools
produce too many false positives.


When dealing with a vulnerability yourself, try to get guidance from the
security team on what actually is a security vulnerability - though the final
decision is with the project.


Dealing with the tradeoff of working in private vs. exposing users affected by
the vulnerability to attacks is up to the PMC. Some work in public but call the
actual fix a refactoring or cleanup. Given enough coding skills on the attacker's
side this of course will not help too much, as a sort of reverse engineering of
what is being fixed by the patches is still possible. On the other hand doing
everything in private on a separate branch isn't public development anymore.


After this general introduction Mark gave a good overview of the good, the bad
and the ugly ways of handling security issues in Tomcat. His slides include an
anecdote of what, judging by timing and topic, looks like it was highly related
to the 2011 Java Hash Collision talk at Chaos Communication Congress.


ApacheConNA: On documentation

2013-05-11 20:20


In her talk on documentation in OSS Noirin gave a great wrap-up of what
documentation to create for a project and how to go about that task.


One way to think about documentation is to keep in mind that it fulfills
different tasks: There is conceptual, procedural and task-reference
documentation. When starting to analyse your docs you may first want to debug
the way they fail to help their users: "I can't read my mail" really could mean
"My computer is under water".


A good way to find awesome documentation can be to check out Stackoverflow
questions on your project, blog posts and training articles. Users today really
are searching instead of browsing docs. So where to find documentation actually
is starting to matter less. What does matter though is that those pages with
relevant information are written in a way that makes it easy to find them
through search engines: Provide a decent title, stable URLs, reasonable tags
and descriptions. By the way, both infra and docs people are happy to talk to
*good* SEO guys.


In terms of where to keep documentation:


For conceptual docs that need regular review it's probably best to keep them in
version control. For task documentation steps should be easy to upgrade once
they fail for users. Make sure to accept bug reports in any form - be it on
Facebook, Twitter or in your issue tracker.


When writing good documentation always keep your audience in mind: If you don't
have a specific one, pitch one. Don't try to cater for everyone - if your docs
are too simplistic or too complex for others, link out to further material.
Understand their level of understanding. Understand what they will do after
reading the docs.


On a general level always include an about section, a system overview, a
description of when to read the doc and how to achieve the goal, provide
examples, provide a troubleshooting section and provide further information
links. Write breadth first - details are hard to fill in without a bigger
picture. Complete the overview section last. Call out context and
prerequisites explicitly, don't make your audience do more than they really
need to do. Reserve the details for a later document.


In general the most important and most general stuff as well as the basics
should come first. Mention the number of steps to be taken early. When it comes
to providing details: The more you provide, the more important the reader will
deem that part.