JAX: Tales from production

2013-05-23 20:38
In a second presentation Peter Roßbach, together with Andreas Schmidt, provided
more detail on what logging entails in real world projects. Development
messages turn into valuable information needed to uncover issues and downtime
of systems, to do capacity planning, to measure the effect of software changes,
and to analyse resource usage under real world conditions. In addition to these
technical use cases there is a need to provide business metrics.


When dealing with multiple systems you face the problem of correlating values
across machines and systems, and of providing meaningful visualisations that
support the correct decisions.


When thinking of your log architecture you might want to consider storing not
only log messages. Facts like release numbers should be tracked somewhere as
well - ready to be joined in when needed to correlate behaviour with a release
version. To do that, also track events like rolling out a release to
production; launching in a new market or switching traffic to a new system
could be other such events. Beyond pure log messages, also provide aggregated
metrics and counters. All of these pieces should be stored and tracked
automatically to free operations for more important work.


Have you ever thought about documenting not only your software, its interfaces
and input/output formats? What about documenting the logged information as
well? What about the fields contained in each log message? Are they documented,
or do people have to infer their meaning from the content? What about valid
ranges for values - are they noted down somewhere? Did you note whether a
specific field can only contain integers or whether some day it could also
contain letters? What about the number format - is it decimal or hexadecimal?
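
As a purely hypothetical sketch, such a field dictionary could be as simple as
one entry per field - everything below is made up for illustration:

    {
      "field": "duration_ms",
      "type": "integer",
      "unit": "milliseconds",
      "range": "0 .. 60000",
      "format": "decimal",
      "description": "wall clock time spent handling the whole request"
    }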


For a nice example of architecture documentation at the BBC check out "Winning
the metrics battle" on the BBC dev blog.


There's an abundance of tools out there to help you with all sorts of logging
related topics:




  • For visualisation and transport: Datadog, Kibana, Logstash, StatsD,
    Graphite, syslog-ng

  • For providing the values: JMX, Metrics, Jolokia

  • For collection: collectd, StatsD, Graphite, New Relic, Datadog

  • For storage: typical RRD tools including RRD4j, MongoDB, OpenTSDB (based
    on HBase), Hadoop

  • For charting: Munin, Cacti, Nagios, Graphite, Ganglia, New Relic, Datadog

  • For profiling: Dynatrace, New Relic, Boundary

  • For events: Zabbix, Icinga, OMD, OpenNMS, Hyperic HQ, Nagios, JBoss RHQ

  • For logging: Splunk, Graylog2, Kibana, Logstash




Make sure to provide metrics consistently and to be able to add them with
minimal effort. Self-adaptation and automation are useful for this. Make sure
developers, operations and product owners are able to use the same system so
there is no information gap on either side. Your logging pipeline should be
tailored to provide easy and fast feedback on the implementation and features
of the product.
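
As a minimal sketch of what low-effort metrics can look like, here is the
Metrics library mentioned above counting and timing requests (package names
follow its 3.x releases; the console reporter is only for the sake of the
example):

    import com.codahale.metrics.ConsoleReporter;
    import com.codahale.metrics.Counter;
    import com.codahale.metrics.MetricRegistry;
    import com.codahale.metrics.Timer;
    import java.util.concurrent.TimeUnit;

    public class CheckoutMetrics {
        private static final MetricRegistry registry = new MetricRegistry();
        private static final Counter orders = registry.counter("checkout.orders");
        private static final Timer latency = registry.timer("checkout.latency");

        static void handleOrder(Runnable order) {
            orders.inc();                              // business level counter
            final Timer.Context ctx = latency.time();  // per request timing
            try {
                order.run();
            } finally {
                ctx.stop();
            }
        }

        public static void main(String[] args) {
            // in production you would report to graphite or statsd instead
            ConsoleReporter.forRegistry(registry).build().start(10, TimeUnit.SECONDS);
            handleOrder(new Runnable() {
                public void run() {
                    System.out.println("order processed");
                }
            });
        }
    }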


To reach a decent level of automation a set of tools is needed for:


  • Configuration management (where to store passwords, URLs or IPs, log
    levels etc.). Typical names here include ZooKeeper, but also CFEngine,
    Puppet and Chef.

  • Deployment management. Typical names here are UC4, uDeploy, glu, and
    Etsy's deployment tooling.

  • Server orchestration (e.g. what is started when during boot). Typical
    names include UC4, Nolio, Marionette Collective, Rundeck.

  • Automated provisioning (think "how long does it take from server failure
    to bringing that service back up online?"). Typical names include
    Kickstart, Vagrant, or typical cloud environments.

  • Test driven/behaviour driven environments (think about adjusting not
    only your application but also firewall configurations). Typical tools
    that come to mind here include Serverspec, RSpec, Cucumber, c-puppet, Chef.

  • When it comes to defining the points of communication for the whole
    pipeline, no tool beats traditional pen and paper - and socially getting
    both development and operations into one room.




The tooling to support this process ranges from simple self-written bash
scripts in the startup model, over frameworks that support parts of the flow,
up to process based suites. No matter which path you choose, the goal should
always be to end up with a well documented, reproducible step into production.
When introducing such systems, problems in your organisation may become
apparent. Sometimes it helps to just create facts on the ground: it's easier to
ask for forgiveness than permission.

JAX: Logging best practices

2013-05-22 20:37
The ideal outcome of Peter Roßbach's talk on logging best practices was to have
attendees leave the room thinking "we know all this already and are applying
it successfully" - most likely though, the majority left thinking about how to
implement even the most basic advice discussed.


From his consultancy and firefighting background he has a good overview of what
logging in the average corporate environment looks like: no logging plan, no
rules, dozens of logging frameworks in active use, output in many different
languages, no structured log events but a myriad of different quoting,
formatting and bracketing standards instead.


So what should the ideal log line contain? First of all it should really be a
log line instead of a multi line something that cannot be reconstructed when
interleaved with other messages. The line should not only contain the class
name that logged the information (actually that is the least important piece of
information); it should contain the thread id, the server name, and a
(standardised and always consistently formatted) timestamp in a decent
resolution (hint: one new timestamp per second is not helpful when facing
several hundred requests per second). Make sure timing is aligned across
machines if timestamps are needed for correlating logs. Ideally there should be
context in the form of a request id, flow id, or session id.
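
A minimal sketch of how such context can travel with every line, assuming SLF4J
with Logback (the field names are illustrative):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class RequestHandler {
        private static final Logger log = LoggerFactory.getLogger(RequestHandler.class);

        void handle(String requestId, String sessionId) {
            MDC.put("requestId", requestId);  // picked up by the pattern below
            MDC.put("sessionId", sessionId);
            try {
                log.info("handling request");
            } finally {
                MDC.clear();                  // never leak context into the next request
            }
        }
    }

    // matching logback.xml pattern: ISO timestamp, thread, level, logger, context
    // %d{ISO8601} [%thread] %-5level %logger %X{requestId} %X{sessionId} - %msg%n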


When thinking about logs, do not think too much about human readability - think
more in terms of machine readability and parsability. Treat your logging system
like the database in your data center that has to deal with the most traffic:
it is what holds user interactions and system metrics that can be used as
business metrics, for debugging performance problems, and for digging up
functional issues. Most likely you will want to turn free text - which provides
lots of flexibility for screwing up - into a more structured format like JSON,
or even some binary format that is storage efficient (think Protocol Buffers,
Thrift, Avro).
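
One way to get there - just a sketch using Jackson to build the event before
handing it to whatever appender is in place (dedicated encoders like the
logstash Logback encoder do this job more efficiently):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class JsonLogEvent {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        static String event(String level, String requestId, String msg, long durationMs)
                throws Exception {
            Map<String, Object> e = new LinkedHashMap<String, Object>();
            e.put("ts", System.currentTimeMillis());  // or an ISO formatted timestamp
            e.put("level", level);
            e.put("thread", Thread.currentThread().getName());
            e.put("requestId", requestId);
            e.put("msg", msg);
            e.put("duration_ms", durationMs);
            return MAPPER.writeValueAsString(e);      // one parseable line per event
        }
    }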


In terms of log levels, make sure to log development traces on trace, provide
detailed problem analysis information on debug, and put normal behaviour onto
info. In case of degraded functionality, log to warn. Things you cannot easily
recover from go to error. When it comes to logging hierarchies, do not only
think in class hierarchies but also in terms of use cases: just because your
http connector is used in two modules doesn't mean that there should be no way
to turn logging on for just one of the modules alone.
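
A sketch of the use case idea - giving a shared http connector an additional
per-module logger next to the class based one (names made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class HttpConnector {
        // the usual class based logger
        private static final Logger classLog = LoggerFactory.getLogger(HttpConnector.class);
        // an extra logger named after the module using the connector, so that
        // "connector.billing" can go to debug while "connector.search" stays on info
        private final Logger moduleLog;

        public HttpConnector(String moduleName) {
            this.moduleLog = LoggerFactory.getLogger("connector." + moduleName);
        }

        void connect(String url) {
            moduleLog.debug("connecting to {}", url);
        }
    }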


When designing your logging make sure to talk to all stakeholders to get clear
requirements. Make sure you can find out how the system is being used in the
wild, and that you are able to quantify the number of exceptions as well as
max, min and average duration of a request and similar metrics.


Tools you could look at for help include but are not limited to Splunk, JMX,
JConsole, syslog, Logstash, StatsD, and Redis for log collection and queuing.


As a parting exercise: Look at all of your own logfiles and count the different
formats used for storing time.

JAX: Java performance myths

2013-05-22 20:37
This was one of Arno Haase's famous talks on Java performance myths. His main
point - supported with dozens of illustrative examples - was for software
developers to stop trusting word of mouth, cargo cult like myths that are
abundant among engineers. Again, the goal should be to write readable code
above all - for one, the Java compiler and JIT are great at optimising. In
addition, many of the myths spread in the Java community that are claimed to
lead to better performance are simply not true.


It was interesting to learn how many different aspects of both software and
hardware contribute to code performance. Micro benchmarks are considered
dangerous for a reason - creating a well controlled environment that matches
what the code will encounter in production is hard, as results are influenced
by things like just in time compilation, cpu throttling, etc.


Some myths that Arno proved wrong include final making code faster (for method
parameters it doesn't make a difference - the bytecode is identical with and
without it) and inheritance always being expensive (even with an abstract class
between the interface and the implementation, Java 6 and 7 can still inline the
method in question). Another one concerned often wrongly scoped Java vs. C
comparisons. One myth revolved around the creation of temporary objects - since
Java 6 and 7, in simple cases even these can be optimised away.


When it comes to (un-)boxing and reflection there is a performance penalty. For
reflection it applies mostly to the method lookup, not so much to calling the
method once it is resolved. What we are talking about, however, are penalties
in the range of about 1000 compute cycles - dwarfed by the cost of any remote
call. Reflection on fields is even cheaper.
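
In practice that means doing the lookup once and keeping the Method around - a
rough sketch:

    import java.lang.reflect.Method;

    public class ReflectionCost {
        // the expensive part: resolve the method once and cache the result
        private static final Method LENGTH = lookup();

        private static Method lookup() {
            try {
                return String.class.getMethod("length");
            } catch (NoSuchMethodException e) {
                throw new AssertionError(e);
            }
        }

        public static void main(String[] args) throws Exception {
            // the invocation itself is comparatively cheap
            int len = (Integer) LENGTH.invoke("hello");
            System.out.println(len);  // prints 5
        }
    }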


One of the more widespread myths revolved around string concatenation being
expensive - doing a "A" + "B" in code will be turned into "AB" in bytecode.
Even doing the same with a variable will be turned into the use of
StringBuilder ever since -XX:+OptimizeStringConcat was turned on by default.


The main message here is to stop trusting your intuition when reasoning about a
system's performance and performance bottlenecks. Instead, go and measure what
is really going on. The above are simple examples of where your average Java
intuition goes wrong. Make sure to stay on top of what the JVM turns your code
into and how that is then executed on the hardware you have rolled out if you
really want to get the last bit of speed out of your application.
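
For the measuring part a harness like JMH takes care of warmup and forked VM
runs; a minimal sketch (annotation names follow current JMH releases):

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.BenchmarkMode;
    import org.openjdk.jmh.annotations.Mode;
    import org.openjdk.jmh.annotations.OutputTimeUnit;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Thread)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public class ConcatBenchmark {
        String a = "A", b = "B";  // fields, so the JIT cannot constant-fold them

        @Benchmark
        public String concat() {
            return a + b;         // measured only after JMH's warmup iterations
        }
    }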

JAX: Does parallel equal performant?

2013-05-21 20:34
In general there is a tendency to equate parallel implementations with
performant implementations. Except in the really naive case there is always
going to be some overhead due to scheduling work, managing memory sharing, and
network communication. Essentially that knowledge is reflected in Amdahl's law
(the amount of serial work limits the benefit of running parts of your
implementation in parallel, http://en.wikipedia.org/wiki/Amdahl's_law) and, for
queuing problems, Little's law (http://en.wikipedia.org/wiki/Little's_law).
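
To make Amdahl's law concrete, a tiny sketch of the formula and what it
predicts:

    public class Amdahl {
        // speedup on n cores when a fraction p of the work runs in parallel:
        // speedup(n) = 1 / ((1 - p) + p / n)
        static double speedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        public static void main(String[] args) {
            System.out.println(speedup(0.9, 16));    // 6.4 - 16 cores buy only 6.4x
            System.out.println(speedup(0.9, 1024));  // ~9.9 - never beyond 10x
        }
    }

Even with 90 percent of the work parallelisable, the serial ten percent caps
the speedup at a factor of ten no matter how many cores you add.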


When looking at current Java optimisations there is quite a bit going on to
support better parallelisation: work is being done on improving lock contention
situations, the GC adaptive sizing policy has been improved to a usable state,
and there is added support for parallel arrays and the Spliterator interface
behind the lambda-based collection operations.


When it comes to better locking optimisations, the most notable work goes
towards coarsening locks at compile and JIT time (essentially moving locks from
the inside of a loop to the outside); eliminating locks if objects are being
used in a local, non-threaded context anyway; and support for biased locking
(that is, enforcing locks only when a second thread tries to access an object).
All three taken together can lead to performance improvements that make
StringBuffer and StringBuilder exhibit almost equal performance in a single
threaded context.
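
A sketch of what lock coarsening conceptually does - the JIT performs this
transformation internally, the two methods here are only an illustration:

    public class Coarsening {
        // as written: the lock is acquired and released on every iteration,
        // because StringBuffer.append() is synchronized
        void fineGrained(StringBuffer buffer) {
            for (int i = 0; i < 1000; i++) {
                buffer.append(i);
            }
        }

        // what the coarsened version effectively looks like: one lock around the loop
        void coarsened(StringBuffer buffer) {
            synchronized (buffer) {
                for (int i = 0; i < 1000; i++) {
                    buffer.append(i);
                }
            }
        }
    }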


For pieces of code that suffer from false sharing (two variables used
independently in separate threads that end up on the same CPU cacheline and as
a result are both flushed on update) there is a new annotation: adding
@Contended tells the JVM which fields to guard with cacheline padding (or to
re-arrange entirely) to avoid the false sharing. One other way to avoid false
sharing seems to be to look for class cohesion - coherent classes where methods
and variables are closely related tend to suffer less from false sharing. If
you would like to view the resulting layout use the -XX:+PrintFieldLayout
option.
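
In the Java 8 preview builds the annotation lives in sun.misc and is only
honoured when unlocked with a JVM flag - a sketch, assuming that packaging:

    import sun.misc.Contended;  // application code needs -XX:-RestrictContended

    public class Counters {
        @Contended              // pad this field onto its own cacheline
        volatile long writes;   // updated by the writer thread
        volatile long reads;    // updated by the reader thread
    }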


Java 8 will bring a few more notable improvements, including changes to the
adaptive sizing GC policy, the introduction of parallel arrays that allow for
parallel execution of predicates on array entries, changes to the concurrency
libraries, and internal iterators.


JAX: Pigs, snakes and deaths by 1k cuts

2013-05-20 20:32
In his talk on performance problems Rainer Schuppe gave a great introduction to
which kinds of performance problems can be observed in production and how to
best root-cause them.


Simply put, performance issues usually arise due to a difference in data
volume, concurrency levels or resource usage between the dev, QA and production
environments. The tooling to uncover and explain them is pretty well known:
starting with looking at logfiles, there are ARM tools, aspects, bytecode
instrumentation, sampling, watching JMX statistics, and PMI tools.


All of these tools have their own unique advantages and disadvantages. With
logs you get the most freedom; however, you have to know what to log at
development time. In addition, logging is I/O heavy, so doing too much can
itself slow the application down. In a common distributed system, logs need to
be aggregated somehow. A simple example of what can go wrong are cascading
exceptions spilled to disk that cause machines to run out of disk space one
after the other. When relying on logging, make sure to keep transaction
contexts - in particular transaction ids across machines and services - to
correlate outages. In terms of tool support, look at Scribe, Splunk and Flume.


A tool often used for tracking down performance issues in development is the
well known profiler. Usually it creates lots of very detailed data. However, it
is most valuable in development - in production, profiling a complete server
stack produces way too much load and data to be feasible. In addition there is,
again, usually no transaction context available for correlation.


A third way of watching applications do their work is to watch via JMX. This
capability is built in for any Java application, in particular for servlet
containers. Again there is no transaction context, and unless you take care of
it there won't be any historic data.


When it comes to diagnosing problems, you are essentially left with fixing
either the "it does not work" case or the "it is slow" case.


For the "it is slow case" there are a few incarnations:


  • It was always slow, we got used to it.

  • It gets slow over time.

  • It gets slower exponentially.

  • It suddenly gets slow.

  • There is a spontaneous crash.




In the case of "it does not work" you are left with the following observations:


  • Sudden outages.

  • Always flaky.

  • Sporadic error messages.

  • Silent death.

  • Increasing error rates.

  • Misleading error messages.




In the end you will always be spinning in a loop: look at symptoms, eliminate
non-causes, identify suspects, confirm by comparing against normal behaviour -
and if that doesn't settle it, lather, rinse, repeat. When it comes to causes
for errors and slowness, you will usually run into one of the following: bad
coding practices, too much load, missing backends, resource conflicts, memory
and resource leakage, or hardware/networking issues.


Some symptoms you may observe include foreseeable lock ups (it's always slow
after four hours, so we just reboot automatically before that), consistent
slowness, sporadic errors (it always happens after a certain request came in),
getting slower and slower (most likely leaking resources), sudden chaos (e.g.
someone pulling the plug or removing a hard disk), and high utilisation of
resources.

Linear memory leak



In case of a linear memory leak, the application usually runs into an OOM
eventually, getting ever slower before that due to GC pressure. Reasons could
be linear structures being filled but never emptied. What you observe is
growing heap utilisation and growing GC times. In order to find such leakage,
turn on verbose GC logging and take heap dumps to find the leaks. One challenge
though: it may be hard to find the leakage if the problem is not one large
object but many, many small ones that lead to a death by a thousand cuts,
bleeding the application to death.
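
The relevant HotSpot options and tools (flag names as of the Java 6/7 era) look
roughly like this:

    # verbose GC logging plus an automatic heap dump on OutOfMemoryError
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar app.jar

    # on-demand heap dump of a running JVM
    jmap -dump:live,format=b,file=heap.hprof <pid>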


In development and testing you will do heap comparisons. Keep in mind that
taking a heap dump causes the JVM to stop. You can use common profilers to look
at the heap dump. There are variants that help with automatic leak detection.


A variant is the pig in a python issue where sudden unusually large objects
cause the application to be overloaded.


Resource leaks and conflicts


Another common problem is leaking resources other than memory - not closing
file handles can be one incarnation. Those problems cause a slowness over time,
they may lead to having the heap grow over time - usually that is not the most
visible problem though. If instance tracking does not help here, your last
resort should be doing code audits.


In case of conflicting resource usage you usually face code that was developed
with overly cautious locking and data integrity constraints. The way to go is
thread dumps to uncover threads in blocked and waiting states.


Bad coding practices


When it comes to bad coding practices, what is usually seen is code in endless
loops (easy to see in thread dumps) and cpu bound computations without result
caching. Also, layeritis with too much (de-)serialisation can be a problem. In
addition there is the general "the ORM will save us all" problem that may lead
to massive SQL statements or to using the wrong data fetch strategy. When it
comes to caching: if caches are too large, access times grow as well. There can
be never ending retry loops and ever blocking networking calls. Also, people
tend to catch exceptions but not do anything about them other than adding a
little #fixme annotation to the code.


When it comes to locking you might run into dead- or live-lock problems. There
can be chokepoints - resources that all threads need for each processing chain.
In a thread dump you will typically see lots of wait instead of block time.
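
Getting such a thread dump is cheap; a sketch of the usual commands:

    jstack <pid> > threads.txt    # or send SIGQUIT: kill -3 <pid> dumps to stdout

In the dump, BLOCKED threads are waiting to enter a synchronized section, while
WAITING and TIMED_WAITING threads sit in Object.wait(), park() or similar
calls.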


In addition there could be internal and external bottlenecks. In particular
keep those in mind when dealing with databases.


The goal should be to find an optimum for your application between too many too
small requests that waste resources getting dispatched, and one huge request
that everyone else is waiting for.

Note to self - Java heap analysis

2012-02-09 21:30
As I keep searching for those URLs over and over again, I'm linking them here. When running into JVM heap issues (an out of memory exception is a pretty sure sign, but so is the program getting slower and slower over time) there are a few things you can do for analysis:

Start with telling the affected JVM process to output some statistics on heap layout as well as thread state by sending it a SIGQUIT (if you want to use the signal number instead - it's 3 - avoid typing 9 by accident ;) ).

More detailed insight is available via JConsole - remote setup can be a bit tricky but is well doable and worth the effort, as it gives much more detail on what is running and what memory consumption really looks like.

For a detailed analysis take a heap dump, either with jmap or JConsole, or by starting the process with the JVM option -XX:+HeapDumpOnOutOfMemoryError. Look at the dump with either jhat or the IBM heap analyzer. NetBeans also offers nice support for searching for memory leaks.

On a more general note on diagnosing Java issues, see Rainer Jung's presentation on troubleshooting Java applications as well as Attila Szegedi's presentation on JVM tuning.

Devoxx – Day one – Java, Performance and Devops

2010-12-15 21:22
In his keynote Mark Reinhold provided some information on the very interesting features to be included in the Java 7 release. Generics will be easier to declare with the diamond operator. Nested try-finally constructs that are nowadays needed to safely close resources will no longer be necessary - there will be the option of implementing a Closeable interface supporting a method close() that gets called whenever objects of that class's type go out of scope. That way resources can be freed automatically. Though different in concept, it still reminds me a lot of the functionality typically provided by destructors in C++.
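
The feature shipped as try-with-resources, built on an interface eventually named AutoCloseable; a minimal sketch:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFirstLine {
        public static void main(String[] args) throws IOException {
            // Java 7: close() is called automatically when the block is left
            try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                System.out.println(in.readLine());
            }  // in.close() runs here, even if readLine() throws
        }
    }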

The support for lambda expressions and direct method references that will greatly help reduce clutter due to nested inner classes has been postponed to later Java releases. Though it took four years to come up with the Java 7 release, the new features are pretty limited. However, the current roadmap looks very much release date driven; the intention seems to be to get developers focussed on a limited set of reachable features to finally get the release out into the hands of users.

The speaker claimed that Oracle remains committed to Java development - first and foremost because of being a heavy Java user itself, but also in order to generate revenue indirectly (through selling support and consulting for Java related products) and directly (through Java support), and to reduce internal development cost and Java friction.

Though Oracle had a JVM implementation of its own (JRockit), development of HotSpot will be continued - mostly due to the larger number of developers familiar with HotSpot. However, the monitoring and diagnosis tooling that was superior in JRockit is supposed to be ported to HotSpot.

In the core Java sessions I also went to the talk on Java performance analysis by Joshua Bloch. He did a good job bringing the topic of performance analysis of complex systems to software developers. In ancient times it was quite easy to estimate a piece of code's performance by static code analysis. Looking at the expression if (condition && secondCondition), it is still commonly considered faster to use “&&” over “&”. However, on current CPU architectures that make heavy use of instruction pipelines, whether this statement is still true depends heavily on the branch prediction heuristics. Dirtying the pipeline by using && may well be more expensive than doing the extra evaluation. The general message: the performance of your code in a real world system depends on the hardware it runs on, the operating system, and the exact VM version used. Estimating performance based on static analysis alone is no longer possible.

However, even when doing benchmarks one might well reach false conclusions. It is common knowledge that a benchmark on a VM needs to be run multiple times - VM warmup phases are well known to developers, so the common performance pattern for one specific function is slow early runs that level off to a plateau once the JIT has warmed up.

However, even when repeating the test on the same machine multiple times, the values seen after warm-up may be skewed substantially. The only remedy against false conclusions is to do several VM runs, average over the runs (and provide the median etc., which are less susceptible to outliers), and provide error bars for each averaged run. When comparing two different implementations, the only way to reliably tell which one is better is a statistical significance test. Without error bars one implementation may seem clearly better than the other; once you take into account how widely skewed the performance numbers are and add error bars, that may no longer hold - the two runs can turn out not to be statistically significantly different at all.

Devoxx – Day three

2010-12-10 21:28
The panel discussion on the future of Java was driven by visitor submitted and voted questions on the current state and future of Java. The general take-aways for me included the clear statement that the TCK will never be made available to the ASF, as well as Oracle's promise to continue supporting the Java community and to remain active in the JCP.

There was some discussion on whether coming Java versions should be backwards-incompatible. One advantage would be the removal of several Java puzzlers, making it easier for Joe Java to write code without knowing too much about potential inconsistencies. According to Joshua Bloch the language is no longer well suited to the average programmer who simply wants to get his tasks done in a consistent and easy to use language: it has become too complicated over the course of the years and is in bitter need of simplification.

Having seen his presentation in Berlin at Buzzwords and having silently followed the project's progress online, I skipped parts of the Elasticsearch presentation. Instead I went to the presentation on the Ghost-^wBoilerplate Busters from project Lombok. It always struck me as odd that in a typical Java project there is so much code that can be generated automatically by Eclipse - such as getters/setters, equals/hashCode, delegation of methods and more. I never really understood why it is possible to generate all that code from Eclipse but not during compile time. Project Lombok comes to the rescue here. As a compile time dependency it provides several annotations that are automatically converted to the correct code on the fly. It includes support for getter/setter generation, handling of closable resources (even with the current stable version of Java), generation of thread safe lazy initialisation of member variables, automatic implementation of the composition over inheritance pattern, and much more.
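
A small sketch of what that looks like in practice, using annotations as documented by Lombok:

    import lombok.EqualsAndHashCode;
    import lombok.Getter;
    import lombok.Setter;

    @Getter @Setter @EqualsAndHashCode
    public class Customer {
        private String name;
        private String email;

        // thread safe lazy initialisation, generated at compile time
        @Getter(lazy = true)
        private final double[] cached = expensiveCalculation();

        private double[] expensiveCalculation() {
            return new double[] { 1.0, 2.0, 3.0 };
        }
    }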

The library can be used from within Eclipse, in Maven, Ant, Ivy, and on Google App Engine. One of the developers in charge of IntelliJ who was in the audience announced that the library will be supported by the next version of IntelliJ as well.

Devoxx – Day two – Caching

2010-12-07 21:22
Day two started with a really good talk on caching architectures by Greg Luck. He first motivated why caching works: even with SSDs being available now, there is still a huge performance gap between RAM access times and having to go to disk. The issue is even worse in systems that are architected in a distributed way, making frequent calls to remote systems.

When sizing systems for typical load, what is oftentimes forgotten is that there is no such thing as typical load: usually the load distribution observed over one day for a service used mainly in one time zone has the shape of an elephant - most queries are issued during lunch time (the head of the elephant) with another, smaller peak during the afternoon. This pattern repeats when looking at the weekly distribution, and repeats again when looking at the yearly distribution. At the peak time of the peak day of the year, your load may be increased by several orders of magnitude compared to average load.

Although query volume may be high in most applications that reach out for caching, these queries usually exhibit a power law distribution. This means that a few queries are issued very frequently while many queries are pretty rare. This pattern allows for high cache hit rates, reducing load substantially even during very busy times.

The speaker went into some more detail concerning different architectures: usually projects start with one cache located directly on the frontend server. When scaling horizontally, adding more and more frontends leads to ever increasing load on the database during the lifetime of one cached item. The first idea employed to remedy this is to link the different caches to each other to increase cache hit rates. The problem here are updates racing to the various caches when the same query is issued to the backend by more than one frontend. The usual next step is to go for a distributed remote cache such as memcached. Of course this has the drawback of a network call for each cache access, slowing down response times by several milliseconds. Another problem with distributed caching systems is a theorem well known to people building distributed NoSQL databases: CAP says that you can get only two of the three desired properties consistency, availability and partition-tolerance. Ehcache with a Terracotta backend lets you configure where your priority lies.
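
For reference, the read-through pattern with Ehcache's 2.x API looks roughly like this, assuming a cache named "queries" is configured in ehcache.xml:

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class QueryCache {
        private final Cache cache = CacheManager.create().getCache("queries");

        Object lookup(String query) {
            Element hit = cache.get(query);             // RAM access, no backend call
            if (hit != null) {
                return hit.getObjectValue();
            }
            Object result = runAgainstDatabase(query);  // the expensive path
            cache.put(new Element(query, result));
            return result;
        }

        private Object runAgainstDatabase(String query) {
            return "result for " + query;               // stand-in for the real backend
        }
    }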

Devoxx Antwerp

2010-12-03 21:16
With 3000 attendees Devoxx is the largest Java community conference world-wide. Each year in autumn it takes place in Antwerp, Belgium - in recent years in the Metropolis cinema. The conference tickets were sold out long before doors opened this year.
The focus of the presentations is mainly on enterprise Java, featuring talks by the famous Joshua Bloch, Mark Reinhold and others on new features of the upcoming JDK release as well as intricacies of the Java programming language itself.
This year, for the first time, the scope was extended to include a whole track on NoSQL databases. The track was organised by Steven Noels and featured fantastic presentations on HBase use cases and easily accessible introductions to the concepts and usage of Hadoop.
To me it was interesting to observe which talks people would go to. In contrast to many other conferences, the NoSQL/cloud-computing presentations were less visited than I'd have expected. One reason might be that, especially on conference day two, they had to compete with popular topics such as the Java puzzlers, the live Java Posse and others. However, when talking to other attendees there seemed to be a clear gap between the two communities, probably caused by a mixture of

  • there being very different problems to solve in the enterprise world vs. the free software, requirements and scalability driven NoSQL community. Although even comparably small companies (compared to the Googles and Yahoo!s of this world) in Germany are already facing scaling issues, these problems are not yet that pervasive in the Java community as a whole. To me this was rather important to learn: coming from a machine learning background, now working for a search provider and being involved with Mahout, Lucene and Hadoop, scalability and growth in data have always been among the major drivers of the projects I have worked on so far.
  • Even when faced with growing amounts of data in the regular enterprise world, developers seem to face the problem of not being able to freely select the technologies used to implement a project. In contrast to startups and lean software teams, there still seem to be quite a few teams that are told not only what to implement but also how to implement it - unnecessarily restricting the tools available to solve a given problem.

One final factor that drives developers to adopt NoSQL and cloud computing technologies is the observed need to optimise the system as a whole - to think outside the box of fixed APIs and module development units. To that end the DevOps movement was especially interesting to me: only by getting the knowledge largely hidden in operations teams into development, and mixing it with the skills of software developers, can one arrive at truly elastic and adaptable systems.