Open Source Summit Prague 2017 - part 1

2017-10-23 11:18
Open Source Summit, formerly known as LinuxCon, took place in Prague this year. Drawing some 2000 attendees to the lovely Czech city, the conference focussed on all things Linux kernel, containers, community and governance.

Keynotes

The first day started with three crowded keynotes. The first one, by Neha Narkhede, covered Apache Kafka and the Rise of the Streaming Platform. The second one, by Reuben Paul (eleven years old), showed how hacking today really is just child's play: the hack itself might seem like toying around (getting into the protocol of children's toys in order to make them do things without using the app that was intended to control them). Taken into the bigger context of a world that is getting more and more interconnected - from regular laptops, over mobile devices, to cars and the little sensors running your home - the lack of thought that goes into security when building systems today is both startling and worrying.

The third keynote of the morning was given by Jono Bacon on what it takes to incentivise communities - be it open source communities, volunteer-run organisations or corporations. In his view there are four major factors that drive human action:

  • People strive for acceptance. This can be exploited when building communities: acceptance is often displayed through some form of status. People are more likely to do what helps them advance in their career, gain the next level on a leaderboard, or gain some form of real or artificial title.
  • Humans are a reciprocal species. Ever heard of the phrase "a favour given - a favour taken"? People who once received a favour from you are more likely to help you in the long run.
  • People form habits through repetition - but it takes time to get into a habit: you need to make sure people repeat the behaviour you want them to show for at least two months until it becomes a habit they continue to drive without your help. If you are trying to roll out peer-review-based, pull-request-based working as a new model, it will take roughly two months for people to adopt it as a habit.
  • Humans have a fairly good bullshit radar. Try to remain authentic: instead of automated thank-yous, extend authentic (I would add: qualified) thank-you messages.


When it comes to the process of incentivising people Jono proposed a three step model: From hook to reason to reward.

Hook here means a trigger. What triggers the incentivising process? You can look at how people participate - number of pull requests, amount of documentation contributed, time spent giving talks at conferences. Those are all action-based triggers. What's often more valuable is to look out for validation-based triggers: pull requests submitted, reviewed and merged. He showed an example of a public hacker leaderboard that had its evaluation system published. While that's lovely in terms of transparency, IMHO it has two drawbacks: it makes it much easier to reward contributions that were known and wanted when the leaderboard was set up than contributions nobody had thought of as valuable at that point. With that it also heavily influences which contributions will come in and might invite a "hack the leaderboard" kind of behaviour.

When thinking about reason there are two types of incentives: the reason could be invisible up front - Jono called these submarine rewards: without clear prior warning people get their reward for something that was wanted. Or the reason could be stated up front: "If you do that, then you'll get reward x." Which type to choose heavily depends on your organisation, the individual giving out the reward as well as the individual receiving it. The deciding factor often is which of the two is more authentic to your organisation.

In terms of the reward itself: there are extrinsic motivators - swag like stickers, t-shirts, give-aways. Those tend to be expensive, in particular if they need to be shipped. What is often overlooked in professional open source projects are intrinsic rewards: a thank you goes a long way. So does a blog post, or a social media mention. Invitations help. So do referrals to one's own network. Direct lines to key people help. Testimonials help.

Overall, measurement is key. So is focusing on incentivising shared value.

Limux - the loss of a lighthouse



In his talk, Matthias Kirschner gave an overview of Limux - the Linux rolled out in Munich's city administration: how the project started, what went wrong during evaluation, and in which direction political forces were pulling.

What I found very interesting about the talk were the questions that Matthias raised at the very end:

  • Do we suck at the desktop? Are there too many apps that people depend on?
  • Did we focus too much on the cost aspect?
  • Is the community supportive enough of people trying to monetise open source?
  • Do we harm migrations by volunteering - as in single people supporting a project without a budget and burning out in the process - instead of setting up sustainable projects with a real budget, and instead of teaching the pros and cons of going for free software so people are in a good position to argue for a sustainable project budget?
  • Within administrations: Did we focus too much on the operating system instead of freeing the apps people are using on a day to day basis?
  • Did we focus too much on one star project instead of collecting and publicising many different free software based approaches?


As a lesson from these events, the FSFE launched an initiative demanding that code funded by public money be developed and released under free licenses.

Dude, Where's My Microservice

In his talk Dude, Where's My Microservice?, Tomasz Janiszewski from Allegro gave an introduction to what projects like Marathon on Apache Mesos, Docker Swarm, Kubernetes or Nomad can do for your microservices architecture. While the examples given in the talk referred to specific technologies, the concepts are intended to be general purpose.

Coming from a virtual machine based world where apps are tied to virtual machines, which themselves are tied to physical machines, what projects like Apache Mesos try to do is to abstract that exact machine mapping away. As a first result of this decision, how to communicate between microservices becomes a lot less obvious. This is where service discovery enters the stage.

When running in a microservice environment, one goal when assigning tasks to services is to avoid unhealthy targets. In terms of resource utilization, the goal is to use just the right amount of resources instead of overprovisioning, in order to avoid wasting money on idle resources - while still avoiding overload of individual services.

Looking at an example of three physical hosts running three services in a redundant manner, how can assigning tasks to these instances be achieved?

  • One very simple solution is to go for a proxy-based architecture: there is a single point of change, and no in-app dependencies are needed to make this model work. You can implement fine-grained load balancing in your proxy. However this comes at the cost of a single point of failure, one additional hop in the middle, and it usually requires a common protocol that the proxy understands.
  • Another approach is a DNS-based architecture: have one registry that holds information on where services are located, but talk to these services directly instead of going through a proxy. The advantages: no additional hop once the name is resolved, no single point of failure (services can keep working with stale data), and it's protocol independent. However it does come with in-app dependencies: load balancing has to happen locally in the app, and you will want to cache name resolution results - but every cache needs some cache invalidation strategy (see the sketch after this list).
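
To make the DNS-based variant a bit more concrete, here is a minimal C sketch (mine, not from the talk) of the client side: it resolves a service name, caches the answers, invalidates the cache after a fixed TTL and round-robins across the cached endpoints. The service name and the TTL are made up for illustration.

```c
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define MAX_ENDPOINTS     16
#define CACHE_TTL_SECONDS 30   /* made-up invalidation interval */

struct endpoint_cache {
    char   addrs[MAX_ENDPOINTS][INET6_ADDRSTRLEN];
    size_t count;
    size_t next;         /* round-robin cursor */
    time_t resolved_at;  /* used for cache invalidation */
};

/* Re-resolve the service name and refill the cache. */
static int refresh(struct endpoint_cache *c, const char *name)
{
    struct addrinfo hints;
    struct addrinfo *res, *ai;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(name, NULL, &hints, &res) != 0)
        return -1;  /* resolution failed; keep whatever stale data we have */

    c->count = 0;
    for (ai = res; ai && c->count < MAX_ENDPOINTS; ai = ai->ai_next) {
        void *addr = (ai->ai_family == AF_INET)
            ? (void *)&((struct sockaddr_in *)ai->ai_addr)->sin_addr
            : (void *)&((struct sockaddr_in6 *)ai->ai_addr)->sin6_addr;
        inet_ntop(ai->ai_family, addr, c->addrs[c->count], sizeof(c->addrs[0]));
        c->count++;
    }
    freeaddrinfo(res);
    c->resolved_at = time(NULL);
    return c->count > 0 ? 0 : -1;
}

/* Pick the next endpoint, refreshing the cache once it is older than the TTL. */
static const char *pick_endpoint(struct endpoint_cache *c, const char *name)
{
    if (c->count == 0 || time(NULL) - c->resolved_at > CACHE_TTL_SECONDS)
        refresh(c, name);
    if (c->count == 0)
        return NULL;
    return c->addrs[c->next++ % c->count];
}

int main(void)
{
    struct endpoint_cache cache = { 0 };
    /* "users.service.example" is a stand-in for whatever name your registry publishes. */
    const char *ep = pick_endpoint(&cache, "users.service.example");
    printf("next endpoint: %s\n", ep ? ep : "(unresolved)");
    return 0;
}
```

The property worth noting is exactly what the talk highlighted: the app does its own load balancing and can keep serving from stale entries if the registry is temporarily unreachable.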


In both solutions you will still need additional logic, e.g. for deregistering services, and you will have to make sure a service registers itself only once it has successfully booted up.
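
A sketch of that registration step, again mine rather than the speaker's: the process polls its own health endpoint until it answers, only then announces itself to the registry, and removes the entry again on orderly shutdown. The registry URL, path and JSON body are invented for illustration - in practice this is where Consul, etcd, ZooKeeper or your orchestrator's API would sit.

```c
#include <stdio.h>
#include <unistd.h>
#include <curl/curl.h>

/* Both URLs are made up for this sketch. */
#define HEALTH_URL   "http://127.0.0.1:8080/health"
#define REGISTRY_URL "http://registry.internal:8500/v1/services/users/instance-1"

/* Return 0 once the local health endpoint answers with HTTP 200. */
static int wait_until_healthy(void)
{
    CURL *h = curl_easy_init();
    long status = 0;
    int attempt;

    if (!h)
        return -1;
    curl_easy_setopt(h, CURLOPT_URL, HEALTH_URL);
    curl_easy_setopt(h, CURLOPT_TIMEOUT, 2L);

    for (attempt = 0; attempt < 30 && status != 200; attempt++) {
        if (curl_easy_perform(h) == CURLE_OK)
            curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &status);
        if (status != 200)
            sleep(1);  /* service still booting, try again */
    }
    curl_easy_cleanup(h);
    return status == 200 ? 0 : -1;
}

/* Register (PUT) or deregister (DELETE) this instance in the registry. */
static int announce(const char *method, const char *body)
{
    CURL *h = curl_easy_init();
    CURLcode rc;

    if (!h)
        return -1;
    curl_easy_setopt(h, CURLOPT_URL, REGISTRY_URL);
    curl_easy_setopt(h, CURLOPT_CUSTOMREQUEST, method);
    if (body)
        curl_easy_setopt(h, CURLOPT_POSTFIELDS, body);
    rc = curl_easy_perform(h);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : -1;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    /* Register only once the service has successfully booted. */
    if (wait_until_healthy() != 0 ||
        announce("PUT", "{\"address\":\"10.0.0.5\",\"port\":8080}") != 0) {
        fprintf(stderr, "registration failed\n");
        curl_global_cleanup();
        return 1;
    }

    /* ... serve traffic ... */

    /* Deregister on shutdown so no stale entry is left behind. */
    announce("DELETE", NULL);
    curl_global_cleanup();
    return 0;
}
```

The point is the ordering rather than the tooling: the health check gates the registration, and deregistration is part of the shutdown path instead of an afterthought.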

Enter the service mesh architecture, e.g. based on Linkerd or Envoy. The idea here is to have what Tomek called a sidecar added to each service; it talks to the service mesh controller and takes care of service discovery, health checking, routing, load balancing, authn/z, metrics and tracing. The service mesh controller holds information on which services are available, the available load balancing algorithms and heuristics, retries, timeouts and circuit breaking, as well as deployments. As a result the service itself no longer has to take care of load balancing, circuit breaking, retry policies, or even tracing.
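
To illustrate the sidecar idea, here is a deliberately tiny C skeleton (not from the talk, and nowhere near what Envoy or Linkerd actually do): the application always connects to a fixed local port, and the sidecar picks a healthy upstream instance, applies crude timeouts and fails over to the next instance when a connect fails. Upstream addresses are hard-coded for illustration; the byte-pumping between the two sockets is left out.

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hard-coded instance list; a real sidecar would get this from the mesh controller. */
static const char *upstreams[] = { "10.0.0.5", "10.0.0.6", "10.0.0.7" };
#define UPSTREAM_PORT 8080
#define LOCAL_PORT    15001

/* Try each upstream in turn; return a connected socket or -1 if all fail. */
static int connect_upstream(void)
{
    size_t i;

    for (i = 0; i < sizeof(upstreams) / sizeof(upstreams[0]); i++) {
        struct sockaddr_in addr = { 0 };
        struct timeval tv = { 1, 0 };  /* crude per-socket timeout */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            continue;
        addr.sin_family = AF_INET;
        addr.sin_port = htons(UPSTREAM_PORT);
        inet_pton(AF_INET, upstreams[i], &addr.sin_addr);
        setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;   /* healthy instance found */
        close(fd);       /* failover: try the next instance */
    }
    return -1;
}

int main(void)
{
    struct sockaddr_in local = { 0 };
    int listener = socket(AF_INET, SOCK_STREAM, 0);

    local.sin_family = AF_INET;
    local.sin_port = htons(LOCAL_PORT);
    inet_pton(AF_INET, "127.0.0.1", &local.sin_addr);
    bind(listener, (struct sockaddr *)&local, sizeof(local));
    listen(listener, 16);

    for (;;) {
        int client = accept(listener, NULL, NULL);
        int upstream = connect_upstream();

        if (client < 0 || upstream < 0) {
            /* all upstreams down - this is where circuit breaking would kick in */
            if (client >= 0)
                close(client);
            if (upstream >= 0)
                close(upstream);
            continue;
        }
        /* ... pump bytes between client and upstream, collect metrics and traces ... */
        close(upstream);
        close(client);
    }
}
```

Even at this toy scale the division of labour is visible: the service only ever talks to localhost, while discovery, retries and failure handling live next to it in the sidecar.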

After that high level overview of where microservice orchestration can take you, I took a break, following a good friend to the Introduction to SoC+FPGA talk. It's great to see Linux support for these systems - even if it is not quite as stable as it would be in an ideal world.

Trolling != Enforcement

The afternoon for me started with a very valuable talk by Shane Coughlan on how trolling doesn't equal enforcement; the talk was related to what was published on LWN earlier this year.

Shane started off by explaining some of the history of open source licensing: from times when it was unclear whether documents like the GPL would hold up in court, to projects like gpl-violations.org proving that these licenses are indeed valid legal contracts that can be enforced in court. What he made clear was that those licenses are the basis for equal collaboration: they are a common set of rules that parties who do not know each other agree to adhere to. As a result, following the rules set forth in those licenses creates trust in the wider community and thus leads to more collaboration overall. On the flipside, breaking the rules erodes this very trust: it leads to less trust in the companies breaking the rules, and it also leads to less trust in open source if projects don't follow the rules as expected.

However when it comes to copyright enforcement, the case of Patrick McHardy raises the question whether all copyright enforcement is good for the wider community. To understand that question we need to look at the method Patrick McHardy employs: he gets in touch with companies over seemingly minor copyright infringements, asks for a cease and desist to be signed, and gets a small sum of money out of his target. In a second step the process repeats, except the sum extracted increases. Unfortunately what this shows is that there is a viable business model here that hadn't been tapped into before. So while the activities of Patrick McHardy probably aren't so bad in and of themselves, they do set a precedent that others might follow, causing far more harm.

Clearly there is no easy way out. Suggestions include establishing common norms for enforcement and ensuring that hostile actors are clearly unwelcome. For companies, steps that can be taken include understanding the basics of the legal requirements, understanding community norms, and having processes and tooling to address both. As one step, there is a project called OpenChain publishing material on the topic of open source copyright, compliance and compliance self-certification.

Kernel live patching

Following Tomas Tomecek's talk on how to get from Dockerfiles to Ansible Containers, I went to a talk given by Miroslav Benes from SuSE on Linux kernel live patching.

The topic is interesting for a number of reasons: as early as 2008, MIT developed something called Ksplice, which uses jumps patched into functions for call redirection. The project was acquired by Oracle - and discontinued.

In 2014 SuSE came up with something called kGraft for Linux live patching, based on immediate patching but lazy migration. At the same time Red Hat developed kpatch, based on an activeness check.

In the case of kGraft the goal was to be able to apply limited-scope fixes to the Linux kernel (e.g. security, stability or corruption fixes), require only minimal changes to the source code, incur no runtime cost, cause no interruption to applications while patching, and allow for full review of the patch source code.

The way it is implemented is fairly obvious - in hindsight: it's based on re-using the ftrace framework. kGraft uses the tracer for interception, but then asks ftrace to return to a different address, namely the start of the patched function. So far the feature is available for x86 only.

Now while patching a single function is easy, making changes that affect multiple functions gets trickier. This creates the need for lazy migration that ensures functions are switched over consistently, based on a consistency model. In kGraft this is built on a per-thread flag that marks all tasks at the beginning and makes it possible to wait until each of them has migrated.

From 2014 onwards it took a year to get the ideas merged into mainline. What is available there is a mixture of both kGraft and kpatch.
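
To make that more tangible, this is roughly what a live patch module looks like with the upstream livepatch interface. It mirrors the kernel's own samples/livepatch/livepatch-sample.c and redirects cmdline_proc_show(), so /proc/cmdline returns a fixed string once the patch is enabled. The klp API has changed across kernel versions, so treat this as a sketch of the idea rather than something to load on an arbitrary kernel.

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/seq_file.h>
#include <linux/livepatch.h>

/* Replacement for the original cmdline_proc_show(). */
static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%s\n", "this has been live patched");
    return 0;
}

static struct klp_func funcs[] = {
    {
        .old_name = "cmdline_proc_show",         /* function to redirect */
        .new_func = livepatch_cmdline_proc_show, /* where ftrace should divert calls to */
    }, { }
};

static struct klp_object objs[] = {
    {
        /* a NULL name means the patched function lives in vmlinux itself */
        .funcs = funcs,
    }, { }
};

static struct klp_patch patch = {
    .mod = THIS_MODULE,
    .objs = objs,
};

static int livepatch_init(void)
{
    return klp_enable_patch(&patch);
}

static void livepatch_exit(void)
{
}

module_init(livepatch_init);
module_exit(livepatch_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
```

Enabling and disabling the patch afterwards is done through sysfs (/sys/kernel/livepatch/<patch>/enabled), and that transition is where the per-task consistency handling described above comes into play.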

What are the limitations of the merged approach? There is currently no way to deal with data structure changes, which matters in particular when thinking about spinlocks and mutexes. Consistency reasoning has to be done manually. Architectures other than x86 are still an open issue, and documentation and better testing remain open tasks.

Open development and inner source for fun and profit

2017-05-26 07:17
Last in a row of interesting talks at Adobe Open Source Summit was one on open development / inner source and how it benefits internal projects, given by Michael Marth. Note: he knows there are subtle differences between inner source and open development, but said he would use the terms interchangeably in his talk.

So what is inner source all about? Essentially: use all the tools and processes that already work for open source projects, just internally - (company-)public mailing lists, documentation, chat software, issue trackers. Taken at its core this is very simplistic though. The more interesting aspects emerge when looking at the people interaction patterns that develop.

First off: the goal of making all interaction public and easy to follow for anyone is to attract more contributors. The richest source of contributors can be tapped if your users are tech savvy as well. Based on this assumption, inner source works best when dealing with infrastructure or platform software, where downstream users are developers themselves.

As a general rule of thumb: out of 100 users, 10 will contribute, and one will stick around and become a long-term committer. This translates into a lot of effort to gain additional hands for your project.

So - assuming what you want is a wildly successful open source project: you put your code on Github (or whatever the code hosting site of the day is), start some marketing effort - but still no magic happens: no stars, no unicorns, maybe there are ten pull requests, but that's about it. What happened?

Architecting a community around an open source project is a long-term investment: over time you'll end up training numerous newbies, helping people get started and convincing some of them to contribute back.

According to Michael Marth this works best for infrastructure projects: where users can be turned into contributors, and where projects can be turned into platform software that lasts for a decade and longer. In his opinion two factors are key: enabling distributed decision making to let others participate, and a management style that lets the community take its own decisions instead of having one entity control the project. Usually what emerges from that is a distributed, peer-to-peer networked organisational structure with distributed teams, no calls, no standups, and consensus-based decision making.

In Michael's experience what works best is to adopt an open source working model from the very start. His recommendation for projects coming from commercial entities is to go to the Apache Software Foundation: there, proven principles and rules have been identified already. In addition, going there gives the project much more credibility when it comes to convincing partners that decisions are actually made in a distributed fashion and cannot be controlled by one single entity. Telling a customer "We have to check this with the community first" as an answer to a feature request becomes much more credible that way.

The result of this approach: projects that, under his guidance, gained ten times as many people contributing from outside the original entity as from inside it. Michael observed that partners became much more likely to stick with the technology because they co-own it, and that partners participated in development. The project also made for a lovely recruiting pipeline filled with people already familiar with the technology.

Note to self - slides for staying sane when maintaining a popular open source project

2017-05-26 07:00
For further reference - Simon MacDonald has a great collection of good advice on how to stay sane when running and maintaining a popular open source project. Link here: http://s.apache.org/sanity

Some things he mentioned:

Include a README. It should tell people what the project is about but also what the project is not about. It should have some sort of getting started guide. Potentially link to a CONTRIBUTING doc.

Contribution guidelines should outline social rules like a code of conduct, technical instructions like how to submit a pull request, a style guide, information on how to build the project and make changes etc.

Add a LICENSE file - any OSS license really, because without one it won't be open source in any jurisdiction. Add file headers to each file you publish.

Decide how to handle questions vs. issues: Either both in the issue tracker, or in separate venues.

Add an issue template that asks the user if they searched for the issue already, asks for expected behaviour, actual behaviour, logs, reproduction code, version number used. A note on issues: Having many issues is a good thing - it shows your project is popular. Having only many stale issues is a bad thing - nobody is caring for the project anymore.

Close issues that don't follow the template. Close issues that are duplicates. Close issues that have seen no activity after you asked for user input a while ago. Repeated issues asking for seemingly obvious things: turn those into additional documentation. Requests for easy-to-add functionality: let them sit for a while to give others a chance to do the work and get involved. Same for bugs that are easy to fix.

Overall people are more difficult than code. Expect trolls to show up. Remain empathetic, respectful but firm in your communication. Don't be afraid to say no to external requests even if they are urgent for the requester.

Add a pull request template that asks for a description, related issue, type tag. Remember that you don't have to merge every pull request.

Build a community: Make it easy to contribute, identify beginner bugs, document the hell out of your project, turn contributors into maintainers, thank people for their effort.

Have tests but keep build times low.

Add documentation, at the very least a README file, a how to contribute file, break those files into a separate website once they grow too large.

As for releasing: Automate as much as you can. Three options: time based release schedule, release on every commit, release "when it's done".

Async decision making

2017-05-16 06:45
This is the second in a series of posts on inner source/open source. Bertrand Delacretaz gave an interesting talk on how to avoid meetings by introducing an async way of making decisions.

He started off with a little anecdote related to Paul Graham's maker's vs. manager's schedule: Bertrand's father was a carpenter. He was working in the same house that his family was living in, so joining the family for lunch was common for him. However there was one valid excuse for him to skip lunch: "I'm glueing." How's that? Glueing together a chair is a process that cannot be interrupted once started without ruining the entire chair - it's a classical maker task that can either be completed in one go, or not at all.

Software development is pretty similar to that: People typically need several hours of focus to get anything meaningful done, in my personal experience at least two (for smaller bugs) or four (for larger changes). That's the motivation to keep forced interruptions low for development teams.

Managers tend to be on a completely different schedule: context switching all day, communicating all day - adding another one-hour meeting to the schedule doesn't make much of a difference.

The implication of this observation: adding one hour of meeting time to an engineer's schedule comes with an enormous cost once we factor the interruption into the equation. Add to that the fact that lots of meetings fail for various reasons (lack of preparation, participants not preparing, bad audio or video quality, missing participants, delayed start times, bad summarisation), and it seems valid to ask whether there is a way to reduce the number of face-to-face meetings while still remaining operational.

As communication still remains key to a functional organisation, one approach taken by open development at Adobe (as well as at the Apache Software Foundation really) is to differentiate between things that can only be discussed in person and decisions that can be taken in an asynchronous fashion. While this doesn't reduce the amount of time communicating (usually quite the contrary happens) it does allow for participants to participate pretty much on their own schedule thus reducing the number of forced interruptions.

How does that work in practice? In Bertrand's experience decision making tends to be a four-step process: starting from an open brainstorming session, options need to be condensed, consensus established and finally a decision made.

In terms of tooling, what works best in his experience is to have one and only one shared communication medium for brainstorming. At Apache those are good old mailing lists. In addition there is a need for a structured issue tracker / case management tool to make options and choices obvious, and decisions visible and archived.

When looking at tooling we are missing one important ingredient though: each meeting needs extensive preparation and thorough post-processing. As an example, let's take the monthly Apache board of directors' meeting. It's scheduled to last no longer than two hours. Given that each of hundreds of projects is required to report on a quarterly basis, that executive officers need to provide reports on a monthly basis, that each month at least one major decision item comes up, and that there are still day-to-day decisions about personnel, budget and the like to be taken: how does fitting all of that into two hours work? The secret sauce is a text file in svn plus a web frontend to it called Whimsy.

Directors will read through those reports ahead of the meeting. They will add comments to them (which are mailed automatically to the projects); often those comments are also used by directors to communicate with each other. They will pre-approve reports, and they will mark them for discussion if there is something fishy. Some people will check the projects' lists to match them up with what's being discussed in the report, some will focus on community matters, some will focus on seeing releases mentioned. If a report gets enough pre-approvals and no "to be discussed" mark, it is not shown or touched in the actual meeting.

That way most of the discussion happens before the actual meeting, leaving time for those issues that are truly contentious. As the meeting is open for anyone in the foundation to attend, questions raised beforehand that could not be resolved in writing can be answered in the voice call fairly quickly.

Speaking of the call: how does the actual meeting proceed? All participants dial in via good old telephone. Since everyone is on a telephone, the usual problem of discussions among people in the same room being hard to follow for remote participants doesn't occur. In addition to the telephone there's an IRC backchannel for background discussion, chatter, jokes and less relevant topics. All discussion that has to be archived and that relates to decisions is kept on the voice channel though. During the meeting the board's chair moderates through the agenda. In addition the secretary takes notes of who attended, which decisions were made and which arguments were exchanged. Those notes are shared after the meeting, approved at the following month's meeting and published thereafter. If you want to dig deeper into any project's history, there's tooling to drill down into meeting minutes per project back to the very beginning of the foundation.

Does the above make decision making faster? Probably not. However it enables an asynchronous work mode that fits well with a group of volunteers working together in a global, distributed community, where participants not only live in different geographies and timezones but also have different schedules and priorities.