Thursday, May 11, 2023

I just want to run a container...

 ...is a developer-centric point of view?

Generic data center image to exemplify a place many of us have never set foot in.

After a few hours spent upgrading the toolchain that starts from Terraform, goes through a few AWS-maintained modules and reaches the Elastic Kubernetes Service APIs, my team was entertaining the thought of how difficult it is to set up infrastructure. Infrastructure that, for this use case, takes a container image of a web application that Docker generated and runs it somewhere for users to access.

Thinking through this, either Kubernetes is the new IBM (no one ever gets fired for choosing it), or there is more to a production web application than running a container, which is what Kubernetes is often sold as: a tool to run containers in the cloud without the need to set up specific virtual machines, treating them instead as anonymous, expendable nodes where the aforementioned containers can be tightly packed, sharing CPU, memory and other resources.

What exactly is the infrastructure doing for us here? There are a few concerns about operations, the other side of that magic DevOps collaboration. For example:

  • the container image exposes an HTTP service on port 80. This is not useful for a modern browser and its padlock icons, so to achieve a secure connection a combination of DNS and Let's Encrypt generates and automatically renews certificates after verifying proof of ownership of our domain name.
  • the container image produces logs in the form of JSON lines. Through part of the Grafana stack, these lines are annotated with a timestamp, the container that generated them, and various labels such as the environment or the name of a component of the application (think prod, staging, or frontend). After these lines are indexed, further software provides the capability to query these logs, zooming in on problems or prioritizing errors during an investigation.
  • if we get too many of these log lines at the error level for a particular message, we'd like to receive an alert email that triggers us into action.
  • invariably, applications benefit from scheduled processes that run periodically and independently from the request/response lifecycle. It's such a useful architectural pattern that Kubernetes even named a resource after the cron daemon, introduced in 1975; a minimal sketch follows this list.
  • it's also very useful for applications to maintain state and store new rows in a relational database. Provided turnkey by every cloud, a set of hostname, username and password allows access from any combination of programming language and library. No need to worry anymore about rotating the logs of Postgres, or about running out of space.
  • the data contained in the application and generated by users also lends itself to timely analysis, so that we know whether to kill a feature or to invest in it. This means somehow extracting recent updates (or whole tables) into a data pipeline that transforms them into something a data scientist can analyze. All at an acceptable speed.
  • Those credentials nevertheless need to be provided to the application itself, hopefully without accidentally disclosing them into chats, logs, or emails. No human should need to enter them into an interface.
  • Configuration is slightly easier to manage than credentials due to the lack of sensitive values, but we'd still like the capability to generate some of these values such as a canonical URL or to pass in different numbers for different environments.
  • And of course, whenever we commit, we'd like to deploy the change and start the latest version of our application, check that it is responding correctly to requests, and then stop the old, still running one.
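
For the scheduled processes in particular, here is a minimal sketch of a Kubernetes CronJob; the names, image and schedule are illustrative, not from a real application:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup
spec:
  schedule: "0 3 * * *"  # cron syntax: every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: example/app:latest
              args: ["cleanup-old-sessions"]
          restartPolicy: OnFailure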

This is all in the context of a single team operating a single product. Part of this is achieved by running off-the-shelf software on top of Kubernetes; part of it by paid-for cloud services; part of it by outsourcing to specialized software as a service.

However, when we think about software development we don't necessarily think about software operations. Many of us started our careers by writing code, testing and releasing it without a further look at what happened afterwards. DevOps as a philosophy meant bridging that gap; You Build It You Run It is the easiest summary for me.

Yet what I see, and what I hear from experienced people, is that silos in infrastructure seriously slow teams down and create fears of insurmountable problems. Abandoned projects, "finished" from a development point of view, but with unclear ownership, still running in production and supporting the main revenue stream of an organization.

So I don't want to just run a container. I'd rather deploy many incrementally improved versions of a container, monitor its traffic and user activity, and close the feedback loop that links usage to the next development decision.

In other words: I can always build simple code if it doesn't have to run in production.

Thursday, January 20, 2022

A year of mob programming, part 5: methodology

I wouldn't call mob programming a methodology, but rather a practice that can be adopted. Like methodologies, however, it only surfaces problems and questions faster rather than providing solutions.

For example, a team might have a low bus factor due to the specialisms of its members, and only one person might be used to editing code in a certain language or application. Other team members might have gaps, such as not being set up with the tools and credentials to work on infrastructure and operations; or lacking the knowledge to perform screen reader testing for accessibility.

Besides exposing bus factors and gaps, mob programming brings design conflicts and opinions into the open. It takes little effort to look the other way when reviewing a pull request, or to switch to busy work or generic "refuctoring" when alone; in a mob session, disagreements and questions about value often crop up right under your nose.

However, that doesn't mean the practice automatically gives the team the psychological safety to talk about those issues out loud, and to resolve them. That's like asking a hammer to walk to the shop and buy the right nails.

For the last part of this post, I thought I'd compare my mob programming experience with some of the theory; the theory being famous Agile software development methodologies and their values or principles.

The serene predictability of waterfall software development

Within Extreme Programming values:

  • Communication inside a mob is very high: synchronous conversations supported by a visual medium such as code or a digital whiteboard.
  • Simplicity is fostered by people in a diverse team reminding others that You Aren't Gonna Need It, but also needing to understand what's going on at all times.
  • Feedback from the rest of the team, or even from a customer proxy, is immediate.
  • Courage is needed to be a driver in an unfamiliar setting, or even a navigator asking the mob for help on a task we are uncomfortable with.
  • Respect is necessary to work face to face for prolonged periods of time.

Within Lean values, mob programming:

  • eliminates waste such as partially done work: there should ideally be no open pull request or branch, as code is produced and integrated in the mob continuously.
  • eliminates waste such as outdated features or gold plating: when you include a product owner in the mob, it's easy to steer and change the scope upon feedback rather than plowing on with the original idea.
  • eliminates waste such as handoffs: there is no packaging up of code for review, or of pixel-perfect Photoshop designs for front end developers, or of front end requests for back ends to implement, and so on.
  • reduces task switching: since everyone that is needed for progress is present, you often don't need to switch task because of waiting on someone, but can just carry on.
  • amplifies learning: ideas are tried out in code from the get go, and most of the team can learn from the results including the product owner.
  • optimizes the whole: most of all, we are not optimizing how much we can use everyone's time, but how fast we can flow a single piece from idea to customer value.

Image by Yann.

Saturday, January 08, 2022

A year of mob programming, part 4: the remoteness of it

I don't have a frame of reference for mob programming in a co-located environment, besides conference (or team exercise) workshops that didn't involve production code.

On a video call, the driver is still always explicitly defined as the person sharing their screen and tools. This is similar to the person physically having the keyboard when you are all sitting around a table.

The navigator can either be more implicit, move around or be strictly defined by a process. If an implicit navigator doesn't emerge for the current commit, it should be considered normal to quickly nominate one as needed. Over time, we normalize the phrase "can you navigate [that change]?" in response to a proposal.

Due to latency, in a video call environment there is a sort of Lorentz-style relativity where you may hear overlapping voices when someone else doesn't, and vice versa. It all depends on how much latency each video link is experiencing, and the adjustments the software makes can be jarring, such as video freezes or audio delivered 10-20 seconds late.

"Sorry, two people talking over each other" "Really??"

I'm sure though everyone has already experienced people talking over each other in a physical room: it isn't exclusively a remote working problem and relies on team interactions more than technology.

We also want to communicate and especially give feedback through voice, as people's faces are not always visible to a driver looking at code; nodding doesn't necessarily help.

Perhaps a bigger problem is the fact that video calls don't allow side conversations to happen, as there is a single audio channel that our ears cannot separate into directions. This is likely to impact a massively parallel EventStorming around a whiteboard, less so an ensemble of 3-4 people trying to get to a consensus on their next move. The side conversations however include those famous water cooler talks, bumping into people in the kitchen or while on a break.

Your theory of mind might have more trouble cutting through the limited signal and understanding what mental state other people in the team are in; and whether today's problem is due more to Internet connection trouble or to a real conflict between team members. The daily retrospective helps people put tools down and reflect on the day, possibly proposing experiments for the next session.

Looking at the world around me, I'm quite sure I would be as depressed by remote work as many, if it consisted of me being alone for most of the day. So, thanks to mob programming, I have a different perspective on how incredibly engaging working from home can be.

Stay tuned for the next (and last) part of this post, Methodology.

Image by Duesentrieb.

Wednesday, November 17, 2021

A year of mob programming, part 3: a laboratory for team dynamics

Our implementation of mob programming consists of a permanent Google Meet video call, with cameras on, in which we rotate which developer shares a screen containing IDE and browser. So there are two unusual practices to get used to: being in a group all the time, and being on camera all the time.

I was originally surprised by how both practices went from exhausting to being a normal work day. This is my experience, and we have to consider neurodiversity and personal preferences in the team regarding long screen time and continuous social interaction. It's almost obvious to say, but what people hate is inconclusive meetings in which they have no say, not being together with other human beings.

Don't underestimate your capacity to adapt, but recognize that your energy is not going to always be the same; whether because you got a cold, or something is going on at home, or some task is particularly draining. 

The Flemish inscription at bottom reads: because feedback is perfidious, I will go code in my cave

Gold cards and spike branches give people the ability to work on their own when they request to. Mechanical work such as upgrading dependencies or investigating logs can benefit from one person's focused time. If you were in an office, you could go to an isolated room; working remotely, it's even easier as it just consists of leaving the video call and coming back refreshed later.

The opposite can also happen, when someone like me manifests Fear Of Missing Out as the mob makes lots of decisions in their absence. As the team gets to a norming phase, we should expect fewer surprises for members coming back to the mob after some time off.

Being together with the rest of the team can make your motivation higher due to the group support in getting started or unstuck from a particular problem; but of course the whole group can struggle on occasion.

Every time there is a new composition, though, there are changes in how the mob works and what it pays more attention to. And from the point of view of a growing team leader, being embedded in a mob environment means being positively inundated with information. The challenge is to make sense of what we see, to understand where team members are struggling and what they are finding most effective.

"I wish I could see players only in 121 meetings" -- No football coach ever

This requires a lot of (cognitive) bandwidth, and I usually can't at the same time focus on technical architecture and on social roles. But maybe this is just an argument to work in a team where the different hats can be rotated; as much as there is always someone writing code, there can always be someone checking if we have started to talk over each other, or we are falling into a rabbit hole of an hour without committing.

Stay tuned for the next part of this post, The remoteness of it.

Images by Michael Kranewitter and in the public domain.

Tuesday, November 09, 2021

A year of mob programming, part 2: Collective Code Ownership

Compared to a team that assigns tasks to developers by their function (frontend, backend, infrastructure, and so on), mob programming fosters Collective Code Ownership.

This is more in the sense that everyone should be able to contribute through the mob in any area, rather than everyone being able to prioritize on their own what needs to be improved. The pact is that our time belongs to the team, and the team decides what is important.

A result of this is a healthy prevention of knowledge silos, where you can exercise your own creativity just because no one else knows how to work in that area anymore.

Code review happens continuously in the mob on small changes, and there are no huge feature branches to go through. With large changes like those, it's too late for review to suggest a valuable but completely different approach; there is too much time and energy already invested in the code. If review dares to suggest groundbreaking changes, it causes extensive rework instead.

There is a larger picture on ownership: team members tend to keep track of what they care about, whether that is consistency in architecture, front end approaches to styling, performance issues, and so on.

Sometimes I found myself being talked out of a design I had in mind, counteracting bias I could have for the first or most familiar solution. I started working on this team with an object-oriented mindset, but we have transitioned to functional programming in TypeScript.

Sometimes, there are hills to die on: hard-to-reverse decisions on architecture, or programming language choices. It's the job of a psychologically safe team to discuss these choices collectively without drama or oversimplifications like JavaScript being the death of computer science.

"I told you we should have written the code in English, not Latin!"

It took three different iterations to get a repeatable pattern for accepting Commands (as in CQRS). But it was more fruitful to focus on the Domain Events design, a difficult choice to revisit, than on the shape of the code itself, which can be refined at any time.

Working outside of the mob

The counterpart to the XP mantra of writing all production code in pairs (or a mob) is to allow team members to contribute when they are working alone due to other necessities, such as an unstable Internet connection or a flexible day where they can't align completely with the core hours of the team.

I have learned to ask the team to commission something for me to do, or at least to give them a choice. This keeps prioritization in the hands of the group, again reducing bias.

It's helpful to assign problems that have only a constrained possible solution such as renaming, or propagating a refactoring through the codebase for consistency's sake.

It also helps to report back what you have learned during a solo coding exercise when rejoining the mob: unexpected issues, or decisions you had to take on the spot and feel unsure about.

You're allowed to stop at a certain point, as making progress alone at all costs is valued less than maintaining collective code ownership and a shared understanding of how our application works.

Technical spikes can also be chosen by a single person to work on, either because they want to individually prioritize them to demonstrate an idea; or because the amount of investigation required makes it difficult or frustrating to collaborate.
Spikes can be built on throwaway branches, optimized for learning rather than for delivering production-quality code. Once an idea has been demonstrated and approved, the mob can implement it pretty quickly on the trunk and, if we believe in the practice, with a higher level of quality. I constantly find feature branches worked on by a single person to be a dead end (especially my own).

In the end, people are frequently in meetings, researching, or even just on holiday on a given day. Hence there's always someone missing who will catch up on the progress when they come back into the mob. But the group itself never stops even as its composition changes, so some of that progress will happen every day, other things being equal.

Stay tuned for the next part of this post, A laboratory for team dynamics.

Image by Johann Jaritz.

Wednesday, October 20, 2021

A year of mob programming, part 1: metaphors

I've been practicing remote mob programming in my team for more than a year, writing more than 90% of production code inside a video call with multiple developers looking at the same screen.

A bit of context

I work in the scientific publishing domain, on web applications oriented to help scientists throughout their day.

My current team was formed from the start as a remote team, and has been working with mob programming for more than a year. It is a cross-functional team that includes a product manager, and often a designer, in the (virtual) room for co-creation.

Our set of technologies includes TypeScript, lots of semantic HTML, and minimal client-side JavaScript. Container-based infrastructure and databases are pulled in as required by the evolution of the product.

This is an experience report, not advice that can be applied blindly to your team or organization.

Metaphorical definitions

All the brilliant people working on the same thing, at the same time, in the same space, and on the same computer -- Woody Zuill

Like pair programming, mob programming can be simplistically explained through the metaphor of driving a car in a rally. Or at least, that's how I always understood the driver and navigator pair.

  • A driver shares their screen and has sole access to the keyboard. They are an intelligent IDE capable of executing mechanical refactoring and fixes, but not of deciding a design direction for the code.
  • A navigator verbalizes what the driver should be doing, making decisions that reach down to what test or line of code to write.
  • The rest of the group is on the back seat of the car, and can speak at will, volunteering information to the navigator or answering questions. Are we there yet?

The roles of driver and navigator are rotated often, in fixed time or scope increments such as on every commit. The navigator can implicitly move continuously from one person to another, as long as there isn't more than one navigator at once.

Mob as a name

Speaking Italian as a first language, the term mob doesn't necessarily evoke an emotional reaction in me. But I can recognize that a term used to describe organized crime might not let team members feel comfortable. My understanding is that mob is to be read as crowd rather than mafia.

"This is so wasteful. Why aren't they all in separate rooms playing 4 different songs?"

Nevertheless, we looked for more metaphors and terms that can be used to talk about the group of people that work together:

  • orchestra emphasizes the coordination required by a group of people, and their simultaneous presence. I'm not sure they need a conductor though.
  • ensemble is a similar musical metaphor, that focuses on a small group of complementary roles closely working together.
  • swarm refers to a group of bees all converging towards a single task to get it done.
  • the team itself ideally coincides with the mob, though there can be non-technical roles that find it difficult to contribute all the time; the mob can also be temporarily reduced from the full set of developers in the team, or multiple mobs can appear in a relatively large team.
  • a womble is a fictional furry creature from children's books that helps the environment by cleaning up rubbish. It has no relation to groups whatsoever, and no unfortunate (or fortunate) connotations attached. We ended up using this word for a while for disambiguation.

These are all the metaphors that I've seen used to describe mobbing. Stay tuned for the next part of this post, Collective Code Ownership.

Image by Princess Ruto.

Sunday, February 03, 2019

How is software like cooking?

Time for a light-hearted post. After my move to the UK and having had my share of fish and chips, I have become by reaction more interested in Italian culinary history and practice. So I started diving into the science and the tradition of cooking, reading books such as the science of meat which combine chemistry and good taste, and I have now cooked enough lasagne to build a statistically significant sample.

Disclaimer: this post is full of meat references as that's culturally significant as a metaphor to transmit the concepts I have in mind. You may find this distasteful if you have chosen to follow a different path.

So here's 5 ways in which software development and cooking are alike...

Feedback loops

"to serve man"
There's a joke in a Futurama episode about Bender (being a robot) not having a sense of taste and hence playfully disgusting the humans in the team with too much salt. The joke works because in cooking you need a continuous feedback loop to conform to your taste, for example adding salt and pepper at the end of a preparation until it tastes right.

We are no strangers to this process in software development: most of the practices I preach about lead to getting working software in front of someone that will use it as soon as possible, to better steer future development with the feedback.

There are shorter, inner feedback loops than tasting: the speed with which meat is browning will make you adjust your source of heat for that phase of the preparation to avoid charring the exterior surface to a pitch black color. Not too different from your unit tests failing and informing you of an issue well before it gets to an actual customer that will send the steak back to the kitchen.

Quality is in the eye of the beholder

Cacio e pepe: simple&tasty or poor?
Taste has lots of different components, including not just what your tongue perceives but also smells, presentation, and expectations. But for most of these aspects, quality is in the eye of the beholder and we can't avoid coming to grips with the variety of people and cultures.

Therefore, despite how good you think your burgers are, some people just don't like fatty minced meat. I appreciate Indian curries, but I have some physical limits on spiciness levels that make me almost always choose a mild korma. And imagine the culture shock of discovering I have been wrongly putting lemon in tea all my life, instead of milk as the only acceptable choice.
(I know, tea doesn't even grow in Europe, but I grew up with British tea as the standard.)

We all have good intentions in thinking hard about what a user will enjoy or be productive with; but we have to recognize there is a vast variety of users and we have to design (or cook) for each of them.

Control the process, rather than micromanaging the material

This cake was fluorescent on purpose
Convection ovens are a great example of a controlled process for cooking uniformly. In the context of large pieces of meat or fish, this mainly means getting them to a uniform, high-but-not-too-high temperature to avoid overcooking. The oven fan pushes hot air all around them, heating the surface evenly. Air transfers less heat than, for example, water; so there is time for the temperature to rise across your roasting chicken rather than overcooking the outside while leaving some parts dangerously raw.

Thus for large cuts this is pretty much impossible to achieve in a pan, unless you literally cut everything into slices thin enough that they can cook quickly. The oven-based process is much more convenient as you literally abandon your tray in there, checking from time to time if it's ready with a thermometer.

Generally speaking, for enterprise applications and websites I favor a process in which we catch bugs with multiple safety nets (up to user experimentation if possible) rather than overdesigning for every possible problem. While you can think up possible scenarios to test endlessly, bugs are always going to happen, and it's more important to have a process in place through which they will be fixed and never regress, thanks to new automated tests. That makes your software converge to a steady, stable state like a perfectly cooked chicken.

You need to measure

That's definitely cold, not just for meat
If you want to consistently cook meat to your liking, there is no escape from using a thermometer to understand when it's ready - its center reaching a set temperature that corresponds to medium-rare (for a steak) or well-cooked (for poultry) or something else. Looking at the external color? No relation with the inside. Checking how hard it has become? Too subjective. Roast for a certain amount of time? Ignores the variability of both the ingredients and the heat source.

When we perceive part of an application as slow, we need to use a profiler to find out what functions or methods are taking the most time to execute. Making as few assumptions as possible, we collect data to point us in the right direction. Opinions don't count: your browser timings and other metrics do (if collected correctly).

You can substitute ingredients, to a certain extent

Focaccia genovese
Cornstarch and flour are used in small quantities in many recipes, with the goal of thickening a liquid. This is due to their starch content, as these carbohydrate granules swell up with water, creating enough friction to transform a liquid with the viscosity of water into something that feels like cream.

If you try to use cornstarch to make bread, however, you won't be able to get an elastic product, as it lacks the proteins that would build gluten. Even using the wrong flour (cake flour as opposed to bread flour, to keep it simple) will greatly affect the result due to the smaller percentage of proteins it contains. Baking, both for sweet and savoury goods, requires much more precision.

In software development, we have grown up with Lego bricks as a metaphor and we continuously try to swap out pieces, hiding details behind a useful abstraction that sometimes leaks. Nowadays relational databases can be queried interchangeably if you stick to standard SQL queries. But the data types for columns can be pretty different in the ranges they support, especially the somewhat more exotic ones like JSON and XML fields rather than integers and strings. A wise decision is still required to understand when substituting components is possible, and which combinations will never work.

And here are 5 ways in which software development and cooking are very different...

Cooking is a repeatable process

Lots of Dutch cheese around Amsterdam
Recipes (at least the good ones) are literally the codification of a process that should be robust to external variations and give a consistent result. It's the mark of a good cook to be able to deal with variations in ingredients or tools, but unless you are up on a mountain, water boils at pretty much the same temperature, and the physical transformation your carrots undergo when heated is well established.

In software, every new feature is a new design to make rather than the execution of a plan. Even porting software or reimplementing it bears surprises, as the platform it runs on is now different. And no one understands how long it took to produce the original version, let alone can give an estimate for the new one being created. We have processes for understanding what a feature should do, and for safely implementing it and rolling it out; but there are always land mines waiting on the path.

Cooking has some precise physical quantities you can rely on

Peppered pork fillet steak
Understood: measurement is needed in both fields. But as much as your oven oscillates around its target temperature, it is still much more precise than a developer's effort. Even without meetings and other time variables, how fast and precise we are on a certain day varies: humans aren't robots. How knowledgeable we are about a technology alone greatly influences the design and testing phases. The Mythical Man-Month remains, well, mythical.

In the food industry, the right tools can even measure the strength of a flour, to check whether it's good for the bread you want to obtain. If you look at a technology team, measuring how many tasks per week we have completed is probably as good as it gets. There are humans involved, and applying social science to a very small group probably doesn't get you very far in terms of collecting data and drawing inferences.

You can still measure other times objectively, like time to deploy: how long it takes for a commit on master to reach the production environment. We do this partly because it's important, but also because it's feasible to measure. What most project managers would care about instead is time from idea to complete implementation. But that requires estimating the length of a queue that changes all the time, and which is just the first step of a creative development process with its own variations.

Determinism of the digital world

Blackberries from the garden
It's pretty difficult to get the same tomatoes, courgettes or grapes as last week, and pretty much impossible to get the same ones in-season and off-season. You can ship them in from South Africa or Australia but travel time and refrigeration can modify their contents, and thus their taste.

If you look at a physical server, it's much more similar to laboratory equipment than to a living product: you can run programs and see them always taking a similar amount of time to complete, controlling the randomness of the operating system around it. This gets eroded a bit in the cloud, where performance may be affected by your neighbors due to co-tenancy.

Timing in the kitchen

Very unstable croquembouche
Whether it is simply changing the temperature of a meat joint, or a more complex transformation like baking a cake, timing should be one of your concerns if you want a good result. I formalize this concept as: in many cases, it's not possible to stop time.

Cooks know tricks like cooking eggs or rice to a certain degree, then cooling them down and finishing the process later when the food has to be served; or simply reheating them if fully cooked. This works for various categories of products, but it's an ad-hoc process.

Consider the power we have in a digital world: firing up a debugger literally stops execution at some point in the life of the program, allowing us to take a look at what we want in the right context. Since the state of the program is the Matrix, we can slow it down, speed it up, and change things causing a déjà vu to your objects.

If you want to reproduce some computation, you have the tools available to build a Docker image containing all sorts of dependencies and store it for future usage. If you want to reproduce your perfect croissants, the only tools you have are a recipe and your own memories. Add the variation of ingredients and even temperature and humidity in your kitchen, and you can understand why scientific exploration needs a laboratory with its controlled conditions to be able to make progress.

Cooking equipment makes a difference

Now, grate parmesan without this...
Besides basic tools like appropriately shaped knives, a pressure cooker lets you reach results that would take a long time with an ordinary pot of boiling water. A temperature-controlled water bath (I don't own one of these) can help cook meat evenly, only to then finish the process with a 2-minute searing. Even a scale is simply necessary for baking, as measuring ingredients like flour by volume has a 50% margin of error due to its compressibility.

Consider instead how you can write code on your old laptop from the beach. You target an open source interpreter, and the end product will run on the same server that could accept strictly regulated banking software. As long as you can literally string bytes together, you can produce running software: everything else helps. The ephemeralization of software tools due to virtualization, and the wide availability of open source platforms, make digital startups a reality, whereas opening a restaurant remains a capital-intensive operation.

But there's more...

The power of metaphors

Metaphors can foster understanding of a new system, or lead us astray. They powerfully transmit a mental model, but that model has its limitations and may even be less precise than a more formal model like a math analogy. But especially in complicated fields like cryptography, terms such as key and signature have popularized concepts to generations of students that would have otherwise found them very hard to think about.

I wrote this post for fun, but I stand behind most of the comparisons: that's all for now. You'll find me using a Helm to ship my containers...

Thursday, January 10, 2019

Practical Helm in 5 minutes

https://helm.sh/
Yet another ship-themed name

Containerization is increasingly a powerful way to deploy applications on anonymous infrastructure, such as a set of many identical virtual machines run by some cloud provider. Since container images ship a full OS, there is no need to manage packages for the servers (a PHP or Python interpreter), but there are still other environment-specific choices that need to be provided to actually run the application: configuration files and environment variables, ports, hostnames, secrets.

In an environment like Kubernetes, you would create all of this declaratively, writing YAML files describing each Pod, ConfigMap, Service and so on. Kubernetes will take these declarations and apply them to its state to reach what is desired.

As soon as you move outside of a demo towards multiple environments, or towards updating one, you will start to see Kubernetes YAML resources not directly as code to be committed into a repository, but as an output of a generation process. There are many tweaks and customizations that need to be performed in each environment, from simple hostnames (staging--app.example.com vs app.example.com) to entire sections being present or not (persistence and replication of application instances).

The problem you need to solve then is to generate Kubernetes resources from some sort of templates: you could choose any template engine for this task and execute kubectl apply on the result. To avoid reinventing the wheel, Helm and its competitors were created to provide a higher abstraction layer.

Enter Helm

Helm provides templating for Kubernetes .yaml files; as part of this process, it extracts the configuration values for Kubernetes resources into a single, hierarchical data source.
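
As a minimal sketch of how this looks (names and values are illustrative), a template references both the release and that data source:

# values.yaml: the single, hierarchical data source
service:
  port: 80

# templates/service.yaml: rendered for every release
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  ports:
    - port: {{ .Values.service.port }}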

Helm doesn't stop there however: it aims to be a package manager for Kubernetes, hence it won't just create resources such as a Deployment, but it will also:
  • apply the new resources on the Kubernetes cluster
  • tag the Deployment with metadata and labels
  • list everything that is installed in terms of applications, rather than Deployments and ConfigMaps
  • find older versions of the Deployment to be replaced or removed
The set of templates, helpers, dependencies and default values Helm uses to deploy an application is called a chart, whereas every instance of a chart created on a cluster is called a release. Therefore, Helm keeps track of objects in terms of releases, and allows you to update a release and all its contents, or to remove it and replace it with a new one.

Folder structure

The minimal structure of a Helm chart is simply a folder on your filesystem, whose name must be the name of the chart. As an example I'll use green-widgets, a fictional web application for ordering green widgets online.

This is what you'll see inside a chart:
  • Chart.yaml: metadata about the chart such as name, description and version.
  • values.yaml: configuration values that may vary across releases. At a bare minimum the image name and tag will have defaults here, along with ports to expose.
  • the templates/ subfolder: contains various YAML templates that will be rendered as part of the process of creating a new release. There is more in this folder, like a readme for the user and some helper functions for generating common snippets.
Apart from this minimal setup, there may also be a requirements.yaml file and a charts/ subfolder to deal with other charts to use as dependencies; for example, to install a database through an official chart rather than setting up PostgreSQL replication on your own. These can be safely ignored until you need these features though.

Once you have the helm binary on your system, you can generate a new chart with helm create green-widgets.
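
Depending on the Helm version, the generated skeleton looks roughly like this:

green-widgets/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── NOTES.txt
    └── _helpers.tpl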

Cheatsheet

You can download a helm binary for your platform from the project's releases page on GitHub. The helm init command will use your kubectl configuration (and authentication) to install Tiller, the server-side part of Helm, onto a cluster's system namespace.

Once this is set up, you will be able to execute helm install commands against the cluster, using charts on your local filesystem. For real applications, you can install official charts that are automatically discovered from the default Helm repositories.

The command I prefer to use to work on a chart however is:
helm upgrade --install --set key=value green-widgets--test green-widgets/

The mix of upgrade and install means this command is idempotent: it will work for the first installation as well as for updates. Normally you would issue a new release for a change to the chart, but this approach allows you to test out a chart while it's in development, using a 0.0.1 version.
There is no constraint on the release name green-widgets--test, and Helm can even generate random names for you. I like to use the application name plus its environment name as a team convention, but you should come up with your own design choices.

A final command to keep in mind is helm delete green-widgets--test, which will delete the release and all the resources created by your templates. This is enough to stop using CPU, memory and IP addresses, but not to completely remove all knowledge of the release from Tiller's archive. To do so (and free the release name, allowing its re-creation) you should add the --purge flag.
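
Putting the lifecycle together, a sketch of the commands involved (release and chart names follow the convention above):

helm upgrade --install green-widgets--test green-widgets/   # create or update a release
helm ls                                                     # list releases known to Tiller
helm delete green-widgets--test                             # stop the running resources
helm delete --purge green-widgets--test                     # also forget the release, freeing its name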

Caveats

This 5-minute introduction makes it all seem plain and simple, but it should be clear that simply downloading Helm and installing it is not a production-ready setup. I myself have only rolled out this setup to testing environments at the time of writing.

I can certainly see several directions to explore, that I either cut from the scope in order to get these environments up and running for code review; or investigated and used but not included in this post. For example:
  • requirements.yaml allows you to include other charts as dependencies. This is very powerful for off-the-shelf open source software such as databases, caches and queues; it needs careful choices for the configuration values being passed to these dependencies, and your mileage may vary with the quality of the chart you have chosen (see the sketch after this list).
  • chart repositories are a good way to host stable chart versions rather than copying them onto a local filesystem. For example, you could push tarballs to S3 and have a plugin regenerate the index.
  • the whole Helm and Tiller setup arguably needs to be part of an Infrastructure as Code approach like the rest of the cluster. For example, I am creating an EKS cluster using Terraform, and that would also need to include the installation and configuration of Tiller to provide a turnkey solution for new clusters.
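
As a sketch of the dependencies mechanism mentioned above (chart name, version and repository URL are illustrative), a requirements.yaml pulling in a database could read:

dependencies:
  - name: postgresql
    version: "3.1.0"
    repository: "https://kubernetes-charts.storage.googleapis.com"

Running helm dependency update then downloads the chart archives into the charts/ subfolder.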

Wednesday, January 02, 2019

The path from custom VM to VM with containers

https://commons.wikimedia.org/wiki/File:Kanda_container.jpg
Image of a single container being transported by OiMax
Before the transition to Docker containers started at eLife, a single service deployment pipeline would pick up the source code repository and deploy it to one or more virtual machines on AWS (EC2 instances booted from a standard AMI). As the pipeline went across the environments, it repeated the same steps over and over in testing, staging and production. This is the story of the journey from a pipeline that builds from source at every stage to a pipeline that deploys an immutable container image; the goals pursued being time savings and a reduced failure rate.

This end point is itself an intermediate step before getting to containers deployed into an orchestrator, as our infrastructure wasn't ready to accept a Kubernetes cluster when we started the transition, nor was Kubernetes itself yet trusted for stateful, old-school workloads such as running a PHP application that writes state to the filesystem. Achieving containers-over-EC2 lets developers target Docker as the deployment platform, without yet realizing the cost savings related to bin packing those containers onto anonymous VMs.

Starting state

A typical microservice for our team would consist of a Python or PHP codebase that can be deployed onto a usually tiny EC2 instance, or onto more than one if user-facing. Additional resources that are usually not really involved in the deployment process are created out of band (with Infrastructure as Code) for this service, like a relational database (outsourced to RDS), a load balancer, DNS entries and similar cloud resources.

Every environment replicates this setup, whether it is a ci environment for testing the service in isolation, or an end2end one for more large-scale testing, or even a sandbox for exploratory, manual testing. All these environments try to mimic the prod one, especially end2end which is supposed to be a perfect copy on fewer resources.

A deployment pipeline has to go through environments as a new release is promoted from ci to end2end and prod. The amount of work that has to be repeated to deploy from source on each of the instances is sizable however:

  • ensure the PHP/Python interpreter is correctly setup and all extensions are installed
  • checkout the repository, which hopefully isn't too large
  • run scripts if some files need to be generated (from CSS to JS artifacts and anything similar)
  • install or update the build-time dependencies for these tasks, such as a headless browser to generate critical CSS
  • run database migrations, if needed
  • import fixture data, if needed
  • run or update stub services to fill in dependencies, if needed (in testing environments)
  • run or update real sidecar services such as a queue broker or a local database, if present
This ever-expanding sequence of operations for each stage can be optimized, but in the end the best choice is not to repeat work that only needs to be performed once per release.

There is also a concern about the end result of a deploy being different across environments. This difference could be in state, such as a JS asset served to real users being different from what you tested; but also in outcome, as a process that runs perfectly in testing may run into an APT repository outage in production, failing your deploy halfway through, only on one of the nodes. Not repeating operations leads not just to time savings but to a simpler system in which fewer operations can fail, because there are fewer of them in general.

Setting a vision

I've previously automated builds that generated a set of artifacts from the source code repository and then deployed them across environments, for example zipping all the PHP or Python code into an archive or some other sort of package. This approach works well in general, and it is what compiled languages naturally do, since they can't get away with recompiling in every environment. However, artifacts do not take into account OS-level dependencies like the Python or PHP version with their configuration, along with any other setup outside of the application folder: a tree of directories for the cache, users and groups, deb packages to install.

Container images promise to ship a full operating system directory tree, which will run in any environment, sharing only a kernel with its host machine. Seeing docker build as the natural evolution of tar -cf ... | bzip2, I set out to port the build processes of the VMs into portable container images for each service. We would then still be deploying these images as the only service on top of an EC2 virtual machine, but each deployment stage should consist of just pulling one or more images and starting them with a docker-compose configuration. The stated goal was to reduce the time from commit to live, and the variety of failures that can happen along the way.

Image immutability and self-sufficiency

To really save on deployment time, the images being produced for a service must be the same across environments. There are some exceptions, like a ci derivative image that adds testing tools to the base one, but all prod-like environments should get the same artifact; this is not just for reproducibility but primarily for performance.

The approach we took was to also isolate services into their own containers, for example creating two separate fpm and nginx images (wsgi and nginx for Python); or to use a standard nginx image where possible. Other specialized testing images like our own selenium extended image can still be kept separate.

The isolation of images doesn't just make them smaller than a monolith; it provides Docker-specific advantages like leveraging independent caching of their layers. If you have a monolith image and you modify your composer.json or package.json file, you're in for a large rebuild. Segregating responsibilities instead leads to only one or two of the application images being rebuilt, never having to reinstall those packages for Selenium debugging. This can also be achieved by embedding various targets (FROM ... AS ...) into a single Dockerfile, and having docker-compose build one of them at a time with the build.target option.
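
A sketch of what such a multi-target Dockerfile could look like (base image and tools are illustrative):

FROM php:7.2-fpm AS fpm
COPY src/ /var/www/html/

FROM fpm AS ci
# testing tools only exist in the ci derivative image
RUN pecl install xdebug && docker-php-ext-enable xdebug

A docker-compose.yml in a testing environment would then select the target:

version: "3.4"
services:
  app:
    build:
      context: .
      target: ci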

When everything that is common across the environments is bundled within them, what remains is configuration in the form of docker-compose.yml and other files:
  • which container images should be running and exposing which ports
  • which commands and arguments the various images should be passed when they are started
  • environment variables to pass to the various containers
  • configuration files that can be mounted as volumes
Images would typically have a default configuration file in the right place, or be able to work without one. A docker-compose configuration can then override that default with a custom configuration file, as needed.
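
A sketch of this remaining environment-specific configuration (image, port and file paths are illustrative):

version: "3.4"
services:
  web:
    image: nginx:1.15
    ports:
      - "80:80"
    environment:
      - APP_ENV=prod
    volumes:
      # override the default configuration baked into the image
      - ./nginx.conf:/etc/nginx/nginx.conf:ro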

One last responsibility of portable Docker images is their definition of a basic HEALTHCHECK. This means an image has to ship enough basic tooling to, for example, load a /ping path on its own API and verify that a 200 OK response comes out. In the case of classic containers like PHP FPM or a WSGI Python container, this implies some tooling will be embedded into the image to talk to the main process through that protocol rather than through HTTP.
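
A sketch of such a check for an HTTP container, assuming the image ships curl and exposes a /ping path:

HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost/ping || exit 1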

It's a pity to reinvent the lifecycle management of a container (started, then healthy or unhealthy after a series of probes), when we can define a simple command that both docker-compose and actual orchestrators like Kubernetes can execute to detect the readiness of new containers after a deploy. I used to ship smoke tests along with the configuration files, but these have largely been replaced by polling for a health status on the container itself.

Image size

Multi-stage builds are certainly the tool of choice to keep images small: perform expensive work in separate stages, and whenever possible only copy files into the final stage rather than executing commands that use the filesystem and bloat the image with their leftover files.

A consolidated RUN command is also a common trick to bundle together different processes like apt-get update and rm /var/lib/apt/lists/* so that no intermediate layers are produced, and temporary files can be deleted before a snapshot is taken.
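
For example, the canonical APT pattern bundles update, install and cleanup into a single layer (the package is illustrative):

RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*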

To find out where this optimization is needed, however, some introspection is required. You can run docker inspect over a locally built image to check its Size field, and then docker history to see the various layers. Large layers are hopefully shared between one image and the next if you are deploying to the same server. Hence it pays to verify that, if the image is big, most of its size comes from ancestor layers that seldom change.
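
For example (the image name is illustrative):

docker inspect --format '{{.Size}}' example/green-widgets:latest   # size in bytes
docker history example/green-widgets:latest                        # layer-by-layer breakdown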

A final warning about size is related to images with many small files, like node_modules/ contents. These images may exhaust the inodes of the host filesystem well before filling up the available space. This doesn't happen when deploying source code directly to the host, as files can be overwritten; but every new version of a Docker image being deployed can easily result in a full copy of folders with many small files. Docker's prune commands often help by targeting the various instances of containers, images and other leftovers, whereas df -i (as opposed to df -h) diagnoses inode exhaustion.
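
Side by side, the diagnostic and a remedy:

df -i                 # inode usage per filesystem (compare with df -h for bytes)
docker system prune   # remove stopped containers, dangling images and unused networks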

Underlying nodes

Shipping most of the stack in a Docker image makes it easier to change it as it's part of an immutable artifact that can be completely replaced rather than a stateful filesystem that needs backward compatibility and careful evolution. For example, you can just switch to a new APT repository rather than transition from one to another by removing the old one; only install new packages rather than having to remove the older ones.

The host VMs become leaner and lose responsibilities, becoming easier to test and less variable; you could almost say all they have to run is a Docker daemon and very generic system software like syslog, but nothing application-specific apart from container dependencies such as providing a folder for config files to live on. Whatever Infrastructure as Code recipes you have in place for building these VMs, they will become easier and faster to test, with the side-effect of also becoming easier to replace, scale out, or retire.

An interesting side effect is that the first stages of most project pipelines lost the need for a specific CI instance to deploy to. In a staging environment you actually need to replicate a configuration similar to production, like using a real database; but in the first phases, where the project is tested in isolation, the test suite can effectively run on a generic Jenkins node that works for all projects. I wouldn't run multiple builds at the same time on such a node, as they may conflict on host ports (everyone likes to listen on localhost:8080); but as long as a project cleans up after failure with docker-compose down -v or similar, a new build of a wholly different project can run with practically no interaction.

Transition stages

After all this care in producing good images and cleaning up the underlying nodes, we can look at the stages in which a migration can be performed.

A first rough breakdown of the complete migration of a service can be aligned on environment boundaries:
  1. use containers to run tests in CI (xUnit tools, Cucumber, static checking)
  2. use containers to run locally (e.g. mounting volumes for direct feedback)
  3. roll out to one or more staging environments
  4. roll out to production
This is the path of least resistance, and correctly pushes risk first to less important environments (testing) and only later to staging and production; hence you are free to experiment and break things without fear, acquiring knowledge of the container stack for later. I think it runs the risk of leaving some projects halfway, though, where the testing stages have been ported but production and staging still run with the host-checks-out-source-code approach.

A different way to break this down is to perform the split by considering the individual processes involved. For example, consider an application with a server listening on some port, a CLI interface and a long-running process such as a queue worker:
  1. start building an image and pulling it on each environment, from CI to production
  2. try running CLI commands through the image rather than the host
  3. run the queue worker through the image rather than the host
  4. stop old queue worker
  5. run the server, using a different port
  6. switch the upper layer (nginx, a load balancer, ...) to use the new container-based server
  7. stop old server
  8. remove source code from the host
Each of these slices can go through all the environments as before. You will be hitting production sooner, which means Docker surprises will propagate there (it's still not as stable as Apache or nginx); but issues that can only be triggered in production will happen on a smaller part of your application, rather than as a big bang at the first production deploy of these container images.

If you are using any dummy project, stub or simulator, they are also good candidates for being switched to a container-based approach first. They usually won't get to production however, as they will only be in use in CI and perhaps some of the other testing environments.

You can also see how this piece-wise approach lets you run both versions of a component in parallel, move between one and the other via configuration, and finally remove the older approach when you are confident you don't need to roll back. At the start, using a Docker image doesn't seem like a huge change, but sometimes you end up with 50 modified files in your Infrastructure as Code repository and 3-4 unexpected problems to get them through all the environments. This is essentially Branch by Abstraction applied to Infrastructure as Code: a very good idea for incremental migrations, applied to an area that normally needs to move at a slower pace than application code.

Friday, December 28, 2018

Delivery pipelines for CDNs

https://www.fastly.com/network-map
In the last couple of years I have integrated Content Delivery Networks into various eLife applications, managing objects ranging from static files and images to dynamic HTML. These projects mainly consisted of:
  • implementing Infrastructure as Code for these CDNs inside the Github repositories we already use for all other cloud resources (AWS and GCP)
  • authorizing HTTPS on the CDN side, which will effectively be impersonating your origin servers
  • creating instances of the same CDN services, first in testing and then in production environments, keeping them in parity with each other
  • expanding end-to-end testing (the tip of the pyramid) to cover also the CDNs rather than just the applications involved
  • integrating logging in order to catch any problem happening between the user and the origin servers
  • finally phasing in the new CDNs with new geotagged DNS entries
Our first implementation, from 2016, was deeply integrated into AWS and as such CloudFront was the chosen solution. We subsequently switched to Fastly for all ordinary traffic, experiencing a general increase in features, customization and expenses. What follows is a comparison that isn't just meant to orient the reader between CloudFront and Fastly, but also against the third option of not using a CDN at all. Indeed, there are many concerns that may be glossed over but that you need to take seriously when you move your web presence from a few origin servers to a global network of shared, locked-down servers managed by an external organization.

Infrastructure as Code

Our AWS-based setup makes heavy use of CloudFormation, the native service for declaratively specifying resources such as servers, load balancers and disks. The simple setup has been augmented over the years with a code generation layer for the CloudFormation templates; this Python code reduces duplication between the various templates by starting from standard EC2/ELB/EBS resources that can be customized in size and other parameters.
If we start from a simple single-server setup for a microservice (this was before Docker containers got stable enough), we are looking at a template containing at least an EC2 instance and a DNS entry pointing to it. With multiple servers, we expand this with a load balancer that pulls in a TLS certificate provided to IAM by an administrator.
To configure CloudFront via CloudFormation, an additional resource for the CDN distribution is introduced. All the configuration you need will be visible in this resource, a JSON or YAML dictionary respecting a certain schema.
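To give an idea of what the generation layer does, here is a simplified sketch (not our actual generator; the AMI id and hostnames are placeholders) of a Python function emitting the single-server template described above:

    import json

    def microservice_template(instance_type, hostname):
        """Generate a CloudFormation template for a single-server
        microservice: an EC2 instance plus a DNS entry pointing to it."""
        return {
            "AWSTemplateFormatVersion": "2010-09-09",
            "Resources": {
                "Server": {
                    "Type": "AWS::EC2::Instance",
                    "Properties": {
                        "InstanceType": instance_type,
                        "ImageId": "ami-12345678",  # placeholder AMI
                    },
                },
                "DnsEntry": {
                    "Type": "AWS::Route53::RecordSet",
                    "Properties": {
                        "HostedZoneName": "example.com.",
                        "Name": hostname,
                        "Type": "CNAME",
                        "TTL": "300",
                        # resolved to the server's DNS name at creation time
                        "ResourceRecords": [
                            {"Fn::GetAtt": ["Server", "PublicDnsName"]}
                        ],
                    },
                },
            },
        }

    print(json.dumps(
        microservice_template("t2.small", "service.example.com."),
        indent=2))

Duplication between templates collapses into parameters like instance_type, and load balancer or CloudFront resources can be grafted onto the same dictionary.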
Since CloudFormation can only manage AWS resources and nothing outside that walled garden, Fastly was the reason for introducing Terraform alongside it. Whereas almost everything AWS-specific still goes through CloudFormation, Terraform has opened up new roads such as Infrastructure as Code implementations for Google Cloud Platform (storage buckets and BigQuery tables).
Applying changes in this context is not trivial, as you may inadvertently reboot or destroy a server while believing you were only changing a minor setting. Yet Infrastructure as Code is about making the current state of the infrastructure and all changes to it visible, easy to review and safe to roll out across multiple environments. It is imperative, therefore, to maintain testing environments created with the same tooling as production, and to use them to ultimately integration test all changes.
The caveat of using multiple tools in lockstep for the same instance of a project (including servers, cloud resources and CDNs) is that they can't declare dependencies between resources managed by different tools. For example, since we manage DNS in CloudFormation and Fastly CDNs in Terraform, we can change both at the same time but can't couple together the existence of a DNS entry and the CDN it points to, or impose a creation or update order that is different from the general order we run the tools in.
The most glaring difference in updates rollout between the various options is that, to rollout a CDN configuration change, it takes:
  • no deployment time if you don't use a CDN (obviously)
  • tens of seconds for Fastly
  • tens of minutes (up to 1 hour was common) for CloudFront
This means Fastly opens up the possibility for experimentation, even if with slower feedback than your local TDD cycle. With CloudFront this is painful and haphazard, as you decide on a change, start applying it and come back one hour later to check its effects, after having already switched to another task.
Still, minutes of update and/or creation time make Fastly unsuitable for inclusion in the CI environments where the tests of a single service are run. You could in theory create a Fastly service on the fly when the build of the service runs, but this would add minutes to your build _and_ promote coupling to the CDN itself. Fast forward a bit and you'll see an application that can't be run locally anymore for exploration because of the missing CDN layer. Therefore, like other cloud services, the CDN is treated as a long-lived resource, with its regression testing performed in a shared environment on every new application commit, but only after merge.

Logging

Within a web service, you usually have some kind of access log being generated by nginx or Apache. These logs can sit on a single server or be uploaded to some aggregation point, whether a local Logstash or an external platform that can index them.
Even load balancing doesn't change this picture very much, as the load balancers' logs should be identical to those of the application servers if everything is working well. But a CDN introduces large-scale caching, so it's plausible that you will stop directly seeing a large percentage of your traffic. Statistics or monitoring based on access logs may get skewed; or worse, Japan may be cut off from your website for a while because the health checks from the CDN points of presence there have a timeout a few milliseconds too low to reach your servers in us-east-1 (of course this never happened).
Hence, to understand what's going on in those few hundred servers you have no access to, you need a way to stream their logs to some outsourced service; this can be storage as a service (S3 or GCS) or directly some log infrastructure provider. The latency with which logs get to the right place is a key metric of the feedback loop for changes.
Since we are striving for Infrastructure as Code, all the logging configuration should be kept under version control together with hostnames and caching policies. We settled on a standard logging format (JSON Lines with certain fields) and frequency, along with a GCS bucket where new entries are put, with bucket names following conventions. This was later expanded into BigQuery tables providing queries over the same data, after the Terraform Fastly provider started supporting this delivery mechanism.
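A sketch of how such conventions can be captured in code rather than in a wiki page (the naming scheme and fields here are hypothetical; the directives are the Apache-style ones Fastly log formats are modeled on, shown only as illustration):

    import json

    def logging_config(project, environment):
        """Return the conventional GCS bucket name and the JSON Lines
        log format for a project/environment pair (hypothetical
        conventions)."""
        bucket = f"{project}-{environment}-fastly-logs"
        log_format = json.dumps({
            "timestamp": "%{begin:%Y-%m-%dT%H:%M:%S}t",
            "client_ip": "%h",
            "request": "%r",
            "status": "%>s",
        })
        return bucket, log_format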
The main difficulty in the integration was credentials management: you aren't told much if credentials are incorrect or not authorized to perform certain actions, like writing to BigQuery. Moreover, you can't just commit a bunch of private keys for anyone to see, especially since Infrastructure as Code repositories tend to be made visible to as many people as possible.
We ended up putting GCP credentials and similar secrets in Vault, running on the same server as the Salt master (the Salt equivalent of a Puppet master). The GCP Service Account itself, and its permissions to write to the bucket, needed some special permissions to set up (it's turtles all the way down), so we couldn't put it directly into Infrastructure as Code and had an administrator create it manually instead. The ideal solution would be for Vault to generate credentials by itself, periodically rotating them; but then it would need to somehow push these credentials into the Fastly configuration, and I'm here to provide efficient delivery pipelines, not to make cloud giants wrestle.
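On the consuming side, fetching such a secret at deploy time is straightforward with the hvac Python client (the Vault address and the secret path here are hypothetical):

    import hvac

    # the address and token come from the deploy environment, not from code
    client = hvac.Client(url="https://vault.internal:8200", token="...")

    # hypothetical path where the administrator stored the GCP key
    secret = client.read("secret/fastly/gcs-logging")
    gcp_private_key = secret["data"]["private_key"]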

Flexibility

Your own application is usually highly customizable, with a certain cost associated. You have to write some code in your favorite programming language, possibly following some framework conventions and calling your classes Middleware or EventListener.
CDNs work on shared servers, so they have limits on what can be safely run in that sandboxed environment. Nevertheless, Fastly provides the possibility to customize the VCL that runs each service with your own snippets and macros.
This is very flexible, perhaps even too much: you can introduce headers with random values, write conditionals and implement loops by restarting requests. It feels similar to working in nginx configurations but with a more predictable language.
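For instance, here is a toy snippet in the spirit of what you can attach to a service, generated from Python as discussed further down (randombool is a real Fastly VCL function; the helper itself is hypothetical):

    def random_header_snippet(header, percent):
        """Generate a VCL snippet that tags a given percentage of
        requests with a header, e.g. for an A/B experiment."""
        return (
            f"if (randombool({percent}, 100)) {{\n"
            f'  set req.http.{header} = "experiment";\n'
            f"}}\n"
        )

    print(random_header_snippet("X-Experiment", 10))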
The main problem with this form of customization is that there is no way to run or test it on your own. The best feedback loop we found is Fastly Fiddle (similar to JSFiddle), where you test out bits of code, hit a save button and see them propagated to servers around the world for you to test.
The fact that this even exists is impressive, but you can imagine how well it works for actual development. Once you get past experimenting, you can't integrate a Fiddle with your own Infrastructure as Code approach (e.g. Terraform templates), nor easily port code from one to the other besides copying and pasting. You can run integration-only tests in some other window, but the feedback loop can't be shorter than the deployment time; unit tests are not a thing. You can't even use your IDE, as much as you may love it. In the end, Fastly's Varnish diverged from the open source one 4 major versions ago; hence this VCL is a proprietary language, and writing it feels the same as writing stored procedures in Oracle's PL/SQL.
I tend to see VCL and other intermediate declarative templates (such as Terraform .tf files) as a generation target for Infrastructure as Code tools to compile to. This lets you unit test that your tools generate a certain output from these templates, using dummy inputs and checking dummy expected outputs; all of this still needs to be integration tested with the application itself in a real environment, but some of the responsibilities can be developed in the tool itself and reused across many applications.
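Following that idea, and reusing the microservice_template sketch from the Infrastructure as Code section above, a unit test over the generated output could look like this:

    def test_dns_entry_points_at_the_server():
        template = microservice_template("t2.small", "dummy.example.com.")
        dns = template["Resources"]["DnsEntry"]["Properties"]
        assert dns["Name"] == "dummy.example.com."
        assert dns["ResourceRecords"] == [
            {"Fn::GetAtt": ["Server", "PublicDnsName"]}
        ]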

Integration testing

We have understood by now that, to keep the ensemble of servers, code, cloud services and CDNs working, we need some automated integration testing in place that touches all the different pieces. We don't want many scenarios to be tested at this level, because it's slow and brittle to do so, but we need a tracer bullet that goes through everything, if only to verify that all configurations are correct.
In the general context of outsourcing responsibilities to a service or a library, you still own it as a dependency of your application and still need to verify the emergent behavior of custom code and borrowed architecture.
Therefore, I always put at least a staging environment in place, replicating production, where automated tests can run. This doubles as the place to try out infrastructure updates that are risky (which ones are risky? If you have to ask, all of them; just roll out everything through staging).
As we have seen, creating too many different, ad-hoc environments to test pull requests doesn't scale; this road leads to death by feature branch, as all of your Jenkins nodes wait for yet one more RDS node or CloudFront distribution to be created.
A common example of a coupled, integration-related feature to test is the forwarding of Host and other headers, which go through many layers: a couple of CDN servers, a load balancer, an nginx daemon and finally the application. Some headers don't just have to be forwarded, but have to be rewritten, renamed or added (X-Forwarded-For). All of this can in theory be specified for every single layer, but testing through the whole architecture probably makes for easier long-term maintenance.
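As a sketch of such a test (assuming a hypothetical debug endpoint that echoes back the headers the application received, and a hypothetical staging hostname served through the CDN):

    import json
    from urllib.request import urlopen

    BASE = "https://staging.example.com"  # hypothetical staging entry point

    def test_host_and_forwarding_headers_survive_all_layers():
        # by the time the application echoes these back, they have
        # passed through the CDN, the load balancer and nginx
        received = json.loads(urlopen(BASE + "/debug/headers").read())
        assert received["Host"] == "staging.example.com"
        assert "X-Forwarded-For" in received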

Why?

In every project you have to ask yourself why you are doing something (especially complex things) and what value you want to get out of it. CDNs are one of the go-to solutions for web performance, their killer feature being huge caches for slow-changing HTML and assets distributed across the world, so that even a casual reader in India can load your homepage in one second. Moreover, if done right, the load on your origin servers will also be greatly reduced compared to not using caching layers.
On the other hand, you can see the complexity, observability and maintenance needs that every additional layer introduces. When asking whether a CDN or your application should do something, it's the same decision as for a database or a cloud service: how can you effectively store and update its configuration in multiple environments? Do you want to outsource that responsibility? How will you know when something's wrong? Do you feel comfortable writing stored procedures in a language you can't run on your laptop? All of these are architectural questions to go through when evaluating various CDNs, or no CDN at all.
