
(Continuing the blogging blitz: here is pooling, part 3.)

The build pool consists of a number of identical build machines that can handle all builds of a certain category, across branches.

Builds on checkin

Pooling lends itself to building on checkin: each checkin triggers a full set of builds.

This gives much more granular information about each checkin: does it build on every platform? Does it pass all tests? This saves many hours of "who broke the build" digging. As a result, the tree should stay open longer.

The tradeoff is wait times. During peak traffic, checkins can trigger more build requests than there are available build slaves. As builds begin to queue, new requests sit idle longer and longer before a slave frees up to handle them.

You can combat wait times via queue collapsing: once builds queue, the master can combine multiple checkins into the next build. However, this sacrifices some of that granular per-checkin information.
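
To make the tradeoff concrete, here is a minimal Python sketch of queue collapsing. BuildRequest and collapse_queue are illustrative names rather than Buildbot's actual API; the point is just the shape of the merge and what it costs you.

    from dataclasses import dataclass

    @dataclass
    class BuildRequest:
        changes: list  # checkin revisions covered by this request

    def collapse_queue(queue):
        """Collapse all pending requests into a single build request.

        The queue drains faster, but a failure in the merged build can
        no longer be pinned on a single checkin without extra digging.
        """
        if len(queue) <= 1:
            return queue
        merged = BuildRequest(changes=[rev for req in queue for rev in req.changes])
        return [merged]

    # Five checkins that queued up while every slave was busy become one build:
    pending = [BuildRequest(changes=[rev]) for rev in ["r101", "r102", "r103", "r104", "r105"]]
    print(collapse_queue(pending))  # one request covering r101 through r105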

Another solution to wait times is adding more build slaves to the pool.


Dynamic allocation

As long as there are available build slaves, the pool is dynamically allocated to where it's needed. If one branch is especially busy, more build slaves can be temporarily allocated to that branch. Or if the debug build takes twice as long, more build slaves can be allocated to keep it from falling behind.

(At Mozilla, this happens within Buildbot and requires no manual intervention beyond the initial configuration.)
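
Concretely, this behavior falls out of the master configuration: every builder lists the same pool of slaves, so whichever slave frees up first takes the oldest pending request, whatever branch or build type it belongs to. Here is a minimal sketch in the style of an old Buildbot master.cfg; the pool size, branch names, and make_factory helper are all made up for illustration.

    # master.cfg sketch -- names and numbers are illustrative
    POOL = ["slave%02d" % i for i in range(1, 31)]  # 30 identical build machines

    def make_factory(branch, buildtype):
        # Stand-in for a real BuildFactory (checkout, compile, test steps).
        return ("factory", branch, buildtype)

    c = BuildmasterConfig = {}
    c['builders'] = []
    for branch in ["trunk", "branch-1.9", "branch-1.8"]:
        for buildtype in ["opt", "debug"]:
            c['builders'].append({
                'name': "%s-%s" % (branch, buildtype),
                'slavenames': POOL,  # every builder can run on any slave in the pool
                'builddir': "%s-%s" % (branch, buildtype),
                'factory': make_factory(branch, buildtype),
            })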

This is in direct contrast to the tinderbox model, where busier branches or longer builds would always mean more changes per build.

Dynamic allocation adds a certain amount of fault tolerance. In the tinderbox model, a single machine going down could cause tree closure. In the pooling model, a number of build machines in the pool could fall over, and the builds would continue at a slower rate.

The main drawback to dynamic allocation is that an extremely long build or an overly busy branch can starve the other builds/branches of available build machines.


Self-testing process

One of the weaknesses of the tinderbox model was stale or missing machine setup documentation. Strict change management and VM cloning can assuage this, but there's no real ongoing test to verify that the documentation stays up to date.

Since pooled slaves jump from build to build and from branch to branch, it's easier to detect whether breakage is build-slave-specific or code/branch-specific. This isn't perfect, especially with heisenbugs, but it's definitely an improvement.

In addition, every time you set up a new build slave, that tests the documentation and process. This happens much, much more often than spinning up new tinderboxes in the tinderbox model.


Spinning up a new branch or build

Since the pool of slaves can handle any existing branch or build, it's relatively easy to spin up a new, compatible branch or build type. It's even possible to do so by merely updating the master config files, with none of the "spin up N new tinderbox machines" work.
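
In the master.cfg sketch above, that config-only change can be as small as one line: add the branch to the list the builder loop iterates over. Everything here is illustrative.

    BRANCHES    = ["trunk", "branch-1.9"]
    BUILD_TYPES = ["opt", "debug"]

    # Spinning up a compatible new branch is a config edit, not new hardware:
    BRANCHES.append("branch-2.0")

    # The builder list regenerates from the updated config on reconfig:
    def builder_names(branches, build_types):
        return ["%s-%s" % (b, t) for b in branches for t in build_types]

    print(builder_names(BRANCHES, BUILD_TYPES))  # six builders, same slave pool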

However, new branches and build types do add more load to the pool; it's important to keep capacity and wait times in mind. As the full set of builds shows, it's easy to lose track of just how much your build pool is responsible for.
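
A back-of-envelope capacity check helps keep that in view; every number below is made up for illustration.

    # Rough pool capacity check (all numbers are illustrative):
    slaves        = 30    # machines in the pool
    branches      = 4
    build_types   = 6     # opt, debug, unit tests, nightlies, ...
    build_hours   = 1.0   # average end-to-end build time
    checkins_hour = 5     # peak checkin rate per branch

    # Each checkin triggers a full set of builds on its branch, so peak
    # demand, in slave-hours per hour, is:
    demand = branches * checkins_hour * build_types * build_hours
    print("peak demand: %.0f slave-hours/hour against %d slaves" % (demand, slaves))
    # 4 * 5 * 6 * 1.0 = 120 slave-hours/hour against 30 slaves: without
    # queue collapsing or more hardware, wait times balloon at peak.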

Still, I think it's clear that this is a big Win for pooling, as the number of active branches and builds at Mozilla is as high as I've seen anywhere.


The tyranny of the single config

It's very, very powerful to have a single configuration that works for all builds across all branches. However, this is also a very strict limitation.

In the tinderbox model, a change could be made to a single machine without affecting the stability of any other builds or branches. Once that one build goes green, you're golden.

In the pooling model, the change needs to propagate across the entire pool, and it affects all builds across all branches. As the number of branches and build types grows, the testing matrix for config changes grows as well.

And at some point, new, incompatible requirements rear their ugly heads -- maybe a new toolchain that can't coexist with the previous one, or a whole new platform. At that point, you need to create a new pool. And ramping that up from zero can be a time-consuming process.


I hope the above helps illustrate the pooling model and some of its benefits and drawbacks.

We don't just have a single build pool here, however; we have multiple, and the number is growing. This was partially by design, and partially to deal with growing pains as we scale larger and larger.

I'll illustrate where we are today in the next segment: split pools.


(Continuing the blogging blitz: here is pooling, part 2.
This illustrates how the Tinderbox model can quickly become a headache to maintain on multiple branches, and what problems the pooling model is trying to solve.)


Since each column is its own build machine, if trunk has 12 columns (and you want the same coverage on the new branch), you need to spin up 12 new tinderbox machines with similar configurations.

Let's reexamine the benefits and drawbacks of the Tinderbox model, with multiple branches in mind:

[i] Anyone can spin up a new builder.

If anyone wanted to start working on a new project, platform, or branch, they could run their own tinderbox and send the results to the tinderbox server on their own schedule. This meant that developers could have the coverage they wished, and community members could add ports for the platforms they cared about.

After these ran for a while, they were often "donated" to the Release team to maintain.

This worked fairly well, but donated tinderboxen often came undocumented, resulting in maintenance headaches down the road. Many, many machines were labeled "Don't Touch!" because no one knew if any changes would break anything, and no one knew how to rebuild them if anything catastrophic happened.

[ii] It's relatively simple to make changes to a single build.

If a particular branch needs a different toolchain or setting, it's not difficult to set up that branch's build machine accordingly. In fact, when we wanted to, say, change compilers on a single branch, we usually spun up a new build machine with the new compiler and ran it in parallel with the old one until it was reliably green.

But the resulting inconsistencies between machines also made it difficult to determine why changes worked on one branch but not another. Was it the new compiler? A hidden environment variable? Were the patch/service pack levels the same? Did it matter that one tinderbox ran win2k while the other ran NT?

[iii] Consistent wait times [for a single build].

No matter how many checkins happen on any (or all) branches, wait times stay consistent.

On the other hand, if a flurry of checkins happens on trunk while the branches lie idle, all of those changes are picked up by the trunk builders; the branch builders just keep rebuilding the latest revision on their idle branches, or sit idle themselves.

The drawbacks stay the same, although amplified with each additional machine and build type to administer and maintain.

I wasn't at Mozilla at the time, but as I understand it, a little more than two years ago the tree would regularly be held closed whenever a single build machine went down -- unscheduled downtimes on a fairly consistent basis, in addition to the tree closures required to figure out who broke the build.

These were among the reasons for the move to Buildbot pooling, which I'll cover in part 3.


(Continuing the blogging blitz: here is pooling, part 1.
This illustrates how builds were set up at one point in Mozilla and Netscape's past, mainly to contrast with how they're set up currently.)



There were many variations* of the old Tinderbox model of continuous integration. The basic concept involved a single machine running a single type of build (e.g., Win32 Opt Depend) on a 24/7 basis; when the previous build finished, the next build would start.
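
The core loop is simple enough to sketch. The real tinderbox client scripts were Perl; the Python below is just the shape of it, with stand-in helper functions.

    import random
    import time

    def update_source_tree():
        # Stand-in for "cvs update"; returns the revision being built.
        return "rev-%d" % int(time.time())

    def build_and_test():
        # Stand-in for the actual compile and test run.
        return random.choice(["success", "testfailed", "busted"])

    def mail_results(rev, status, seconds):
        # Tinderbox delivered results to the server via mail.
        print("%s: %s (%.0fs)" % (rev, status, seconds))

    def tinderbox_loop():
        # One machine, one build type, 24/7: as soon as the previous
        # build finishes, the next one starts.
        while True:
            start = time.time()
            rev = update_source_tree()
            mail_results(rev, build_and_test(), time.time() - start)

    # tinderbox_loop()  # runs forever by design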

When we needed more build types (e.g., adding MacOSX coverage), we added more machines, one for each new build type. Each would be represented by its own column on the Tinderbox page, color-coded green for success, red for a busted build, orange for a test failure.

There are inherent benefits to such a model:

[i] Anyone can spin up a new builder.

This is partially due to the delivery of logs via mail (and later, in Tinderbox 2, via ftp), but also because each machine and tinderbox client is standalone. Anyone with a spare machine can spin up a Tinderbox builder.

[ii] It's relatively simple to make changes to a single build.

Need a new compiler? A different SDK? A whole new toolchain? Track down the machine running that build and make those changes, and you're done.

(You documented that, right?)

[iii] Consistent wait times [for a single build].

The maximum wait time for one build type to pick up your change is a little less than one full build cycle (if you happen to check in immediately after a build cycle starts, you need to wait for the next cycle). If a full build takes one hour, the longest end-to-end time is a little less than two hours: just under an hour of waiting, plus the hour-long build itself. This is true whether one person checked in or five hundred people checked in.

(Later, people started running two of the same build and staggering them so that the longest wait time was a little less than 1/2 a full build cycle.)
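
The arithmetic generalizes: with N identical staggered builders on a cycle of length C, the worst-case wait to start drops to roughly C/N, while the build itself still takes a full cycle. A quick check with the numbers from above:

    def worst_case_hours(cycle_hours, staggered_builders=1):
        # Worst case: wait for the next cycle to start, then sit
        # through one full build cycle.
        max_wait = cycle_hours / staggered_builders
        return max_wait + cycle_hours

    print(worst_case_hours(1.0))     # one builder: just under 2.0 hours end to end
    print(worst_case_hours(1.0, 2))  # two, staggered: just under 1.5 hours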


Any drawbacks?

[i] The tree has many single points of failure.

Most of these build machines are unique. If something happens to one machine, that column goes perma-red or drops from the waterfall. If it's measuring something critical (and most of them are), that means tree closure.

[ii] It's easy to lose track of build [script|machine] changes.

It is simple to make changes to the build toolchain, scripts, or environment on individual tinderboxen. Unfortunately, it's also simple to make those changes without properly documenting or checking them in. It's only a matter of time before this becomes a problem.

Missing or faulty documentation might only be discovered after massive hardware failure, long after the people responsible for those changes have moved on. If you're unfortunate enough to not have a recent clone or full backup of that machine, you may be looking at a possible multi-day tree closure.

This also affects spinning up a new build machine or making changes to an existing one. If there are settings you're unaware of, troubleshooting the problem can eat up valuable time.

[iii] It's hard to track down who broke the tree.

Since each build cycle can pick up multiple checkins, it can be difficult to tell which checkin broke a particular build or test. This can become a protracted session of finger-pointing, involving multiple developers and the reliability of the build machine(s) in question.

This was exacerbated by the old CVS problem of figuring out which build actually picked up your checkin. Also, because each build machine (tinderbox column) has a different-length cycle, builds start at different times and each picks up a different combination of new checkins. Those can each break in new and exciting ways, for different reasons.


Don't get me wrong; I have a fondness for Tinderbox that it seems few people share. But I can be objective about its strengths and weaknesses, and one of its weaknesses is that it doesn't scale very well. At least not Scale with a capital Scale. (And scaling is a major factor in our decisions today.)

I'll illustrate that a bit more in the next segment: the tinderbox model on multiple branches.


* (We did have depend tinderboxen that spit out clobber or release builds at certain times of day or when a certain file was touched. We also had machines that cycled through several different build types -- these exceptions tended to occur on side projects that had fewer developers or less hardware. But for the most part, it was a single machine for a single build type.)


As part of RelEng's Blogging Blitz, I'm going to write a bit about [build slave] pooling concepts, differentiating between the old Tinderbox model and the Buildbot pool-of-slaves model.

The topics covered will be:

  • The tinderbox model on a single branch.
  • The tinderbox model on multiple branches.
  • The pooling model on multiple branches.
  • Split pools.
  • Some new approaches.

[brainstorm]:

I keep running into the same questions, project after project, company after company. How do I see who broke the build? How do I know if this bug has been fixed in this codeline? How do I see the difference between these two builds? And how can we make this all happen faster? Smoother? Easier? And you revisit the same questions as new tangles and complexities of scale are added to the equation.

I joined Netscape in 2001, partially to play with a bunch of weird unices, partially to see How Build Engineering is Done Properly. I had already tackled LXR/ViewCVS + Bonsai + Tinderbox elsewhere, and that toolchain has loomed large at every place I've been, at least in concept. Source browsing + repository querying + continuous integration, no matter which specific products you use.

Here I am, back at Mozilla after a number of years away, and I'm amused (and pleased) to see that the current state of things is surprisingly analogous to how I designed our build system elsewhere. We had the db with history and archived configurations; Buildbot has the daemon with real-time log views; but otherwise things are fairly close. Both systems are in similar stages of development, ready to take that next step.

Here are some thoughts about the direction that next step could go. Please keep in mind that I'm trying to ignore how much work this will all be to implement for the moment, so I can write down my ideas without cringing.


[Behind cut tags: Is hgweb a strong bonsai replacement? • Tinderbox: waterfall vs dashboard • Buildbot: strengths and weaknesses • Pull, not push • Still a ways out]
