
(Continuing the blogging blitz: here is pooling, part 1.
This illustrates how builds were set up at one point in Mozilla and Netscape's past, mainly to contrast with how they're set up currently.)



There were many variations* of the old Tinderbox model of continuous integration. The basic concept involved a single machine running a single type of build (e.g., Win32 Opt Depend) on a 24/7 basis; when the previous build finished, the next build would start.
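
The core loop on each of these machines was conceptually tiny. Here's a minimal Python sketch of the idea; the cvs and make invocations are illustrative stand-ins, not actual Tinderbox code:

    import subprocess
    import time

    def run_cycle():
        # One cycle: update the tree, run the build, report the status.
        subprocess.run(["cvs", "update"])                     # pull new checkins
        result = subprocess.run(["make", "-f", "client.mk"])  # the build itself
        return "green" if result.returncode == 0 else "red"

    while True:
        started = time.time()
        status = run_cycle()
        # (the real client mailed the log to the Tinderbox server here)
        print("cycle finished in %ds: %s" % (time.time() - started, status))
        # no sleep: the next build starts as soon as the previous one finishes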

When we needed more build types (e.g., adding MacOSX coverage), we added more machines, one for each new build type. Each build type got its own column on the Tinderbox page, color coded green for success, red for failure, and orange for failed tests.

There are inherent benefits to such a model:

[i] Anyone can spin up a new builder.

This is partially due to the delivery of logs via mail (and later, in Tinderbox 2, via ftp), but also because each machine and tinderbox client is standalone. Anyone with a spare machine can spin up a Tinderbox builder.

[ii] It's relatively simple to make changes to a single build.

Need a new compiler? A different SDK? A whole new toolchain? Track down the machine running that build and make those changes, and you're done.

(You documented that, right?)

[iii] Consistent wait times [for a single build].

The maximum wait time for one build type to pick up your change is a little less than one full build cycle (if you happen to check in immediately after a build cycle starts, you need to wait for the next cycle). If a full build takes one hour, the longest end-to-end time is a little less than two hours. This is true whether one person checked in or five hundred people checked in.

(Later, people started running two of the same build type and staggering them, so that the longest wait for pickup was a little less than half a full build cycle.)
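
The worst-case arithmetic is simple enough to put in a few lines of Python (the one-hour cycle is just an example):

    def worst_case_end_to_end(cycle_hours, staggered_builders=1):
        # Worst case: you check in just after a cycle starts, wait (almost)
        # cycle/N hours for the next staggered cycle to begin, then wait one
        # full cycle for that build to finish.
        wait_for_pickup = cycle_hours / staggered_builders
        return wait_for_pickup + cycle_hours

    print(worst_case_end_to_end(1))     # one builder: ~2 hours end to end
    print(worst_case_end_to_end(1, 2))  # two staggered builders: ~1.5 hours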


Any drawbacks?

[i] The tree has many single points of failure.

Most of these build machines are unique. If something happens to one machine, that column goes perma-red or drops from the waterfall. If it's measuring something critical (and most of them are), that means tree closure.

[ii] It's easy to lose track of build [script|machine] changes.

It is simple to make changes to the build toolchain, scripts, or environment on individual tinderboxen. Unfortunately, it's just as simple to make those changes without properly documenting them or checking them in. It's only a matter of time before this becomes a problem.

Missing or faulty documentation might only be discovered after a massive hardware failure, long after the people responsible for those changes have moved on. If you're unfortunate enough not to have a recent clone or full backup of that machine, you may be looking at a multi-day tree closure.

This also affects spinning up a new build machine or making changes to an existing one. If there are settings you're unaware of, troubleshooting the problem can eat up valuable time.

[iii] It's hard to track down who broke the tree.

Since each build cycle can pick up multiple checkins, it can be difficult to tell which checkin broke a particular build or test. This can turn into a protracted session of finger pointing among multiple developers, with the reliability of the build machine(s) in question also coming under suspicion.

This was exacerbated by the old CVS problem of figuring out which build actually picked up your checkin. On top of that, because each build machine (tinderbox column) has a different cycle length, builds start at different times, and each build picks up a different combination of new checkins. Each of those combinations can break in new and exciting ways, for different reasons.
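
To make that concrete, here's a hypothetical Python sketch of two columns with different cycle lengths slicing the same stream of checkins into different batches (all timestamps invented):

    checkins = [5, 20, 50, 70, 110]  # checkin times, in minutes

    def batches(checkins, cycle_minutes):
        # A checkin is first picked up by the next cycle that starts after
        # it lands, i.e. at the next multiple of cycle_minutes.
        grouped = {}
        for t in checkins:
            start = ((t // cycle_minutes) + 1) * cycle_minutes
            grouped.setdefault(start, []).append(t)
        return grouped

    print(batches(checkins, 60))  # {60: [5, 20, 50], 120: [70, 110]}
    print(batches(checkins, 45))  # {45: [5, 20], 90: [50, 70], 135: [110]}

When a column goes red, the suspect list is whichever batch that particular column's cycle picked up; two columns rarely agree on the batch.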


Don't get me wrong; I have a fondness for Tinderbox that it seems few people share. But I can be objective about its strengths and weaknesses, and one of its weaknesses is that it doesn't scale very well. At least not Scale with a capital Scale. (And scaling is a major factor in our decisions today.)

I'll illustrate that a bit more in the next segment: the tinderbox model on multiple branches.


* (We did have depend tinderboxen that spit out clobber or release builds at certain times of day or when a certain file was touched. We also had machines that cycled through several different build types -- these exceptions tended to occur on side projects that had fewer developers or less hardware. But for the most part, it was a single machine for a single build type.)
