escapewindow: escape window (Default)

Tl;dr: I just shipped scriptworker 0.8.1 (changelog) (github) (pypi) and scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.

scriptworker 0.8.1

The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing taskId and runId. This means that older versions of scriptworker no longer successfully poll for tasks.
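
Roughly what happened, as a hedged illustration (the handler function and field values here are made up; only taskId, runId, and hintId come from the actual payload):

import json

def claim_task(taskId, runId):
    # a handler that only expects the two documented fields
    return (taskId, runId)

message = json.loads('{"taskId": "abc123", "runId": 0, "hintId": "xyz"}')

# Passing the whole dict as kwargs breaks as soon as a new key appears:
# claim_task(**message)  # TypeError: unexpected keyword argument 'hintId'

# Passing the fields explicitly tolerates new properties in the payload:
claim_task(message["taskId"], message["runId"])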

This is now fixed in scriptworker 0.8.1.

scriptworker 0.7.1

Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.

To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.

escapewindow: escape window (Default)

Tl;dr: I just shipped scriptworker 0.8.0 (changelog) (RTD) (github) (pypi).
This is a non-backwards-compatible release.


By design, taskcluster workers are very flexible and user-input-driven. This allows us to put CI task logic in-tree, which means developers can modify that logic as part of a try push or a code commit. This allows for a smoother, self-serve CI workflow that can ride the trains like any other change.

However, a secure release workflow requires certain tasks to be less permissive and more auditable. If the logic behind code signing or pushing updates to our users is purely in-tree, and the related checks and balances are also in-tree, the possibility of a malicious or accidental change being pushed live increases.

Enter scriptworker. Scriptworker is a limited-purpose taskcluster worker type: each instance can only perform one type of task, and validates its restricted inputs before launching any task logic. The scriptworker instances are maintained by Release Engineering, rather than the Taskcluster team. This separates roles between teams, which limits damage should any one user's credentials become compromised.

scriptworker 0.8.0

The past several releases have included changes involving the chain of trust. Scriptworker 0.8.0 is the first release that enables gpg key management and chain of trust signing.

An upcoming scriptworker release will enable upstream chain of trust validation. Once enabled, scriptworker will fail fast on any task or graph that doesn't pass the validation tests.

escapewindow: escape window (Default)

Tl;dr: I reached 100% test coverage (as measured by coverage) on several of my projects, and I'm planning on continuing that trend for the important projects I maintain. We were chatting about this, and I decided to write a blog post about how to get to 100% test coverage, and why.

Why 100% test coverage?

Find unreachable code

Back in 2014, Apple issued a critical security update for iOS, to patch a bug known as #gotofail.

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    ...

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

The duplicated, unconditional goto fail caused SSLVerifySignedServerKeyExchange calls to jump to the fail block with a successful err. This bypassed the actual signature verification and returned a blanket success. With this bug in place, malicious servers could serve SSL certs with mismatched or missing private keys and still look legit.

One might argue for strict indentation, using curly braces for all if blocks, better coding practices, or just Better Reviews. But 100% test coverage would have found it.

"For the bug at hand, even the simplest form of coverage analysis, namely line coverage, would have helped to spot the problem: Since the problematic code results from unreachable code, there is no way to achieve 100% line coverage.

"Therefore, any serious attempt to achieve full statement coverage should have revealed this bug."

Even though python has meaningful indentation, it isn't foolproof against unreachable code. Line coverage, which simply marks each executed line as visited, is enough to catch this class of bug: an unreachable line can never be covered.

And even when there isn't a huge security issue at stake, unreachable code can clutter the code base. At best, it's dead weight, extra lines of code to work around, to refactor, to have to read when debugging. At worst, it's confusing, misleading, and hides serious bugs. Don't do unreachable code.

Uncovering bugs while writing tests

My first attempt at 100% test coverage was on a personal project, on a whim. I knew all my code was good, but full coverage was something I had never seen. It was a lark. A throwaway bit of effort to see if I could do it, and then something I could mention offhand. "Oh yeah, and by the way, it has 100% test coverage."

Imagine my surprise when I uncovered some non-trivial bugs.

This has been the norm. Every time I put forward a sustained effort to add test coverage to a project, I uncover some things that need fixing. And I fix them. Even if it's as small as a confusing variable name or a complex bit of logic that could use a comment or two. And often it's more than that.

Certainly, if I'm able to find and fix bugs while writing tests, I should be able to find and fix those bugs when they're uncovered in the wild, no? The answer is yes. But. It's a difference of headspace and focus.

When I'm focused on isolating a discrete chunk of code for testing, it's easier to find wrong or confusing behavior than when I'm deploying some large, complex production process that may involve many other pieces of software. There are too many variables, too little headspace for the problem, and often too little time. And Murphy says those are the times these problems are likely to crop up. I'd rather have higher confidence at those times. Full test coverage helps reduce those variables when the chips are down.

A helpful guide for future changes

Software is the proverbial shark: it's constantly moving, or it dies. Small patches are pretty easy to eyeball and tell if they're good. Well written and maintained code helps here too, as does documentation, mentoring, and code reviews. But only tests can verify that the code is still good, especially when dealing with larger patches and refactors.

Tests can help new contributors catch problems before they reach code review or production. And sometimes after a long hiatus and mental context switch, I feel like a new contributor to my own projects. The new contributor you are helping may be you in six months.

Sometimes your dependencies release new versions, and it's nice to have a full set of tests to see if something will break before rolling anything out to production. And when making large changes or refactoring, tests can be a way of keeping your sanity.

Momentum is contagious

There's the social aspect of codebase maintenance. If you want to ask a new contributor to write tests for their code, that's an easier ask when you're at 100% coverage than when you're at, say, 47%.

This can also carry over to your peers' and coworkers' projects. Enough momentum, and you can build an ecosystem of well-tested projects, where the majority of all bugs are caught before any code ever lands on the main repo.

Technical wealth

Have you read Forget Technical Debt - Here's How to Build Technical Wealth? It's well worth the read.

Automated testing and self-validation loom large in that post. She does caution against 100% test coverage as an end in itself in the early days of a startup, for example, but that's when time to market and profitability are opportunity costs. When developing technical wealth at an existing enterprise, the long term maintenance benefits speak for themselves.

How did you reach 100% test coverage [in python]?

Clean architecture

If you haven't watched Clean Architecture in Python, add it to your list. It's worth the time. It's the Python take on Uncle Bob Martin's original talk. Besides making your code more maintainable, it makes your code more testable. When I/O is cleanly separated from the logic, the logic is easily testable, and the I/O portions can be tested in mocked code and integration tests.
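
As a rough sketch of what that separation can look like (hypothetical names, not from any particular project):

import json

def summarize(counts):
    """Pure logic: trivially unit-testable with plain dicts."""
    total = sum(counts.values()) or 1
    return {name: n / total for name, n in counts.items()}

def load_counts(path):
    """I/O boundary: exercised via mocks or integration tests."""
    with open(path) as f:
        return json.load(f)

def report(path):
    return summarize(load_counts(path))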


Some people swear by click. I'm not quite sold on it, yet, but it's easy to want something other than optparse or argparse when faced with a block of option logic like in this main() function. How can you possibly sanely test all of that, especially when right after the option parsing we go into the main logic of the script?

I pulled all of the optparse logic into its own function, and called it with sys.argv[1:]. That let me write option-parsing tests separately from my signing tests. Signtool is one of those projects where I haven't yet reached 100% coverage, but it's high on my list.
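
The pattern looks roughly like this (a sketch using argparse with made-up options; signtool itself used optparse):

import argparse
import sys

def parse_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="store_true")
    parser.add_argument("files", nargs="+")
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv if argv is not None else sys.argv[1:])
    ...  # main logic, tested separately from the option parsing

Now parse_args() can be unit tested with whatever argument lists you like, without running the rest of the script.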

Toss if __name__ == '__main__':

if __name__ == '__main__': is one of the first things we learn in python. It allows us to mix script and library functionality in the same file: when you import the file, __name__ is the module's name and the block doesn't execute; when you run it as a script, __name__ is set to '__main__' and the block executes. (More detailed descriptions here.)

There are ways to skip these blocks in code coverage. That might be the easiest solution if all you have under if __name__ == '__main__': is a call to main(). But I've seen scripts where there are pages of code inside this block, all code-coverage-unfriendly.

I forget where I found my solution. This answer suggests __name__ == '__main__' and main(). I rewrote this as

def main(name=None):
    if name in (None, '__main__'):
        ...  # main logic goes here

main(name=__name__)  # called at module level

Either way, you don't have to worry about covering the special if block. You can mock the heck out of main() and get to 100% coverage that way. (See the mock and integration section.)

Rewrite the logic

mimetypes.guess_type('foo.log') returns 'text/plain' on OSX, and None on linux. I fixed it like this:

content_type, _ = mimetypes.guess_type(path)
if content_type is None and path.endswith('.log'):
    content_type = 'text/plain'

And that works. You can get full coverage on linux: a "foo.log" will enter the if block, and a "foo.unknown_extension" will skip it. But on OSX you'll never satisfy the conditional; the content_type will never be None for a foo.log.

More ugly mocking, right? You could. But how about:

content_type, _ = mimetypes.guess_type(path)
if path.endswith('.log'):
    content_type = content_type or 'text/plain'

Coverage will mark that as covered on both linux and OSX.

@pytest.mark.parametrize
I used to use nosetests for everything, just because that's what I first encountered in the wild. When I first hacked on PGPy I had to get accustomed to pytest, but that was pretty easy. One thing that drew me to it was @pytest.mark.parametrize (note the spelling: parametrize, not parameterize).

With @pytest.mark.parametrize, you can loop through multiple inputs with the same logic, without having to write the loops yourself. Like this. Or, for the PGPy unicode example, this.

This is doable in nosetests, but pytest encourages it by making it easy.
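
As a self-contained illustration (reusing the content-type logic from the earlier section), a parametrized test might look like:

import mimetypes
import pytest

def guess_content_type(path):
    content_type, _ = mimetypes.guess_type(path)
    if path.endswith('.log'):
        content_type = content_type or 'text/plain'
    return content_type

@pytest.mark.parametrize("path,expected", [
    ("foo.log", "text/plain"),
    ("foo.txt", "text/plain"),
    ("foo.unknown_extension", None),
])
def test_guess_content_type(path, expected):
    assert guess_content_type(path) == expected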

fixtures
In nosetests, I would often write small functions to give me objects or data structures to test against. That, or I'd define constants, and I'd be careful to avoid changing any mutable constants, e.g. dicts, during testing.

pytest fixtures allow you to do that, but more simply. With a @pytest.fixture(scope="function"), the fixture will get reset every function, so even if a test changes the fixture, the next test will get a clean copy.
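
For example (a minimal sketch), a function-scoped fixture hands each test its own copy:

import pytest

@pytest.fixture(scope="function")
def config():
    return {"retries": 3, "artifacts": []}

def test_can_mutate_config(config):
    config["artifacts"].append("log.txt")
    assert config["artifacts"] == ["log.txt"]

def test_gets_a_clean_copy(config):
    assert config["artifacts"] == []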

There are also built-in fixtures, like tmpdir, mocker, and event_loop, which allow you to more easily and succinctly perform setup and cleanup around your tests. And there are lots of additional modules which add fixtures or other functionality to pytest.

Here are slides from a talk on advanced fixture use.

mock and integration

It's certainly possible to test the I/O and loop-laden main sections of your project. For unit tests, it's a matter of mocking the heck out of it. Pytest's mocker fixture helps here, although you can also nest a bunch of with mock.patch blocks to keep code from staying mocked past the end of your test.

Even if you have to include some I/O in a function, that doesn't mean you have to use mock to test it. Instead of

def foo():
    return requests.get("https://example.com/api")

you could do

def foo(request_function=requests.get):
    return request_function("https://example.com/api")

Then when you unit test, you can override request_function with something that raises the exception or returns the value that you want to test.
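
A unit test along those lines might look like this (a sketch, assuming the foo() above):

import pytest

def test_foo_connection_error():
    def fake_get(url, **kwargs):
        raise OSError("connection refused")

    with pytest.raises(OSError):
        foo(request_function=fake_get)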

Finally, even after you get to 100% coverage on unit tests, it's good to add some integration testing to make sure the project works in the real world. For scriptworker, these tests are marked with @pytest.mark.skipif(os.environ.get("NO_TESTS_OVER_WIRE"), reason=SKIP_REASON), so we can turn them off in Travis.

Use pytest!

Besides parameterization and the mocker fixture, pytest is easily superior to nosetests.

There's parallelization with pytest-xdist. The fixtures make per-test setup and cleanup a breeze. Pytest is more succinct: assert x == y instead of self.assertEquals(x, y); because we don't have the self to deal with, the tests can be functions instead of methods. The following

with pytest.raises(OSError):
    function(*args, **kwargs)

is a lot more legible than self.assertRaises(OSError, function, *args, **kwargs). You've got @pytest.mark.asyncio to await a coroutine; the event_loop fixture is there for more asyncio testing goodness, with auto-open/close for each test.
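
For example, an asyncio test might look like this (a minimal sketch; assumes the pytest-asyncio plugin, which provides @pytest.mark.asyncio and the event_loop fixture):

import asyncio
import pytest

async def double(x):
    await asyncio.sleep(0)
    return x * 2

@pytest.mark.asyncio
async def test_double():
    assert await double(21) == 42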

Miscellaneous hints

Use tox, coveralls, and travis or taskcluster-github for per-checkin test and coverage results.

Use coverage>=4.2 for async code. Pre-4.2 can't handle the async.

Use coverage's branch coverage by setting branch = True in your .coveragerc.
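
That is, something like this in your .coveragerc:

[run]
branch = True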

I haven't tried hypothesis or other behavior- or property- based testing yet. But hypothesis does use pytest. There's an issue open to integrate them more deeply.

It looks like I need to learn more about mutation testing.

What other tricks am I missing?

escapewindow: escape window (Default)
Mozilla RelEng uses a tool called signtool for its signing. Historically, this has been part of the build/tools repo, which we've been wanting to move away from.

I recently ported build-tools to py3, essentially porting the libraries and unittests but ignoring most of the scripts. Signing in a py3 scriptworker was the impetus behind this. However, after the port, I noticed signtool stopped working; I suspect I had hacked my virtualenv while debugging, and something about that had made it appear to work in py3.

After getting permission to hard fork signtool, I ported it to use requests, and refactored it a bit to make it a little easier to test. There are still some overly complex functions and I only have 65% test coverage right now, but it's working in py3.

The code is here and the latest package is here.
escapewindow: escape window (Default)
Back in 2015 I tried porting configman to python 3, but then I hit a wall with the DotDict __getattr__ hack. I wrote about that here.

For some reason I took another stab at it last week, after leveling up my python3 porting skills. I got the tests green, submitted a PR, which is now merged and released. Woot!
escapewindow: escape window (Default)

discovering pip tools

Last week, I wrote a long blog draft about python package pinning. Then I found this. It's well written, and covers many of the points I wanted to make. The author perfectly summarizes the divide between package development and package deployment to production:

Don't pin by default when you're building libraries! Only use pinning for end products.

So I dug deeper. pip-tools #303 perfectly describes the problem I was trying to solve:

Given a minimal dependency list,
  1. generate a full, expanded dependency list, and
  2. pin that full dependency list by version and hash.

Fixing pip-tools #303 and upstreaming seemed like the ideal solution.

However, pip-tools is broken with pip 8.1.2. The Python Packaging Authority (PyPA) states that pip's public API is the CLI, and pip-tools could potentially break with every new pip patch release. This is solvable by either using pypa/packaging directly, or switching pip-tools to use the CLI. That's considerably more work than just integrating hashing capability into pip-tools. ([EDIT] pip-tools now works with pip 8.1.2, but shoehorning hashes into it is a non-trivial task. I do hope someone tackles it though.)

But I had already whipped up a quick'n'dirty python script that used the pip CLI. (I had assumed that bypassing the internal API was a hack, but evidently this is the supported way of doing things.) So, back to the original blog post, but much shorter:

dephash gen

dephash gen takes a minimal requirements file, and generates an expanded dependency list, pinned by version and hash.

$ cat requirements-dev.txt


$ dephash gen requirements-dev.txt > requirements-prod.txt

$ cat requirements-prod.txt

# Generated from +
arrow==0.8.0 \
python-dateutil==2.5.3 \
    --hash=sha512:413b935321f0a65fd8e8ba49990acd5... \
    --hash=sha512:d8e28dad57ea85663962f4518faea0e... \
requests==2.10.0 \
    --hash=sha512:e5b7d20c4d692b2655c13fa177b8462... \
six==1.10.0 \
    --hash=sha512:a41b40b720c5267e4a47ffb98cdc792... \

Developers can work against requirements-dev.txt, with the latest available dependencies. At the same time, production can be pinned against specific package versions+hashes for stability and security.

dephash outdated

dephash outdated PATH checks whether PATH contains outdated packages. PATH can be a requirements file or virtualenv.

$ cat requirements-outdated.txt


$ dephash outdated requirements-outdated.txt

Found outdated packages in requirements-outdated.txt:
six (1.9.0) - Latest: 1.10.0 [wheel]


$ virtualenv -q venv

$ venv/bin/pip install -q -r requirements-outdated.txt

$ dephash outdated venv

Found outdated packages in venv:
six (1.9.0) - Latest: 1.10.0 [wheel]

This just uses pip list --outdated on the backend. I'm tentatively thinking a whitelist of known-outdated dependencies might help here, but I haven't written it yet.


I still think the glorious future involves fixing pip-tools #303 and getting pip-tools pointed at a supported pypa API. And/or getting hashin or pip-tools upstreamed into pip. But in the meantime, there's dephash.

(I'm leaving my package vendoring musings and python package wishlist for future blogpost(s).)

escapewindow: escape window (Default)
  1. tl;dr
  2. internal benefits to generically packaged tools
  3. learning from history: tinderbox
  4. mozharness and scriptharness
  5. treeherder and taskcluster
  6. other tools


One of the reasons I'm back at Mozilla is to work in-depth with some exciting new tools and infrastructure, at scale. However, I wish these tools could be used equally well by employees and non-employees. I see some efforts to improve this. But if this is really important to us, we need to make it a point of emphasis.

If we do this, we can benefit from a healthier, extended community. There are also internal benefits to making our tools packaged in a generic way. I'll go into these in the next section.

I did start to contact some tool maintainers, and so far the response is good. I'll continue doing so. Hopefully I can write a followup blog post about how efforts are under way to make generically packaged tools a reality.

internal benefits to generically packaged tools

Besides the strengthened community, there are other internal benefits.

  • upgrades

    Once installation is packaged and automated, an upgrade to a service might be:

    • spin up a new service
    • test it
    • send over some traffic (applicable if the service is load balanced)
    • go/no-go
    • cut over to the appropriate service and turn off the other one.

    This entire process can be fully automated. Once this process is smooth enough, upgrading a service can be seamless and relatively worry free.

  • disaster recovery

    If a service is only installable manually, a disaster recovery scenario might involve people working around the clock to reinstall a service.

    Once the installation is automated and configurable, this changes. A cold backup solution might be similar to the above upgrade scenario. If disaster strikes, have someone install a new one from the automation, or have a backup instance already installed, ready for someone to switch over.

    A hot backup solution might involve having multiple load balanced services running across regions, with automatic failovers. The automated install helps guarantee that each node in the cluster is configured correctly, without human error.

  • good first bugs

    (or intern projects, or GSOC projects, or...)

    The more special-snowflake and Mozilla-specific our tools are, the more likely the tool will be tied closely to other Mozilla-specific services, so a seemingly simple change might require touching many different codebases. These types of tools are also more likely to require VPN or special LDAP access that present barriers to new contributors.

    If a new contributor is able to install tools locally, that guarantees that they can work on standalone bugs/projects involving those tools. And successful good first bugs and intern/GSOC type projects directly lead to a stronger contributor base.

  • special projects

    At various team work weeks in years past, we brainstormed being able to launch entire chunks of infrastructure in self-contained units. These could handle project-branch type work. When the code was merged back into trunk, we could archive the data and shut down the instances used.

    This concept also works for special projects: things that don't fit within the standard workflow.

    If we can spin up services in a separate, network isolated area, riskier or special-requirement work (whether in terms of access control, user permissions, partner secrets, etc) could happen there without affecting production.

  • self-testing

    Installing the package from scratch is the test for the generic packaging feature. The more we install it, the smaller the window of changes we need to inspect for installation bustage. This is the same as any other software feature.

    Having an install test for each tool gives us reassurances that the next time we need to install the service (upgrade, disaster recovery, etc.) it'll work.

learning from history: tinderbox

In 2000, a developer asked me to install tinderbox, a continuous integration tool written and used at Netscape. It would allow us to see the state of the tree, and minimize bustage.

One of the first things I saw was this disclaimer:

This is not very well packaged code.  It's not packaged at all.  Don't
come here expecting something you plop in a directory, twiddle a few
things, and you're off and using it.  Much work has to be done to get
there.  We'd like to get there, but it wasn't clear when that would be,
and so we decided to let people see it first.

Don't believe for a minute that you can use this stuff without first
understanding most of the code.

I managed to slog through the steps and get a working tinderbox/bonsai/mxr install up and running. However, since then, I've met a number of other people who had tried and given up.

I ended up joining Netscape in 2001. (My history with tinderbox helped me in my interview.) An external contributor visited and presented tinderbox2 to the engineering team. It was configurable. It was modular. It removed Netscape-centric hardcodes.

However, it didn't fully support all tinderbox1 features, and certain default behaviors were changed by design. Beyond that, Netscape employees already had fully functional, well maintained instances that worked well for us. Rather than sinking time into extending tinderbox2 to cover our needs, we ended up staying with the disclaimered, unpackaged tinderbox1. And that was the version running in production until its death in May 2014.

For a company focused primarily on shipping a browser, shipping the tools used to build that browser isn't necessarily a priority. However, there were some opportunity costs:

  • Tinderbox1 continued to suffer from the same large barrier of entry, stunting its growth.
  • I don't know how widely tinderbox2 was used, but I imagine adoption at Netscape would have been a plus for the project. (I did end up installing tinderbox2 post-Netscape.)
  • A larger, healthier community could have resulted in upstreamed patches, and a stronger overall project in the long run.
  • People who use the same toolset may become external contributors or employees to the project in general (like me). People who have poor impressions of a toolset may be less interested in joining as contributors or employees.

mozharness and scriptharness

In my previous stint at Mozilla, I wrote mozharness, which is a python framework for scripts.

I intentionally kept mozilla-specific code under mozharness.mozilla and generic mozharness code under mozharness.base. The idea was to make it easier for external users to grab a copy of mozharness and write their own scripts and modules for a non-Mozilla project. (By "non-Mozilla" and "external user", I mean anyone who wants to automate software anywhere.)

However, after I left Mozilla, I didn't use mozharness for anything. Why not?

  • There's a non-trivial learning curve for people new to the project, and the benefits of adopting mozharness are most apparent when there's a certain level of adoption. I was working at time scales that didn't necessarily lend themselves to this.
  • I intentionally kept mozharness clone-and-run. I think this was the right model at the time, to lower the barrier for using mozharness until it had reached a certain level of adoption. Clone-and-run made it easier to use mozharness in buildbot, but makes it harder to install or use just the mozharness.base module.
  • We did our best to keep Mozilla-isms out of mozharness.base via review. However, this would have been more successful with either an external contributor speaking up before we broke their usage model, or automated tests, or both.

So even with the best intentions in mind, I had ended up putting roadblocks in the way of external users. I didn't realize their scope until I was fully in the mindset of an external user myself.

I wrote scriptharness to try to address these problems (among others):

  • I revisited certain decisions that made mozharness less approachable. The mixins and monolithic Script object. Requiring a locked config. Missing docstrings and tests.
  • I made scriptharness resources available at standard locations (generic packages on pypi, source at github, full docs at readthedocs).
  • Since it's a self-contained package, it's usable here or elsewhere. Since it's written to solve a generic problem rather than a Mozilla-specific problem, it's unencumbered by Mozilla-specific solutions.

I'd like to backport some of the better ideas from scriptharness to mozharness, to address some of these issues.

treeherder and taskcluster

After I left Mozilla, on several occasions we wanted to use other Mozilla tools in a non-Mozilla environment. As a general rule, this didn't work.

  • Continuous Integration (CI) Dashboard

    We had multiple Jenkins servers, each with a partial picture of our set of build+test jobs. Figuring out the state of the code base was complex and a specialized skill. Why couldn't we have one dashboard showing a complete view?

    I took a look at Treeherder. It has improved upon the original TBPL, but is designed to work specifically with Mozilla's services and workflows. I found it difficult to set up outside of a Mozilla environment.

  • CI Infrastructure

    We were investigating other open source CI solutions. There are many solutions for server-side apps, or linux-only solutions, or cross-platform at small- to medium- scale. TaskCluster is the only one I know of that's cross-platform at massive scale.

    When we looked, all the tutorials and docs had to do with using the existing Mozilla production instance, which required a Mozilla email address at the time. There are no docs for setting up TaskCluster itself.

    (Spoiler: I hear it may be a 2H project :D :D :D )

  • Single Sign-On

    An open source, trusted SSO solution sounded like a good thing to implement.

    However, we found out Persona has been EOL'd. We didn't look too closely at the implementation details after that.

(These are just the tools I tried to use in my 1 1/2 years away from Mozilla. There are more tools that might be useful to the outside world. I'll go into those in the next section.)

There are reasons behind each of these, and they may make a lot of sense internally. I'm not trying to place any blame or point fingers. I'm only raising the question of whether our future plans include packaging our tools for outside use, and if not, why not?

We're facing a similar decision to Netscape's. What's important? We can say we're a company that ships a browser, and the tools only exist for that purpose. Or we can put efforts towards making these tools useful in many environments. Generically packaged, with documentation that doesn't start with a disclaimer about how difficult they are to set up.

It's easy to say we'd like to, but we're too busy with ______. That's the gist of the tinderbox disclaimer. There are costs to designing and maintaining tools for use outside of one's subset of needs. But as long as packaging tools for outside use is not a point of emphasis, we'll maintain the status quo.

other tools

The above were just the tools that we tried to install. I asked around and built a list of Mozilla tools that might be useful to the outside world. I'm not sure if I have all the details correct; please correct me if I'm wrong!

  • mach - if all the mozilla-central-specific functions were moved to libraries, could this be useful for others?
  • bughunter - I don't know enough to say. This looks like a crash/assertion finder, tying into Socorro and bugzilla.
  • balrog - this now has docker support, which is promising for potential outside use.
  • marionette (already used by others)
  • reftest (already used by others)
  • pulse - this is a taskcluster dep.
  • Bugzilla - I've seen lots of instances successfully used at many other companies. Its installation docs are here.
  • I also hear that Socorro is successfully used at a number of other companies.

So we already have some success here. I'd love to see it extended -- more tools, and more use cases, e.g. supporting bugzilla or jira as the bug db backend when applicable.

I don't know how much demand there will be, if we do end up packaging these tools in a way that others can use them. But if we don't package them, we may never know. And I do know that there are entire companies built around shipping tools like these. We don't have to drop any existing goals on the floor to chase this dream, but I think it's worth pursuing in the future.

escapewindow: escape window (Default)

A few people have suggested I look at other packages for config solutions. I thought I'd record some of my thoughts on the matter. Let's look at requirements first.


  1. Commandline argument support. When running scripts, it's much faster to specify some config via the commandline than always requiring a new config file for each config change.

  2. Default config value support. If a script assumes a value works for most cases, let's make it default, and allow for overriding those values in some way.

  3. Config file support. We need to be able to read in config from a file, and in some cases, several files. Some config values are too long and unwieldy to pass via the commandline, and some contain characters that would be interpreted by the shell. Plus, the ability to use diff and version control on these files is invaluable.

  4. Multiple config file type support. json, yaml, etc.

  5. Adding the above three solutions together. The order should be: default config value -> config file -> commandline arguments. (The rightmost value of a configuration item wins; see the sketch after this list.)

  6. Config definition and validation. Commandline options are constrained by the options that are defined, but config files can contain any number of arbitrary key/value pairs.

  7. The ability to add groups of commandline arguments together. Sometimes families of scripts need a common set of commandline options, but also need the ability to add script-specific options. Sharing the common set allows for consistency.

  8. The ability to add config definitions together. Sometimes families of scripts need a common set of config items, but also need the ability to add script-specific config items.

  9. Locking and/or logging any changes to the config. Changing config during runtime can wreak havoc on the debuggability of a script; locking or logging the config helps avoid or mitigate this.

  10. Python 3 support, and python 2.7 unicode support, preferably unicode-by-default.

  11. Standardized solution, preferably non-company and non-language specific.

  12. All-in-one solution, rather than having to use multiple solutions.
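
To make requirement 5 concrete, the merge behavior I'm after is roughly this (a hedged sketch, not any particular package's API):

def build_config(defaults, file_config, cli_config):
    """Rightmost source wins: defaults < config file(s) < commandline."""
    config = dict(defaults)
    config.update(file_config)
    config.update(cli_config)
    return config

config = build_config(
    {"log_level": "info", "work_dir": "."},   # defaults
    {"log_level": "debug"},                   # from a config file
    {"work_dir": "/builds"},                  # from the commandline
)
# config == {"log_level": "debug", "work_dir": "/builds"}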

Packages and standards

argparse
Argparse is the standardized python commandline argument parser, which is why configman and scriptharness have wrapped it to add further functionality. Its main drawbacks are lack of config file support and limited validation.

  1. Commandline argument support: yes. That's what it's written for.

  2. Default config value support: yes, for commandline options.

  3. Config file support: no.

  4. multiple config file type support: no.

  5. Adding the above three solutions together: no. The default config value and the commandline arguments are placed in the same Namespace, and you have to use the parser.get_default() method to determine whether it's a default value or an explicitly set commandline option.

  6. Config definition and validation: limited. It only covers commandline option definition+validation, and there's the required flag, but no "if foo is set, bar is required" type of validation. It's possible to roll your own, but that would be script-specific rather than part of the standard.

  7. Adding groups of commandline arguments together: yes. You can take multiple parsers and make them parent parsers of a child parser, if the parent parsers have specified add_help=False. (See the example after this list.)

  8. Adding config definitions together: limited, as above.

  9. The ability to lock/log changes to the config: no. argparse.Namespace will take changes silently.

  10. Python 3 + python 2.7 unicode support: yes.

  11. Standardized solution: yes, for python. No for other languages.

  12. All-in-one solution: no, for the above limitations.
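
Back to item 7: combining parsers via parent parsers looks roughly like this (made-up options):

import argparse

common = argparse.ArgumentParser(add_help=False)
common.add_argument("--config-file", action="append")
common.add_argument("--verbose", action="store_true")

script_parser = argparse.ArgumentParser(parents=[common])
script_parser.add_argument("--signing-server")

args = script_parser.parse_args(["--verbose", "--signing-server", "sign1.example.com"])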

configman
Configman is a tool written to deal with configuration in various forms, and adds the ability to transform configs from one type to another (e.g., commandline to ini file). It also adds the ability to block certain keys from being saved or output. Its argparse implementation is deeper than scriptharness' ConfigTemplate argparse abstraction.

Its main drawbacks for scriptharness usage appear to be lack of python 3 + py2-unicode-by-default support, and being another non-standardized solution. I've given python3 porting two serious attempts so far, and I've hit a wall on the DotDict __getattr__ hack working differently on python 3. My wip is here if someone else wants a stab at it.

  1. Commandline argument support: yes.

  2. Default config value support: yes.

  3. Config file support: yes.

  4. Multiple config file type support: yes.

  5. Adding the above three solutions together: not as far as I can tell, but since you're left with the ArgumentParser object, I imagine it'll be the same solution to wrap configman as argparse.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: yes.

  8. Adding config definitions together: not sure, but seems plausible.

  9. The ability to lock/log changes to the config: no. configman.namespace.Namespace will take changes silently.

  10. Python 3 support: no. Python 2.7 unicode support: there are enough str() calls that it looks like unicode is a second class citizen at best.

  11. Standardized solution: no.

  12. All-in-one solution: no, for the above limitations.

docopt
Docopt simplifies the commandline argument definition and prettifies the help output. However, it's purely a commandline solution, and doesn't support adding groups of commandline options together, so it appears to be oriented towards relatively simple script configuration. It could potentially be combined with json-schema definition and validation, as could the argparse-based commandline solutions, for an all-in-two solution. More on that below.

json-schema
This looks very promising for an overall config definition + validation schema. The main drawback, as far as I can see so far, is the lack of commandline argument support.

A commandline parser could generate a config object to validate against the schema. (Bonus points for writing a function to validate a parser against the schema before runtime.) However, this would require at least two definitions: one for the schema, one for the hopefully-compliant parser. Alternately, the schema could potentially be extended to support argparse settings for various items, at the expense of full standards compatibility.

There's already a python jsonschema package.
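
For instance, validating a config dict with the jsonschema package looks roughly like this (hypothetical schema and config):

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "work_dir": {"type": "string"},
        "retries": {"type": "integer", "minimum": 0},
    },
    "required": ["work_dir"],
}

config = {"work_dir": "/builds/scriptworker", "retries": 5}
jsonschema.validate(config, schema)  # raises ValidationError on an invalid config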

  1. Commandline argument support: no.

  2. Default config value support: yes.

  3. Config file support: I don't think directly, but anything that can be converted to a dict can be validated.

  4. Multiple config file type support: no.

  5. Adding the above three solutions together: no.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: no.

  8. Adding config definitions together: sure, you can add dicts together via update().

  9. The ability to lock/log changes to the config: no.

  10. Python 3 support: yes. Python 2.7 unicode support: I'd guess yes since it has python3 support.

  11. Standardized solution: yes, even cross-language.

  12. All-in-one solution: no, for the above limitations.

scriptharness 0.2.0 ConfigTemplate + LoggingDict or ReadOnlyDict

Scriptharness currently extends argparse and dict for its config. It checks off the most boxes in the requirements list currently. My biggest worry with the ConfigTemplate is that it isn't fully standardized, so people may be hesitant to port all of their configs to it.

An argparse/json-schema solution with enough glue code in between might be a good solution. I think ConfigTemplate is sufficiently close to that that adding jsonschema support shouldn't be too difficult, so I'm leaning in that direction right now. Configman has some nice behind the scenes and cross-file-type support, but the python3 and __getattr__ issues are currently blockers, and it seems like a lateral move in terms of standards.

An alternate solution may be BYOC. If the scriptharness Script takes a config object that you built from somewhere, and gives you tools that you can choose to use to build that config, that may allow for enough flexibility that people can use their preferred style of configuration in their scripts. The cost of that flexibility is familiarity between scriptharness scripts.

  1. Commandline argument support: yes.

  2. Default config value support: yes, both through argparse parsers and script initial_config.

  3. Config file support: yes. You can define multiple required config files, and multiple optional config files.

  4. Multiple config file type support: no. Mozharness had .py and .json. Scriptharness currently only supports json because I was a bit iffy about execfileing python again, and PyYAML doesn't always install cleanly everywhere. It's on the list to add more formats, though. We probably need at least one dynamic type of config file (e.g. python or yaml) or a config-file builder tool.

  5. Adding the above three solutions together: yes.

  6. Config definition and validation: yes.

  7. Adding groups of commandline arguments together: yes.

  8. Adding config definitions together: yes.

  9. The ability to lock/log changes to the config: yes. By default Scripts use LoggingDict that logs runtime changes; StrictScript uses a ReadOnlyDict (same as mozharness) that prevents any changes after locking.

  10. Python 3 and python 2.7 unicode support: yes.

  11. Standardized solution: no. Extended/abstracted argparse + extended python dict.

  12. All-in-one solution: yes.

Corrections, additions, feedback?

As far as I can tell there is no perfect solution here. Thoughts?

escapewindow: escape window (Default)

I've been getting some good feedback about scriptharness 0.1.0; thank you. I listed the 0.2.0 highlights and changes in the 0.2.0 Release Notes, but wanted to mention a few things here.

First, mozharness' config had the flexibility of accepting any arbitrary key/value pairs from various sources (initial_config, commandline options, config files...). However, it wasn't always clear what each config variable was for, or if it was required, or if the config was valid. I filed bug 699343 back in 2011, but didn't know how to tackle it then. I believe I have the solution now, with ConfigTemplates.

Second, 0.1.0 was definitely lacking run_command() and get_output_from_command() analogs. 0.2.0 has Command for just running+logging a command, ParsedCommand for parsing the output of a command, and Output for getting the output from a command, as well as run(), parse(), get_output(), and get_text_output() shortcut functions to instantiate the objects and run them for you. (Docs are here.) Each of these supports cross-platform output_timeouts and max_timeouts in both python 2.7 and python3, thanks to the multiprocessing module. As a bonus, I was able to add context line support to the ErrorLists for ParsedCommand. This had also been a want since 2011.

I fleshed out some more documentation and populated the scriptharness issues with my todo list.

I think I know what I have in mind for 0.3.0, but feedback is definitely welcome!

escapewindow: escape window (Default)

I found myself missing mozharness at various points over the past 10 months. Several things kept me from using it at my then-new job:

  • Even though we had kept mozharness.base.* largely non-Mozilla-specific, the mozharness clone-and-run model meant there was a lot of Mozilla-specific code that came along with it.

  • The monolithic BaseScript + mixins model had a very steep barrier of entry. There's a significant learning curve, and scripts need to be fully ported to mozharness to take advantage of its features.

I had wanted to address these issues for years, but never had time to devote fully to harness-specific development.

Now I do.

Introducing scriptharness 0.1.0:

I'm proud of this. I'm also aware it's not mature [yet], and it's currently missing some functionality.

There are some ideas I'd love to explore before 1.0.0:

  • multiple Script objects with threading and separate logs

  • Config Type Definitions

  • rethink how to enable/disable actions. I could keep mozharness' --add-action clobber --no-upload structure, or play with --actions +clobber -upload or something. (The - before the upload in the latter might cause argparse issues?)

  • also, explore Maven-style actions (all actions before target action are enabled) and actions with dependencies on other actions. I prefer keeping each action independent, idempotent, and individually targetable, but I can see someone wanting the other behavior for certain scripts.

  • I've already split out strings from code in a number of places, for unit testing. I'm curious what it would take to make scriptharness localizable, and if there would be demand for it.

  • ahal suggested adding structured logging; I'd love to investigate that.

I already have 0.2.0 on the brain. I'd love any feedback or patches.

escapewindow: escape window (Default)

Five years ago today, I landed the first mozharness commit in my user repo. (github)

starting something, or wasting my time. + a scratch trunk_nightly.json

The project had three initial goals:

  • First and foremost, I was tasked with building a multi-locale Fennec on Maemo. This was a more complex task than could sanely fit in a buildbot factory.

  • The Mozilla Releng team was already discussing pulling logic out of buildbot factories and into client-side scripts. I had been wanting to write a second version of my script framework idea. The first version was closed-source, perl, and very company- and product-specific. The second would improve on the first in every way, while keeping its three central principles of full logging, flexible config, and modular actions.

  • Finally, at that point I was still a Perl developer learning Python. I tend to learn languages by writing a project from scratch in that new language; this was my opportunity.

Multi-locale Fennec became a reality, and then we started adding projects to mozharness, one by one.

As of last July, mozharness was the client-side engine for the majority of Mozilla's CI and release infrastructure. I still see plenty of activity in bugmail and IRC these days. I'll be the first to point out its shortcomings, but I think overall it has been a success.

Happy birthday, mozharness!

escapewindow: escape window (Default)

Today's my last day at Mozilla. It wasn't an easy decision to move on; this is the best team I've been a part of in my career. And working at a company with such idealistic principles and the capacity to make a difference has been a privilege.

Looking back at the past five-and-three-quarter years:

  • I wrote mozharness, a versatile scripting harness. I strongly believe in its three core concepts: versatile locking config; full logging; modularity.

  • I helped FirefoxOS (b2g) ship, and it's making waves in the industry. Internally, the release processes are well on the path to maturing and stabilizing, and b2g is now riding the trains.

    • Merge day: Releng took over ownership of merge day, and b2g increased its complexity exponentially. I don't think it's quite that bad :) I whittled it down from requiring someone's full mental capacity for three out of every six weeks, to several days of precisely following directions.

    • I rewrote vcs-sync to be more maintainable and robust, and to support gecko-dev and gecko-projects. Being able to support both mercurial and git across many hundreds of repos has become a core part of our development and automation, primarily because of b2g. The best thing you can say about a mission critical piece of infrastructure like this is that you can sleep through the night or take off for the weekend without worrying if it'll break. Or go on vacation for 3 1/2 weeks, far from civilization, without feeling guilty or worried.

  • I helped ship three mobile 1.0's. I learned a ton, and I don't think I could have gotten through it by myself; John and the team helped me through this immensely.

    • On mobile, we went from one or two builds on a branch to full tier-1 support: builds and tests on checkin across all of our integration-, release-, and project- branches. And mobile is riding the trains.

    • We Sim-shipped 5.0 on Firefox desktop and mobile off the same changeset. Firefox 6.0b2, and every release since then, was built off the same automation for desktop and mobile. Those were total team efforts.

    • I will be remembered for the mobile pedalboard. When we talked to other people in the industry, this was more on-device mobile test automation than they had ever seen or heard of; their solutions all revolved around manual QA.


    • And they are like effin bunnies; we later moved on to shoe rack bunnies, rackmounted bunnies, and now more and more emulator-driven bunnies in the cloud, each numbering in the hundreds or more. I've been hands off here for quite a while; the team has really improved things leaps and bounds over my crude initial attempts.

  • I brainstormed next-gen build infrastructure. I started blogging about this back in January 2009, based largely around my previous webapp+db design elsewhere, but I think my LWR posts in Dec 2013 had more of an impact. A lot of those ideas ended up in TaskCluster; mozharness scripts will contain the bulk of the client-side logic. We'll see how it all works when TaskCluster starts taking on a significant percentage of the current buildbot load :)

I will stay a Mozillian, and I'm looking forward to see where we can go from here!

escapewindow: escape window (Default)

[stating the problem]

Mozharness currently handles a lot of complexity. (It was designed to be able to, but the ideal is still elegantly simple scripts and configs.)

Our production-oriented scripts take (and sometimes expect) config inputs from multiple locations, some of them dynamic; and they contain infrastructure-oriented behavior like clobberer, mock, and tooltool, which don't apply to standalone users.

We want mozharness to be able to handle the complexity of our infrastructure, but make it elegantly simple for the standalone user. These are currently conflicting goals, and automating jobs in infrastructure often wins out over making the scripts user friendly. We've brainstormed some ideas on how to fix this, but first, some more details:

[complex configs]

A lot of the current complexity involves config inputs from many places:

We want to lock the running config at the beginning of the script run, but we also don't want to have to clone a repo or make external calls to web resources during __init__(). Our current solution has been to populate runtime configs during one of our script actions, but then to support those runtime configs we have to check multiple config locations for our script logic. (self.buildbot_config, self.test_config, self.config, ...)

We're able to handle this complexity in mozharness, and we end up with a single config dict that we then dump to the log + to a json file on disk, which can then be reused to replicate that job's config. However, this has a negative effect on humans who need to either change something in the running configs, or who want to simplify the config to work locally.

[in-tree vs out-of-tree]

We also want some of mozharness' config and logic to ride the trains, but other portions need to be able to handle outside-of-tree processes and config, for various reasons:

  • some processes are volatile enough that they need to change across the board across all trees on a frequent basis;
  • some processes act across multiple trees and revisions, like the bumper scripts and vcs-sync;
  • some infrastructure-oriented code needs to be able to change across all processes, including historical-revision-based processes; and
  • some processes have nothing to do with the gecko tree at all.

[brainstorming solutions]

Part of the solution is to move logic out of mozharness. Desktop Firefox builds and repacks moving to mach makes sense, since they're

  1. configurable by separate mozconfigs,
  2. tasks completely shared by developers, and
  3. completely dependent on the tree, so tying them to the tree has no additional downside.

However, Andrew Halberstadt wanted to write the in-tree test harnesses in mozharness, and have mach call the mozharness scripts. This broke some of the above assumptions, until we started thinking along the lines of splitting mozharness: a portion in-tree running the test harnesses, and a portion out-of-tree doing the pre-test-run machine setup.

(I'm leaning towards both splitting mozharness and using helper objects, but am open to other brainstorms at this point...)

[splitting mozharness]

In effect, the wrapper, out-of-tree portion of mozharness would be taking all of the complex inputs, simplifying them for the in-tree portion, and setting up the environment (mock, tooltool, downloads+installs, etc.); the in-tree portion would take a relatively simple config and run the tests.

We could do this by having one mozharness script call another. We'd have to fix the logging bug that causes us to double-log lines when we instantiate a second BaseScript, but that's not an insurmountable problem. We could also try execing the second script, though I'd want to verify how that works on Windows. We could also modify our buildbot ScriptFactory to be able to call two scripts consecutively, after the first script dynamically generates the simplified config for the second script.

We could land the portions of mozharness needed to run test harnesses in-tree, and leave the others out-of-tree. There will be some duplication, especially in the mozharness.base code, but that's changing less than the scripts and mozharness.mozilla modules.

We would be able to present a user-friendly "inner" script with limited inputs that rides the trains, while also allowing for complex inputs and automation-oriented setup beforehand in the "outer" script. We'd most likely still have to allow for automation support in the inner script, if there's some reporting or error checking or other automation task that's needed after the handoff, but we'd still be able to limit the complexity of that inner script. And we could wrap that inner script in a mach command for easy developer use.

[helper objects]

Currently, most of mozharness' logic is encapsulated in self. We do have helper objects: the BaseConfig and the ReadOnlyDict self.config for config; the MultiFileLogger self.log_obj that handles all logging; MercurialVCS for cloning, ADBDeviceHandler and SUTDeviceHandler for mobile device wrangling. But a lot of what we do is handled by mixins inherited by self.

A while back I filed a bug to create a LocalLogger and BaseHelper to enable parallelization in mozharness scripts. Instead of cloning 90 locale repos serially, we could create 10 helper objects that each clone a repo in parallel, and launch new ones as the previous ones finish. This would have simplified Armen's parallel emulator testing code. But even if we're not planning on running parallel processes, creating a helper object allows us to simplify the config and logic in that object, similar to the "inner" script if we split mozharness into in-tree and out-of-tree instances, which could potentially also be instantiated by other non-mozharness scripts.

Essentially, as long as the object has a self.log_obj, it will use that for logging. The LocalLogger would log to memory or disk, outside of the main script log, to avoid parallel log interleaving; we would use this if we were going to run the helper objects in parallel. If we wanted the helper object to stream to the main log, we could set its log_obj to our self.log_obj. Similarly with its config. We could set its config to our self.config, or limit what config we pass to simplify.

(Mozharness' config locking is a feature that promotes easier debugging and predictability, but in practice we often find ourselves trying to get around it somehow. Other config dicts, self.variables, editing self.config in _pre_config_lock() ... Creating helper objects lets us create dynamic config at runtime without violating this central principle, as long as it's logged properly.)

Because this "helper object" solution overlaps considerably with the "splitting mozharness" solution, we could use a combination of the two to great efficacy.

[functions and globals]

This idea completely alters our implementation of mozharness: it moves self.config to a global config, and calls logging methods (or wrapped logging methods) directly. By making each method a standalone function that's only slightly different from a standard python function, it lowers the bar for contribution or re-use of mozharness code. It does away with both the downsides and benefits of objects.

The first, large downside I see is this solution appears incompatible with the "helper objects" solution. By relying on a global config and logging in our functions, it's difficult to create standalone helpers that use minimized configs or alternate logging configurations. I also think the global logging may make the double-logging bug more prevalent.

It's quite possible I'm downplaying the benefit of importing individual functions like a standard python script. There are decorators to transform functions into class methods and vice versa, which might allow for both standalone functions and object-based methods with the same code.

[related links]

  • Jordan Lund has some ideas + wip patches linked from bug 753547 comment 6.
  • Andrew Halberstadt's Sharing code not always a good thing and How to deal with IFFY requirements
  • My mozharness core principles example scripts+configs and video
  • Lars Lohn's Crouching Argparse Hidden Configman. Afaict configman appears to solve similar problems to mozharness' BaseConfig, but Argparse requires python 2.7 and mozharness locks the config.
  • Gaia Try

    May. 6th, 2014 11:12 am
    escapewindow: escape window (Default)

    We're now running Gaia tests on TBPL against Gaia pull requests.

    John Ford wrote a commit hook in bug 989131 to push an update to the gaia-try repo that looks like this. The buildbot test scheduler master is polling this repo; when it sees a change, it triggers tests against the latest b2g desktop builds. The test jobs download the pre-built desktop binaries, and clone the appropriate pull-request Gaia repo and revision on top, then run the tests. The buildbot work was completed in bug 986209.

    This should allow us to verify our code doesn't break anything before it's merged to Gaia proper, saving human time and reducing code churn.

    Armen pointed out how gaia code changes currently show up in the push count via bumper processes, but that only reflects merged pull requests, not these un-reviewed pull requests. Now that we've turned on gaia-try, this is equivalent to another mozilla-inbound (an additional 10% of our push load, iirc). Our May pushes should see a significant bump.

    escapewindow: escape window (Default)

    This is a long overdue blog post to say that gecko-dev and gecko-projects are fully live and cut over (and renamed, for those who have been looking at the now-defunct links in my previous blog posts).

    Gecko-dev is on and on github; gecko-projects is on github only until it's clear that we won't need regular branch and repo resets. We have retired the following github repos, which we will probably remove after a period of time:

    The mapfiles are still in, which is not the final location. There's a combined mapfile in here for convenience, but the gecko-mapfile is the canonical mapfile.

    I've been working on a db-based mapper app which, given a git or hg sha, would allow us to query for the corresponding sha. It also allows for downloading the full mapfile for a repo (gecko.git, gecko-dev, gecko-projects) or the combined full mapfile of two or more of those repos, as well as the shas inserted since a particular datetime. Screenshots of my development instance are here.

    Bear with me; I've got lots on my plate and not enough time. Mapper is in the works, however, and I think this should smooth over some of the rough edges when dealing with both hg and git. Additional eyeballs and/or patience would be appreciated!

    My db-based mapper repo is here.
    The db-based mapper bug is here.
    The permanent upload location bug is here.

    escapewindow: escape window (Default)
    The most long-term success tends to come from reducing, to the greatest extent possible, the need for agreement and consensus.

    I think we could rephrase "reducing the need for agreement and consensus" as, "increasing our ability to agree-to-disagree and still work well together". I'm going to talk a bit about modularity, forks, and shared APIs.


    modularity

    Catlee was pushing for LWR modularity, to distance us from buildbot's monolithic app architecture. I was as well, to a degree, with standalone clusters and delegating pooling decisions to a SlaveAPI/Mozpool analog. If you take that a step further, you reach Catlee's view of it (as I understand it):

    • minion1 allocation module: this could simply iterate down a list of minions for smaller clusters, or ask SlaveAPI/Mozpool for an appropriate minion in the gecko cluster.
    • the job launching module could use sshd on the minion pool if the scripts have their own logging and bootstrapping; another version could talk to a custom minion-side daemon.
    • The graph-generation and graph-processing pieces could be modular, allowing for different graph- and job- definition formats between clusters.
    • the pending jobs piece, the graphs db + api piece, the event api, and configs pieces as well

    By making these modular, we reduce the monolithic app to compatible LEGO® pieces that can be mixed and matched and forked if necessary. Plus, it allows for easier development in parallel.

    Similarly, we had been talking about versioning each API and the job- and graph- definitions, so we wouldn't always be locked in to being backwards-compatible. That would allow us to make faster decisions early on that wouldn't necessarily have major long-term implications. It's easier to fail early + fail often if decisions are easily reversible.

    This versioning would also allow us to potentially have different workflows or job+graph definitions between LWR clusters, should we need it. The only requirement here would be that each LWR cluster be internally consistent. This could allow for simpler workflows to operate without the overhead of being fully compatible with a complex workflow; similarly, we wouldn't have to shoehorn a complex workflow onto a simpler one.

    1 Earlier I mentioned that I (and others) wanted to move away from the buildbot master/slave terminology. I think we've settled on minions for the compute farm machines. We're a bit mixed on the server side: Overlord? Professor? Mastermind? I'd prefer to think of the server cluster as more of a traffic cop than anything overarchingly powerful and omniscient, but I'm not too picky here.

    cross-cluster communication

    As a bit of [additional] background: Catlee sees LWR as event-based. We would define types of events, and what happens when an event occurs. And I mentioned wanting to support graphs-of-graphs, and hinted at potentially supporting cross-project or cross-corporation architectures here and here 2.

    Why not allow LWR clusters to talk to each other? They certainly would be able to, with events. If events involve hitting a specific server on a specific port, that may be tricky if there's a firewall in the way, but that might be solvable with listeners on the DMZ.

    Why not allow graphs-of-graphs to reference graphs in another LWR cluster? (If that's feasible.) With this workflow, once the dependencies had all finished, we could trigger a subgraph on another LWR cluster, either via an event or directly in the graph API.

    If we're able to read the other cluster's status and/or request a callback event at the end of that subgraph, this could work, and allow for loose coupling between related LWR clusters without requiring their internals be identical.

    2 While re-reading this post, I noticed that I was writing about next-gen, post-buildbot build infrastructure at Mozilla back in January 2009. My picture back then was a lot more RDBMS-based, but a lot of the concepts still hold.

    We put that project on the back burner so we could help Mozilla launch a 1.0. Four 1.0's (or equivalent: Maemo Fennec, Android Fennec, Android native Fennec, B2G) and nearly five years later, it looks like we're finally poised to tackle it, together.

    moving faster

    I tend to want to tackle the hardest piece of a problem first. It's easier to keep the simpler problems in mind when designing solutions for the hardest problem, than vice versa. In LWR's case, the hardest workflow is the existing Gecko builds and tests.

    The initial LWR phase 1 may now involve a smaller scoped, Gaia-oriented, pull request autoland system. This worried me: if I'm concerned about forward-compatibility with Gecko phase 1, will I be forced into a stop-energy role during the Gaia phase 1 brainstorming? That sounds counterproductive for everyone involved. That was before I realized that modularity and cross-cluster communication could help us avoid having to make future-proof decisions at the outset. If we don't have to worry that decisions made during phase 1 are irreversible, we'll be able to take more chances and move faster.

    If we wanted the Gaia workflow to someday be a part of the Gecko dependency graphs, that can happen by either moving those jobs into the Gecko LWR cluster, or we could have the Gecko cluster drive the Gaia cluster.

    Forked LWR clusters aren't ideal. But blocking Gaia LWR progress on hand-wavy ideas of what Gecko might need in the future isn't helpful. The option of avoiding the tyranny of a single shared workflow or manifest definition frees us to brainstorm the best solution for each, and converge later if it makes sense.

    In part 1, I covered where we are currently, and what needs to change to scale up.
    In part 2, I covered a high level overview of LWR.
    In part 3, I covered some hand-wavy LWR specifics, including what we can roll out in phase 1.
    In part 4, I drilled down into the dependency graph.
    In part 5, I covered some thoughts on the jobs and graphs db.
    We met with the A-team about this, a couple times, and are planning on working on this together!
    Now I'm going to take care of some vcs-sync tasks, prep for our next meeting, and start writing some code.

    escapewindow: escape window (Default)
    [10:57] <catlee>    so one thing - I think it may be premature to look at db models before looking at APIs

    I think I agree with that, especially since it looks like LWR may end up looking very different than my initial views of it.

    However, as I noted in part 3's preface, I'm going to go ahead and get my thoughts down, with the caveat that this will probably change significantly. Hopefully this will be useful in the high-level sense, at least.

    jobs and graphs db

    At the bare minimum, this will need graphs and jobs. But right now I'm thinking we may want the following tables (or collections, depending what db solution we end up choosing):

    graph sets

    Graph sets would tie a set of graphs together. If we wanted to, say, launch b2g builds as a separate graph from the firefox mobile and firefox desktop graphs, we could still tie them together as a graph set. Alternately, we could create a big graph that represents everything. The benefit of keeping the separate graphs is that it's easier to reference and retrigger jobs of a type: retrigger all the b2g jobs, or retrigger all the Firefox desktop win32 PGO talos runs.

    graph templates

    I'm still in brainstorm mode for the templates. Having templates for graphs and jobs would let us generate graphs from the web app, giving us a faster UI without having to farm out a job to the graph generation pool to clone a repo and generate a graph. These would have templatized bits, like branch name, that we can fill out to create a graph for a specific branch. It would also be one way we could keep track of changes in customized graphs or jobs (by diffing the template against the actual job or graph).


    graphs

    This would be the actual dependency graphs we use for scheduling, built from the templates or submitted from external requests.

    job templates

    These would work similarly to the graph templates: definitions of how jobs would look once you fill in the branch name and such, which we can easily work with in the webapp to create jobs.


    jobs

    These would have the definitions of jobs for specific graphs, but not the actual job run information -- that would go in the "job runs" table.

    job runs

    I thought of "job runs" as separate from "jobs" because I was trying to resolve how we deal with retriggers and retries. Do we embed more and more information inside the job dictionary? What happens if we want to run a new job Y just like completed job X, but as its own entity? Do we know how to scrub the previous history and status of job X, while still keeping the definition the same? (This is what I meant by "volatile" information in jobs). The "job run" table solves this by keeping the "jobs" table all about definitions, and the "job runs" table has the actual runtime history and status. I'm not sure if I'm creating too many tables or just enough here, right now.

    If you're wondering what you might run, you might care about the graph sets, and {graph,job} templates tables. If you're wondering what has run, you would look at the graphs and job runs tables. If you're wondering if a specific job or graph were customized, you would compare the graph or job against the appropriate template. And if you're looking at retriggering stuff, you would be cloning bits of the graphs and jobs tables.

    (I think Catlee has the job runs in a separate db, as the job queue, to differentiate pending jobs from jobs that are blocked by dependencies. I think they're roughly equivalent in concept, and his model would allow for acting on those pending jobs faster.)

    I think the main thing here is not the schema, but my concerns about retries and retriggers, keeping track of customization of jobs+graphs via the webapp, and reducing the turnaround time between creating a graph in LWR and showing it on the webapp. Not all of these things will be as important for all projects, but since we plan on supporting a superset of gecko's needs with LWR, we'll need to support gecko's workflow at some point.
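
    To make the table split a bit more concrete, here's a minimal sketch of what records in each table might hold, written as plain python dicts. Every field name here is an illustrative guess, not a settled schema.

        # illustrative records only; field names are guesses, not a schema
        graph_set = {
            "graph_set_id": 1,
            "graph_ids": [10, 11],           # e.g. the b2g graph + the firefox graph
        }
        graph_template = {
            "template_id": 5,
            "name": "per-checkin",
            "definition": {"jobs": ["build-{platform}", "test-{platform}-{suite}"]},
        }
        graph = {
            "graph_id": 10,
            "template_id": 5,                # diff against this to spot customizations
            "branch": "mozilla-central",
            "revision": "abcdef123456",
        }
        job_template = {
            "template_id": 50,
            "name": "linux64 build",
            "command": ["make", "-f", "client.mk"],
        }
        job = {
            "job_id": 100,
            "graph_id": 10,
            "template_id": 50,               # definition only; no runtime state here
            "requires": [],                  # upstream job ids
        }
        job_run = {
            "run_id": 1000,
            "job_id": 100,                   # volatile status lives here, per run
            "status": "completed successful",
            "retry_of": None,                # auto-retry vs. user retrigger tracking
        }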

    dependency graph repo

    This repo isn't a mandatory requirement; I see it as a piece that could speed up and [hopefully] streamline the workflow of LWR. It could allow us to:

    • refer to a set of job+graph definitions by a single SHA. That would make it easier to tie graphs- and jobs- templates to the source.
    • easily diff, say, mozilla-inbound jobs against mozilla-central. You can do that if the jobs and graphs definitions live in-tree for each, but it's easier to tell if one revision differs from another revision than if one directory tree in a revision differs from that directory tree in another revision. Even in the same repo: it's hard to tell if m-c revision 2's jobs and graphs have changed from m-c revision 1, without diffing. The jobs-and-graphs repo would [generally] only have a new revision if something has changed.
    • pre-populate sets of jobs and graph templates in the db that people can use without re-generating them.

    There's no requirement that the jobs+graph definitions live in this repo, but having it would make the webapp graph+job creation a lot easier.

    We could create a branch definitions file in-tree (though it could live elsewhere; its location would be defined in LWR's config). The branch definitions could have trychooser-like options to turn parts of the graphs on or off: PGO, nightlies, l10n, etc. These would be referenced by the job and graph definitions: "enable this job if nightlies are enabled in the branch definitions", etc. So in the branch definitions, we could either point to this dependency graph repo+revision, or at a local directory, for the job+graph definitions. In the gecko model, if someone adds a new job type, that would result in a new jobs+graphs repo revision. A patch to point the branch-definitions file at the new revision would land in Try, first, then one of the inbound branches. Then it would get merged to m-c and then out to the other inbound and project branches; then it would ride the trains.
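
    Here's a hypothetical branch-definitions snippet, written as a python dict for illustration (the in-tree file could just as easily be json). All keys, values, and the repo url are made up.

        # hypothetical branch definitions; all names and values are made up
        branch_definitions = {
            "default": {
                "graphs_and_jobs": {
                    # point at a revision of the standalone jobs+graphs repo...
                    "repo": "https://hg.example.com/dependency-graphs",
                    "revision": "0123456789ab",
                    # ...or at a local directory instead
                    # "path": "testing/lwr/graphs",
                },
                "pgo": False,
                "nightlies": False,
                "l10n": False,
            },
            "mozilla-central": {
                "nightlies": True,
                "l10n": True,
            },
            "try": {
                # trychooser-like toggles would be applied on top of this
            },
        }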

    (Of course, in the above model, there's the issue of inbound having different branch definitions than mozilla-central, which is why I was suggesting we have overrides by branch name. I'm not sure of a better solution at the moment.)

    The other side of this model: when someone pushes a new jobs+graphs revision, that triggers a "generate new jobs+graphs templates" job. That would enter new jobs+graphs for the new SHA. Then, if you wanted to create a graph or job based on that SHA, the webapp doesn't have to re-build those from scratch; it has them in the db, pre-populated.

    chunks and customizations

    • For chunked jobs (most test jobs, l10n), we were brainstorming about having dynamic numbers of chunks. If only a single machine is free, it would grab the first chunk, finish, then ask LWR if there are more chunks to run. If only a handful of machines are available, LWR could lean towards starting min_chunks jobs. If there are many machines idle, we can trigger max_chunks jobs in parallel. I'm not sure how plausible this is, but this could help if it doesn't add too much complexity; there's a rough sketch of this after the list.
    • I think that while users should be able to adjust graph- or job-priority, we should have max priorities set per-branch or per-user, so we can keep chemspill release builds at the highest priority.

      Similarly, I think we should allow customization of jobs via the webapp on certain branches, but

      1. we should mark them as customized (by, say, a flag, plus the diff between the template and the job), and
      2. we need to prevent customizing, say, nightly builds that get sent to users, or release builds.

      This causes interesting problems when we want to clone a job or graph: do we clone a template, or the customized job or graph that contains volatile information? (I worked around this in my head by creating the schema above.)

      Signed graphs and jobs, per-branch restrictions, or separate LWR clusters with different ACLs, could all factor into limiting what people can customize.
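
    Here's the dynamic-chunking idea from the first bullet above as a rough sketch. The function and its inputs (idle machine count, min_chunks, max_chunks) are hypothetical; the real logic would live wherever LWR makes its scheduling decisions.

        def chunks_to_start(pending_chunks, idle_machines, min_chunks, max_chunks):
            """Hypothetical chunk allocation: start few chunks when machines
            are scarce, up to max_chunks when the pool is mostly idle."""
            if not pending_chunks or idle_machines <= 0:
                return 0
            if idle_machines == 1:
                # a lone free machine grabs one chunk, then asks for more when done
                return 1
            if idle_machines < min_chunks:
                return min(idle_machines, len(pending_chunks))
            return min(max_chunks, idle_machines, len(pending_chunks))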

    retries and retriggers

    For retries, we need to track max [auto] retries, as well as job statuses per run. I played with this in my head: do we keep track of runs separately from job definitions? Or clone the jobs, in which case volatile status and customizing jobs become more of an issue? Keeping the job runs separate, and specifying whether they were an auto-retry or a user-generated retrigger, could help in preventing going beyond max-auto-retries.

    Retriggers themselves are somewhat problematic if we mark jobs as skipped due to dependencies: if a job that was unsuccessful is retriggered and exits successfully, do we revisit previously skipped jobs? Or do we clone the graph when we retrigger, and keep all downstream jobs pending-blocked-by-dependencies? Does this graph mark the previous graph as a parent graph, or do we mark it as part of the same graph set?

    (This is less of a problem currently, since build jobs run sendchanges in that cascading-waterfall type scheduling; any time you retrigger a build, it will retrigger all downstream tests, which is useful if that's what you want. If you don't want the tests, you either waste test machine time or human time in cancelling the tests. Explicitly stating what we want in the graph is a better model imo, but forces us to be more explicit when we retrigger a job and want the downstream jobs.)

    Another possibility: I thought that instead of marking downstream jobs as skipped-due-to-dependencies, we could leave them pending-blocked-by-dependencies until they either see a successful run from their upstream dependencies, or hit their TTL (request timeout). This would remove some complexity in retriggering, but would leave a lot of pending jobs hanging around that could slow down the graph processing pool and skew our 15 minute wait time SLA metrics.

    I don't think I have the perfect answers to these questions; a lot of my conclusions are based on the scope of the problem that I'm holding in my head. I'm certain that some solutions are better for some workflows and others are better for others. I think, for the moment, I came to the soft conclusion of a hand-wavy retriggering-portions-of-the-graph-via-webapp (or web api call, which does the equivalent).

    A random semi-tangential thought: whichever pending- or skipped- solution we choose will probably significantly affect our 15 minute wait time SLA metrics anyway; it may also provide more useful metrics, like end-to-end times. After we move to this model, we may want to revisit our metric of record.

    (This doesn't really have anything to do with the db, but I had this thought recently, and didn't want to save it for a part 6.)

    While discussing this with the Gaia, A-Team, and perf teams, it became clear that we may need a solution for other projects that want to run simple jobs that aren't ported to mozharness. Catlee was thinking of an equivalent to the buildbot buildslave process: this would handle logging, uploads, status notifications, etc., without requiring a constant connection to the master. Previously I had worked with daemons like this that spawned jobs on the build farm, but then moved to launching scripts via sshd as a lower-maintenance solution.

    The downsides of no daemon include having to solve the mach context problem on macs, having to deal with sshd on windows, and needing a remote logging solution in the scripts. The upsides include being able to add machines to the pool without requiring the daemon, avoiding platform-specific issues with writing and maintaining the daemon(s), and the fact that script changes are faster to roll out (and more granular) than upgrading a daemon across an entire pool.

    If we create a generic script under mozharness/scripts/ that takes a set of commands to run against a repo+revision (or set of repos+revisions), with pre-defined metrics for success and failure, then simpler processes don't need their own mozharness script; we could wrap their commands in it and get all the logging, error parsing, and retry logic already defined in mozharness.
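
    A very rough sketch of what such a generic command-running script might look like; the class name and arguments are made up, and a real version would reuse mozharness' existing logging, error parsing, and retry mixins rather than reimplementing them with subprocess.

        # hypothetical generic command runner; names are illustrative only
        import subprocess

        class RunCommands(object):
            def __init__(self, repo, revision, commands, max_retries=1):
                self.repo = repo
                self.revision = revision
                self.commands = commands     # e.g. [["./configure"], ["make", "-j4"]]
                self.max_retries = max_retries

            def checkout(self, dest="checkout"):
                subprocess.check_call(["hg", "clone", "-u", self.revision,
                                       self.repo, dest])
                return dest

            def run(self):
                dest = self.checkout()
                for command in self.commands:
                    for _attempt in range(self.max_retries + 1):
                        status = subprocess.call(command, cwd=dest)
                        if status == 0:
                            break
                    else:
                        # pre-defined failure metric: nonzero exit after retries
                        return status
                return 0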

    I don't think we should rule out either approach just yet. With the generic-script idea, both approaches seem viable at the moment.

    In part 1, I covered where we are currently, and what needs to change to scale up.
    In part 2, I covered a high level overview of LWR.
    In part 3, I covered some hand-wavy LWR specifics, including what we can roll out in phase 1.
    In part 4, I drilled down into the dependency graph.
    We met with the A-team about this, a couple times, and are planning on working on LWR together!
    Now I'm going to take care of some vcs-sync tasks, prep for this next meeting, and start writing some code.

    escapewindow: escape window (Default)

    I already wrote a bit about the dependency graph here, and :catlee wrote about it here. While I was writing part 2, it became clear that

    1. I had a lot more ideas about the dependency graph, enough for its own blog post, and
    2. since I want to tackle writing the dependency graph first, solidifying my ideas about it beforehand would be beneficial to writing it.

    I've been futzing around with graphviz with :hwine's help. Not half as much fun as drawings on napkins, but hopefully they make sense. I'm still thinking things through.

    jobs and graphs

    A quick look at TBPL was enough to convince me that the dependency graph would be complex enough just describing the relationships between jobs. The job details should be separate. Per-checkin, nightly, and periodic-PGO dependency graphs trigger overlapping sets of jobs, so avoiding duplicate job definitions is a plus.

    We'll need to submit both the dependency graph and the associated job definitions to LWR together. More on how I think jobs and graphs could work in the db below in part 5.

    For phase 1, I think job definitions will only cover enough to feed into buildbot and have them work.

    dummy jobs

    • In my initial dependency graph thoughts, I mentioned breakpoint jobs as a throwaway idea, but it's stuck with me.

      We could use these at the beginning of graphs that we want to view or edit in the web app before proceeding. Or if we submit an experimental patch to Try and want to verify the results after a specific job or set of jobs before proceeding further. Or if we want to represent QA signoff in a release graph, and allow them to continue the release via the web app.

      I imagine we would want a request timeout on this breakpoint, after which it's marked as timed out, and all child jobs are skipped. I imagine we'd also want to set an ACL on at least a subset of these, to limit who can sign off on releases.

      Also in releases, we have simple notification jobs that send email when the release has passed certain milestones. We could later potentially support IRC pings and bug comments.

      A highly simplified representation of part of a release:

      We currently continue the release via manual Release Engineering intervention, after we see an email "go". It would be great to represent it in the dependency graph and give the correct group of people access. Less RelEng bottleneck.
    • We could also have timer jobs that pause the graph until either cancelled or the timeout is hit. So if you want this graph to run at 7pm PST, you could schedule the graph with an initial timer job that marks itself successful at 7, triggering the next steps in the graph.
    • In buildbot, we currently have a dummy factory that sleeps 5 and exits successfully. We used this back in the dark ages to skip certain jobs in a release, since we could only restart the release from the beginning; by replacing long-running jobs with dummy jobs, we could start at the beginning and still skip the previously successful portions of the release.

      We could use dummy jobs to:

      1. simplify the relationships between jobs. In the above graph, we avoided a many-to-many relationship by inserting a notification job in between the linux jobs and the updates.
      2. trigger when certain groups of jobs finish (e.g. all linux64 mochitests), so downstream jobs can watch for the dummy job in Pulse rather than having to know how many chunks of mochitests we expect to run, and keep track as each one finishes.
      3. quickly test dependency graph processing: instead of waiting for a full build or test, replace it with a dummy job. For instance, we could set all the jobs of a type to "success" except one "timed out; retry" to test max retry limits quickly. This assumes we can set custom exit statuses for each dummy job, as well as potentially pointing at pre-existing artifact manifest URLs for downstream jobs to reference.

    Looking at this list, it appears to me that timer and breakpoint jobs are pretty close in functionality, as are notification and dummy (status?) jobs. We might be able to define these in one or two job types. And these jobs seem simple enough that they may be runnable on the graph processing pool, rather than calling out to SlaveAPI/MozPool for a new node to spawn a script on.


    statuses

    At first glance, it's probably easiest to reuse the set of TBPL statuses: success, warning, failure, exception, retry. But there are also the grey statuses 'pending' and 'running'; the pink status 'cancelled'; and the statuses 'timed out' and 'interrupted', which are subsets of the first five statuses.

    Some statuses I've brainstormed:

    • inactive (skipped during scheduling)
    • request cancelled
    • pending blocked by dependencies
    • pending blocked by infrastructure limits
    • skipped due to coalescing
    • skipped due to dependencies
    • request timed out
    • running
    • interrupted due to user request
    • interrupted due to network/infrastructure/spot instance interrupt
    • interrupted due to max runtime timeout
    • interrupted due to idle time timeout (no output for x seconds)
    • completed successful
    • completed warnings
    • completed failure
    • retried (auto)
    • retried (user request)

    The "completed warnings" and "completed failure" statuses could be split further into "with crash", "with memory leak", "with compilation error", etc., which could be useful to specify, but are job-type-specific.

    If we continue as we have been, some of these statuses are only detectable by log parsing. Differentiating these statuses allows us to act on them in a programmatic fashion. We do have to strike a balance, however. Adding more statuses to the list later might force us to revisit all of our job dependencies to ensure the right behavior with each new status. Specifying non-useful statuses at the outset can lead to unneeded complexity and cruft. Perhaps 'state' could be separated from 'status', where 'state' is in the set ('inactive', 'pending', 'running', 'interrupted', 'completed'); we could also separate 'reasons' and 'comments' from 'status'.
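
    As a sketch of that state/status/reason split, using the brainstormed values above (none of this is a finalized schema):

        # sketch: separate 'state' from 'status', 'reason', and 'comment'
        STATES = ("inactive", "pending", "running", "interrupted", "completed")

        example_job_runs = [
            {"state": "pending", "reason": "blocked by dependencies"},
            {"state": "interrupted", "reason": "spot instance interrupt"},
            {"state": "completed", "status": "warnings", "comment": "with memory leak"},
            {"state": "completed", "status": "failure", "comment": "with crash"},
        ]

        def is_finished(job_run):
            return job_run["state"] in ("completed", "interrupted")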

    Timeouts are split into request timeouts or runtime timeouts (idle timeouts, max job runtime timeouts). If we hit a request timeout, I imagine the job would be marked as 'skipped'. I also imagine we could mark it as 'skipped successful' or 'skipped failure' depending on configuration: the former would work for timer jobs, especially if the request timeout could be specified by absolute clock time in addition to relative seconds elapsed. I also think both graphs and jobs could have request timeouts.

    I'm not entirely sure how to coalesce jobs in LWR, or if we want to. Maybe we leave that to graph and job prioritization, combined with request timeouts. If we did coalesce jobs, that would probably happen in the graph processing pool.

    For retries, we need to track max [auto] retries, as well as job statuses per run. I'm going to go deeper into this below in part 5.


    relationships

    For the most part, I think relationships between jobs can be shown by the following flowchart:

    If we mark job 2 as skipped-due-to-dependencies, we need to deal with that somehow if we retrigger job 1. I'm not sure if that means we mark job 2 as "pending-blocked-by-dependencies" if we retrigger job 1, or if the graph processing pool revisits skipped-due-to-dependencies jobs after retriggered jobs finish. I'm going to explore this more in part 5, though I'm not sure I'll have a definitive answer there either.

    It should be possible, at some point, to block the next job until we see a specific job status:

    • don't run until this dependency is finished/cancelled/timed out
    • don't run unless the dependency is finished and marked as failure
    • don't run unless the dependency is finished and there's a memory leak or crash

    For the most part, we should be able to define all of our dependencies with this type of relationship: block this job on (job X1 status Y1, job X2 status Y2, ...). A request timeout with a predefined behavior-on-expiration would be the final piece.
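
    A minimal sketch of how block-this-job-on-(job, status) relationships could be expressed; the field names and the example job are hypothetical.

        # hypothetical dependency definition: block this job on (job, statuses)
        job = {
            "name": "notify on build failure",
            "requires": [
                {"job": "linux64 build", "statuses": ["completed failure"]},
            ],
            "request_timeout": 24 * 60 * 60,      # seconds
            "on_request_timeout": "skipped",
        }

        def ready_to_run(job, finished_jobs):
            """finished_jobs: dict of job name -> final status."""
            for dep in job["requires"]:
                if finished_jobs.get(dep["job"]) not in dep["statuses"]:
                    return False
            return True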

    I could see more powerful commands, like "cancel the rest of the [downstream?] jobs in this graph", or "retrigger this other job in the graph", or "increase the request timeout for this other job", being potentially useful. Perhaps we could add those to dummy status jobs. I could also see them significantly increasing the complexity of graphs, including the potential for infinite recursion in some constructs.

    I think I should mark any ideas that potentially introduce too much complexity as out of scope for phase 1.

    branch specific definitions

    Since job and graph definitions will be in-tree, riding the trains, we need some branch-specific definitions. Is this a PGO branch? Are nightlies enabled on this branch? Are all products and platforms enabled on this branch?

    This branch definition config file could also point at a revision in a separate, standalone repo for its dependency graph + job definitions, so we can easily refer to different sets of graph and job definitions by SHA. I'm going to explore that further in part 5.

    I worry about branch merges overwriting branch-specific configs. The inbound and project branches have different branch configs than mozilla-central, so it's definitely possible. I think the solution here is a generic branch-level config, plus an optional branch-named file; if the branch-named file doesn't exist, use the generic default (e.g. generic.json vs. mozilla-inbound.json). I know others disagree with me here, but I feel pretty strongly that human decisions need to be reduced or removed at merge time.
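
    A tiny sketch of that fallback, assuming the branch-level configs are json files named as in the example above; the function name is made up.

        import json
        import os

        def load_branch_config(config_dir, branch):
            """Use <branch>.json if it exists, else fall back to generic.json,
            so merges don't require human decisions about branch configs."""
            branch_file = os.path.join(config_dir, "%s.json" % branch)
            if not os.path.exists(branch_file):
                branch_file = os.path.join(config_dir, "generic.json")
            with open(branch_file) as f:
                return json.load(f)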

    graphs of graphs

    I think we need to support graphs-of-graphs. B2G jobs are completely separate from Firefox desktop or Fennec jobs; they only start with a common trigger. Similarly, win32 desktop jobs have no real dependencies on macosx desktop jobs. However, it's useful to refer to them as a single set of jobs, so if graphs can include other graphs, we could create a superset graph that includes the appropriate product- and platform- specific graphs, and trigger that.

    If we have PGO jobs defined in their own graph, we could potentially include it in the per-checkin graph with a branch config check. On a per-checkin-PGO branch, the PGO graph would be included and enabled in the per-checkin graph. Otherwise, the PGO graph would be included, but marked as inactive; we could then trigger those jobs as needed via the web app. (On a periodic-PGO branch, a periodic scheduler could submit an enabled PGO graph, separate from the per-checkin graph.)

    It's not immediately clear to me if we'll be able to depend on a specific job in a subgraph, or if we'll only be able to depend on the entire subgraph finishing. (For example: can an external graph depend on the linux32 mochitest-2 job finishing, or would it need to wait until all linux32 jobs finish?) Maybe named dummy status jobs will help here: graph1.start, graph1.end, graph1.builds_finished, etc. Maybe I'm overthinking things again.

    We need a balancing act between ease of reading and ease of writing; ease of use and ease of maintenance. We've seen the mess a strong imbalance can cause, in our own buildbot configs. The fact that we're planning on making the final graph easily viewable and testable without any infrastructure dependencies helps, in this regard.

    I think our [to-be-written] dependency graph generator may need to cover several use cases:

    • Create a graph in an api-submittable format. This may be all we do in phase 1, but the others are tempting...
    • Combine graphs as needed, with branch-specific definitions, and user customizations (think TryChooser and per-product builds).
    • Verify that this is a well-formed graph.
    • Run other graph unit tests, as needed.
    • Potentially output graphviz files for user-friendly local graph visualization?
    • It's unclear if we want it to also do the graph+job submitting to the api.

    I think the per-checkin graph would be good to build first; the nightly and PGO graphs, as well as the branch-specific defines, might also be nice to have in phase 1.

    I have 4 more sections I wrote skeletons for. Since those sections are more db-oriented, I'm going to move those into a part 5.

    In part 1, I covered where we are currently, and what needs to change to scale up.
    In part 2, I covered a high level overview of LWR.
    In part 3, I covered some hand-wavy LWR specifics, including what we can roll out in phase 1.
    In part 5, I'm going to cover some dependency graph db specifics.
    Now I'm going to meet with the A-team about this, take care of some vcs-sync tasks, and start writing some code.

    escapewindow: escape window (Default)


    Since I wrote my previous blog post on LWR, I've found/been sent/been reminded of a few links:

    • Automating away operations in production deployments:
    • Other job scheduling systems, which I plan on digging into later. These may or may not work for us:
      • From comments in my previous blog post, JobScheduler, which may or may not scale to the degree we would need it to.
      • And this article that talks about Google Borg and Apache Mesos based solutions, which definitely can scale. It's not immediately clear to me if the compute cluster model lends itself to a heterogeneous compute farm where it's mandatory we be able to target specific nodes for specific tasks (as opposed to finding free resources of any type). It's also unclear at first blush whether they would lend themselves to Windows, OSX, ARM device, or other hardware-based nodes, or are explicitly linux/cloud specific.

    I'm going to keep writing this third LWR blog post as planned, since I think we failed to explain things clearly to our newer team members during the team week. Also, recording our recent brainstorming may help us decide if these other scheduling systems would work for us, and help guide any customizations we may want to make, should we choose to use them.

    whiteboard schematics

    The drawing is either really important, or just a brainstorm first draft. I'm not sure which yet.

    event api
    This would allow specific events to trigger pre-defined behavior. New push, new graph, job start, job finish, timer.
    lwr config
    This would specify the above pre-defined behavior, as well as some timers for nightly/periodic/scheduled tasks.
    graph api
    This would allow for reading/inserting/updating dependency graphs into the system.
    event processor
    This would trigger graph generation and graph processing jobs during phase 1.
    web app
    As described here; most likely only a subset of those features would exist in phase 1.
    graphs db
    This would hold the graphs and jobs. (We might want a different name than 'graphs' for this db, to avoid confusion with graphserver.)
    graph generation pool
    This would generate the graphs using pre-defined defaults. We'd like the defaults for the build/test graphs to live in-tree. We would potentially need TryChooser, per-product (only kick off certain builds depending on which files have been changed), and other customization logic here.
    graph processing pool
    This would read the graphs and determine the next steps. Dependency status checking, triggering next jobs when appropriate, marking the rest of the graph as skipped/timed out/etc if needed.

    We'd like to keep the server-side as dumb as possible. The fewer changes that need to be made there, the more stable it will be. By moving logic and configs off the server, we can make more complex and granular changes without touching the servers. We've already seen the effects of keeping all the logic, all the configs, all the scheduling on the buildbot masters, and we want the opposite.

    We'd like a small server to be runnable on a single machine, so we can test and debug changes on our laptops. Versioned- or backwards-compatible APIs may allow us to upgrade half a production cluster, bring that live, then upgrade the second half. If we're easily able to spin up a new standalone cluster, we can easily support different workflows/audiences like staging-vs-production, standalone project branch "pods", or experimental small project support.

    slaveapi, mozpool, and network logging

    Currently, buildbot has a list of buildslaves per builder, and keeps track of which ones are currently connected to the buildbot master. :catlee then tweaks the nextSlave logic to prefer faster buildslaves over slower, or spot instances over reserved instances (or to use reserved instances if the same job was run on an interrupted spot instance), or the last buildslave that successfully ran that particular job (to improve depend build times). Buildbot doesn't have any concept of how healthy the buildslave is, or how to maintain the buildslaves, and requires that we make any pooling or nextSlave decisions ahead of time and load them into the running buildbot masters via a reconfig. Plus, the master<->buildslave communication requires an uninterrupted network connection, which gives us streaming logs, but adds network fragility.

    SlaveAPI is designed to handle some of the above issues: determine the health of a slave, reboot a slave, or mark it as disabled. In the future it may allow us to spin up and spin down AWS instances, and reimage hardware slaves.

    MozPool is currently limited to Android Pandas, and allows for health checks on Pandas, rebooting Pandas, and IIRC reimaging Pandas. A job that requires a Panda would be able to request [a healthy] one from the pool, run the job, and return it to the pool or mark it as bad.

    With a combination of the two, LWR could request a node with certain properties (with tags, maybe?): any machine; any linux/osx machine; the fastest linux machine available with build tools; or specifically by hostname. If LWR also passed the history and details of the job along with the request, the SlaveAPI/MozPool analog could make decisions like spot instance or reserved instance; fast or slow; most recently successfully built that tree so a depend build would be fast. And we might be able to keep that complexity out of LWR.
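
    As a sketch, the request LWR hands to the SlaveAPI/MozPool analog might look something like this; the tag and field names are invented for illustration.

        # hypothetical node request; tags and fields are invented
        node_request = {
            "tags": ["linux64", "build-tools"],   # or ["any"], or a specific hostname
            "prefer": "fastest",
            "job_history": {
                "name": "linux64 build mozilla-inbound",
                "last_successful_node": "bld-linux64-ec2-023",
                "last_run_interrupted": False,    # spot vs. reserved hinting
            },
        }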

    We'd like to be able to spawn the job and detach, to remove the need for an uninterrupted network connection. For status, it might be nice to be able to tail the log on demand, and/or add network logging support to mozharness (via a MultiNetworkLogger class, perhaps). This would probably require a network log cluster of some sort. Someone else suggested that we be able to toggle the network logging on at will, but otherwise default to off, to reduce network traffic. I'm not entirely sure how to do this, but given a trigger we could replay the log from disk over the network, and continue to stream the log as it came in, once the network logging had caught up.

    We could also take this opportunity to move away from the buildbot master/slave terminology, to... perhaps server/node? farmer/cow? :) Technically this wouldn't matter at all. Semantically it does.

    artifact manifests

    Currently, we upload various things from build jobs: installers, crashsymbols, test zips. The build jobs then guess which binary is the installer and sendchange the installer and test zip urls, triggering tests. The buildbot master then uploads the build logs, sometimes to a completely different directory than the installer, which can cause issues with TBPL and other downstream consumers. The test jobs take the installer and test zip urls from the sendchange, and use those to download and install the binary, extract the tests, and run them. Sometimes they need other files (crashsymbols, the robocop apk), so we apply a regex to the installer url to guess the other urls, causing all sorts of fun when this doesn't work. In a similar vein, we download previous MARs to generate partial updates. However, the mar files contain version numbers, causing more fun when we try to guess the filename after a version bump.

    Call me a buzzkill, but I'd like to eliminate all this fun guesswork in favor of a boring and predictable solution. I'd like an artifact manifest. A structured artifact manifest, with versioned manifest formats, so we know how to read them. And while I think it's ok to have a section of the manifest for dumping random blobs of information, if portions of those become generally useful, we should probably put those in the structured area in the next version of the manifest format.

    The manifest would definitely contain naming information for the various artifacts uploaded, as well as what they are. If mozharness jobs uploaded their own logs, they would more predictably live with the other artifacts, and be specified in the manifest. We could also include job status and uid and other such information. Dependent jobs could then act on all of that information without guessing, given only the location of the manifest. This also reduces the amount of information that LWR has to transfer between jobs... and may satisfy :sfink's request for more structured schema for downstream jobs.
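
    Here's a hypothetical artifact manifest, written as a python dict for illustration; the format, field names, and file names are invented to show the idea, not a spec.

        # hypothetical, versioned artifact manifest; everything here is invented
        artifact_manifest = {
            "manifest_version": 1,
            "job": {
                "uid": "c0ffee-1234",
                "status": "completed successful",
            },
            "artifacts": {
                "installer": {"path": "firefox-28.0a1.en-US.linux-x86_64.tar.bz2"},
                "tests": {"path": "firefox-28.0a1.en-US.linux-x86_64.tests.zip"},
                "crashsymbols": {
                    "path": "firefox-28.0a1.en-US.linux-x86_64.crashreporter-symbols.zip",
                },
                "log": {"path": "logs/build.log"},
            },
            # free-form blob section; promote fields to the structured area
            # in a later manifest_version if they become generally useful
            "extra": {},
        }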

    phase one

    We can't write and roll all of this out at once. Besides the magnitude of work represented by this handful of blog posts, we also have existing dependencies on buildbot that we don't yet have replacements for. For phase one, we're picturing the graph processing pool sending jobs into buildbot, probably via the db.

    First we should build the dependency graph for our existing build and test jobs. If I were to tackle one piece first, this would be it, because it's a single script with no infrastructure dependencies. It's easy to verify afterwards, by comparing the output to our existing TBPL runs. Normalizing the builder names would help here.

    Then we could feed that graph into self serve, potentially allowing us to more easily trigger individual builds (we currently use regexes, iirc). Tests and repacks may be trickier, since they expect additional information to come via sendchange and buildbot properties, but that's a start.

    Next we could start writing the server pieces -- trigger polling, graph generation, iterate through the graph. Any web app work could start here. This isn't strictly blocked by the self-serve implementation, so if more people chipped in we could work on those in parallel.

    We could then start feeding the jobs from the graph into buildbot, and disable the respective buildbot polling and scheduling.

    Once we got this far, we could also look into moving certain jobs that are already ported to mozharness out of buildbot and into a pure LWR implementation. That may depend on a streaming log solution or artifact manifest solution. This might belong in phase 2.

    I've been both excited and nervous writing about LWR. Excited, since I'm bursting with ideas for the project. Nervous, because so much of it extends outside of my domain of expertise; because it's a huge project; because portions of it are still nebulous concepts in my head. I think we have the team(s) to build it, though. And since I think best about projects when I write [about] them, these blog posts have helped focus my ideas. To get a first draft down that we can revise later.

    In part 1, I covered where we are currently, and what needs to change to scale up.
    In part 2, I covered a high level overview of LWR.
    In part 4, I'm going to drill down into the dependency graph.
    Then I'm going to meet with the A-team about this, and start writing some code.

    escapewindow: escape window (Default)

    compute farm

    I think of all the ideas we've brainstormed, the one I'm most drawn to is the idea that our automation infrastructure shouldn't just be a build farm feeding into a test farm. It should be a compute farm, capable of running a superset of tasks including, but not restricted to, builds and tests.

    Once we made that leap, it wasn't too hard to imagine the compute farm running its own maintenance tasks, or doing its own dependency scheduling. Or running any scriptable task we need it to.

    This perspective also guides the schematics; generic scheduling, generic job running. This job only happens to be a Firefox desktop build, a Firefox mobile l10n repack, or a Firefox OS emulator test. This graph only happens to be the set of builds and tests that we want to spawn per-checkin. But it's not limited to that.

    dependency graph (cf.)

    Currently, when we detect a new checkin, we kick off new builds. When they successfully upload, they create new dependent jobs (tests), in a cascading waterfall scheduling method. This works, but is hard to predict, and it doesn't lend itself to backfilling of unscheduled jobs, or knowing when the entire set of builds and tests have finished.

    Instead, if we create a graph of all builds and tests at the beginning, with dependencies marked, we get these nice properties:

    • Scheduling changes can be made, debugged, and verified without actually needing to hook it up into a full system; the changes will be visible in the new graph.
    • It becomes much easier to answer the question of what we expect to run, when, and where.
    • If we initially mark certain jobs in the graph as inactive, we can backfill those jobs very easily, by later marking them as active.
    • We are able to create jobs that run at the end of full sets of builds and tests, to run analyses or cleanup tasks. Or "smoketest" jobs that run before any other tests are run, to make sure what we're testing is worth testing further. Or "breakpoint" jobs that pause the graph before proceeding, until someone or something marks that job as finished.
    • If the graph is viewable and editable, it becomes much easier to toggle specific jobs on or off, or requeue a job with or without changes. Perhaps in a web app.
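
    To make this concrete, here's a minimal sketch of a per-checkin graph with dependencies marked, including the inactive/backfill toggle; the job names and fields are illustrative only.

        # illustrative per-checkin graph; names and fields are made up
        graph = {
            "revision": "abcdef123456",
            "jobs": {
                "smoketest": {"requires": []},
                "linux64 build": {"requires": ["smoketest"]},
                "linux64 mochitest-1": {"requires": ["linux64 build"]},
                "linux64 mochitest-2": {"requires": ["linux64 build"],
                                        "active": False},  # backfill by flipping this
                "all tests finished": {"requires": ["linux64 mochitest-1",
                                                    "linux64 mochitest-2"]},
            },
        }

        def runnable_jobs(graph, finished):
            """Jobs whose upstream dependencies have all finished."""
            return [name for name, job in graph["jobs"].items()
                    if job.get("active", True)
                    and name not in finished
                    and all(dep in finished for dep in job["requires"])]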

    web app

    The dependency graph could potentially be edited, either before it's submitted, or as runtime changes to pending or re-queued jobs. Given a user-friendly web app that allows you to visualize the graph, and drill down into each job to modify it, we can make scheduling even more flexible.

    • TryChooser could go from a checkin-comment-based set of flags to something viewable and editable before you submit the graph. Per-job toggles, certainly (just mochitest-3 on windows64 debug, please, but mochitest-2 through 4 on the other platforms).
    • If the repository + revision were settable fields in the web app, we could potentially get rid of the multi-headed Try repository altogether (point to a user repo and revision, and build from there).
    • Some project branches might not need per-checkin or nightly jobs at all, given a convenient way to trigger builds and tests against any revision at will.
    • Given the ability to specify where the job logic comes from (e.g., mozharness repo and revision), people working on the automation itself can test their changes before rolling them out, especially if there are ways to send the output of jobs (job status, artifact uploads, etc.) to an alternate location. This vastly reduces the need for a completely separate "staging" area that quickly falls out of date. Faster iteration on automation, faster turnaround.

    community job status

    One feature we lost with the Tinderbox EOL was the ability for any community member to contribute job status. We'd like to get it back. It's useful for people to be able to set up their own processes and have them show up in TBPL, or other status queries and dashboards.

    Given the scale we're targeting, it's not immediately clear that a community member's machine(s) would be able to make a dent in the pool. However, other configurations not supported by the compute farm would potentially have immense value: alternate toolchains. Alternate OSes. Alternate hardware, especially since the bulk of the compute farm will be virtual. Run your own build or test (or other job) and send your status to the appropriate API.

    As for LWR dependency graphs potentially triggering community-run machines: if we had jobs that are useful in aggregate, like a SETI at home communal type job, or intermittent test runner/crasher type jobs, those could be candidates. Or if we wanted to be able to trigger a community alternate-configuration job from the web app. Either a pull-not-push model, or a messaging model where community members can set up listeners, could work here.

    Since we're talking massive scale, if the jobs in question are runnable on the compute farm, perhaps the best route would be contributing scripts to run. Releng-as-a-Service.


    Releng-as-a-Service

    Release Engineering is a bottleneck. I think Ted once said that everyone needs something from RelEng; that's quite possibly true. We've been trying to reverse this trend by empowering others to write or modify their own mozharness scripts: the A-team, :sfink, :gaye, :graydon have all been doing so. More bandwidth. Less bottleneck.

    We've already established that compute load on a small subset of servers doesn't work as well as moving it to the massively scalable compute farm. This video on leadership says the same thing, in terms of people: empowering the team makes for more brain power than bottlenecking the decisions and logic on one person. Similarly, empowering other teams to update their automation at their own pace will scale much better than funneling all of those tasks into a single team.

    We could potentially move towards a BYOS (bring your own script) model, since other teams know their workflow, their builds, their tests, their processes better than RelEng ever could. :catlee's been using the term Releng-as-a-Service for a while now. I think it would scale.

    I would want to allow for any arbitrary script to run on our compute farm (within the realms of operational-, security-, and fiscal- sanity, of course). Comparing talos performance numbers looking for regressions? Parsing logs for metrics? Trying to find patterns in support feedback? Have a whole new class of thing to automate? Run it on the compute farm. We'll help you get started. But first, we have to make it less expensive and complex to schedule arbitrary jobs.

    This is largely what we talked about, on a high level, both during our team week and over the years. A lot of this seems very blue sky. But we're so much closer to this as a reality than we were when I was first brainstorming about replacing buildbot, 4-5 years ago. We need to work towards this, in phases, while also keeping on top of the rest of our tasks.

    In part 1, I covered where we are currently, and what needs to change to scale up.
    In part 3, I'm going to go into some hand-wavy LWR specifics, including what we can roll out in phase 1.
    In part 4, I'm going to drill down into the dependency graph.
    Then I'm going to start writing some code.
