generically packaged tools
Feb. 18th, 2016 01:24 pm
- tl;dr
- internal benefits to generically packaged tools
- learning from history: tinderbox
- mozharness and scriptharness
- treeherder and taskcluster
- other tools
tl;dr
One of the reasons I'm back at Mozilla is to work in-depth with some exciting new tools and infrastructure, at scale. However, I wish these tools could be used equally well by employees and non-employees. I see some efforts to improve this. But if this is really important to us, we need to make it a point of emphasis.
If we do this, we can benefit from a healthier, extended community. There are also internal benefits to making our tools packaged in a generic way. I'll go into these in the next section.
I did start to contact some tool maintainers, and so far the response is good. I'll continue doing so. Hopefully I can write a followup blog post about how efforts are under way to make generically packaged tools a reality.
internal benefits to generically packaged tools
Besides the strengthened community, there are other internal benefits.
- upgrades
Once installation is packaged and automated, an upgrade to a service might be:
- spin up a new service
- test it
- send over some traffic (applicable if the service is load balanced)
- go/no-go
- cut over to the appropriate service and turn off the other one.
This entire process can be fully automated. Once this process is smooth enough, upgrading a service can be seamless and relatively worry free.
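A minimal sketch of what that automation could look like; every helper here is a hypothetical stand-in for real provisioning and load-balancer tooling, not any existing API:

```python
# Hypothetical upgrade driver: every helper below is a stand-in for real
# provisioning and load balancer tooling, not an existing API.

def spin_up(service, version):
    print("provisioning %s %s from the automated install" % (service, version))
    return {"service": service, "version": version}

def smoke_test(instance):
    print("smoke testing %s" % instance["version"])
    return True  # pretend the tests passed

def shift_traffic(instance, percent):
    print("load balancer: %d%% of traffic -> %s" % (percent, instance["version"]))

def tear_down(instance):
    print("shutting down %s" % instance["version"])

def upgrade(old, new_version, canary_percent=10):
    new = spin_up(old["service"], new_version)  # 1. spin up a new service
    if not smoke_test(new):                     # 2. test it
        tear_down(new)
        return old                              #    no-go: old service untouched
    shift_traffic(new, canary_percent)          # 3. send over some traffic
    if not smoke_test(new):                     # 4. go/no-go on live traffic
        shift_traffic(new, 0)
        tear_down(new)
        return old
    shift_traffic(new, 100)                     # 5. cut over ...
    tear_down(old)                              #    ... and turn off the other one
    return new

upgrade({"service": "web", "version": "1.0"}, "2.0")
```

The specifics differ per service; the point is that each runbook step becomes a function call, and the whole sequence becomes testable.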
- disaster recovery
If a service is only installable manually, a disaster recovery scenario might involve people working around the clock to reinstall a service.
Once the installation is automated and configurable, this changes. A cold backup solution might be similar to the above upgrade scenario. If disaster strikes, have someone install a new one from the automation, or have a backup instance already installed, ready for someone to switch over.
A hot backup solution might involve having multiple load balanced services running across regions, with automatic failovers. The automated install helps guarantee that each node in the cluster is configured correctly, without human error.
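For the hot case, the new moving part is the automatic failover itself. A minimal watchdog sketch, assuming two identically installed nodes behind made-up example URLs:

```python
import time
import urllib.request

# Made-up health check URLs; both nodes exist because the automated
# install makes standing up an identically configured standby cheap.
PRIMARY = "https://primary.example.com/health"
STANDBY = "https://standby.example.com/health"

def healthy(url):
    try:
        return urllib.request.urlopen(url, timeout=5).getcode() == 200
    except OSError:
        return False

def promote(url):
    # In real life: repoint DNS or the load balancer at the standby.
    print("primary down; failing over to %s" % url)

while True:
    if not healthy(PRIMARY) and healthy(STANDBY):
        promote(STANDBY)
        break
    time.sleep(30)
```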
- good first bugs
(or intern projects, or GSOC projects, or...)
The more special-snowflake and Mozilla-specific our tools are, the more likely the tool will be tied closely to other Mozilla-specific services, so a seemingly simple change might require touching many different codebases. These types of tools are also more likely to require VPN or special LDAP access that present barriers to new contributors.
If a new contributor is able to install tools locally, that guarantees that they can work on standalone bugs/projects involving those tools. And successful good first bugs and intern/GSOC type projects directly lead to a stronger contributor base.
- special projects
At various team work weeks in years past, we brainstormed being able to launch entire chunks of infrastructure as self-contained units. These could handle project-branch type work. When the code was merged back into trunk, we could archive the data and shut down the instances.
This concept also works for special projects: things that don't fit within the standard workflow.
If we can spin up services in a separate, network isolated area, riskier or special-requirement work (whether in terms of access control, user permissions, partner secrets, etc) could happen there without affecting production.
- self-testing
Installing the package from scratch is the test for the generic packaging feature. The more we install it, the smaller the window of changes we need to inspect for installation bustage. This is the same as any other software feature.
Having an install test for each tool gives us reassurances that the next time we need to install the service (upgrade, disaster recovery, etc.) it'll work.
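For a pip-installable tool, that install test can be as small as a fresh virtualenv plus an import check. A sketch, using "mytool" as a placeholder package name:

```python
# From-scratch install test: fresh virtualenv, pip install, import.
# "mytool" is a placeholder package name; any nonzero exit fails the test.
import os
import subprocess
import tempfile
import venv

def test_clean_install(package="mytool"):
    with tempfile.TemporaryDirectory() as env_dir:
        venv.create(env_dir, with_pip=True)
        bindir = os.path.join(env_dir, "bin")  # "Scripts" on Windows
        subprocess.check_call([os.path.join(bindir, "pip"), "install", package])
        subprocess.check_call(
            [os.path.join(bindir, "python"), "-c", "import %s" % package])

test_clean_install()
```

Run on every change, this is exactly what shrinks the window of changes to inspect when an install does break.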
learning from history: tinderbox
In 2000, a developer asked me to install tinderbox, a continuous integration tool written and used at Netscape. It would allow us to see the state of the tree and minimize bustage.
One of the first things I saw was this disclaimer:
This is not very well packaged code. It's not packaged at all. Don't come here expecting something you plop in a directory, twiddle a few things, and you're off and using it. Much work has to be done to get there. We'd like to get there, but it wasn't clear when that would be, and so we decided to let people see it first. Don't believe for a minute that you can use this stuff without first understanding most of the code.
I managed to slog through the steps and get a working tinderbox/bonsai/mxr install up and running. However, since then, I've met a number of other people who had tried and given up.
I ended up joining Netscape in 2001. (My history with tinderbox helped me in my interview.) An external contributor visited and presented tinderbox2 to the engineering team. It was configurable. It was modular. It removed Netscape-centric hardcodes.
However, it didn't fully support all tinderbox1 features, and certain default behaviors were changed by design. Beyond that, Netscape employees already had fully functional, well maintained instances that worked well for us. Rather than sinking time into extending tinderbox2 to cover our needs, we ended up staying with the disclaimered, unpackaged tinderbox1. And that was the version running at tinderbox.mozilla.org, until its death in May 2014.
For a company focused primarily on shipping a browser, shipping the tools used to build that browser isn't necessarily a priority. However, there were some opportunity costs:
- Tinderbox1 continued to suffer from the same high barrier to entry, stunting its growth.
- I don't know how widely tinderbox2 was used, but I imagine adoption at Netscape would have been a plus for the project. (I did end up installing tinderbox2 post-Netscape.)
- A larger, healthier community could have resulted in upstreamed patches, and a stronger overall project in the long run.
- People who use the same toolset may become external contributors to the project, or even employees (like me). People who have poor impressions of a toolset may be less interested in joining as contributors or employees.
mozharness and scriptharness
In my previous stint at Mozilla, I wrote mozharness, which is a python framework for scripts.
I intentionally kept mozilla-specific code under mozharness.mozilla and generic mozharness code under mozharness.base. The idea was to make it easier for external users to grab a copy of mozharness and write their own scripts and modules for a non-Mozilla project. (By "non-Mozilla" and "external user", I mean anyone who wants to automate software anywhere.)
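To make the split concrete, an external user's script would subclass the generic harness and never touch mozharness.mozilla. The following is a rough sketch from memory; treat the class and argument names as approximations of the mozharness API rather than a verified example:

```python
# A rough external (non-Mozilla) mozharness script: it only imports from
# mozharness.base, never mozharness.mozilla. Class and argument names are
# from memory and may not match the real API exactly.
from mozharness.base.script import BaseScript

class DeployScript(BaseScript):
    def __init__(self):
        super(DeployScript, self).__init__(
            all_actions=["clobber", "build", "upload"],
        )

    # each action name maps to a method of the same name
    def clobber(self):
        self.rmtree("build/")

    def build(self):
        self.run_command(["make", "all"], halt_on_failure=True)

    def upload(self):
        self.info("uploading artifacts would go here")

if __name__ == "__main__":
    DeployScript().run()
```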
However, after I left Mozilla, I didn't use mozharness for anything. Why not?
- There's a non-trivial learning curve for people new to the project, and the benefits of adopting mozharness are most apparent when there's a certain level of adoption. I was working at time scales that didn't necessarily lend themselves to this.
- I intentionally kept mozharness clone-and-run. I think this was the right model at the time, to lower the barrier for using mozharness until it had reached a certain level of adoption. Clone-and-run made it easier to use mozharness in buildbot, but harder to install or use just the mozharness.base module.
- We did our best to keep Mozilla-isms out of mozharness.base via review. However, this would have been more successful with either an external contributor speaking up before we broke their usage model, or automated tests, or both.
So even with the best intentions in mind, I had ended up putting roadblocks in the way of external users. I didn't realize the scope of those roadblocks until I was fully in the mindset of an external user myself.
I wrote scriptharness to try to address these problems (among others):
- I revisited certain decisions that made mozharness less approachable: the mixins and monolithic Script object, the locked-config requirement, and the missing docstrings and tests.
- I made scriptharness resources available at standard locations (generic packages on pypi, source at github, full docs at readthedocs).
- Since it's a self-contained package, it's usable here or elsewhere. Since it's written to solve a generic problem rather than a Mozilla-specific problem, it's unencumbered by Mozilla-specific solutions.
treeherder and taskcluster
After I left Mozilla, on several occasions we wanted to use other Mozilla tools in a non-Mozilla environment. As a general rule, this didn't work.
- Continuous Integration (CI) Dashboard
We had multiple Jenkins servers, each with a partial picture of our set of build+test jobs. Figuring out the state of the code base was complex and a specialized skill. Why couldn't we have one dashboard showing a complete view?
I took a look at Treeherder. It has improved upon the original TBPL, but is designed to work specifically with Mozilla's services and workflows. I found it difficult to set up outside of a Mozilla environment.
- CI Infrastructure
We were investigating other open source CI solutions. There are many solutions for server-side apps, or Linux-only setups, or cross-platform automation at small to medium scale. TaskCluster is the only one I know of that's cross-platform at massive scale.
When we looked, all the tutorials and docs had to do with using the existing Mozilla production instance, which required a mozilla.com email address at the time. There were no docs for setting up TaskCluster itself. (Spoiler: I hear it may be a 2H project :D :D :D )
- Single Sign-On
An open source, trusted SSO solution sounded like a good thing to implement.
However, we found out Persona had been EOL'd. We didn't look too closely at the implementation details after that.
(These are just the tools I tried to use in my 1 1/2 years away from Mozilla. There are more tools that might be useful to the outside world. I'll go into those in the next section.)
There are reasons behind each of these, and they may make a lot of sense internally. I'm not trying to place any blame or point fingers. I'm only raising the question of whether our future plans include packaging our tools for outside use, and if not, why not?
We're facing a similar decision to Netscape's. What's important? We can say we're a company that ships a browser, and the tools only exist for that purpose. Or we can put efforts towards making these tools useful in many environments. Generically packaged, with documentation that doesn't start with a disclaimer about how difficult they are to set up.
It's easy to say we'd like to, but we're too busy with ______. That's the gist of the tinderbox disclaimer. There are costs to designing and maintaining tools for use outside of one's subset of needs. But as long as packaging tools for outside use is not a point of emphasis, we'll maintain the status quo.
other tools
The above were just the tools that we tried to install. I asked around and built a list of Mozilla tools that might be useful to the outside world. I'm not sure if I have all the details correct; please correct me if I'm wrong!
- mach - if all the mozilla-central-specific functions were moved to libraries, could this be useful for others?
- bughunter - I don't know enough to say. This looks like a crash/assertion finder, tying into Socorro and bugzilla.
- balrog - this now has docker support, which is promising for potential outside use.
- marionette (already used by others)
- reftest (already used by others)
- pulse - this is a taskcluster dep.
- Bugzilla - I've seen lots of instances successfully used at many other companies, and it has published installation docs.
- I also hear that Socorro is successfully used at a number of other companies.
So we already have some success here. I'd love to see it extended -- more tools, and more use cases, e.g. supporting bugzilla or jira as the bug db backend when applicable.
I don't know how much demand there will be, if we do end up packaging these tools in a way that others can use them. But if we don't package them, we may never know. And I do know that there are entire companies built around shipping tools like these. We don't have to drop any existing goals on the floor to chase this dream, but I think it's worth pursuing in the future.
Yeah
Date: 2016-02-19 12:57 am (UTC)
In principle I think there is very little in Treeherder that is Mozilla specific, aside from the ridiculous buildbot-job-parsing code which will hopefully be going away at some point in the near future. There are no immediate plans to make this happen, but I don't think it would be that much work. I think it would take an experienced contributor with 100-or-so hours to spare to disentangle it from our Mozilla workflow. To motivate this work, such a developer would probably want to have some kind of specific project they'd want to use it for...
Re: Yeah
Date: 2016-02-19 01:12 am (UTC)
I think the 4hrs json and relengapi calls were the biggies... there's nothing that prevents someone from setting those up (consume jenkins data, and build a 4hrs json, etc.), but that makes the process of installing the dashboard significantly longer and more difficult than install-configure-run.
So perhaps for treeherder, Mozilla-specific isn't the correct description, but "environment must look a lot like Mozilla's" might be accurate. It's great to hear that it might not be that far away.
And yes, I think that's where we're at with a lot of tools: we can do it, but there's a cost, and right now it's not clear if it's the right place to put our efforts. I'm hoping we start leaning that way, and that a few success stories help us realize the benefits.