Oct. 17th, 2013


live

As of about 7pm PDT last Friday (October 11), gecko-dev (née beagle) and gecko-projects are live. Here's the newsgroup post. Here's gecko-dev on github and git.mozilla.org. Here's gecko-projects (with a README.md on how to use it) on github. The logs, repo_update.json files, and mapfiles are temporarily living here. The bug is here.

Both of these repositories are RelEng-supported, and have SHAs that match gecko.git.

If you use git for your gecko development, please start using these repos and make sure they work for you. AreWeFastYet and some developers have already switched over just fine. If you hit any problems, please let us know.

what's next

Beagle was definitely the largest piece in RelEng's vcs-sync puzzle, but it's not the final piece.

  • gecko.git (hg->git): I already have configs and a test repo. Once we're confident that gecko-dev and gecko-projects are solid and solve our developers' needs, we can switch our partner-oriented repo over from the legacy system to being converted by this new production system. This switchover does not require changing any SHAs, and should hopefully be an invisible, seamless cutover.
  • l10n (hg->git): I already have configs and have tested this locally. The workflow I went with here involves reading the b2g/locales/all-locales and languages_dev.json files on various branches, and dynamically building the list of repos-to-sync from them (see the sketch after this list). Along with the ability to create new repos on git.mozilla.org, this allows us to sync new locale repos on demand, rather than requiring manual Release Engineering + IT intervention every time.
  • git->git sync support. We have a large number of b2g github repos that need to be populated in git.mozilla.org. If I follow the l10n model, we would dynamically create this list from b2g-manifests, rather than requiring manual Release Engineering + IT intervention.
  • git->hg sync support. This is needed for gaia currently.
  • hg->hg sync support. We've had some, but limited, use of the repos mirrored on bitbucket.com.
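
To make the l10n piece concrete, the dynamic repo-list step boils down to something like this minimal Python sketch. This is not the actual mozharness code: the URL templates, file layout, and the assumption that languages_dev.json is a dict keyed by locale code are all illustrative.

```python
import json

# Illustrative URL templates -- the real locations vary per branch/project.
HG_L10N_TEMPLATE = "https://hg.mozilla.org/releases/gaia-l10n/%(locale)s"
GIT_L10N_TEMPLATE = "git@git.mozilla.org:releases/l10n/%(locale)s.git"


def read_locales(all_locales_path, languages_json_path):
    """Merge the locale lists from all-locales (one locale per line)
    and languages_dev.json (assumed here to be a dict keyed by locale)."""
    locales = set()
    with open(all_locales_path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                locales.add(line)
    with open(languages_json_path) as f:
        locales.update(json.load(f).keys())
    return sorted(locales)


def build_repo_list(locales):
    """Build per-locale sync configs, so new locales get picked up
    without hand-editing the config."""
    return [
        {
            "repo": HG_L10N_TEMPLATE % {"locale": locale},
            "targets": [GIT_L10N_TEMPLATE % {"locale": locale}],
        }
        for locale in locales
    ]


if __name__ == "__main__":
    locales = read_locales("b2g/locales/all-locales", "languages_dev.json")
    for entry in build_repo_list(locales):
        print("%s -> %s" % (entry["repo"], entry["targets"][0]))
```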

As we add support for these, we can cut legacy processes over to the new process; stay tuned for announcements for each of these switchovers.

Also, I plan to write a few shorter blog posts about some of what went into this project.


(This is the first of several shorter blog posts about features in the new vcs sync process.)

Back when the vcs sync project was first dropped in my lap, I quickly decided the initial implementation was going to push to a local repo. On disk, not on a network server. This has the nice properties of

  1. ruling out any server-permission- or network- related issues,
  2. allowing for development without an active network connection,
  3. speeding up the testing process to a small degree,
  4. and allowing for immediate inspection of the pushed repo's contents.

I named this a test_push, though I'm waffling on that name.
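
To make the idea concrete, a test_push boils down to something like the following sketch (not the actual mozharness code; the paths and refspec are made up):

```python
import subprocess


def test_push(conversion_dir, test_repo_path, refspecs):
    """Push to a bare repo on local disk before any network push; a
    failure here blocks the network pushes for that branch."""
    cmd = ["git", "push", test_repo_path] + list(refspecs)
    return subprocess.call(cmd, cwd=conversion_dir) == 0


# The local target would be created once, e.g. with
#   git init --bare /builds/vcs-sync/test_push/gecko-dev
# (paths here are made up).
if not test_push("/builds/vcs-sync/conversion_dir",
                 "/builds/vcs-sync/test_push/gecko-dev",
                 ["refs/heads/*:refs/heads/*"]):
    raise SystemExit("test_push failed; skipping network pushes")
```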

When it became clearer that non-fastforward pushes and deletions would be an issue for downstream partners, we started looking for ways to prevent them.

Ideally we prevent this at the RoR (repository of record), with pre-commit hooks. However, not all of our upstream repos have pre-commit hooks (see github), so this can't be a blanket solution. (I think our single-head hook on our release branches catches a lot of this, but there might be more we can do on hg.m.o).

Next, it would be good to have these denied at the partner repository level (we've done this on gecko.git and gaia.git). This is less ideal than the pre-commit hook, because a deletion or non-fast-forward commit can land upstream, and then the sync process has nowhere to go. Also, this is the last place we can catch this issue -- if this is set incorrectly, or missed, or if others have administrative rights to the repo and unset this, then we're in a bad position. (hg debugsetparents can fix non-fastforward commits. I don't know how to recover from deletions, other than track that revision down somewhere and re-push; luckily deletions look to be difficult to do in mercurial.)

I finally decided to add receive.denyNonFastForwards to the local test_push repo, via the --shared=true option in git init. If the test_push happens before any network pushes, and test_push failure prevents any network pushes for that branch, we get another layer of safety. It's still not as good a solution as preventing that change in the first place, but it's something we can control locally.

It looks like I'm missing receive.denyDeletes from the test_push repo... I've added that to my development branch, so we should get that check in each test_push soon as well.
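
For reference, the init step for that local repo looks roughly like the sketch below. Again, this isn't the production code; the point is that --shared=true sets receive.denyNonFastForwards as a side effect of git init, while receive.denyDeletes has to be set separately.

```python
import subprocess


def init_test_push_repo(path):
    """Create the local test_push target with the same restrictions we
    want the partner repos to enforce."""
    # --shared=true also sets receive.denyNonFastForwards=true.
    subprocess.check_call(["git", "init", "--bare", "--shared=true", path])
    # denyDeletes isn't implied by --shared, so set it explicitly.
    subprocess.check_call(
        ["git", "--git-dir", path, "config", "receive.denyDeletes", "true"])
    return path
```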


(This is one of several shorter blog posts about features in the new vcs sync process.)

Focusing on beagle first turned out to be the right call (thanks :joduinn) -- I severely underestimated the time it would take to solve the initial mozilla-central cvs-prepend step in an automated, repeatable fashion, as noted here and here. However, this meant that after all my beagle-specific testing, I had to refactor to support the other vcs-sync processes, and re-test.

One consequence: my ~6 minute estimate ballooned to ~9+ minutes for each conversion job when I changed from a single conversion_dir push to a push-per-source-repo. With each job cron'ed every 5 minutes, a commit could take up to 20-some minutes to show up in git: if it landed right after the previous conversion started, it had to wait for that run to finish, and then for the next run to convert and push it.

:nbp had given me a shell script to look at, and :hwine had suggested we use hg incoming to check for any changes before proceeding with the pull/convert/push loop. The latter seemed simple to add, so I did.
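
The check itself is cheap: hg incoming exits 0 when there are incoming changesets and 1 when there are none. Roughly (a sketch, not the actual mozharness code; repo names and paths are illustrative):

```python
import subprocess


def has_incoming(local_clone, source_url):
    """Return True if the upstream hg repo has changesets we haven't
    pulled yet."""
    cmd = ["hg", "-R", local_clone, "incoming", "--quiet",
           "--limit", "1", source_url]
    # exit 0: incoming changesets exist; exit 1: nothing new
    return subprocess.call(cmd) == 0


# Skip the expensive pull/convert/push loop for repos with nothing new.
repos = [{"clone": "/builds/vcs-sync/build/mozilla-central",
          "url": "https://hg.mozilla.org/mozilla-central"}]
for repo in repos:
    if not has_incoming(repo["clone"], repo["url"]):
        continue
    # ... pull / convert / push ...
```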

The average no-op conversion time dropped from ~400-550 seconds to ~12. This includes rsyncing the updated status json and ~600 log files to an upload server. (That number of logfiles will go down exponentially when we have dated upload dirs, so we don't have to keep so many backups in the same directory.) This is dependent on hg.mozilla.org load, and can spike up to ~40 seconds.

The average conversion time dropped from ~400-550 seconds to a little over a minute, and additional repos don't add much more time; sometimes ~2 minutes for 4 repos' worth of conversion. I also bumped the cron job frequency up to once per minute, so on average mercurial commits should show up in git within a minute or three. Plus, it's less likely that multiple repos will have new changes within the same minute, which keeps the amount of incoming work per run down. The longest delays tend to come when we hit hg corruption (uncommon), and even then we're auto-fixing within 8-9 minutes (see a later blog post about this). I still want to get more built-in parallelization support into mozharness, but with these numbers it's a lot less urgent.

One side effect of this: sometimes we skip over repos that need to be synced. For example, if we add a new target to a repo (e.g., a git.m.o repo, when we had previously only been pushing to github), or a new branch or tag regex (there will be a later blog post on these), nothing gets converted until that repo sees new incoming changes. If the repo in question has a lot of activity, the new target or branch gets populated on the next push. If it's a closed or low-activity repo, that might not happen for days, or weeks, or ever.

I added a --no-check-incoming commandline option, with a corresponding global check_incoming config setting, to skip this check and convert/sync everything. Also, I added per-repo check_incoming flags (defaulting to True) for more granularity; this helped in debugging the relbranch issue on mozilla-beta (see later blog post).
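
The shape of those knobs is roughly as follows; this is a hypothetical mozharness-style config snippet, so the key names come from above but the surrounding structure is just illustrative.

```python
config = {
    # Global default: skip repos that have no incoming changesets.
    # --no-check-incoming on the command line flips this off for a run.
    "check_incoming": True,
    "repos": [
        {
            "repo": "https://hg.mozilla.org/mozilla-central",
            "targets": ["github gecko-dev", "git.mozilla.org gecko-dev"],
            # Per-repo override, e.g. while backfilling a new target or
            # branch on a low-activity repo.
            "check_incoming": False,
        },
    ],
}
```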
