
New blog location

>> Wednesday, May 17, 2017

I moved my blog to WordPress.

The new location is here: https://kimmoir.blog/


Welcome Mozilla Releng summer interns

>> Friday, May 13, 2016

We're delighted to have Francis Kang and Connor Sheehan join the Mozilla release engineering team as summer interns.  Francis is studying at the University of Toronto while Connor attends McMaster University in Hamilton, Ontario.  We'll have another intern (Anthony) join us later on in the summer who will be working from our San Francisco office.

Francis and Connor will be working on implementing some new features in release promotion as well as  migrating some builds to taskcluster.  I'll be mentoring Francis,  while Rail will be mentoring Connor.  If you are in the Toronto office, please drop by to say hi to them.  Or welcome them on irc as fkang or sheehan. 

Kim, Francis, Connor and Rail
They are both already off to a great start and have pull requests merged into production that fixed some release promotion issues.  Their code was used in the Firefox 47.0 beta 5 release promotion that we ran last night so their first week was quite productive.


Mentoring an intern provides an opportunity to see the systems we run from a fresh perspective.  They both ask lots of great questions, which makes us revisit why design decisions were made and whether we could do things better.  As with all teaching roles, I find that I learn a tremendous amount from the experience, and I hope they have fun learning real-world software engineering concepts related to running large distributed systems.

Welcome to Mozilla!


RelEng & RelOps Weekly highlights - March 4, 2016

>> Monday, March 07, 2016

It was a busy week with many releases in flight, as well as preparation for running beta 1 with release promotion next week.  We are also in the process of adding more capacity to certain test platform pools to lower wait times, given all the new e10s tests that have been enabled.

Improve Release Pipeline:

  • Nick ran a staging release for 46.0b1 to check for issues before the merge, preventing some bustage for Fennec and ensuring we can fall back to the old system if any unexpected issues show up with release promotion
  • Varun improved Balrog’s detection of certain types of bad data.
  • Ben finished most of the work involved with preparing Balrog to move to CloudOps infrastructure, including automatically building Docker images.
  • We’re hoping to do our first beta with the new release build promotion pipeline next week for 47.0b1. Stay tuned!
Everyone gets a release promotion!  Source: http://i.imgur.com/WMmqSDI.jpg

Improve CI Pipeline:
  • Dustin deployed a new version of the TaskCluster tools/login system with much improved UI for handling signing in and out and editing clients and roles.  He also simplified the existing roles, with the result that the set of roles now fits on one screen, and is entirely composed of human-readable names.  All of this works toward two important goals: building a sign-in system that is useful and usable by all mozillians; and configuring the access-control system to give everyone their appropriate permissions and no more.

Release:

The release calendar is getting busier as we get closer to the end of the cycle. Many releases have shipped or are still in flight:
  • Firefox 45.0b10
  • Fennec 45.0b11
  • Fennec 45.0 (in-progress)
  • Firefox 45.0 (in-progress) - we shipped the RC to the beta channel
  • Firefox 45.0esr (in-progress)
  • Firefox 38.7.0esr (in-progress)
As always, you can find more specific release details in our post-mortem minutes:
https://wiki.mozilla.org/Releases:Release_Post_Mortem:2016-03-02 
https://wiki.mozilla.org/Releases:Release_Post_Mortem:2016-03-09

Operational:
  • Vlad, Alin and Amy reallocated 28 Windows test machines to the w7 pool to help with backlog and e10s testing.
  • Jake deployed new OpenSSL packages to protect our infrastructure from the DROWN attack and various other recent OpenSSL vulnerabilities.

Until next time!


RelEng & RelOps Weekly highlights - February 26, 2016

>> Monday, February 29, 2016

It was a busy week for release engineering as several team members travelled to the Vancouver office to sprint on the release promotion project. The goal of the release promotion project is to promote continuous integration builds to release channels, allowing us to ship releases much more quickly.



Improve Release Pipeline:
  • Chris, Jordan, Callek (remotely), Kim, Mihai and Rail had a sprint on Release Promotion. We made so much progress on this project that we decided to use the new process for Firefox 46.0b1. https://bugzil.la/1118794 So many green jobs!


Improve CI Pipeline:

Release:
  • Ben, Mihai, Nick, Rail and Callek shipped Firefox 45.0b6, Firefox 45.0b7, Firefox 45.0b9, Fennec 45.0b6 and Thunderbird 45.0b2.

Operational:
  • Alin landed changes to run mochitest-push-e10s tests on Windows 7 https://bugzil.la/1248729. This is another step toward completing the enabling of e10s tests. 


The mystery of high pending counts

>> Friday, September 25, 2015

In September, Mozilla release engineering started experiencing high pending counts on our test pools, notably Windows, but also Linux (and consequently Android).  High pending counts mean that there are thousands of jobs queued to run on the machines that are busy running other jobs.  The time developers have to wait for their test results is longer than ideal.


Usually, pending counts clear overnight as less code is pushed during the night (in North America), which triggers fewer builds and tests.  However, as you can see from the graph above, the Windows test pending counts were flat last night; they did not clear up overnight.  You will also note that try, which usually comprises 63% of our load, has the highest pending counts compared to other branches.  This is because many people land on try before pushing to other branches, and tests aren't coalesced on try.


The work to determine the cause of high pending counts is always an interesting mystery.
  • Are end to end times for tests increasing?
  • Have more tests been enabled recently?
  • Are retries increasing? (Tests that are run multiple times because the initial runs failed due to infrastructure issues.)
  • Are jobs that are coalesced being backfilled and consuming capacity?
  • Are tests being chunked into smaller jobs that increase end to end time due to the added start up time?
Mystery by ©Stuart Richards, Creative Commons by-nc-sa 2.0

Joel Maher and I looked at the data for this last week and discovered what we believe to be the source of the problem.  We determined that since the end of August, a number of new test jobs were enabled that increased the compute time per push on Windows by 13%, or 2.5 hours per push.  Most of these new test jobs are for e10s.
Increase in seconds that new jobs added to the total compute time per push.  (Some existing jobs also reduced their compute time, for a net difference of about 2.5 more hours per push on Windows.)
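As an illustration of the kind of calculation involved, here is a minimal sketch that sums job runtimes per push and averages across pushes; the field names are hypothetical and the real analysis was run against our job data.

    from collections import defaultdict

    def average_hours_per_push(jobs):
        # jobs: an iterable of dicts with hypothetical 'push_id' and
        # 'runtime_seconds' keys, one entry per completed job.
        per_push = defaultdict(float)
        for job in jobs:
            per_push[job['push_id']] += job['runtime_seconds']
        if not per_push:
            return 0.0
        return sum(per_push.values()) / 3600.0 / len(per_push)

    # Comparing this average for pushes before and after the new jobs were
    # enabled gives the roughly 2.5 hour per push difference mentioned above.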
The e10s initiative is an important one for Mozilla, making Firefox performance and security even better.  However, since the new e10s tests and the old tests will continue to run in parallel, we need to get creative about how to provide acceptable wait times given the limitations of our current Windows test pools.  (All of our Windows tests run on bare metal in our datacentre, not on Amazon.)
 
Release engineering is working to reduce these pending counts, given our current hardware constraints, with the following initiatives:

To reduce Linux pending counts:
  • Added 200 new instances to the tst-emulator64 pool (run Android test jobs on Linux emulators) (bug 1204756)
  • In process of adding more Linux32 and Linux64 buildbot masters (bug 1205409) which will allow us to expand our capacity more

Ongoing work to reduce the Windows pending counts:

  • Disable Linux32 Talos tests and redeploy these machines as Windows test machines (bug 1204920 and bug 1208449)
  • Reduce the number of talos jobs by running SETA on talos (bug 1192994)
  • The developer productivity team is investigating whether tests that are not specific to an operating system, and that currently run on multiple Windows test platforms, can run on fewer platforms.

How can you help? 

Please be considerate when invoking try pushes and only select the platforms that you explicitly require to test.  Each try push for all platforms and all tests invokes over 800 jobs.


Test job reduction by the numbers

>> Tuesday, June 16, 2015

In an earlier post, I wrote about how we reduced the number of test jobs that run on two branches so that we can scale our infrastructure more effectively.  We run the tests that historically identify regressions on every push; the ones that don't, we only run on every Nth push.  We now have data on how much this has reduced the number of jobs we run since we began implementation in April.

We run SETA on two branches (mozilla-inbound and fx-team) and on 18 types of builds.  Collectively, these two branches represent about 20% of pushes each month.  Implementing SETA allowed us to move from ~400 to ~240 jobs per push on these two branches.1 We run the tests identified as not reporting regressions on every 10th commit, or once 90 minutes have passed since they were last scheduled.  We run the critical tests on every commit.2
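To make that policy concrete, here is a minimal sketch (not the production scheduler code) of the decision described above; the function and parameter names are hypothetical.

    LOW_VALUE_INTERVAL = 10      # run low-value tests on every 10th push
    LOW_VALUE_TIMEOUT = 90 * 60  # ...or once 90 minutes have passed since the last run

    def should_schedule(catches_regressions, pushes_since_last_run, seconds_since_last_run):
        # Tests that historically catch regressions run on every push.
        if catches_regressions:
            return True
        # Everything else runs on every Nth push, or after the timeout expires.
        return (pushes_since_last_run >= LOW_VALUE_INTERVAL
                or seconds_since_last_run >= LOW_VALUE_TIMEOUT)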

Reduction in number of jobs per push on mozilla-inbound as SETA scheduling is rolled out

A graph for the fx-team branch shows a similar trend. It was a staged rollout starting in early April, as I enabled platforms and as the SETA data became available. The dip in early June reflects where I enabled SETA for Android 4.3.

This data will continue to be updated in our scheduling configuration as it evolves and as the code that Joel and Vaibhav wrote to analyze regressions is re-run.  The most recent analysis identified:

Jobs to ignore: 440
Jobs to run: 114
Total number of jobs: 554

That is a significant reduction.  Our buildbot configurations are updated with the latest SETA data on every reconfig, which usually occurs every couple of days.

The platforms configured to run fewer tests for both opt and debug are

        MacOSX (10.6, 10.10)
        Windows (XP, 7, 8)
        Ubuntu 12.04 for linux32, linux64 and ASAN x64
        Android 2.3 armv7 API 9
        Android 4.3 armv7 API 11+

Additional info
1Tests may have been disabled or added at the same time; this is not taken into account.
2There are still some scheduling issues to be fixed; see bug 1174870 and bug 1174746 for further details.


Mozilla pushes - May 2015

>> Friday, June 12, 2015

Here's May 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.


Trends

The number of pushes decreased from those recorded in the previous month (8894) with a total of 8363. 

 
Highlights

  • 8363 pushes
  • 270 pushes/day (average)
  • Highest number of pushes/day: 445 pushes on May 21, 2015
  • 16.03 pushes/hour (highest average)

General Remarks
  • Try has around 62% of all the pushes now
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 27% of all the pushes.

Records
  • August 2014 was the month with most pushes (13090  pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes 




Mozilla pushes - April 2015

>> Friday, May 01, 2015

Here's April 2015's  monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.  


Trends
The number of pushes decreased from those recorded in the previous month, with a total of 8894.  This is because gaia-try is now managed by taskcluster, so those jobs no longer appear in the buildbot scheduling databases that this report tracks.


Highlights

  • 8894 pushes
  • 296 pushes/day (average)
  • Highest number of pushes/day: 528 pushes on Apr 1, 2015
  • 17.87 pushes/hour (highest average)

General Remarks

  • Try has around 58% of all the pushes now that we no longer track gaia-try
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 28% of all the pushes.

Records

  • August 2014 was the month with most pushes (13090  pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes  



Note
I've changed the graphs to only track 2015 data.  Last month they were tracking 2014 data as well but it looked crowded so I updated them.  Here's a graph showing the number of pushes over the last few years for comparison.




Less testing, same great Firefox taste!

>> Tuesday, April 28, 2015


Running a large continuous integration farm forces you to deal with many dynamic inputs coupled with capacity constraints. The number of pushes increases.  People add more tests.  We build and test on a new platform.  If the number of machines available remains static, the computing time associated with a single push will increase.  You can add capacity for platforms that you build and test in the cloud (for us, Linux and Android on emulators), but this costs more money.  Adding hardware for other platforms such as Mac and Windows in data centres is also costly and time consuming.

Do we really need to run every test on every commit? If not, which tests should be run?  How often do they need to be run in order to catch regressions in a timely manner (i.e. so that we can bisect where the regression occurred)?


Several months ago, jmaher and vaibhav1994 wrote code to analyze the test data and determine the minimum number of tests required to identify regressions.  They named their software SETA (search for extraneous test automation). They used historical data to determine the minimum set of tests that needed to be run to catch historical regressions.  Previously, we coalesced tests on a number of platforms to mitigate too many jobs being queued for too few machines.  However, this was not the best way to proceed because it reduced the number of times we ran all tests, not just the less useful ones.  SETA allows us to run, on every commit, the subset of tests that have historically caught regressions.  We still run all the test suites, but at a specified interval.

SETI – The Search for Extraterrestrial Intelligence by ©encouragement, Creative Commons by-nc-sa 2.0
In the last few weeks, I've implemented SETA scheduling in our buildbot configs to use the data from the analysis that Vaibhav and Joel implemented.  Currently, it's enabled on the mozilla-inbound and fx-team branches, which in aggregate represent around 19.6% (March 2015 data) of total pushes to the trees.  The platforms configured to run fewer tests for both opt and debug are
  • MacOSX (10.6, 10.10)
  • Windows (XP, 7, 8)
  • Ubuntu 12.04 for linux32, linux64 and ASAN x64
  • Android 2.3 armv7 API 9

As we gather more SETA data for newer platforms, such as Android 4.3, we can implement SETA scheduling for them as well and further reduce our test load.  We continue to run the full suite of tests on all platforms on branches other than m-i and fx-team, such as mozilla-central, try, and the beta and release branches. If we did miss a regression by reducing the tests, it would still appear on other branches such as mozilla-central. We will continue to update our configs to incorporate SETA data as it changes.

How does SETA scheduling work?
We specify the tests that we would like to run on a reduced schedule in our buildbot configs.  For instance, the config linked below specifies that we would like to run certain debug tests on every 10th commit, or once 5400 seconds have passed since they were last scheduled.  A simplified sketch of such an entry appears after the link.

http://hg.mozilla.org/build/buildbot-configs/file/2d9e77a87dfa/mozilla-tests/config_seta.py#l692
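For illustration only, a hypothetical sketch of what such an entry could look like is below; the actual keys and structure in config_seta.py may differ.

    # Hypothetical example; see the linked config_seta.py for the real format.
    # "Run mochitest-3 debug on this branch/platform on every 10th push, or if
    # 5400 seconds have elapsed since it was last scheduled."
    SETA_TEST_CONFIG = {
        ('mozilla-inbound', 'yosemite', 'debug', 'mochitest-3'): {
            'skip_count': 10,
            'skip_timeout': 5400,
        },
    }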


Previously, catlee had implemented a scheduler in buildbot that allowed us to coalesce jobs on a certain branch and platform using EveryNthScheduler.  However, as it was originally implemented, it didn't allow us to specify individual tests to skip, such as mochitest-3 debug on MacOSX 10.10 on mozilla-inbound.  It would only allow us to skip all the debug or opt tests for a certain platform and branch.

I modified misc.py to parse the configs and create a dictionary for each test specifying the interval at which the test should be skipped and the timeout interval.  If a test has these parameters specified, it is scheduled using the EveryNthScheduler instead of the default scheduler.

http://hg.mozilla.org/build/buildbotcustom/file/728dc76b5ad0/misc.py#l2727
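As a rough sketch of that logic (the real implementation lives in misc.py and the names there may differ):

    def pick_scheduler(test_config):
        # Hypothetical helper: choose a scheduler based on the SETA parameters
        # parsed from the configs for a single test.
        skip_count = test_config.get('skip_count')
        skip_timeout = test_config.get('skip_timeout')
        if skip_count and skip_timeout:
            # Coalesce: only fire on every Nth request, or once the timeout
            # since the last run has been exceeded (EveryNthScheduler).
            return ('EveryNthScheduler', {'n': skip_count, 'timeout': skip_timeout})
        # Otherwise fall back to the default per-push scheduler.
        return ('default', {})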
There are still some quirks to work out, but I think it is working well so far. I'll have some graphs in a future post on how this reduced our test load.

Further reading
Joel Maher: SETA – Search for Extraneous Test Automation




Mozilla pushes - March 2015

>> Wednesday, April 15, 2015

Here's March 2015's  monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
The number of pushes increased from those recorded in the previous month with a total of 10943. 

Highlights

  • 10943 pushes
  • 353 pushes/day (average)
  • Highest number of pushes/day: 579 pushes on Mar 11, 2015
  • 23.18 pushes/hour (highest average)

General Remarks
  • Try continues to have around 49% of all the pushes
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 26% of all the pushes.

Records
  • August 2014 was the month with most pushes (13090  pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes 






Scaling Yosemite

>> Friday, March 20, 2015

We migrated most of our Mac OS X 10.8 (Mountain Lion) test machines to 10.10.2 (Yosemite) this quarter.

This project had two major constraints:
1) Use the existing hardware pool (~100 r5 mac minis)
2) Keep wait times sane1.  (The machines are constantly running tests most of the day due to the distributed nature of the Mozilla community and this had to continue during the migration.)

So basically upgrade all the machines without letting people notice what you're doing!

Yosemite Valley - Tunnel View Sunrise by ©jeffkrause, Creative Commons by-nc-sa 2.0

Why didn't we just buy more minis and add them to the existing pool of test machines?
  1. We run performance tests and thus need to have all the machines running the same hardware within a pool so performance comparisons are valid.  If we buy new hardware, we need to replace the entire pool at once.  Machines with different hardware specifications = useless performance test comparisons.
  2. We tried to purchase some used machines with the same hardware specs as our existing machines.  However, we couldn't find a source for them.  As Apple stops production of old mini hardware each time they announce a new one, they are difficult and expensive to source.
Apple Pi by ©apionid, Creative Commons by-nc-sa 2.0

Given that Yosemite was released last October, why are we only upgrading our test pool now?  We wait until the population of users running a new platform2 surpasses that of the old one before switching.

Mountain Lion -> Yosemite is an easy upgrade on your laptop.  It's not as simple when you're updating production machines that run tests at scale.

The first step was to pull a few machines out of production and verify that the Puppet configuration was working.  In Puppet, you can specify commands to run only on certain operating system versions, so we implemented several commands to accommodate changes in Yosemite.  For instance, we changed the default scrollbar behaviour, disabled new services that interfere with test runs, configured the new Apple security permissions that debug tests require, and so on.

Once the Puppet configuration was stable, I updated our configs so that people could run tests on Try and allocated a few machines to this pool. We opened bugs for tests that failed on Yosemite but passed on other platforms.  This was a very iterative process: run tests on try, look at failures, file bugs, fix test manifests. Once we had the opt (functional) tests in a green state on try, we could start the migration.

Migration strategy
  • Disable selected Mountain Lion machines from the production pool
  • Reimage as Yosemite, update DNS and let them puppetize
  • Land patches to disable Mountain Lion tests and enable corresponding Yosemite tests on selected branches
  • Enable Yosemite machines to take production jobs
  • Reconfig so the buildbot masters enable the new Yosemite builders and schedule jobs appropriately
  • Repeat this process in batches
    • Enable Yosemite opt and performance tests on trunk (gecko >= 39) (50 machines)
    • Enable Yosemite debug (25 more machines)
    • Enable Yosemite on mozilla-aurora (15 more machines)
We currently have 14 machines left on Mountain Lion for mozilla-beta and mozilla-release branches.

As I mentioned earlier, the two constraints with this project were to use the existing hardware pool, which constantly runs tests in production, and to keep the existing wait times sane.  We encountered two major problems that impeded that goal; see the bugs in the further reading section below.

It's a compliment when people say things like "I didn't realize that you updated a platform" because it means the upgrade did not cause large scale fires for all to see.  So it was nice to hear that from one of my colleagues this week.

Thanks to philor, RyanVM and jmaher for opening bugs with respect to failing tests and greening them up.  Thanks to coop for many code reviews. Thanks dividehex for reimaging all the machines in batches and to arr for her valiant attempts to source new-to-us minis!

References
1Wait times represent the time from when a job is added to the scheduler database until it actually starts running. We usually try to keep this under 15 minutes, but it really depends on how many machines we have in the pool.
2We run tests for our products on a matrix of operating systems and operating system versions. The term for an operating system and version combination in many release engineering shops is a platform.  To add to this, the list of platforms we support varies across branches.  For instance, if we're going to deprecate a platform, we'll let this change ride the trains to release.

Further reading
Bug 1121175: [Tracking] Fix failing tests on Mac OSX 10.10 
Bug 1121199: Green up 10.10 tests currently failing on try 
Bug 1126493: rollout 10.10 tests in a way that doesn't impact wait times
Bug 1144206: investigate what is causing frequent talos failures on 10.10
Bug 1125998: Debug tests initially took 1.5-2x longer to complete on Yosemite


Why don't you just run these tests in the cloud?
  1. The Apple EULA severely restricts virtualization on Mac hardware. 
  2. I don't know of any major cloud vendors that offer the Mac as a platform.  Those that claim they do are actually renting racks of Macs on a dedicated per host basis.  This does not have the inherent scaling and associated cost saving of cloud computing.  In addition, the APIs to manage the machines at scale aren't there.
  3. We manage ~350 Mac minis.  We have more experience scaling Apple hardware than many vendors. Not many places run CI at Mozilla scale :-) Hopefully this will change and we'll be able to scale testing on Mac products like we do for Android and Linux in a cloud.


Mozilla pushes - February 2015

>> Tuesday, March 17, 2015

Here's February 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
Although February is a shorter month, the number of pushes was close to those recorded in the previous month.  We had a higher average number of daily pushes (358) than in January (348).

Highlights
10015 pushes
358 pushes/day (average)
Highest number of pushes/day: 574 pushes on Feb 25, 2015
23.18 pushes/hour (highest)

General Remarks
Try had around 46% of all the pushes
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes

Records
August 2014 was the month with most pushes (13090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day with 715 pushes 






Mozilla pushes - January 2015

>> Friday, February 13, 2015

Here's January 2015's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
We're back to regular volume after the holidays. Also, it's really cold outside in some parts of the Mozilla world.  Maybe committing code > going outside.


Highlights
10798 pushes
348 pushes/day (average)
Highest number of pushes/day: 562 pushes on Jan 28, 2015
18.65 pushes/hour (highest)

General Remarks
Try had around 42% of all the pushes
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 24% of all of the pushes

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day with 715 pushes 





Reminder: Releng 2015 submissions due Friday, January 23

>> Wednesday, January 21, 2015

Just a reminder that submissions for the Releng 2015 conference are due this Friday, January 23. 

It will be held on May 19, 2015 in Florence, Italy.

If you've done recent work like

  • migrating your build or test pipeline to the cloud
  • switching to a new build system
  • migrating to a new version control system
  • optimizing your configuration management system or switching to a new one
  • implementing continuous integration for mobile devices
  • reducing end to end build times
  • or anything else build, release, configuration and test related
we'd love to hear from you.  Please consider submitting a talk!

In addition, if you have colleagues that work in this space that might have interesting topics to discuss at this workshop, please forward this information. I'm happy to talk to people about the submission process or possible topics if there are questions.

Il Duomo di Firenze by ©eddi_07, Creative Commons by-nc-sa 2.0


Sono nel comitato che organizza la conferenza Releng 2015 che si terrà il 19 Maggio 2015 a Firenze. La scadenza per l’invio dei paper è il 23 Gennaio 2015.

http://releng.polymtl.ca/RELENG2015/html/index.html

se avete competenze in:
  • migrazione del sistema di build o dei test nel cloud
  • aggiornamento del processo di build
  • migrazione ad un nuovo sistema di version control
  • ottimizzazione o aggiornamento del configuration management system
  • implementazione di un sistema di continuos integration per dispositivi mobili
  • riduzione dei tempi di build
  • qualsiasi cambiamento che abbia migliorato il sistema di build/test/release
e volete discutere della vostra esperienza, inviateci una proposta di talk!

Per favore inoltrate questa richiesta ai vostri colleghi e alle persone interessate a questi argomenti. Nel caso ci fossero domande sul processo di invio o sui temi di discussione, non esitate a contattarmi.

(Thanks Massimo for helping with the Italian translation).

More information
Releng 2015 web page
Releng 2015 CFP now open


Mozilla pushes - December 2014

>> Thursday, January 08, 2015


Here's December 2014's monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

Trends
There was a low number of pushes this month.  I expect this is due to the Mozilla all-hands in Portland in early December where we were encouraged to meet up with other teams instead of coding :-) and the holidays at the end of the month for many countries.
As a side note, in 2014 we had a total of 124423 pushes, compared to 79233 in 2013, which represents a growth rate of 57% this year.

Highlights
7836 pushes
253 pushes/day (average)
Highest number of pushes/day: 706 pushes on Dec 17, 2014
15.25 pushes/hour (highest)

General Remarks
Try had around 46% of all the pushes
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 23% of all of the pushes

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day with 715 pushes 








Mozilla pushes - November 2014

>> Wednesday, December 03, 2014

Here's November's monthly analysis of the pushes to our Mozilla development trees.  You can load the data as an HTML page or as a json file.

Trends
Not a record-breaking month; in fact, we are down over 2000 pushes from the previous month.

Highlights
10376 pushes
346 pushes/day (average)
Highest number of pushes/day: 539 pushes on November 12
17.7 pushes/hour (average)

General Remarks
Try had around 38% of all the pushes, and gaia-try had about 30%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 23% of all the pushes.

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day with 715 pushes    








Scaling capacity while saving cash

>> Wednesday, November 12, 2014

There was a very interesting release engineering summit this Monday, held in concert with LISA in Seattle.  I was supposed to fly there this past weekend so I could give a talk on Monday, but late last week I became ill and was unable to go.  This was very disappointing because the summit looked really great, and I was looking forward to meeting the other release engineers and learning about the challenges they face.

Scale in the Market  ©Clint Mickel, Creative Commons by-nc-sa 2.0

Although I didn't have the opportunity to give the talk in person, the slides for it are available on slideshare and my mozilla people account.  The talk describes how we scaled our continuous integration infrastructure on AWS to handle double the number of pushes it handled in early 2013, all while reducing our monthly AWS bill by 2/3.
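To make those headline numbers concrete, here is a tiny sketch with made-up figures (not our real bill): doubling the pushes while cutting the bill to one third works out to roughly a six-fold drop in cost per push.

    # Illustrative figures only; the real monthly bill and push counts differ.
    old_bill, old_pushes = 120000.0, 6000          # hypothetical early-2013 month
    new_bill, new_pushes = old_bill / 3, old_pushes * 2
    old_cost_per_push = old_bill / old_pushes      # 20.0
    new_cost_per_push = new_bill / new_pushes      # ~3.33, about 6x lower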

Cost per push from October 2012 until October 2014.  This does not include costs for on-premise equipment.  It reflects our monthly AWS bill divided by the number of monthly pushes (commits).

Thank you to Dinah McNutt and the other program committee members for organizing this summit.  I look forward to watching the talks once they are online.


Mozilla pushes - October 2014

Here's the October 2014 monthly analysis of the pushes to our Mozilla development trees.  You can load the data as an HTML page or as a json file.

Trends
We didn't have a record-breaking month in terms of the number of pushes; however, we did set a daily record on October 8 with 715 pushes.

Highlights
12821 pushes, up slightly from the previous month
414 pushes/day (average)
Highest number of pushes/day: 715 pushes on October 8
22.5 pushes/hour (average)

General Remarks
Try had around 39% of all the pushes, and gaia-try had about 31%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 21% of all the pushes

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
October 8, 2014 had the highest number of pushes in one day with 715 pushes





Mozilla pushes - September 2014

>> Monday, October 27, 2014

Here's September 2014's monthly analysis of the pushes to our Mozilla development trees.
You can load the data as an HTML page or as a json file.


Trends
Surprise!  No records were broken this month.

Highlights
12267 pushes
409 pushes/day (average)
Highest number of pushes/day: 646 pushes on September 10, 2014
22.6 pushes/hour (average)

General Remarks
Try has around 36% of pushes and Gaia-Try comprises about 32%.  The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes.

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 422 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes






Mozilla Releng: The ice cream

>> Wednesday, September 17, 2014

A week or so ago, I was commenting in IRC that I was really impressed that our interns had such amazing communication and presentation skills.  One of the interns, John Zeller, said something like "The cream rises to the top", to which I replied "Releng: the ice cream of CS".  From there, the conversation went on to discuss what would be the best ice cream flavour to make that would capture the spirit of Mozilla releng.  The consensus at the end was that Irish Coffee (coffee with whisky) with cookie dough chunks was the favourite, because a lot of people on the team like coffee, whisky makes it better, and who doesn't like cookie dough?

I made this recipe over the weekend with some modifications.  I used the coffee recipe from the Perfect Scoop.  After it was done churning in the ice cream maker,  instead of whisky, which I didn't have on hand, I added Kahlua for more coffee flavour.  I don't really like cookie dough in ice cream but cooked chocolate chip cookies cut up with a liberal sprinkling of Kahlua are tasty.

Diced cookies sprinkled with Kahlua

Ice cream ready to put in freezer

Finished product
I have to say, it's quite delicious :-) If open source ever stops being fun, I'm going to start a dairy empire.  Not really. Now back to bugzilla...

