Trunk Based Development Deployment Pipeline



  • We are currently transitioning to Trunk-Based Development and have started looking at how we can improve our deployment pipeline.

    Our current workflow:

    1. All engineers work on the trunk, committing frequently; trunk is automatically deployed to our dev environment

    2. When QA signs off on dev, we generate a release by cutting a release branch (releases/v1.0) from trunk. This is a manual approval step: once approved, the release branch is created, pushed to the repo, and deployed to our UAT environment.

    3. The test team (QE) then performs E2E testing in UAT and requests that no new code be merged to the release branch except cherry-picks (from trunk) for P1 defects identified. Meanwhile, devs continue working on trunk, fixing any other P2/P3 defects and adding new features.

    4. Once QE signs off on UAT, that snapshot can then be promoted to the next environments if applicable, and eventually to PROD.
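    Steps 2-3 above can be sketched with plain git in a throwaway repo; the file names, commit messages, and SHAs below are invented for illustration, and the branch name follows the releases/v1.0 example:

    ```shell
    #!/bin/sh
    # Sketch of cutting a release branch and cherry-picking a P1 fix
    # from trunk, using a disposable local repo.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git checkout -q -b trunk
    git config user.email dev@example.com
    git config user.name Dev

    echo "feature A" > app.txt
    git add app.txt && git commit -qm "feature A"

    # Step 2: QA signs off on dev -> cut the release branch from trunk.
    git branch releases/v1.0

    # Devs keep working on trunk: a P1 fix and an unrelated feature land.
    echo "P1 fix" >> app.txt
    git commit -qam "fix: P1 defect"
    p1=$(git rev-parse HEAD)
    echo "feature B" > b.txt
    git add b.txt && git commit -qm "feature B"

    # Step 3: only the P1 fix is cherry-picked onto the release branch;
    # feature B stays on trunk for a later release.
    git checkout -q releases/v1.0
    git cherry-pick -x "$p1"
    ```

    The `-x` flag records the original trunk SHA in the cherry-picked commit message, which makes it easier to audit later which fixes made it into the release.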

    Questions/Problems

    In this approach, UAT is updated from the release branch. How can we handle getting sign-off on future work (P2, P3, features) in UAT? Do we need a separate stage for deploying from DEV?

    Basically, I am trying to figure out the best approach to handling releases when there is a long gap between sign-off and the actual release. This will change in the future, as we plan to introduce feature flags and eventually aim to get to true CI/CD.

    We currently use Azure DevOps.



  • There are a few changes that I'd make.

    First, I'd get rid of the QA sign-off on dev before cutting a release branch. I'd look at methods to instill a culture of developer-led testing (especially if that means developing automated tests) on your trunk. Of course, your testers should still use the trunk build deployed to the development environment - running manual test cases, doing exploratory testing, and giving feedback.

    Second, if you haven't, I'd look at automating the end-to-end testing done in the UAT environment, at least from a regression standpoint. You may want to do some manual testing, especially of an exploratory nature, in UAT, but you want to reduce the burden of manual testing, especially as the system increases in complexity.

    It may not matter much, but I would recommend fixing defects in the release branch and merging back into the trunk. You could also cherry-pick, rebase, or use some other method, but I've found that going from the release branch back to trunk is more intuitive.
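    A minimal sketch of the "fix on the release branch, merge back to trunk" direction, again in a disposable repo with invented file names and messages:

    ```shell
    #!/bin/sh
    # Fix a UAT defect on the release branch, then merge the branch back
    # into trunk so the fix is never lost when the branch is retired.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git checkout -q -b trunk
    git config user.email dev@example.com
    git config user.name Dev

    echo base > app.txt
    git add app.txt && git commit -qm "base"
    git branch releases/v1.0

    # Defect found in UAT: fix it where it was found, on the release branch.
    git checkout -q releases/v1.0
    echo "defect fix" >> app.txt
    git commit -qam "fix: UAT defect"

    # Merge the release branch back into trunk.
    git checkout -q trunk
    git merge -q --no-edit releases/v1.0
    ```

    This keeps trunk as the single source of truth: every release-branch fix flows back by merge rather than relying on someone remembering to cherry-pick it.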

    Once you have sign-off on the release, apply a tag. You can either merge the state of the code back into trunk and deploy the tag from trunk, or you can deploy the head of the release branch; the two amount to the same thing.
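    The tagging step might look like this (disposable repo, invented names); the point is that promotion to later environments deploys the immutable tag, not a branch head that may still move:

    ```shell
    #!/bin/sh
    # Pin the QE-approved snapshot with an annotated tag and deploy that.
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git checkout -q -b trunk
    git config user.email dev@example.com
    git config user.name Dev

    echo v1 > app.txt
    git add app.txt && git commit -qm "release candidate"
    git checkout -q -b releases/v1.0

    # QE sign-off: tag the exact commit that was validated in UAT.
    git tag -a v1.0 -m "QE sign-off for production"

    # Later deployments check out the tag, guaranteeing the bits that
    # reach PROD are the ones QE approved.
    git checkout -q v1.0
    ```

    In Azure DevOps, the same idea maps to triggering the release stages from the tag ref rather than from the branch.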

    If you have multiple UATs in progress at once, then you may need multiple environments. However, that also introduces complexity: a defect found in UAT2 may also impact UAT1, and you have to keep the fixes synchronized. I'd want to understand what makes UAT take so long, and what can be done to get an accepted system into production faster, to reduce parallel UATs.



