Deploying microservices in a consistent way using different GitLab repositories
I'm looking for a good way to organize and deploy our solution, which consists of multiple apps, using GitLab and K8S.
Our SaaS app is made of:
- A backend app, mostly our API (django)
- A user app (React)
- An admin app (React)
Both frontend apps are connected to the API.
Currently the backend and user apps live in the same GitLab repository and are deployed by a CI/CD pipeline that builds the apps, builds the Docker images, and deploys them to K8S using a Helm chart. The Helm chart is located in the same repo.
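The pipeline is roughly shaped like this (simplified sketch; job names, paths, and the chart location are illustrative):

```yaml
# .gitlab-ci.yml (simplified, names are examples)
stages:
  - build
  - deploy

build-images:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA backend/
    - docker build -t $CI_REGISTRY_IMAGE/user-app:$CI_COMMIT_SHORT_SHA user-app/
    - docker push $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE/user-app:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  script:
    - helm upgrade --install myapp ./chart --set imageTag=$CI_COMMIT_SHORT_SHA
```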
I recently added the admin app in another GitLab repository, and I'm concerned about keeping all apps consistent, that is to say, both frontend apps have to be compatible with the API.
I'm thinking about adding another repository dedicated to deployment (let's call it the Deploy Repo). This repo could contain:
- 3 git submodules, one for each sub app,
- The Helm chart,
- Other files related to deployment
I thought about using git submodules so each project stays separate. The devs would update the submodule pointers in the Deploy Repo to the right versions when a service is ready to be deployed.
The push would then trigger the CI/CD pipeline, build all apps, and deploy all together using the Helm Chart.
Is it a good idea to use submodules like this? What would be the best practice for linking multiple projects together?
I'm also wondering how I could build only the sub-project that has changed instead of building all of them.
I have seen that it might be possible to link the pipelines of the subprojects together and use artifacts to pass the needed files, but I'm not sure this is a good solution.
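For the single-repo case, I know GitLab can limit builds to changed paths with `rules:changes`; a sketch of what I mean (paths are just examples):

```yaml
# build the user app only when its files (or the shared chart) change
build-user-app:
  stage: build
  rules:
    - changes:
        - user-app/**/*
        - chart/**/*
  script:
    - docker build -t $CI_REGISTRY_IMAGE/user-app:$CI_COMMIT_SHORT_SHA user-app/
```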
I'll give some general suggestions and a specific example of how we are doing it - hope it helps.
- First of all, your idea to have a separate deployment repository is good, and it is considered best practice in most cases.
- Next, it makes more sense to organize things at the artifact level rather than the code level - essentially, your deployment repository would hold records of artifacts rather than code pointers via, say, git submodules. Personally, I try to stay away from git submodules whenever possible, as they complicate things a lot.
- To elaborate on the above: each of your existing individual repositories would produce artifacts (Docker images) via CI, push those artifacts to some registry, and then you would reference those artifacts in your deployment repository.
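As a sketch, each app repository's CI could publish its image to the registry on version tags like this (job names and the Docker image versions are illustrative; the predefined `CI_*` variables are standard GitLab ones):

```yaml
# .gitlab-ci.yml in each app repository (illustrative)
build-and-push:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  rules:
    - if: $CI_COMMIT_TAG    # publish only when a version tag is pushed
```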
- Now, the basic way of maintaining artifact changes in the deployment repository is manual edits of artifact identifiers, which can be simplified by referencing via tags. I.e., you reference a specific artifact as myartifact:2, and then all version 2 artifacts automatically make it into the release: say you start with 2.0.0, then publish 2.0.1 - it will be picked up accordingly. Note that referencing via tags is fine for testing, but not recommended for production - for production we always recommend pinning a specific image via its sha256 digest, which ensures uniqueness and immutability.
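In Helm values files, the tag-vs-digest difference could look like this (registry and image names are made up, and the digest is a placeholder):

```yaml
# values-test.yaml - floating tag, convenient for test environments
backend:
  image: registry.example.com/myapp/backend:2      # picks up any 2.x.y

# values-prod.yaml - immutable digest pin for production
backend:
  image: registry.example.com/myapp/backend@sha256:<digest>
```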
- To simplify the above, people may put all artifact images into Helm values files or use a Kustomization with all image identifiers in one place. Some pull-request logic may also be put on top (which is not always needed with the fully automated approach below).
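With Kustomize, all image identifiers can live in a single `kustomization.yaml` in the deployment repository via the `images` transformer (names are illustrative; the digest is a placeholder):

```yaml
# kustomization.yaml in the deployment repository (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base/
images:
  - name: myapp/backend
    newName: registry.example.com/myapp/backend
    newTag: 2.0.1                 # fine for test environments
  - name: myapp/user-app
    newName: registry.example.com/myapp/user-app
    digest: sha256:<digest>       # pin digests for production
```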
- Finally, there are fully automated ways. The way we do it is via Reliza Hub - a tool we are working on - to glue everything together. In a nutshell, it tells your Helm chart which artifacts to use in an automated way.
- Here is my toy project that shows various paths around the concepts above: https://github.com/taleodor/mafia-deployment