
How to deploy to any cloud using GitLab for GitOps

Source: GitLab Blog | Author: Sara Kassabian

Many DevOps tools support only parts of GitOps, but GitLab is the only tool that, as a single collaboration platform, takes you from idea through application code to deployment. In a three-part blog and video series, Brad Downey, strategic account leader at GitLab, explains how GitOps works. In this third part of the series, Brad shows how GitLab's Auto DevOps feature, using Kubernetes and GitLab CI, deploys to any cloud. New to the series? Also check out "How GitLab powers your GitOps process" and "How to use GitLab for your infrastructure".

GitOps is a workflow that uses a Git repository as the single source of truth for all infrastructure and application deployments. There are benefits beyond version control when using GitLab as the core repository. GitLab is a powerful tool that empowers your team to practice good GitOps procedures through its collaborative platform, ease of infrastructure deployments, as well as its multicloud compatibility.

Brad Downey, strategic account leader at GitLab, shows how to deploy applications to three Kubernetes clusters using one common workflow. We successfully deploy the applications using Auto DevOps, powered by GitLab CI, with Helm and Kubernetes.

Watch the demonstration to see how to replicate this application deployment process.

GitOps demo overview

First, we open the gitops-demo group README.md file, which shows the structure of the gitops-demo group. There are a few projects and two subgroups: infrastructure and applications. The previous blog post focused on infrastructure, while this demonstration dives deeper into application deployments.

GitOps demo map
This image is a map of the repository that is the basis for our GitOps demo.

Inside the applications folder

Brad created four different applications for this demo: my-asp-net-app1; my-spring-app2; my-ruby-app3; my-python-app4. There are also three different Kubernetes clusters, which each correspond to a different cloud environment: Microsoft Azure (AKS), Amazon (EKS), and Google Cloud (GKE).

By clicking the Kubernetes button in the left-hand corner, we can see that all of the Kubernetes clusters are registered to GitLab. The environment scopes represent which application is deployed to each cloud.
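The exact scope values are not spelled out in the demo, but given the environment names used later in the CI files (aks/production, eks/production), the cluster-to-scope mapping presumably looks something like this illustrative sketch:

AKS cluster -> environment scope: aks/*
EKS cluster -> environment scope: eks/*
GKE cluster -> environment scope: gke/*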

ASP.NET application on AKS

Auto DevOps at work

The first example is an ASP.NET application, which is the equivalent of a Hello, World app – in other words, it’s nothing too fancy. There are a few modifications specific to how we’re deploying this application, and they live in the application’s CI file, which is where the magic of GitLab’s Auto DevOps feature lives.

“The first thing we do is import the main Auto DevOps template,” says Brad. “We set a couple of variables, override a few commands for stages that are more applicable to .net code, and finally at the bottom here, I see that I’ve set my environment automatically to deploy production into AKS.”

include:
  - template: Auto-DevOps.gitlab-ci.yml        # pull in the standard Auto DevOps pipeline

variables:
  DEPENDENCY_SCANNING_DISABLED: "true"         # skip dependency scanning for this project

test:
  stage: test
  image: microsoft/dotnet:latest               # run unit tests against the .NET SDK image
  script:
    - 'dotnet test --no-restore'

license_management:
  stage: test
  before_script:                               # install the .NET runtime/SDK before the license scan runs
    - sudo apt-get update
    - sudo apt-get install -y dotnet-runtime-2.2 dotnet-sdk-2.2

production:
  environment:
    name: aks/production                       # matches the AKS cluster's environment scope
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN

The pipeline will run automatically after a commit and deploy successfully, but we can take a peek inside the pipeline to see how it works.

The stages of the pipeline from build to production for the ASP.NET application.

A quick look inside the pipeline shows that all the jobs passed successfully. The Auto DevOps feature kicked off the build stage, which creates a Docker container and uploads it to the built-in Docker registry. The test phase is comprehensive and includes container scanning, license management, SAST, and unit tests.

Click the security and license tabs to dive deeper into the testing results, if you wish. The application deploys to production in the final stage of the pipeline.
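If you want to skip any of these scans for a particular project, the Auto DevOps template can be toggled with variables in the same spirit as the DEPENDENCY_SCANNING_DISABLED setting shown earlier. The exact variable names depend on your GitLab version, so treat the following as an illustrative sketch:

variables:
  SAST_DISABLED: "true"                 # skip static application security testing
  CONTAINER_SCANNING_DISABLED: "true"   # skip scanning the built container image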

Inside the AKS cluster

The ASP.NET application deploys to the AKS cluster. Go to Operations > Environments to see the environment configured for this application. Metrics such as HTTP error rates, latency, and throughput are available because Prometheus is already integrated into GitLab’s Kubernetes clusters.

The environment can be launched directly from here; just click the live URL to see the application running on AKS. Beyond what is already configured in GitLab, there isn’t a lot of extra code needed to tell the application how to deploy: the Auto DevOps feature creates a Helm chart and deploys it to Kubernetes on AKS.

Java Spring application on GKE

Brad configured the Spring application similarly to the ASP.NET application, but used a Dockerfile, which he added to the root directory of the repository.

FROM maven:3-jdk-8-alpine
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN mvn package
ENV PORT 5000
EXPOSE $PORT
CMD [ "sh", "-c", "mvn -Dserver.port=${PORT} spring-boot:run" ]

The Spring application deployment differs from the ASP.NET application in one way: it does not need any overrides to the Auto DevOps template. It uses the default template, deploying to GKE instead of AKS. The workflow for application deployment is identical regardless of which cloud the application is being deployed to, which makes multi-cloud deployment easy.
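The Spring project’s CI file is not walked through in the video, but assuming it follows the same pattern as the AKS and EKS examples (with the GKE cluster scoped to gke/*), a minimal version would need little more than the template include and a production environment name, roughly like this sketch:

include:
  - template: Auto-DevOps.gitlab-ci.yml

production:
  environment:
    name: gke/production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN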

“I was able to produce a very similar build, test, and production run in this environment,” says Brad. “I get the same metrics, the error rates, latencies, throughputs, etc., and in this case, my application is running automatically in a container on Kubernetes in my GKE cluster.”

Python application on EKS

The final example is a Python application that deploys on EKS. The components are similar to the previous examples: a .gitlab-ci.yml file, with a few overrides, changes the production environment to EKS, and a Dockerfile defines the container image that the Helm chart deploys.

include:
  - template: Auto-DevOps.gitlab-ci.yml

test:
  image: python:3.7
  script:
    - pip install -r requirements.txt
    - pip install pylint
    - pylint main.py

production:
  environment:
    name: eks/production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN

The GitLab CI file tells the application to deploy on EKS.

FROM python:3.7
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]

The Dockerfile defines the container image that the Helm chart deploys.

Just like the previous examples, the pipeline runs through build, test, and production stages. Once the application is deployed to EKS, you can open the live link and see the Python application in your browser window.
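The live URL comes straight from the environment url set in the CI file. With hypothetical values plugged in, it resolves roughly like this:

# Hypothetical values, for illustration only
#   CI_PROJECT_PATH_SLUG     = gitops-demo-apps-my-python-app4
#   KUBE_INGRESS_BASE_DOMAIN = example.com
# so the production environment URL becomes:
#   http://gitops-demo-apps-my-python-app4.example.com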

In the end, we can see that GitLab is a true multi-cloud solution that enables businesses to make decisions about which cloud provider they want to use, without disparate workflows, while still maintaining good GitOps practices.

“All of this is a consistent interface with the exact same workflow, making it simple to deploy to any major cloud running Kubernetes integrated with GitLab.”

GitLab for GitOps

Good GitOps practice makes a Git repository the single source of truth for all of the code. In our demonstration, no code exists outside of a Git repository – infrastructure code lives in the infrastructure repo, and application code lives in the application repo.

While any Git repository could suffice for good GitOps procedure, there are few DevOps tools that truly encompass the core pillars of GitOps: collaboration, transparency in process, and version control.

Tools like epics, issues, and merge requests, which are the crux of GitLab, foster communication and transparency between teams. Infrastructure teams can build code using Terraform or Ansible templates in GitLab and deploy to the cloud using GitLab CI. GitLab is a true multi-cloud solution, allowing teams to deploy an application to any cloud service using GitLab CI and Kubernetes without having to significantly change their workflows.
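As an illustrative sketch only (not the configuration used in the gitops-demo group), an infrastructure pipeline driven by GitLab CI could run Terraform in stages like this, assuming a remote Terraform backend is configured so state persists between jobs:

image:
  name: hashicorp/terraform:light
  entrypoint: [""]            # override the image entrypoint so GitLab CI can run shell scripts

stages:
  - validate
  - plan
  - apply

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan           # hand the saved plan to the apply job

apply:
  stage: apply
  when: manual                # gate the actual infrastructure change behind a manual step
  script:
    - terraform init
    - terraform apply plan.tfplan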
