
Building and Deploying Micro Services with Azure Kubernetes Service (AKS) and Azure DevOps Part-4

It is best practice to create Azure components before building and releasing code. I would usually point you to the Azure portal for learning purposes, but in this scenario you need to work at the command line, so if you don’t have it installed already I would strongly advise you to install the Azure CLI.

Preparing the user machine

Azure CLI

Install Azure CLI 2.0 on Windows

In addition, here are two important Kubernetes tools to download:

Kubernetes Tools

Kubectl: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md (go to the newest “Client Binaries” section and grab it)

s113

Helm: https://github.com/kubernetes/helm/releases (go to this link and click on Windows (checksum); the Helm files will then be downloaded to your local machine)

(Or) you can click the below link directly to download the Helm files.

Windows (checksum): https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-windows-amd64.zip

s114

Note: This blog explains how to download and install Azure CLI, Kubectl and Helm files in Windows OS.

They are plain executables, so no installers. First create a folder on your PC or local machine at the following path: C:\k8s – this is where you are going to store and work with the Helm and Kubectl tools. Then copy and paste them into the folder you just created.

Now your folder path C:\k8s should be like this figure below:

s115

If you have .zip files in the above path, extract them into the same folder.

For the sake of simplicity you should add this folder path to your Environment Variables in Windows:

clip_image007

Kubectl is the “control app” for Kubernetes, and Helm is the package manager (or the equivalent of NuGet in the .NET world if you like) for Kubernetes.
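A quick way to sanity-check that both tools are reachable from your shell (assuming you added C:\k8s to your PATH as suggested above) is to ask each for its client version; the exact output depends on the binaries you downloaded:

    kubectl version --client

    helm version --client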

You might ask yourself why you need Kubernetes-native tools when you are using a managed Kubernetes service, and that is a valid question. A lot of managed services put an abstraction on top of the underlying service and hide the original in various ways. An important thing to understand about AKS is that while it certainly abstracts parts of the k8s setup, it does not hide the fact that it is k8s. This means that you can interact with the cluster as if you had set it up from scratch, which also means that if you already are a k8s ninja you can still feel at home; otherwise, it’s necessary to learn at least some of the tooling. You don’t need to aspire to ninja-level knowledge of Kubernetes; you just need to be able to follow along as I switch between “k8s” as shorthand and typing Kubernetes out properly.

Reference Links

Overview of kubectl

https://kubernetes.io/docs/reference/kubectl/overview/

Kubectl Cheat Sheet

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

The package manager for Kubernetes

https://docs.helm.sh/

Creating the AKS cluster using the Azure CLI

  1. Open the Command Prompt in administrator mode.
  2. The first step for using the Azure CLI is logging in:

    az login

    s117

    Note: This login process is implemented using the OAuth DeviceProfile flow. You can implement this if you like:

    https://blogs.msdn.microsoft.com/azuredev/2018/02/13/assisted-login-using-the-oauth-deviceprofile-flow/

  3. If you have multiple subscriptions in Azure, you might need to use az account list and az account set --subscription <Your Azure Subscription ID> to make sure you’re working on the right one (a quick sketch follows below):

    s118
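    For reference, a minimal sketch of that flow at the command line (fill in your own subscription ID):

    az account list --output table

    az account set --subscription "<Your Azure Subscription ID>"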

Create a resource group

  1. You need a resource group to contain the AKS instance. (Technically it doesn’t matter which location you deploy the resource group to, but I suggest going with one that is supported by AKS and sticking with it throughout the setup.)

    Create a resource group with the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed.

    When creating a resource group you are asked to specify a location; this is where your resources will live in Azure.

    The following command creates a resource group named KZEU-AKSDMO-SB-DEV-RGP-01 in the eastus location.

    az group create --name KZEU-AKSDMO-SB-DEV-RGP-01 --location eastus

    s74

Create AKS cluster

  1. Next you need to create the AKS cluster:

    Use the az aks create command to create an AKS cluster. The following command creates a cluster named KZEU-AKSDMO-SB-DEV-AKS-01 with one node.

    az aks create --name KZEU-AKSDMO-SB-DEV-AKS-01 --resource-group KZEU-AKSDMO-SB-DEV-RGP-01 --node-count 1 --generate-ssh-keys --kubernetes-version 1.11.2 --node-vm-size Standard_DS1_v2

    s75

You will notice that I chose a specific Kubernetes version, which may seem like a low-level detail for a service that should handle this for us. The reason is that Kubernetes is a fast-moving target, so you might need to be on a certain level for specific features and/or compatibility. 1.11.2 is, at the time of writing, the newest version AKS supports, so verify whether a newer one is available, or upgrade the version later. If you don’t specify the version you will be given the default, which was on the 1.7.x branch when I tested.

Since this is a managed service, there can be a delay between a new Kubernetes version being released and it becoming available in AKS, so the version is something you need to manage closely.
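If you want to check which Kubernetes versions are currently available before creating the cluster, the Azure CLI can list them per region; a sketch using the eastus location from this walkthrough:

    az aks get-versions --location eastus --output table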

To keep costs down in the test environment I’m only using one node, but in production you should ramp this up to at least 3 for high availability and scale (a scaling sketch follows below). I also specified the VM size as DS1_v2. (This is also the default if you omit the parameter.) I tried keeping costs low by going with the cheapest SKU I could locate, but the performance was abysmal when going through the cycle of pulling and deploying images repeatedly, so I upgraded.
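As an aside, ramping the node count up later is a single CLI call; a sketch using the names from this walkthrough:

    az aks scale --resource-group KZEU-AKSDMO-SB-DEV-RGP-01 --name KZEU-AKSDMO-SB-DEV-AKS-01 --node-count 3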

Speaking of nodes, I would like to highlight another piece of goodness with AKS. In a Kubernetes cluster you have management nodes and worker nodes. Just like you need more than one worker to distribute the load, you need multiple managers for high availability.

AKS takes care of the management, but not only is it abstracted away, you don’t pay for it either – you pay for the nodes, and that’s it.

After several minutes the command completes and returns JSON-formatted information about the cluster.

Important:

Save the JSON output in a separate text file, because you need the ssh keys later in this document.

Note:

If you get the below error while running the above az aks create command, simply re-run the same command:

Deployment failed. Error occurred in request.

s121

Note:

While creating AKS, a second resource group is created internally (named like MC_<Resource Group Name>_<AKS Name>_<Resource Group Location>), which consists of the virtual machine, virtual network, DNS zone, availability set, network interface, network security group, load balancer, public IP address, etc.
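If you are curious, you can list what landed in that auto-generated group; a sketch assuming the naming pattern above, which for this walkthrough would resolve to MC_KZEU-AKSDMO-SB-DEV-RGP-01_KZEU-AKSDMO-SB-DEV-AKS-01_eastus:

    az resource list --resource-group MC_KZEU-AKSDMO-SB-DEV-RGP-01_KZEU-AKSDMO-SB-DEV-AKS-01_eastus --output table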

Connect to the cluster

  1. To manage a Kubernetes cluster use kubectl, the Kubernetes command-line client.
  2. If you want to install it locally, use the az aks install-cli command.

    az aks install-cli

  3. Connect kubectl to your Kubernetes cluster by using the az aks get-credentials command and configure accordingly. This step downloads credentials and configures the Kubernetes CLI to use them.

    az aks get-credentials --resource-group KZEU-AKSDMO-SB-DEV-RGP-01 --name KZEU-AKSDMO-SB-DEV-AKS-01

    s76

  4. Verify the connection to your cluster via the kubectl get command, which returns a list of the cluster nodes. Note that it can take a few minutes for the nodes to appear.

    kubectl get nodes

    s77

  5. You should also check that you are able to open the Kubernetes dashboard by running

    az aks browse --resource-group KZEU-AKSDMO-SB-DEV-RGP-01 --name KZEU-AKSDMO-SB-DEV-AKS-01

    s78

This will launch a browser tab with a graphical representation:

s125

Kubectl also allows for connecting to the dashboard (kubectl proxy); however, when using the Azure CLI everything is automatically piggybacked onto the Azure session you have. You’ll notice that the address is 127.0.0.1 even though it isn’t local; that’s just a local proxy address through which the traffic is tunneled to Azure.

Configure Helm on the local machine

  1. Helm needs to be primed as well to be ready for later. Since you have a working cluster, as verified in the previous step, helm will automagically work out where to apply its logic. (You can have multiple clusters, so part of the point of verifying that the cluster is OK is to make sure you’re connected to the right one.) Apply the following:

    helm.exe init

    helm.exe repo update

In my case helm.exe is available in the following path. I used the complete helm.exe path for executing the above commands in the command prompt:

clip_image026

s126
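To double-check that the init worked, you can ask Helm for both client and server versions; with Helm 2 the server side is the Tiller pod that helm init installs into the kube-system namespace:

    helm.exe version

    kubectl get pods --namespace kube-system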

The cluster should now be more or less ready to have images deployed.

Much as we refer to images when building virtual machines, Docker uses the same concept, although it differs slightly at the implementation level. To get running containers inside your Kubernetes cluster you need a repository for these images. The default public repo is Docker Hub, and images stored there would be entirely suitable for your AKS cluster. But we don’t want to make our images available on the Internet for now, so we want a private repository. In the Azure ecosystem this is delivered by Azure Container Registry (ACR).

You could easily create this in the portal, but for coherency let’s do this through the CLI as well. You could throw this into the AKS resource group, but we will create a new group for our registry, since a registry is logically speaking a separate entity; that also makes it more obvious that it can be re-used across clusters.

Create an Azure Container Registry (ACR) using the Azure CLI

Create a resource group

  1. Create a resource group with the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed.

    When creating a resource group you are asked to specify a location; this is where your resources will live in Azure.

    The following command creates a resource group named KZEU-AKSDMO-SB-DEV-RGP-02 in the eastus location.

    az group create --name KZEU-AKSDMO-SB-DEV-RGP-02 --location eastus

    s79

Create a container registry

  1. Here you create a Basic registry. Azure Container Registry is available in several different SKUs as described briefly in the following table. For extended details on each, see Container registry SKUs.

    • Basic: A cost-optimized entry point for developers learning about Azure Container Registry. Basic registries have the same programmatic capabilities as Standard and Premium (Azure Active Directory authentication integration, image deletion, and webhooks); however, there are size and usage constraints.

    • Standard: The Standard registry offers the same capabilities as Basic, but with increased storage limits and image throughput. Standard registries should satisfy the needs of most production scenarios.

    • Premium: Premium registries have higher limits on constraints, such as storage and concurrent operations, including enhanced storage capabilities to support high-volume scenarios. In addition to higher image throughput capacity, Premium adds features like geo-replication for managing a single registry across multiple regions, maintaining a network-close registry to each deployment.

  2. Create an ACR instance using the az acr create command.

    The registry name must be unique within Azure and contain 5-50 alphanumeric characters. In the following command, KZEUAKSDMOSBDEVACR01 is used. Update this to a unique value.

    az acr create --resource-group KZEU-AKSDMO-SB-DEV-RGP-02 --name KZEUAKSDMOSBDEVACR01 --sku Basic

    When the registry is created, the output is similar to the following:

    {
      "additionalProperties": {},
      "adminUserEnabled": false,
      "creationDate": "2018-06-28T06:07:11.755241+00:00",
      "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxx/resourceGroups/KZEU-AKSDMO-SB-DEV-RGP-02/providers/Microsoft.ContainerRegistry/registries/KZEUAKSDMOSBDEVACR01",
      "location": "eastus",
      "loginServer": "kzeuaksdmosbdevacr01.azurecr.io",
      "name": "KZEUAKSDMOSBDEVACR01",
      "provisioningState": "Succeeded",
      "resourceGroup": "KZEU-AKSDMO-SB-DEV-RGP-02",
      "sku": {
        "additionalProperties": {},
        "name": "Basic",
        "tier": "Basic"
      },
      "status": null,
      "storageAccount": null,
      "tags": {},
      "type": "Microsoft.ContainerRegistry/registries"
    }

    s80

Authenticate with Azure Container Registry from Azure Kubernetes Service

  1. While you can now browse the contents of the registry in the portal, that does not mean that your cluster can do so. As indicated by the message upon successful creation of the ACR component, we need to create a service principal that will be used by Kubernetes, and we need to give this service principal access to ACR.
  2. If you’re new to the concept of service principals, you can refer to the links below:

    Authenticate with a private Docker container registry

    https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication

    Azure Container Registry authentication with service principals

    https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-service-principal

    Authenticate with Azure Container Registry from Azure Kubernetes Service

    https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks

Grant AKS access to ACR

  1. When an AKS cluster is created, a service principal is also created to manage cluster operability with Azure resources. This service principal can also be used for authentication with an ACR registry. To do so, a role assignment needs to be created to grant the service principal read access to the ACR resource.
  2. The following sample can be used to complete this operation.

    Open the Windows PowerShell ISE (x86) on your local machine and run the below script.

    #Sign in using Interactive Mode using your login credentials
    
    az login
    
    #Sign in using Interactive Mode with older experience using your login credentials
    
    #az login --use-device-code
    
    #Set the current azure subscription
    
    az account set --subscription '<Your Azure Subscription ID>'
    
    #See your current azure subscription
    
    #az account show
    
    #Get the id of the service principal configured for AKS
    
    $AKS_RESOURCE_GROUP = "KZEU-AKSDMO-SB-DEV-RGP-01"
    
    $AKS_CLUSTER_NAME = "KZEU-AKSDMO-SB-DEV-AKS-01"
    
    $CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
    
    # Get the ACR registry resource id
    
    $ACR_NAME = "KZEUAKSDMOSBDEVACR01"
    
    $ACR_RESOURCE_GROUP = "KZEU-AKSDMO-SB-DEV-RGP-02"
    
    $ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
    
    #Create role assignment
    
    az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID
    

    s81

Output of the above PowerShell Script:

s82
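If you want to confirm the assignment took effect, you can list the role assignments for the service principal; a sketch reusing the variables from the script above:

    az role assignment list --assignee $CLIENT_ID --scope $ACR_ID --output table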

Docker application DevOps workflow with Microsoft tools

Visual Studio, Azure DevOps and Application Insights provide a comprehensive ecosystem for development and IT operations that allows your team to manage projects and rapidly build, test, and deploy containerized applications:

s83

Microsoft tools can automate the pipeline for specific implementations of containerized applications (Docker, .NET Core, or any combination with other platforms): from global builds and Continuous Integration (CI) and tests with Azure DevOps, to Continuous Deployment (CD) to Docker environments (Dev/Staging/Production), to providing analytics about the services back to the development team through Application Insights. Every code commit can trigger a build (CI) and automatically deploy the services to specific containerized environments (CD).

Developers and testers can easily and quickly provision production-like dev and test environments based on Docker by using templates from Azure.

The complexity of containerized application development increases steadily with business complexity and scalability needs; applications based on a microservices architecture are a good example. To succeed in that kind of environment, your project must automate the whole lifecycle: not only build and deployment, but also management of versions along with the collection of telemetry. In summary, Azure DevOps and Azure offer the following capabilities:

  • Azure DevOps source code management (based on Git or Team Foundation Version Control), agile planning (Agile, Scrum, and CMMI are supported), continuous integration, release management, and other tools for agile teams.
  • Azure DevOps includes a powerful and growing ecosystem of first- and third-party extensions that allow you to easily construct a continuous integration, build, test, delivery, and release management pipeline for microservices.
  • Run automated tests as part of your build pipeline in Azure DevOps.
  • Azure DevOps tightens the DevOps lifecycle with delivery to multiple environments – not just production environments, but also testing, including A/B experimentation, canary releases, etc.
  • Docker, Azure Container Registry and Azure Resource Manager. Organizations can easily provision Docker containers from private images stored in Azure Container Registry along with any dependency on Azure components (Data, PaaS, etc.) using Azure Resource Manager (ARM) templates with tools they are already comfortable working with.

Steps in the outer-loop DevOps workflow for a Docker application

The outer-loop workflow is represented end-to-end in the figure above. Now, let’s drill down on each of its steps.

Step 1. Inner loop development workflow for Docker applications

This step was explained in detail in the Part-2 blog, but it is also where the outer loop starts: at the precise moment a developer pushes code to the source control management system (like Git), triggering Continuous Integration (CI) pipeline executions.

Share your code with Visual Studio 2017 and Azure DevOps Git

In the Part-3 blog, you already published your code into Azure DevOps by creating a new team project named AKSDemo.

Changing branch from master to dev in VS2017

Before making changes to your project, first you need to change the branch from master to dev.

  1. Visual Studio uses the Sync view in Team Explorer to fetch changes. Changes downloaded by fetch are not applied until you Pull or Sync the changes.
  2. Open your AKSDemo solution in VS2017, then open up the Synchronization view in Team Explorer by selecting the Home icon and choosing Sync:

    s84

  3. Choose Fetch to update the incoming commits list. (There are two Fetch links, one near the top and one in the Incoming Commits section. You can use either one as they both do the same thing.):

    image

  4. You can review the results of the fetch operation in the Incoming Commits section. Right now you don’t have any incoming commits, but the new branch created in the Part-3 blog was brought down by the fetch operation.

    Note:

    You will need to fetch the branch before you can see it and swap to it in your local repo.

  5. After that, right click on the master branch and choose Manage Branches:

    image

  6. Click on the Refresh icon located at the top, then expand remotes/origin and double click on the dev branch. With that you are automatically switched from the master branch to the dev branch:

    image

  7. Now you can work on the dev branch.
Commit and push updates

Changes in APIApplication project

  1. Right click on your APIApplication project then click on Add and choose the New Folder:
    image
  • Enter the New Folder name as Utils.
  • Right click on the Utils folder, then click on Add and choose New Item:

    s231
  • Complete the Add New Item dialog:
    • In the left pane, tap ASP.NET Core
    • In the center pane, tap Text File
    • Name of the Text File as apiapplication.yaml
    • Click Add button

      s232

  • Open the apiapplication.yaml file under the Utils folder of your APIApplication project; then add the below lines of code:
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: apiapplication
    spec:
      template:
        metadata:
          labels:
            app: apiapplication
        spec:
          containers:
          - name: apiapplication
            image: kzeuaksdmosbdevacr01.azurecr.io/apiapplication:#{Version}#
            env:
            - name: ConnectionStrings_DBConnection
              value: "Server=tcp:kzeu-aksdmo-sb-dev-sq-01.database.windows.net,1433;Initial Catalog=KZEU-AKSDMO-SB-DEV-SDB-01;Persist Security Info=False;User ID=kishore;Password=iSMAC2016;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
            ports:
            - containerPort: 80
            imagePullPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: apiapplication
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: apiapplication
    
    Be aware – YAML files want spaces, not tabs, and the indent hierarchy must be exactly as above. If you get the indentation wrong, it will not work.

Note:

If you can’t validate the code inside .yml files, you can refer to this link.

s86

The above yaml contains type: LoadBalancer. This means that after the container is deployed to Kubernetes, Kubernetes assigns a public IP address to this (apiapplication) service.

I’m not delving into the explanations here, but as you can probably figure out this defines some of the necessary things to describe the container.
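One thing worth knowing about LoadBalancer services: the external IP is not available immediately after deployment. Later on, once the container is deployed, you can watch the service until the EXTERNAL-IP column changes from pending to a real address; for example:

    kubectl get service apiapplication --watch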

  1. You also need a slightly different Dockerfile for this, so add one called Dockerfile.CI in your APIApplication project.
  2. Right click on your APIApplication project, then click on Add and choose the Add New Item:

    image

  3. Complete the Add New Item dialog:
    • In the left pane, tap ASP.NET Core
    • In the center pane, tap Text File
    • Name the Text File as Dockerfile.CI
    • Click Add button.s235

  • Open the Dockerfile.CI file (alongside the main Dockerfile) of your APIApplication project and then add the below lines of code:

    FROM microsoft/aspnetcore-build:2.0 AS build-env
    WORKDIR /app
    
    # Copy csproj and restore as distinct layers
    COPY *.csproj ./
    RUN dotnet restore
    
    # Copy everything else and build
    COPY . ./
    RUN dotnet publish -c Release -o out
    
    # Build runtime image
    FROM microsoft/aspnetcore:2.0
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT ["dotnet", "APIApplication.dll"]
    

    image

Changes in WebApplication project

  1. Right click on your WebApplication project, then click on Add and choose the New Folder:

    image
  2. Enter the New Folder name as Utils.
  3. Right click on the Utils folder, then click on Add and choose New Item:

    s241
  4. Complete the Add New Item dialog:
    • In the left pane, tap ASP.NET Core
    • In the center pane, tap Text File
    • Name of the Text File as webapplication.yaml
    • Click Add button.

      s239
  5. Open the webapplication.yaml file under the Utils folder of your WebApplication project and then add the below lines of code:

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: webapplication
    spec:
      template:
        metadata:
          labels:
            app: webapplication
        spec:
          containers:
          - name: webapplication
            image: kzeuaksdmosbdevacr01.azurecr.io/webapplication:#{Version}#
            env:
            - name: AppSettings_APIURL
              value: http://40.87.88.177/
            ports:
            - containerPort: 80
            imagePullPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: webapplication
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: webapplication
    

    Be aware – YAML files want spaces, not tabs, and the indent hierarchy must be exactly as above. If you get the indentation wrong, it will not work.

    Note:

    If you can’t validate the code inside .yml files, you can refer to this link.
    image

    The above yaml contains an environment variable named AppSettings_APIURL and also contains type: LoadBalancer. This means that after the container is deployed to Kubernetes, Kubernetes assigns a public IP address to this (webapplication) service.

Note:

If you want more information about handling settings and environment variables in a Kubernetes deployment file, you can refer to the below link:

https://pascalnaber.wordpress.com/2017/11/29/handling-settings-and-environment-variables-of-your-net-core-2-application-hosted-in-a-docker-container-during-development-and-on-kubernetes-helm-to-the-resque/

  • I’m not delving into the explanations here, but as you can probably figure out this defines some of the necessary things to describe the container.
  • We also need a slightly different Dockerfile for this, so add one called Dockerfile.CI in your WebApplication project.
  • Right click on your WebApplication project, then click on Add and choose the Add New Item:

    s241

  • Complete the Add New Item dialog:
    • In the left pane, tap ASP.NET Core
    • In the center pane, tap Text File
    • Name of the Text File as Dockerfile.CI
    • Click Add button.

      s235

  • Open the Dockerfile.CI file (alongside the main Dockerfile) of your WebApplication project and then add the below lines of code:
    FROM microsoft/aspnetcore-build:2.0 AS build-env
    WORKDIR /app
    
    # Copy csproj and restore as distinct layers
    COPY *.csproj ./
    RUN dotnet restore
    
    # Copy everything else and build
    COPY . ./
    RUN dotnet publish -c Release -o out
    
    # Build runtime image
    FROM microsoft/aspnetcore:2.0
    WORKDIR /app
    COPY --from=build-env /app/out .
    ENTRYPOINT ["dotnet", "WebApplication.dll"]
    

    image

    • As you write your code, your changes are automatically tracked by Visual Studio. You can commit changes to your local Git repository by selecting the pending changes icon clip_image069 from the status bar.
  • On the Changes view in Team Explorer, add a message describing your update and commit your changes:

    image

  • Select the unpublished changes status bar icon clip_image072 (or select Sync from the Home view in Team Explorer). Select Push to update your code in Azure DevOps:

    image
Get changes from others

Sync your local repo with changes from your team as they make updates. For that you can refer to this link.

Step 2. SCC integration and management with Azure DevOps and Git

At this step, you need a version control system to gather a consolidated version of all the code coming from the different developers on the team.

Even though SCC and source-code management might sound trivial to most developers, when developing Docker applications in a DevOps lifecycle it is critical to highlight that the Docker images with the application must not be submitted directly to the global Docker registry (like Azure Container Registry or Docker Hub) from the developer’s machine.

On the contrary, the Docker images to be released and deployed to production environments have to be created based on the source code that is being integrated in your global build/CI pipeline of your source-code repository (like Git).

The local images generated by developers should be used only by the developer when testing on his/her own machine. This is why it is critical to have the DevOps pipeline triggered from the SCC code.

Microsoft Azure DevOps supports Git and Team Foundation Version Control: you can choose between them and use it for an end-to-end Microsoft experience. However, you can also manage your code in external repositories (like GitHub, on-premises Git repos, or Subversion) and still be able to connect to them and get the code as the starting point for your DevOps CI pipeline.

Note:

Here, we are currently using Azure DevOps and Git for managing the source code pushed by developers into a specified repository (for example AKSDemo). We are also creating the Build and Release Pipelines here.

Alright, now you have everything set up for creating specifications of your pipeline from Visual Studio to running containers. We need two definitions for this:

  • How Azure DevOps should build and push the resulting artifacts to ACR.
  • How AKS should deploy the containers.

Step 3. Build, CI, Integrate with Azure DevOps and Docker

There are two ways you can approach handling of the building process.

You can push the code into the repo, build the code “directly”, pack it into a Docker image, and push the result to ACR.

The other approach is to push the code to the repo, “inject” the code into a Docker image for the build, have the output in a new Docker image, and then push the result to ACR. This would for instance allow you to build things that aren’t supported natively by Azure DevOps.

I seem to get the second approach to run slightly faster, so here we will choose the second approach; a local sketch of that flow follows below.
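To make the second approach concrete: the Dockerfile.CI files you added earlier perform the restore, build and publish inside a build container, so the agent itself only needs Docker. Roughly the same thing the pipeline will do can be sketched locally, assuming you run it from the repository root and use a throwaway tag:

    docker build -f APIApplication/Dockerfile.CI -t apiapplication:test APIApplication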

The AKS part and the Azure DevOps part take turns in this game. First you built Continuous Integration (CI) on Azure DevOps, then you built a Kubernetes cluster (AKS), and now the table turns towards Azure DevOps again for Continuous Deployment (CD). So, before building the CD pipeline, let’s set up a CI pipeline.

Building a CI pipeline
  1. Log in to your Azure DevOps account and select Azure Pipelines; it should automatically take you to the Builds page:

    s88

  2. Create a New build pipeline:

    s89

  3. Click on Use the visual designer to create build pipeline without YAML:

    s24

  4. Make sure the Source, Team project, Repository and Default branch match the ones you are working with, as shown in the figure below, and click on the Continue button:

    s25

  5. Choose template as ASP.NET Core:

    s90

  6. On the left side, select Pipeline and specify a Name of your choice. For the Agent pool, select Hosted Ubuntu 1604:

    s91

  7. After that click on Get sources; all values are selected by default for getting the code from the specified repository and branch. If you want to change the default values, you can do so here:

    s92

  8. By default the ASP.NET Core template provides the Restore, Build, Test, Publish and Publish Artifact tasks. You can remove the Test task from the current build pipeline, because you are not doing any testing.
  9. Right click on the Test task and choose Remove selected task(s):

    s93

  10. There is no need to modify the Restore and Build tasks of .NET Core.
Publish
  1. Next, on the left side, select your new Publish task:

    s94

  2. Now you need to modify the Publish task to publish your .NET Core project using the .NET Core task with the Command set to publish.
  3. Configure the .NET Core task as follows:
    • Display name: Publish
    • Command: publish
    • Path to project(s): The path to the csproj file(s) to use. You can use wildcards (e.g. **/*.csproj for all .csproj files in all subfolders). Example: **/*.csproj
    • Uncheck “Publish Web Projects“.
    • Arguments: Arguments to the selected command. For example: --configuration $(BuildConfiguration) --output $(build.artifactstagingdirectory)
    • Uncheck “Zip Published Projects“.
    • Uncheck “Add project name to publish path“.

      s95
  4. Next, you need to add four Docker tasks (not Docker Compose). For the Docker part, you need to build and push images to the Azure Container Registry. Currently the AKSDemo project contains two applications, WebApplication and APIApplication, so you need to add two Docker build tasks and two Docker push tasks.
Build an image for APIApplication
  1. On the Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Docker” in the search box and click on the Add button of Docker build task as shown in the figure below:

    s96

  2. On the left side, select your new Docker task:

    s97

  3. First, configure the Docker tasks for the APIApplication. For this you need an Azure Resource Manager subscription connection. If your Azure DevOps account is already linked to an Azure subscription, it will automatically appear in the Azure subscription drop down as shown in the screenshot below. Otherwise click on Manage:

    s98

Azure Resource Manager Endpoint

  1. When you click on the Manage link in the above step, it navigates to the Settings tab; select the New service connection option on the left pane:

    s56

  2. When you click on New Service Endpoint, a dropdown list opens. From that list select the Azure Resource Manager endpoint:

    s57

  3. If your work is already backed by an Azure service principal, give a name to your service endpoint and select the Azure subscription from the dropdown:

    s58

  4. If not, click on the hyperlink ‘use the full version of the service connection dialog’ as shown in the above screenshot.
  5. If you have your service principal details, you can enter them directly and click on Verify connection. If the connection is verified successfully, click on the OK button. Otherwise you can reference the service connections link provided on the same popup, as marked in the screenshot below:

    s59

  6. Once you have added a service principal successfully, you will see something like the figure below:

    s60

  7. Now go back to your build definition page and click on the refresh icon. The Azure subscription dropdown should now display the Azure Resource Manager endpoint you created in the previous step:

    s99

Configure first Docker task

  1. Configure the Docker task for Build an Image of APIApplication as follows:
    • Display name: Build an image “APIApplication”
    • Container Registry Type: Select a Container Registry Type. For example: Azure Container Registry
    • Azure Subscription: Select an Azure subscription
    • Azure Container Registry: Select an Azure Container Registry. For example: KZEUAKSDMOSBDEVACR01
    • Command: Select a Docker command. For example: build
    • Dockerfile: Path to the Docker file to use. Must be within the Docker build context. For example: APIApplication/Dockerfile.CI
    • Check the Use Default Build Context: Set the build context to the directory that contains the Docker file.
    • Image Name: Name of the Docker image to build. For example: kzeuaksdmosbdevacr01.azurecr.io/apiapplication:$(Build.BuildId)
    • Check the Qualify Image Name: Qualify the image name with the Docker registry connection’s hostname if not otherwise specified.
    • Check the Include Latest Tag. (While we’re not actively using it we still want to have it available just in case.): Include the ‘latest’ tag when building or pushing the Docker image:

      s100

Push an image of APIApplication
  1. Next add one more Docker task for pushing the image of APIApplication into Azure Container Registry. For that select the Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Docker” in the search box and click on the Add button of Docker build task, as shown in the figure below:

    s101

  2. On the left side, select your new Docker task:

    s102

Configure second Docker task

  1. Configure the above Docker task for Push an Image of APIApplication as follows:
    • Display name: Push an image “APIApplication”
    • Container registry type: Select a Container Registry Type. For example: Azure Container Registry
    • Azure subscription: Select an Azure subscription
    • Azure container registry: Select an Azure Container Registry. For example: KZEUAKSDMOSBDEVACR01
    • Command: Select a Docker action. For example: push
    • Image name: Name of the Docker image to push. For example: kzeuaksdmosbdevacr01.azurecr.io/apiapplication:$(Build.BuildId)
    • Check the Qualify image name: Qualify the image name with the Docker registry connection’s hostname, if not otherwise specified.

      s103
Build an Image for WebApplication
  1. Next, add one more Docker task for building the image of WebApplication. For that select the Tasks tab, then select the plus sign (+) to add a task to Job 1. On the right side, type “Docker” in the search box and click on the Add button of Docker build task, as shown in the figure below:

    s104

  2. On the left side, select your new Docker task:

    s105

Configure third Docker task

  1. Configure the above Docker task for Build an Image of WebApplication as follows:
    • Display name: Build an image “WebApplication”
    • Container registry type: Select a Container Registry Type. For example: Azure Container Registry
    • Azure subscription: Select an Azure subscription
    • Azure container registry: Select an Azure Container Registry. For example: KZEUAKSDMOSBDEVACR01
    • Command: Select a Docker command. For example: build
    • Dockerfile: Path to the Docker file to use. Must be within the Docker build context. For example: WebApplication/Dockerfile.CI
    • Check the Use default build context: Set the build context to the directory that contains the Docker file.
    • Image name: Name of the Docker image to build. For example: kzeuaksdmosbdevacr01.azurecr.io/webapplication:$(Build.BuildId)
    • Check the Qualify image name: Qualify the image name with the Docker registry connection’s hostname if not otherwise specified.
    • Check the Include latest tag. (While we’re not actively using it we still want to have it available just in case.): Include the ‘latest’ tag when building or pushing the Docker image.

      s106

Push an image of WebApplication
  1. Next, add one more Docker task for pushing the image of WebApplication into Azure Container Registry. For that select Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Docker” in the search box and click on the Add button of Docker build task as shown in the figure below:

    s107

  2. On the left side, select your new Docker task.

    s108

Configure fourth Docker task

  1. Configure the above Docker task for Push an Image of WebApplication as follows:
    • Display name: Push an image “WebApplication”
    • Container registry type: Select a Container Registry Type. For example: Azure Container Registry
    • Azure subscription: Select an Azure subscription
    • Azure container registry: Select an Azure Container Registry. For example: KZEUAKSDMOSBDEVACR01
    • Command: Select a Docker action. For example:push
    • Image name: Name of the Docker image to push. For example: kzeuaksdmosbdevacr01.azurecr.io/webapplication:$(Build.BuildId)
    • Check the Qualify image name: Qualify the image name with the Docker registry connection’s host name, if not otherwise specified.

      s109

  2. Click on Save:

    image

Install Replace Tokens Azure DevOps task from Marketplace
  1. You used a variable named Version in webapplication.yaml and apiapplication.yaml for the image tag, but it doesn’t automatically get translated, so you need a separate task for this: Replace Tokens. It isn’t built into Azure DevOps, so you need to add it from the Azure DevOps Marketplace.
  2. For that click on Browse Marketplace:

    s110

  3. Next, it navigates to a new page for the Marketplace: enter “Replace Tokens” in the search box and select the topmost result from the search list, as shown in the figure below:

    s111

  4. Click on Get it free button:

    s112

  5. Select an Azure DevOps Organization and then click on Install:

    s113

    Note:

    If you want more information about Replace Tokens task, you can refer to this link

    https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens

  6. After installing the above extension from the Azure DevOps Marketplace, go back to your build pipeline and refresh your current browser page.
Replace tokens in apiapplication.yaml
  1. On the Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Replace Tokens” in the search box and click on the Add button of Replace Tokens build task, as shown in the figure below:

    s115

  2. On the left side, select your new Replace Tokens task:

    s116

  3. Configure the above Replace Tokens task for replacing the tokens in apiapplication.yaml file as follows:
    • Display name: Replace tokens in apiapplication.yaml
    • Root directory: Base directory for searching files. If not specified, the default working directory will be used. For Example: APIApplication/Utils
    • Target files: Absolute or relative comma or newline-separated paths to the files to replace tokens. Wildcards can be used. For Example: apiapplication.yaml
    • Leave the remaining parameter values as default values.

      s117

Replace tokens in webapplication.yaml
  1. Next, add one more Replace Tokens task for replacing tokens in the webapplication.yaml file. Select the Tasks tab and then select the plus sign (+) to add a task to Job 1. On the right side, type “Replace Tokens” in the search box and click on the Add button of the Replace Tokens build task, as shown in the figure below:

    s114

  2. On the left side, select your new Replace Tokens task:

    s118

  • Configure the above Replace Tokens task for replacing the tokens in the webapplication.yaml file as follows:
    • Display name: Replace tokens in webapplication.yaml
    • Root directory: Base directory for searching files. If not specified, the default working directory will be used. For Example: WebApplication/Utils
    • Target files: Absolute or relative comma or newline-separated paths to the files to replace tokens. Wildcards can be used. For Example: webapplication.yaml
    • Leave the remaining parameter values as default values.

      s119

  • To define which variable to replace, head to the Variables tab and add the name “Version” with value $(Build.BuildId); a before/after sketch follows below:

    s120
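For clarity, here is what the task does to the image line in apiapplication.yaml; the build id shown is just an illustrative value:

    # in source control:
    image: kzeuaksdmosbdevacr01.azurecr.io/apiapplication:#{Version}#

    # after Replace Tokens runs with Version = $(Build.BuildId), e.g. 113:
    image: kzeuaksdmosbdevacr01.azurecr.io/apiapplication:113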

Copy Files
  1. Go back to the Tasks tab and add a Copy Files task for copying files from a source folder to a target folder using match patterns. Select the Tasks tab and then select the plus sign (+) to add a task to Job 1. On the right side, type “Copy Files” in the search box and click on the Add button of the Copy Files build task, as shown in the figure below:

    s121

  2. On the left side, select your new Copy Files task:

    s122

  3. Configure the above Copy Files task for copying the files from the source folder to the target folder using match patterns as follows:
    • Display name: Copy Files to: $(build.artifactstagingdirectory)
    • Source Folder: The source folder that the copy pattern(s) will be run from. Empty is the root of the repo. For example: $(build.sourcesdirectory)
    • Contents: File paths to include as part of the copy. Supports multiple lines of match patterns. For example: **
    • Target Folder: The target folder or UNC path files will copy to. For example: $(build.artifactstagingdirectory)

      s123

Note:

If you want more information about Copy Files task you can refer this link

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/copy-files?view=vsts

Publish Build Artifacts
  1. This Publish Build Artifacts task is automatically added whenever you choose ASP.NET Core as a template.
  2. Configure the above Publish Build Artifacts task as follows:
    • Display name: Publish Artifact
    • Path to publish: The folder or file path to publish. This can be a fully-qualified path or a path relative to the root of the repository. Wildcards are not supported. For example: $(build.artifactstagingdirectory)
    • Artifact name: The name of the artifact to create in the publish location. For example: drop
    • Artifact publish location: Choose whether to store the artifact in Azure DevOps/TFS  or to copy it to a file share that must be accessible from the build agent. For example: Visual Studio Team Services/TFS

      s124

Note:

If you want  more information about Publish Build Artifacts task, you can refer to this link.

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/publish-build-artifacts?view=vsts

Enable continuous integration
  1. Select the Triggers tab and check the Enable continuous integration option.
  2. Add the Path filters as shown in the figure below:

s125

The above build will be triggered only if you modify files in the APIApplication and WebApplication folders of your team project, i.e. AKSDemo. The build will not be triggered if you modify files in the DatabaseApplication folder of your team project.

Note:

Here I am adding path filters for APIApplication and WebApplication in the Triggers tab because the AKSDemo repository also contains the DatabaseApplication. Without the path filters you would get an error during execution of the build: the Hosted Ubuntu 1604 agent pool used here cannot restore the packages for, or build, the DatabaseApplication. That’s why you created separate CI and CD pipelines for the DatabaseApplication in the Part-3 blog.

Specify Build number format
  1. Select the Options tab and give the Build number format (for example $(date:yyyyMMdd)$(rev:.r) ), as in the figure below:

    s126

Complete Build Pipeline
  1. Go to the Tasks tab; your completed pipeline should look like this:

    s127

Save and queue the build

Save and queue a build manually and test your build pipeline.

  1. Select Save & queue, and then select Save & queue:

    s128

  2. On the dialog box, select Save & queue once more:

    s129

  3. This queues a new build on the Hosted Ubuntu 1604 agent.
  4. You see a link to the new build on the top of the page:

    s130

  5. Choose the link to watch the new build as it happens. Once the agent is allocated, you’ll start seeing the live logs of the build:

    s131

  6. The process will take several minutes to complete, but with a bit of luck you will have a complete list of green checkmarks:

    s132

    If something fails there should be a hint in the logs to suggest why.

  7. After a successful build, go to the build summary. On the Artifacts tab of the build, notice that the drop is published as an artifact:

    s133

Step 4. Continuous Delivery (CD), Deploy

Building a CD pipeline

Provided the above build for APIApplication and WebApplication worked, you can now define your CD pipeline. Remember: CI is about building and testing the code as often as possible, and CD is about taking the (successful) results of those builds (artifacts) and deploying them into a cluster as often as possible. In general a CD definition contains Dev, QA, UAT, Staging and Production environments, but for now this CD definition contains the Dev stage only.

Define the process for deploying the .yaml files into Azure Kubernetes Service in one stage.

  1. Go to the Pipelines tab and then select Releases. Next, select the action to create a New pipeline. If a release pipeline is already created, select the plus sign (+ New) and then select Release pipeline:

    s134

  2. Select the action to start with an Empty job:

    image

  3. Name the stage Dev and change the release name to AKSDemo Release Definition:

    s135

  4. In the Artifacts panel, select + Add and specify a Source (Build pipeline). Select Add:

    s136

  5. Select the Tasks tab and select your Dev stage:

    s137

  6. On the Tasks page, select Hosted Ubuntu 1604 as the Agent pool:

    s138

Deploy .NET Core Web API Application to Kubernetes Cluster
  1. Add a Deploy to Kubernetes task to deploy, configure, and update your Kubernetes cluster in Azure Kubernetes Service by running kubectl commands. Select the Tasks tab and then select the plus sign (+) to add a task to the Agent job. On the right side, type “kubernetes” in the search box and click on the Add button of the Deploy to Kubernetes task, as shown in the figure below:

    s139

  2. On the left side, select your new Deploy to Kubernetes task:

    s140

Configure Deploy to Kubernetes task

  1. Configure the above Deploy to Kubernetes task to deploy, configure and update your Kubernetes cluster in Azure Kubernetes Service by running kubectl commands.
    • Display name: Deploy APIApplication to Kubernetes
    • Service connection type: Select a service connection type. Here choose the type Azure Resource Manager.
    • Azure subscription: Select the Azure Resource Manager subscription which contains the Azure Container Registry.

      Note: To configure a new service connection select the Azure subscription from the list and click ‘Authorize’.

      If your subscription is not listed or if you want to use an existing service principal, you can setup an Azure service connection using the ‘Add’ or ‘Manage’ button.

    • Resource group: Select an Azure resource group which contains Azure kubernetes service. For Example: KZEU-AKSDMO-SB-DEV-RGP-01
    • Kubernetes cluster: Select an Azure managed cluster. For Example: KZEU-AKSDMO-SB-DEV-AKS-01
    • Namespace: Set the namespace for the kubectl command by using the --namespace flag. If the namespace is not provided, the commands will run in the default namespace. For Example: default
    • Command: Select or specify a kubectl command to run. For Example: apply
    • Check Use configuration files to use a Kubernetes configuration file with the kubectl command. A filename, directory, or URL to Kubernetes configuration files can also be provided.
    • Configuration file: Filename, directory, or URL to the kubernetes configuration files that will be used with the commands (a hand-run equivalent is sketched after this list). For Example: $(System.DefaultWorkingDirectory)/_AKSDemo-Docker-CI/drop/APIApplication/Utils/apiapplication.yaml

      s141

    • Under Advanced, specify the Kubectl version to use:

      s142
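    Under the hood this task is essentially a wrapper around kubectl against the selected cluster. A minimal hand-run equivalent of this step (assuming your kubeconfig already points at the cluster, as set up earlier with az aks get-credentials) would be:

    kubectl apply -f apiapplication.yaml --namespace default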

Deploy .NET Core Web Application to Kubernetes Cluster
  1. Next, add one more Deploy to Kubernetes task to deploy, configure, and update your Kubernetes cluster in Azure Kubernetes Service by running kubectl commands. Select the Tasks tab and then select the plus sign (+) to add a task to the Agent job. On the right side, type “kubernetes” in the search box and click on the Add button of the Deploy to Kubernetes task, as shown in the figure below:

    s139

  2. On the left side, select your new Deploy to Kubernetes task:

    s144

Configure Deploy to Kubernetes task

  1. Configure the above Deploy to Kubernetes task to deploy, configure and update your Kubernetes cluster in Azure Kubernetes Service by running kubectl commands.
    • Display name: Deploy WebApplication to Kubernetes
    • Service connection type: Select a service connection type. Here choose the type Azure Resource Manager.
    • Azure subscription: Select the Azure Resource Manager subscription which contains the Azure Container Registry.

      Note: To configure a new service connection, select the Azure subscription from the list and click ‘Authorize’.

      If your subscription is not listed or if you want to use an existing service principal, you can setup an Azure service connection using the ‘Add’ or ‘Manage’ button.

    • Resource group: Select an Azure resource group which contains Azure kubernetes service. For Example: KZEU-AKSDMO-SB-DEV-RGP-01
    • Kubernetes cluster: Select an Azure managed cluster. For Example: KZEU-AKSDMO-SB-DEV-AKS-01
    • Namespace: Set the namespace for the kubectl command by using the --namespace flag. If the namespace is not provided, the commands will run in the default namespace. For Example: default
    • Command: Select or specify a kubectl command to run. For Example: apply
    • Check Use configuration files to use a Kubernetes configuration file with the kubectl command. A filename, directory, or URL to Kubernetes configuration files can also be provided.
    • Configuration file: Filename, directory, or URL to the kubernetes configuration files that will be used with the commands. For Example: $(System.DefaultWorkingDirectory)/_AKSDemo-Docker-CI/drop/WebApplication/Utils/webapplication.yaml

      s145

    • Under Advanced, specify the Kubectl version to use:

      s142

Specify Release number format
  1. Select the Options tab and give the Release number format (for example Database Release-$(rev:r) ), as in the figure below:

    s146

Enable continuous deployment trigger
  1. Go to the Pipeline on the Releases tab, select the lightning bolt to trigger continuous deployment, and then enable the Continuous deployment trigger on the right:

    s147

  2. Click on Save:

    s148

Complete Release Pipeline
  1. Go to the Pipeline tab; your completed release pipeline should look like this:

    s149

Deploy a release
  1. Create a new release:

    s150

  2. Define the trigger settings and artifact source for the release and then select Create:

    s151

  3. Open the release you just created:

    s152

  4. View the logs to get real-time data about the release:

    s153

    Note: You can track the progress of each release to see if it has been deployed to all the stages. You can track the commits that are part of each release, the associated work items, and the results of any test runs that you’ve added to the release pipeline.

  5. If the pipeline run is successful, you should see a list of green checkmarks here, just like you saw in the release pipeline:

    s154

If something fails, there should be a hint in the logs to suggest why.

That completes the setup of the build and release definitions for the Web Application and API Application. Note that for this initial setup you had to create the build and release manually, without using the automatic triggers of the build and release definitions.

From now on, you can modify the code in either the API Application or the Web Application and check in your code. It will automatically build and then get deployed all the way to the Dev stage, because you have already enabled the automatic triggers for both the build and release definitions.

Step 5. Run and manage

Once the above build and release have succeeded, open the command prompt in administrator mode on your local machine and enter the below command:

az aks browse --resource-group <Resource Group Name> --name <AKS Cluster Name>

For example: az aks browse --resource-group KZEU-AKSDMO-SB-DEV-RGP-01 --name KZEU-AKSDMO-SB-DEV-AKS-01

image

Note:
If you get an error running the above command, follow the Connect to the cluster steps above.

This will launch a browser tab for you with a graphical representation:

s255

In the above Kubernetes dashboard you can see the Workloads Statuses all green, which means there are no failures.

Deployments:

s155

Pods:

s156

Replica Sets:

s157

Services

Further down on the same page you will see the Services section, where you can observe the External Endpoints of your webapplication and apiapplication services:

s158

Click on the above External Endpoints of webapplication and apiapplication services.
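If you prefer the command line over the dashboard, the same endpoint information is available via kubectl; the EXTERNAL-IP column holds the addresses you click above:

    kubectl get services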

apiapplication:

s159

webapplication:

s160

Step 6. Monitor and diagnose

This step is not covered here; I will follow up with a separate blog post on monitoring and diagnostics.

Building and Deploying Micro Services with Azure Kubernetes Service (AKS) and Azure DevOps Part-1

Overview of this 4-Part Blog series

This blog outlines the process to

  • Compile a Database application and Deploy into Azure SQL Database 
  • Compile Docker-based ASP.NET Core Web application, API application 
  • Deploy web and API applications into a Kubernetes cluster running on Azure Kubernetes Service (AKS) using Azure DevOps

s161

The content of this blog is divided up into 4 main parts:
Part-1: Explains the details of Docker & how to set up local and development environments for Docker applications
Part-2: Explains in detail the Inner-loop development workflow for both Docker and Database applications
Part-3: Explains in detail the Outer-loop DevOps workflow for a Database application
Part-4: Explains in detail how to create an Azure Kubernetes Service (AKS), Azure Container Registry (ACR) through the Azure CLI, and an Outer-loop DevOps workflow for a Docker application

Part-1: The details of Docker & how to set up local and development environments for Docker applications

Introduction to Containers and Docker

      I.   The creation of Containers and their use
      II.  Docker Containers vs Virtual Machines
      III. What is Docker?
      IV. Docker Benefits
      V.  Docker Architecture and Terminology

 I. The creation of Containers and their use

Containerization is an approach to software development in which an application or service, its dependencies, and its configuration are packaged together as a container image. You then can test the containerized application as a unit and deploy it as a container image instance to the host operating system.
Placing software into containers makes it possible for developers and IT professionals to deploy those containers across environments with little or no modification.
Containers also isolate applications from one another on a shared operating system (OS). Containerized applications run on top of a container host, which in turn runs on the OS (Linux or Windows). Thus, containers have a significantly smaller footprint than virtual machine (VM) images.
Containers offer the benefits of isolation, portability, agility, scalability, and control across the entire application life cycle workflow. The most important benefit is the isolation provided between Dev and Ops.

II. Docker Containers vs. Virtual Machines

Docker containers are lightweight because in contrast to virtual machines, they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines. You can even run Docker containers within host machines that are actually virtual machines!

Picture7

III. What is Docker?

  • An open platform for developing, shipping, and running applications
  • Enables separating your applications from your infrastructure for quick software delivery 
  • Enables managing your infrastructure in the same way you manage your applications
  • By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production
  • Uses the Docker Engine to quickly build and package apps as Docker images, which are created from files written in the Dockerfile format and then deployed and run in layered containers

IV. Docker Benefits

1.  Fast, consistent delivery of your applications

Docker streamlines the development lifecycle by allowing developers to work in standardized environments. It uses local containers to support your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.

Consider the following scenario:

  • Your developers write code locally and share their work with their colleagues using Docker containers.
  • They use Docker to push their applications into a test environment and execute automated and manual tests.
  • When developers find bugs, they can fix them in the development environment and redeploy to the test environment for testing and validation.
  • When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.

2.  Runs more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals.

Docker is perfect for high density environments and for small and medium deployments where you need to do more with fewer resources.

V. Docker Architecture and Terminology

1.  Docker Architecture Overview

The Docker Engine is a client-server application with three major components:

  • A server which is a type of long-running program called a daemon process
  • A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do
  • A command line interface (CLI) client (the Docker command)

Picture8

Docker client and daemon relation:

  • Both client and daemon can run on the same system, or you can connect a client to a remote Docker daemon
  • When using commands such as docker run, the client sends them to the Docker daemon, which carries them out
  • Client and daemon communicate via a REST API, over UNIX sockets or a network interface

Picture9

2. Docker Terminology

The following are the basic definitions anyone needs to understand before getting deeper into Docker.

Azure Container Registry

  •  A managed registry service for working with Docker images and their components in Azure
  •  This provides a registry that is close to your deployments in Azure and that gives you control over access, making it possible to use your Azure Active Directory groups and permissions.

Build

  •  The action of building a container image based on the information and context provided by its Dockerfile as well as additional files in the folder where the image is built
  •  You can build images by using the docker build command, as in the example below
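
For instance, a minimal sketch (the image name and tag are illustrative) that builds an image from the Dockerfile in the current folder:

docker build -t webapplication:dev .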

Cluster

  •  A collection of Docker hosts exposed as if they were a single virtual Docker host so that the application can scale to multiple instances of the services spread across multiple hosts within the cluster
  •  Can be created  by using Docker Swarm, Mesosphere DC/OS, Kubernetes, and Azure Service Fabric

Note: If you use Docker Swarm for managing a cluster, you typically refer to the cluster as a swarm instead of a cluster.

Compose

  •  A command-line tool and YAML file format with metadata for defining and running multi-container applications
  •  You define a single application based on multiple images with one or more .yml files that can override values depending on the environment
  •  After you have created the definitions, you can deploy the entire multi-container application by using a single command (docker-compose up) that creates a container per image on the Docker host, as sketched below
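
As a minimal sketch, a docker-compose.yml for the two applications in this series might look like the following (the folder paths and host ports are assumptions):

version: '3'
services:
  webapplication:
    # Build from the Dockerfile in the WebApplication folder (path is illustrative)
    build: ./WebApplication
    ports:
      - "8080:80"
  apiapplication:
    # Build from the Dockerfile in the APIApplication folder (path is illustrative)
    build: ./APIApplication
    ports:
      - "8081:80"

With this file in place, docker-compose up builds both images (if needed) and starts one container per service.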

Container
An instance of an image is called a container. The container or instance of a Docker image will contain the following components:

  1. An operating system selection (for example, a Linux distribution or Windows)
  2. Files added by the developer (for example, app binaries)
  3. Configuration (for example, environment settings and dependencies)
  4. Instructions for what processes to run by Docker
    • A container represents a runtime for a single application, process, or service. It consists of the contents of a Docker image, a runtime environment, and a standard set of instructions.
    •  You can create, start, stop, move, or delete a container using the Docker API or CLI.
    •   When scaling a service, you create multiple instances of a container from the same image. Or, a batch job can create multiple containers from the same image, passing different parameters to each instance.

Docker client

  • Is the primary way that many Docker users interact with Docker
  •  Can communicate with more than one daemon

Docker Community Edition (CE)

  •  Provides development tools for Windows and macOS for building, running, and testing containers locally
  •  Docker CE for Windows provides development environments for both Linux and Windows Containers
  •  The Linux Docker host on Windows is based on a Hyper-V VM. The host for Windows Containers is directly based on Windows
  • Docker CE for Mac is based on the Apple Hypervisor framework and the xhyve hypervisor, which provides a Linux Docker host VM on Mac OS X
  •  Docker CE for Windows and for Mac replaces Docker Toolbox, which was based on Oracle VirtualBox

Docker daemon (dockerd)

  • Listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes
  •  Can also communicate with other daemons to manage Docker services

Docker Enterprise Edition

It is designed for enterprise development and is used by IT teams who build, ship, and run large business-critical applications in production.

Dockerfile

It is a text file that contains instructions for how to build a Docker image.
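
As a minimal sketch, a Dockerfile for an ASP.NET Core application might look like this (the base image tag and the assembly name WebApplication.dll are illustrative):

# Base image containing the ASP.NET Core runtime
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
# Copy the published output of the application into the image
COPY ./publish .
# Start the application when a container is launched
ENTRYPOINT ["dotnet", "WebApplication.dll"]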

Docker Hub

  • A public registry to upload images and work with them
  • Provides Docker image hosting, public or private registries, build triggers, web hooks, and integration with GitHub and Bitbucket

Docker Image

  • A package with all of the dependencies and information needed to create a container. An image includes all of the dependencies (such as frameworks) plus deployment and configuration to be used by a container runtime.
  • Usually, an image derives from multiple base images that are layers stacked one atop the other to form the container’s file system.
  • An image is immutable after it has been created. Docker image containers can run natively on Linux and Windows:

    •  Windows images can run only on Windows hosts
    •  Linux images can run only on Linux hosts, meaning a host server or a VM
    •  Developers working on Windows can create images for either Linux or Windows Containers

Docker Trusted Registry (DTR)

It is a Docker registry service (from Docker) that you can install on-premises so that it resides within the organization’s datacenter and network. It is convenient for private images that should be managed within the enterprise. Docker Trusted Registry is included as part of the Docker Datacenter product. For more information, go to https://docs.docker.com/docker-trusted-registry/overview/.

Orchestrator

  •  A tool that simplifies management of clusters and Docker hosts
  •  Used to manage images, containers, and hosts through a CLI or a graphical user interface
  •  Helps manage container networking, configurations, load balancing, service discovery, high availability, Docker host configuration, and more
  •  Responsible for running, distributing, scaling, and healing workloads across a collection of nodes
  •  Typically, orchestrator products are the same products that provide cluster infrastructure, like Mesosphere DC/OS, Kubernetes, Docker Swarm, and Azure Service Fabric

Registry

  •  A service that provides access to repositories
  •  The default registry for most public images is Docker Hub (owned by Docker as an organization)
  •  A registry usually contains repositories from multiple teams

Companies often have private registries to store and manage images that they’ve created. Azure Container Registry is another example.

Repository (also known as repo)

  • A collection of related Docker images labeled with a tag that indicates the image version
  • Some repositories contain multiple variants of a specific image, such as an image containing SDKs (heavier), an image containing only runtimes (lighter), and so on. Those variants can be marked with tags
  • A single repository can contain platform variants, such as a Linux image and a Windows image

Tag

A mark or label that you can apply to images so that different images or versions of the same image (depending on the version number or the destination environment) can be identified, as in the example below.
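
For instance, a local image can be tagged for a specific version and target registry like this (the registry name myregistry is hypothetical):

docker tag webapplication:latest myregistry.azurecr.io/webapplication:v1.0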

Setting up local and development environments for Docker applications

Basic Docker taxonomy: containers, images, and registries

Picture10

Introduction to the Docker application lifecycle

The lifecycle of containerized applications is a journey that starts with the developer. Developers choose containers and Docker because they eliminate friction between deployments and IT operations, which ultimately helps teams be more agile, more productive end-to-end, and faster.

Picture1

By the very nature of the Containers and Docker technology, developers are able to easily share their software and dependencies with IT Operations and production environments while eliminating the typical “it works on my machine” excuse.

Containers solve application conflicts between different environments. Indirectly, Containers and Docker bring developers and IT Ops closer together. It makes it easier for them to collaborate effectively.

With Docker Containers, developers own what’s inside the container (application/service and dependencies to frameworks/components) and how the containers/services behave together as an application composed by a collection of services.

The interdependencies of the multiple containers are defined with a docker-compose.yml file, or what could be called a deployment manifest.

Meanwhile, IT operations teams (IT pros and IT management) can focus on the management of production environments, infrastructure, and scalability, along with monitoring, ultimately making sure the applications are delivered correctly to end users, without having to know the contents of the various containers. Hence the “container” name, by analogy to real-life shipping containers: just as a shipping company gets the contents from A to B without knowing or caring what is inside, developers own the contents within a container.

Developers, on the left of the image above, write code and run it in Docker containers locally using Docker for Windows/Linux. They define their operating environment with a Dockerfile that specifies the base OS they run on, and the build steps for building their code into a Docker image.

They define how one or more images will interoperate using a deployment manifest like a docker-compose.yml file. As they complete their local development, they push their application code plus the Docker configuration files to the code repository of their choice (e.g., Git repos).

The DevOps pillar defines the build (CI) pipelines using the Dockerfile provided in the code repo. The CI system pulls the base container images from the Docker registries they’ve configured and builds the Docker images. The images are then validated and pushed to the Docker registry used for the deployments to multiple environments, as sketched below.
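
At the command-line level, that push stage reduces to something like the following (the registry name myregistry and the tag are hypothetical; in an Azure DevOps pipeline these steps are usually performed by the built-in Docker tasks rather than typed by hand):

az acr login --name myregistry

docker build -t myregistry.azurecr.io/apiapplication:1.0 .

docker push myregistry.azurecr.io/apiapplication:1.0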

Operations teams, on the right of the image above, manage deployed applications and infrastructure in production while monitoring the environment and applications, so that they can provide feedback and insights to the development team about how the application can be improved. Container apps are typically run in production using container orchestrators.

Introduction to a generic E2E Docker application lifecycle workflow

s1

Benefits from DevOps for containerized applications

The most important benefits provided by a solid DevOps workflow are:

  1. Deliver better quality software faster and with better compliance
  2. Drive continuous improvement and adjustments earlier and more economically
  3. Increase transparency and collaboration among stakeholders involved in delivering and operating software
  4. Control costs and utilize provisioned resources more effectively while minimizing security risks
  5. Plug and play well with many of your existing DevOps investments, including investments in open source

Introduction to the Microsoft platform and tools for containerized applications

s2

The above figure shows the main pillars in the lifecycle of Docker apps classified by the type of work delivered by multiple teams (app-development, DevOps infrastructure processes and IT Management and Operations).

The platform and tools fall into two columns: Microsoft technologies, and 3rd-party Azure-pluggable alternatives.

Platform for Docker Apps

  Microsoft technologies:
  • Visual Studio & Visual Studio Code
  • .NET
  • Azure Kubernetes Service
  • Azure Service Fabric
  • Azure Container Registry

  3rd party (Azure pluggable):
  • Any code editor (e.g., Sublime)
  • Any language (Node, Java, etc.)
  • Any orchestrator and scheduler
  • Any Docker registry

DevOps for Docker Apps

  Microsoft technologies:
  • Azure DevOps Services
  • Team Foundation Server
  • Azure Kubernetes Service
  • Azure Service Fabric

  3rd party (Azure pluggable):
  • GitHub, Git, Subversion, etc.
  • Jenkins, Chef, Puppet, Velocity, CircleCI, TravisCI, etc.
  • On-premises Docker Datacenter, Docker Swarm, Mesos DC/OS, Kubernetes, etc.

Management & Monitoring

  Microsoft technologies:
  • Operations Management Suite
  • Application Insights

  3rd party (Azure pluggable):
  • Marathon, Chronos, etc.
The Microsoft platform and tools for containerized Docker applications, as shown in the figure above, comprise the following components:

    • Platform for Docker Apps development. The development of a service, or collection of services, that make up an “app”. The development platform provides everything a developer requires prior to pushing code to a shared repo. Developing services deployed as containers is very similar to developing the same apps or services without Docker: you continue to use your preferred language (.NET, Node.js, Go, etc.) and preferred editor or IDE, such as Visual Studio or Visual Studio Code. However, rather than treating Docker merely as a deployment target, you develop your services in the Docker environment, building, running, testing and debugging your code in containers locally. Providing the target environment locally is what drastically helps you improve your development and operations lifecycle. Visual Studio and Visual Studio Code have extensions that integrate building, running and testing containers for your .NET, .NET Core and Node.js applications.
    • DevOps for Docker Apps. Developers creating Docker applications can leverage Azure DevOps Services (Azure DevOps) or any other third party product like Jenkins, to build out a comprehensive automated application lifecycle management (ALM).
      With Azure DevOps, developers can create container-focused DevOps for a fast, iterative process that covers source-code control from anywhere (Azure DevOps-Git, GitHub, any remote Git repository or Subversion), continuous integration (CI), internal unit tests, inter-container/service integration tests, continuous delivery (CD), and release management (RM). Developers can also automate their Docker application releases into Azure Kubernetes Service, from development through staging and production environments.
      • IT production management and monitoring.
        Management –
        IT can manage production applications and services in several ways:

        1. Azure portal. If using OSS orchestrators, Azure Kubernetes Service (AKS) plus cluster management tools like Docker Datacenter and Mesosphere Marathon help you to set up and maintain your Docker environments. If using Azure Service Fabric, the Service Fabric Explorer tool allows you to visualize and configure your cluster
        2. Docker tools. You can manage your container applications using familiar tools. There’s no need to change your existing Docker management practices to move container workloads to the cloud. Use the application management tools you’re already familiar with and connect via the standard API endpoints for the orchestrator of your choice. You can also use other third party tools to manage your Docker applications like Docker Datacenter or even CLI Docker tools.
        3. Open source tools. Because AKS exposes the standard API endpoints for the orchestration engine, the most popular tools are compatible with Azure Kubernetes Service and, in most cases, will work out of the box—including visualizers, monitoring, command line tools, and even future tools as they become available.
        Monitoring – While running production environments, you can monitor every angle with:
        1. Operations Management Suite (OMS). The “OMS Container Solution” can manage and monitor Docker hosts and containers by showing information about where your containers and container hosts are, which containers are running or failed, and Docker daemon and container logs. It also shows performance metrics such as CPU, memory, network and storage for the container and hosts to help you troubleshoot and find noisy neighbour containers.
        2. Application Insights. You can monitor production Docker applications by simply setting up its SDK into your services so you can get telemetry data from the applications.

Set up a local environment for Docker

A local development environment for Docker has the following prerequisites:

If your system does not meet the requirements to run Docker for Windows, you can install Docker Toolbox, which uses Oracle Virtual Box instead of Hyper-V.

  • README FIRST for Docker Toolbox and Docker Machine users: Docker for Windows requires Microsoft Hyper-V to run. The Docker for Windows installer enables Hyper-V for you, if needed, and restarts your machine. After Hyper-V is enabled, VirtualBox no longer works, but any VirtualBox VM images remain. VirtualBox VMs created with docker-machine (including the default one typically created during Toolbox install) no longer start. These VMs cannot be used side-by-side with Docker for Windows. However, you can still use docker-machine to manage remote VMs.
  • Virtualization must be enabled in the BIOS, and the CPU must be SLAT-capable. Typically, virtualization is enabled by default. This is different from having Hyper-V enabled. For more detail, see Virtualization must be enabled in Troubleshooting.

Enable Hypervisor

A hypervisor enables virtualization, which is the foundation on which all container orchestrators operate, including Kubernetes.

This blog uses Hyper-V as the hypervisor. On many Windows versions, Hyper-V is already available—for example, on 64-bit versions of Windows Professional, Enterprise, and Education, in Windows 8 and later. It is not available in the Windows Home editions.

NOTE: If you’re running something other than Windows 10 on your development platforms, another hypervisor option is to use VirtualBox, a cross-platform virtualization application. For a list of hypervisors, see “Install a Hypervisor” on the Minikube page of the Kubernetes documentation.

NOTE:
Install Hyper-V on Windows 10: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v

To enable Hyper-V manually on Windows 10 and set up a virtual switch:

          1. Go to Control Panel > Programs, then click Turn Windows features on or off.
            Picture2
          2. Select the Hyper-V check boxes, then click OK.
          3. To set up a virtual switch, type hyper in the Windows Start menu, then select Hyper-V Manager.
          4. In Hyper-V Manager, select Virtual Switch Manager.
          5. Select External as the type of virtual switch.
          6. Select the Create Virtual Switch button.
          7. Ensure that the Allow management operating system to share this network adapter checkbox is selected.
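
Alternatively, per the Microsoft documentation linked above, Hyper-V can be enabled from an elevated PowerShell prompt (a restart is required afterwards):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All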

The current version of Docker for Windows runs on 64-bit Windows 10 Pro, Enterprise and Education (1607 Anniversary Update, Build 14393 or later).

Containers and images created with Docker for Windows are shared between all user accounts on machines where it is installed. This is because all Windows accounts use the same VM to build and run containers.

Nested virtualization scenarios, such as running Docker for Windows on a VMWare or Parallels instance, might work, but come with no guarantees. For more information, see Running Docker for Windows in nested virtualization scenarios

Installing Docker for Windows

Docker for Windows is a Docker Community Edition (CE) app.

  • The Docker for Windows install package includes everything you need to run Docker on a Windows system.
  • Download the installer, double-click the downloaded file, and then follow the install wizard to accept the license, authorize the installer, and proceed with the install.
  • You are asked to authorize Docker with your system password during the install process. Privileged access is needed to install networking components, links to the Docker apps, and to manage the Hyper-V VMs.
  • Click Finish on the setup complete dialog to launch Docker.
  • The installation provides Docker Engine, Docker CLI client, Docker Compose, Docker Machine, and Kitematic.

More info:  To learn more about installing Docker for Windows, go to https://docs.docker.com/docker-for-windows/.

Note:

  1. You can develop both Docker Linux containers and Docker Windows containers with Docker for Windows.
  2. The current version of Docker for Windows runs on 64-bit Windows 10 Pro, Enterprise and Education (1607 Anniversary Update, Build 14393 or later).
  3. Virtualization must be enabled. You can verify that virtualization is enabled by checking the Performance tab on the Task Manager.
  4. The Docker for Windows installer enables Hyper-V for you.
  5. Containers and images created with Docker for Windows are shared between all user accounts on machines where it is installed. This is because all Windows accounts use the same VM to build and run containers.
  6. You can switch between Windows and Linux containers.

Test your Docker installation

  1. Open a terminal window (Command Prompt or PowerShell, but not PowerShell ISE).
  2. Run docker --version or docker version to ensure that you have a supported version of Docker:
  3. The output should tell you the basic details about your Docker environment:

docker --version

Docker version 18.05.0-ce, build f150324

docker version

Client:
Version: 18.05.0-ce
API version: 1.37
Go version: go1.9.5
Git commit: f150324
Built: Wed May 9 22:12:05 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm

Server:
Engine:
Version: 18.05.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.10.1
Git commit: f150324
Built: Wed May 9 22:20:16 2018
OS/Arch: linux/amd64
Experimental: true

Note: The OS/Arch field tells you the operating system you’re using. Docker is cross-platform, so you can manage Windows Docker servers from a Linux client and vice-versa, using the same docker commands.
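
As an additional smoke test, you can run the hello-world image; Docker pulls it from Docker Hub (if it is not already present locally) and runs it in a container that prints a confirmation message:

docker run hello-world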

Start Docker for Windows

Docker does not start automatically after installation. To start it, search for Docker, select Docker for Windows in the search results, and click it (or hit Enter).

Picture3

When the whale in the status bar stays steady, Docker is up-and-running, and accessible from any terminal window.

Picture4

If the whale is hidden in the Notifications area, click the up arrow on the taskbar to show it. To learn more, see Docker Settings.

If you just installed the app, you also get a popup success message with suggested next steps, and a link to this documentation.

Picture5

When initialization is complete, select About Docker from the notification area icon to verify that you have the latest version.

Congratulations! You are up and running with Docker for Windows.

Picture6

Important Docker Commands

List all images:

docker images -a
docker image ls -a

Remove an image by ID:

docker rmi d62ae1319d0a

List all containers:

docker ps -a
docker container ls -a

Remove a container by ID:

docker container rm d62ae1319d0a

Remove ALL containers:

docker container rm -f $(docker container ls -a -q)

Get terminal access to a container in the running state:

docker exec -it <containername> /bin/bash (for Linux containers)
docker exec -it <containername> cmd.exe (for Windows containers)

Set up Development environment for Docker apps

Development tools choices: IDE or editor

No matter whether you prefer a full, powerful IDE or a lightweight, agile editor, Microsoft has you covered when developing Docker applications.

Visual Studio Code and Docker CLI (Cross-Platform Tools for Mac, Linux and Windows). If you prefer a lightweight and cross-platform editor supporting any development language, you can use Microsoft Visual Studio Code and Docker CLI.

These products provide a simple yet robust experience which is critical for streamlining the developer workflow.

By installing “Docker for Mac” or “Docker for Windows” (development environment), Docker developers can use a single Docker CLI to build apps for either Windows or Linux (execution environment). In addition, Visual Studio Code supports extensions for Docker, with IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.

Download and Install Visual Studio Code

Download and Install Docker for Mac and Windows

Visual Studio with Docker Tools.

When using Visual Studio 2015 you can install the add-on tools “Docker Tools for Visual Studio”.

When using Visual Studio 2017, Docker Tools come built-in already.

In both cases you can develop, run and validate your applications directly in the target Docker environment.

F5 your application (single container or multiple containers) directly into a Docker host with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.

This is the simplest and most powerful choice for Windows developers targeting Docker containers for Linux or Windows.

Download and Install Visual Studio Enterprise 2015/2017

Download and Install Docker for Mac and Windows

If you’re using Visual Studio 2015, you must have Update 3 or a later version plus the Visual Studio Tools for Docker.

More info:  For instructions on installing Visual Studio, go to https://www.visualstudio.com/products/vs-2015-product-editions.

To see more about installing Visual Studio Tools for Docker, go to http://aka.ms/vstoolsfordocker and https://docs.microsoft.com/aspnet/core/host-and-deploy/docker/visual-studio-tools-for-docker.

If you’re using Visual Studio 2017, Docker support is already included.

Language and framework choices

You can develop Docker applications with Microsoft tools using most modern languages. The following is an initial list, but you are not limited to it:

  1. .NET Core and ASP.NET Core
  2. Node.js
  3. Go
  4. Java
  5. Ruby
  6. Python

Basically, you can use any modern language supported by Docker in Linux or Windows.

Note: In this blog, we use Visual Studio 2017 as the development IDE and develop containerized applications using .NET Core and ASP.NET Core.