Tag Archives: Kubernetes

Building and Deploying Micro Services with Azure Kubernetes Service (AKS) and Azure DevOps Part-3

Database application DevOps workflow with Microsoft tools

s4

Steps in the outer-loop DevOps workflow for a Database application

The outer-loop end-to-end workflow is represented in the figure above. Now, let’s drill down into each of its steps.

Prerequisites for the outer-loop

  1. Microsoft Azure Account: You will need a valid and active Azure account for this blog/document. If you do not have one, you can sign up for a free trial.
    • If you are a Visual Studio Active Subscriber, you are entitled to a $50-$150 Azure credit per month. You can refer to this link to learn more, including how to activate and start using your monthly Azure credit.
    • If you are not a Visual Studio Subscriber, you can sign up for the FREE Visual Studio Dev Essentials program to create an Azure free account (includes 1 year of free services and $200 for the 1st month).
  2. You will need an Azure DevOps Account. If you do not have one, you can sign up for free here.

Step 1. Inner loop development workflow for a Database application

This step was explained in detail in the Part-2 blog/document, but it is also where the outer-loop starts: at the precise moment when a developer pushes code to the source control management system (such as Git), triggering Continuous Integration (CI) pipeline executions.

Share your code with Visual Studio 2017 and Azure DevOps Git

Share your Visual Studio solution (AKSDemo.sln) in a new Azure DevOps Git repo.

Create a local Git repo for your project
  1. Open your solution (AKSDemo.sln) in Visual Studio 2017 and go to Solution Explorer.
  2. Create a new local Git repo for your project by selecting clip_image001 on the status bar in the lower-right corner of Visual Studio. Or you can right-click your solution in Solution Explorer and choose Add Solution to Source Control:

    image

This will create a new repository in the folder of the solution and commit your code there. Once you have a local repo, select items in the status bar to quickly navigate between Git tasks in Team Explorer:
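The same setup can also be done from the command line. A minimal sketch, assuming the solution lives in a folder named AKSDemo (the identity values and commit message are illustrative):

```shell
# Create the solution folder and initialize a local Git repo in it.
mkdir -p AKSDemo
git init AKSDemo

# Set a local identity so the commit succeeds (use your own name/email).
git -C AKSDemo config user.name "Demo User"
git -C AKSDemo config user.email "demo@example.com"

# Stage and commit the solution files (a README stands in for the solution here).
echo "AKSDemo solution" > AKSDemo/README.md
git -C AKSDemo add .
git -C AKSDemo commit -m "Add solution to source control"
```

This mirrors what Visual Studio does behind the scenes when you choose Add Solution to Source Control.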

image

clip_image002 Shows the number of unpublished commits in your local branch. Selecting this will open the Sync view in Team Explorer.

clip_image003 Shows the number of uncommitted file changes. Selecting this will open the Changes view in Team Explorer.

clip_image005 Shows the current Git repo. Selecting this will open the Connect view in Team Explorer.

clip_image006 Shows your current Git branch. Selecting this displays a branch picker to quickly switch between Git branches or create new branches.

Note

If you don’t see any icons such as clip_image001[4] or clip_image003, ensure that you have a project open that is part of a Git repo. If your project is brand new or not yet added to a repo, you can add it to one by selecting clip_image004 on the status bar, or by right-clicking your solution in Solution Explorer and choosing Add Solution to Source Control.

Publish your code to Azure DevOps
  1. Navigate to the Push view in Team Explorer by choosing the clip_image001[7] icon in the status bar. Or you can also select Sync from the Home view in Team Explorer.
  2. In the Push view in Team Explorer, select the Publish Git Repo button under Push to Visual Studio Team Services:

    clip_image002

  3. Choose your Azure DevOps user account from the drop-down list. If your account is not listed, click Add an account and enter your Azure DevOps login credentials:

    clip_image003[5]

  4. Select your account in the Team Services Domain drop-down.
  5. Enter your repository name and select Publish repository:

    clip_image004[5]

    This creates a new project in your account with the same name as the repository. To create the repo in an existing project, click Advanced next to Repository name and select a project.

  6. Your code is now in an Azure DevOps repo. You can view your code on the web by selecting See it on the web:

    clip_image005

  7. Now your new team project is available in your Azure DevOps account:

    s17

    Note:

The new repository contains four projects: DatabaseApplication, APIApplication, WebApplication, and docker-compose.
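Publishing can equally be done from the command line with `git remote add` and `git push`. A hedged sketch — a local bare repository stands in below for the Azure DevOps remote URL (which would normally look like https://dev.azure.com/&lt;organization&gt;/&lt;project&gt;/_git/AKSDemo):

```shell
# Stand-in for the Azure DevOps remote repo.
git init --bare aksdemo-remote.git

# A local repo with one commit, as created in the earlier step.
git init AKSDemo2
git -C AKSDemo2 config user.name "Demo User"
git -C AKSDemo2 config user.email "demo@example.com"
echo "// app code" > AKSDemo2/Program.cs
git -C AKSDemo2 add .
git -C AKSDemo2 commit -m "Initial commit"
git -C AKSDemo2 branch -M master

# Point "origin" at the remote and publish the local history.
git -C AKSDemo2 remote add origin "$PWD/aksdemo-remote.git"
git -C AKSDemo2 push -u origin master
```

With a real Azure DevOps URL as `origin`, the push is what the Publish Git Repo button performs for you.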

Create a new branch from the web
  1. Open your team project AKSDemo by double-clicking it:

    s18

  2. Navigate to Repos then choose Branches:

    s19

  3. Select the New branch button in the upper right corner of the page:

    s20

  4. In the Create a branch dialog, enter a name for your new branch, select a branch to base the work off, and associate any work items:

    image

  5. Select Create branch. Now, a new branch is ready for you to work in.

Note:

You will need to fetch the branch before you can see it and switch to it in your local repo.
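The fetch-then-switch flow looks like this on the command line. A sketch — a local bare repository stands in for the Azure DevOps remote, and `feature/demo` is a hypothetical branch created "on the web":

```shell
# Stand-in remote and a seeded repo that pushes the initial master branch.
git init --bare remote.git
git init seed
git -C seed config user.name "Demo User"
git -C seed config user.email "demo@example.com"
echo x > seed/file.txt
git -C seed add .
git -C seed commit -m "init"
git -C seed branch -M master
git -C seed remote add origin "$PWD/remote.git"
git -C seed push -u origin master
git -C remote.git symbolic-ref HEAD refs/heads/master

# The developer's existing local clone, which only knows about master.
git clone remote.git local

# A new branch appears on the server (here pushed from elsewhere).
git -C seed checkout -b feature/demo
git -C seed push origin feature/demo

# Fetch so the new branch becomes visible locally, then switch to it.
git -C local fetch origin
git -C local checkout feature/demo
```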

Step 2. SCC integration and management with Azure DevOps

Here, we are using Azure DevOps and Git for managing the source code pushed by developers into the specified repository (for example, AKSDemo) and for creating the build and release pipelines.

Now, we are ready to create the pipeline specifications to deploy the database application. We need two definitions for this:

  • How Azure DevOps should build the code of Database application
  • How Azure DevOps should deploy DACPAC file into Azure SQL Database

Step 3. Build, CI, Integrate with Azure DevOps

Use Azure Pipelines in the visual designer

You can create and configure your build and release pipelines in the Azure DevOps web portal with the visual designer:

  1. Configure Azure Pipelines to use your Git repo.
  2. Use the Azure Pipelines visual designer to create and configure your build and release pipelines.
  3. Push your code to your version control repository which triggers your pipeline, running any tasks such as building or testing code.
  4. The build creates an artifact that is used by the rest of your pipeline, running any tasks such as deploying to staging or production.
  5. Your code is now updated, built, tested, and packaged and can be deployed to any target.

clip_image016

Benefits of using the visual designer

The visual designer is great for users who are new to CI and CD.

  • The visual representation of the pipelines makes it easier to get started
  • The visual designer is located in the same hub as the build results, making it easier to switch back and forth and make changes if needed
Create a build pipeline
  1. Create a build pipeline to build your DatabaseApplication and produce artifacts.
  2. Select Azure Pipelines, it should automatically take you to the Builds page:

    s22

  3. Create a new pipeline:

    s23

  4. Click on Use the visual designer to create a pipeline without YAML:

    s24

  5. Make sure that the Source and Team project name, along with the Repository and Default branch you are working on, are reflected correctly as shown in the figure below, and then click the Continue button:

    s25

  6. Start with an Empty job:

    s26

  7. On the left side, select Pipeline and specify whatever Name you want to use. For the Agent pool, select Hosted VS2017:

    s27

  8. Next, click Get sources. All values are selected by default for getting the code from the specified repository and branch; if you want to change the default values, you can do so here:

    s28

MSBuild
  1. On the left side, select the plus sign (+) to add a task to Job 1. On the right side, type “MSBuild” in the search box and click the Add button of the MSBuild build task, as shown in the figure below:

    s30

  2. On the left side, select your new MSBuild task:

    clip_image033

  3. Now, you want to configure the above MSBuild task for building your database project.
  4. Configure the MSBuild task as follows:
    • Display name: Build solution DatabaseApplication.sqlproj
    • Project: Relative path from repo root of the project(s) or solution(s) to run. Wildcards can be used. For Example: DatabaseApplication/DatabaseApplication.sqlproj
    • MSBuild Version: If the preferred version cannot be found, the latest version found will be used instead. On a macOS agent, xbuild (Mono) will be used if version is lower than 15.0. For Example: Latest
    • MSBuild Architecture: Optionally supply the architecture (x86, x64) of MSBuild to run. For Example: MSBuild x86
    • MSBuild Arguments: Additional arguments passed to MSBuild (on Windows) and xbuild (on macOS). For Example: /t:build /p:CmdLineInMemoryStorage=True

Note:

If you want to know more about the MSBuild task, you can refer to this link:

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/msbuild?view=vsts
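Although this walkthrough uses the visual designer, the same step can be expressed in Azure Pipelines YAML. A sketch using the values above (input names follow the MSBuild task reference; treat the exact casing as an assumption):

```yaml
steps:
- task: MSBuild@1
  displayName: 'Build solution DatabaseApplication.sqlproj'
  inputs:
    solution: 'DatabaseApplication/DatabaseApplication.sqlproj'
    msbuildVersion: 'latest'
    msbuildArchitecture: 'x86'
    msbuildArguments: '/t:build /p:CmdLineInMemoryStorage=True'
```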

Copy Files
  1. Next, add Copy Files task for copying the files from source folder to target folder using match patterns. On the Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Copy Files” in the search box, and click on the Add button of Copy Files build task as shown in the figure below:

    s33

  2. On the left side, select your new Copy Files task:

    s34

  3. Configure the above Copy Files task for copying the files from the source folder to the target folder using match patterns as follows:
    • Display name: Copy Database related Files to: $(build.artifactstagingdirectory)
    • Source Folder: The source folder that the copy pattern(s) will be run from. Empty is the root of the repo. For example: DatabaseApplication/bin/Debug
    • Contents: File paths to include as part of the copy. Supports multiple lines of match patterns. For example: *.dacpac
    • Target Folder: Target folder or UNC path files will copy to.
    • For example: $(build.artifactstagingdirectory)

      s35

Note:

If you want to know more about the Copy Files task, you can refer to this link

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/copy-files?view=vsts
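In YAML form, the same Copy Files step could be sketched as follows (input names follow the Copy Files task reference; paths are the ones used above):

```yaml
- task: CopyFiles@2
  displayName: 'Copy Database related Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: 'DatabaseApplication/bin/Debug'
    Contents: '*.dacpac'
    TargetFolder: '$(build.artifactstagingdirectory)'
```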

Publish Build Artifacts
  1. Next, add the Publish Build Artifacts task for publishing build artifacts to Visual Studio Team Services. On the Tasks tab, select the plus sign (+) to add a task to Job 1. On the right side, type “Publish Build” in the search box and click the Add button of the Publish Build Artifacts task, as shown in the figure below:

    s36

  2. On the left side, select your new Publish Build Artifacts task:

    clip_image043

  3. Configure the above Publish Build Artifacts task as follows:
    • Display name: Publish Artifact: DatabaseDrop
    • Path to publish: The folder or file path to publish. This can be a fully-qualified path or a path relative to the root of the repository. Wildcards are not supported. For example: $(build.artifactstagingdirectory)
    • Artifact name: The name of the artifact to create in the publish location. For example: DatabaseDrop
    • Artifact publish location: Choose whether to store the artifact in Visual Studio Team Services/TFS, or to copy it to a file share that must be accessible from the build agent. For example: Visual Studio Team Services/TFS or Azure Artifacts/TFS

      s38

Note:

If you want to know more about the Publish Build Artifacts task, you can refer to this link

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/publish-build-artifacts?view=vsts
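The equivalent YAML sketch for this step (input names follow the Publish Build Artifacts task reference; `Container` stores the artifact in Azure Pipelines/TFS rather than on a file share):

```yaml
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: DatabaseDrop'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: 'DatabaseDrop'
    publishLocation: 'Container'
```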

Note:

Artifacts are the files that you want your build to produce. Artifacts can be nearly anything your team needs to test or deploy your app; for example, the .DLL and .EXE executable files and the .PDB symbols file of a C# or C++ .NET Windows app.

To produce artifacts, we provide tools such as copying with pattern matching and a staging directory in which you can gather your artifacts before publishing them. See Artifacts in Azure Pipelines.

Enable continuous integration (CI)
  1. Select the Triggers tab and check the Enable continuous integration option.
  2. Add the Path filters as shown in the figure below:

    s39

The above build is triggered only if you modify files in the DatabaseApplication of your team project, i.e., AKSDemo. This build will not be triggered if you modify files in the APIApplication or WebApplication of the same team project.

Note:

Here I am adding path filters for the DatabaseApplication in the Triggers tab because this AKSDemo repository also contains the APIApplication and WebApplication. If the path filters are not added, this build will be triggered for every commit. That is not recommended, which is why I added the path filters for this build pipeline. This build will be triggered whenever developers modify files in the DatabaseApplication project and commit the changes into your team project, i.e., AKSDemo.

One more reason to add the path filters to this build pipeline: whenever developers modify files in the APIApplication or the WebApplication and commit the changes into your team project (AKSDemo), this build would fail, because it currently uses Hosted VS2017 as the agent pool. With this agent you cannot build and push Linux images to the Azure Container Registry. That is why you have to create separate CI and CD pipelines for the APIApplication and the WebApplication in the next part.

Note:

A continuous integration trigger on a build pipeline indicates that the system should automatically queue a new build whenever a code change is committed. You can make the trigger more general or more specific, and also schedule your build (for example, on a nightly basis). See Build triggers.
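In YAML pipelines, the CI trigger with path filters described above could be sketched as follows (branch and path values follow this walkthrough):

```yaml
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - DatabaseApplication/*
```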

Specify Build number format
  1. Select the Options tab and specify the Build number format (for example, $(date:yyyyMMdd)$(rev:.r)), as shown in the figure below:

    s40

Complete Build Pipeline
  1. Go to the Tasks tab, then see your completed pipeline like this:

    s41

Save and queue the build

Save and queue a build manually and test your build pipeline.

  1. Select Save & queue, and then select Save & queue:

    s42

  2. On the dialog box, select Save & queue once more:

    s43

    This queues a new build on the Microsoft-hosted agent.

  3. You see a link to the new build on the top of the page:

    s44

  4. Choose the link to watch the new build as it happens. Once the agent is allocated, you’ll start seeing the live logs of the build:

    s45

  5. After a successful build, go to the build summary. On the Artifacts tab of the build, notice that the DatabaseDrop is published as an artifact:

    s46

Step 4. Continuous Delivery (CD), Deploy

Provided the above build for the DatabaseApplication worked, you can now define your CD pipeline. Remember: CI is about building and testing the code as often as possible, and CD is about taking the (successful) build results (artifacts) and deploying them to a target resource as often as possible. In general, a CD definition has Dev, QA, UAT, Staging, and Production environments, but for now this CD definition has only the Dev environment.

Create a release pipeline

Define the process for deploying the .dacpac file into Azure SQL Database in one stage.

  1. Go to the Pipelines tab, and then select Releases. Next, select the action to create a New pipeline. If a release pipeline is already created, select the plus sign (+) and then select Create a release pipeline:

    s48

  2. Select the action to start with an Empty job:

    s49

  3. Name the stage Dev and change the release name to Database Release Definition:

    s50

  4. In the Artifacts panel, select + Add and specify a Source (Build pipeline). Select Add:

    s51

  5. Select the Tasks tab and select your Dev stage:

    s52

Azure SQL Database Deployment
  1. Add the Azure SQL Database Deployment task for deploying an Azure SQL Database to an existing Azure SQL Server by using either DACPACs or SQL Server scripts.
  2. Select the plus sign (+) for the job to add a task to the job. On the Add tasks dialog box, select Deploy, locate the Azure SQL Database Deployment task, and then select its Add button:

    s53

  3. On the left side, select your new Azure SQL Database Deployment task:

    s54

  4. Next, configure the Azure SQL Database Deployment task for the DatabaseApplication. For this you need an Azure Resource Manager subscription connection. If your Azure DevOps account is already linked to an Azure subscription, it will automatically appear in the Azure subscription drop-down, as shown in the screenshot below; click Authorize. Otherwise, click Manage:

    s55

Note:

Refer to the below link for Azure SQL Database Deployment task:

https://github.com/Microsoft/vsts-tasks/blob/master/Tasks/SqlAzureDacpacDeploymentV1/README.md

Azure Resource Manager Endpoint

  1. Clicking the Manage link in the step above takes you to the Settings tab. Select the New service connection option in the left pane:

    s56

  2. When you click New service connection, a drop-down list opens; from it, select the Azure Resource Manager endpoint.

    s57

  3. If your organization is already backed by Azure service principal authentication, give your service endpoint a name and select the Azure subscription from the drop-down:

    s58

  4. If not backed by Azure, then click on the hyperlink ‘use the full version of the service connection dialog’, as shown in the above screenshot.
  5. If you have your service principal details, you can enter them directly and click Verify connection. If the connection is verified successfully, click OK; otherwise, refer to the service connections link provided on the same popup, as marked in the screenshot below:

    s59

  6. Once you have added the service principal successfully, you will see the following:

    s60

  7. Now, go back to your Release definition page and click on the refresh icon of the Azure SQL Database Deployment task. It should display the Azure Resource Manager endpoint in the Azure subscription dropdown list which you created in the previous step.

    image

  8. Configure the above task for deploying the DatabaseApplication into Azure SQL Database as follows:
    • Display name: Execute Azure SQL : DacpacTask
    • Azure Connection Type: Select an Azure Connection Type. For example: Azure Resource Manager
    • Azure Subscription: Select an Azure subscription
    • Azure SQL Server Name: Azure SQL Server name
    • Database Name: Name of the Azure SQL Database, where the files will be deployed
    • Server Admin Login: Specify the Azure SQL Server administrator login
    • Password: Password for the Azure SQL Server administrator
    • Action: Choose one of the SQL Actions from the list. For Example: Publish
    • Type: Choose one of the type from the list. For Example: SQL DACPAC File
    • DACPAC File: Location of the DACPAC file on the automation agent. For Example: $(System.DefaultWorkingDirectory)/_AKSDemo-Database-CI/DatabaseDrop/DatabaseApplication.dacpac

      s61
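The same deployment step could be sketched in YAML as follows (input names follow the SqlAzureDacpacDeployment task reference and may differ slightly across task versions; the service connection name is a placeholder, and the variables are defined in the next section):

```yaml
- task: SqlAzureDacpacDeployment@1
  displayName: 'Execute Azure SQL : DacpacTask'
  inputs:
    azureSubscription: '<ARM service connection name>'   # placeholder
    ServerName: '$(ServerName)'
    DatabaseName: '$(DatabaseName)'
    SqlUsername: '$(DatabaseUserName)'
    SqlPassword: '$(DatabasePassword)'
    deployType: 'DacpacTask'
    DacpacFile: '$(System.DefaultWorkingDirectory)/_AKSDemo-Database-CI/DatabaseDrop/DatabaseApplication.dacpac'
```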

Define the Variables
  1. Go to the Variables section of the Releases tab, then click Add; enter the Name and Value, and choose Dev as the Scope.

    s62

  2. For this release definition you need to define four variables, which are ServerName, DatabaseName, DatabaseUserName, and DatabasePassword.

    s72
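In YAML form, the same variables could be sketched as follows (all values are placeholders; a password belongs in a secret variable or variable group, never in plain text):

```yaml
variables:
  ServerName: 'aksdemo-sqlserver.database.windows.net'   # placeholder
  DatabaseName: 'AKSDemoDB'                              # placeholder
  DatabaseUserName: 'sqladmin'                           # placeholder
  # DatabasePassword should be defined as a secret variable in the
  # pipeline UI (or a variable group), not committed to source control.
```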

Specify Release number format
  1. Select the Options tab and specify the Release number format (for example, Database Release-$(rev:r)), as shown in the figure below:

    s64

Enable continuous deployment trigger
  1. Go to the Pipeline section of the Releases tab, select the lightning bolt on the artifact to configure continuous deployment, and then enable the Continuous deployment trigger on the right:

    s65

  2. Click on Save:

    s66

Complete Release Pipeline
  1. Go to the Pipeline tab, then see your completed release pipeline like this:

    s67

Deploy a release
  1. Create a new release:

    s68

  2. Define the trigger settings and artifact source for the release and then select Create:

    s69

  3. Open the release that you just created:

    s70

  4. View the logs to get real-time data about the release:

    s71

    Note:

    You can track the progress of each release to see if it has been deployed to all the stages. You can track the commits that are part of each release, the associated work items, and the results of any test runs that you’ve added to the release pipeline.

  5. If the pipeline runs successfully, you’ll get a list of green checkmarks, just like the release pipeline:

    s73

Everything is now in place for the build and release definitions of the database application. Note that during this initial setup you had to create the build and release manually, without relying on the automatic triggers.

From now on, you can modify the files in the DatabaseApplication and check in your code, and because you have enabled the automatic triggers for both the build and release definitions, your code gets automatically deployed all the way to the Dev stage.

Building and Deploying Micro Services with Azure Kubernetes Service (AKS) and Azure DevOps Part-1

Overview of this 4-Part Blog series

This blog outlines the process to

  • Compile a Database application and deploy it into Azure SQL Database
  • Compile a Docker-based ASP.NET Core web application and API application
  • Deploy the web and API applications to a Kubernetes cluster running on Azure Kubernetes Service (AKS) using Azure DevOps

s161

The content of this blog is divided up into 4 main parts:
Part-1: Explains the details of Docker & how to set up local and development environments for Docker applications
Part-2: Explains in detail the Inner-loop development workflow for both Docker and Database applications
Part-3: Explains in detail the Outer-loop DevOps workflow for a Database application
Part-4: Explains in detail how to create an Azure Kubernetes Service (AKS), Azure Container Registry (ACR) through the Azure CLI, and an Outer-loop DevOps workflow for a Docker application

Part-1: The details of Docker & how to set up local and development environments for Docker applications

Introduction to Containers and Docker

      I.   The creation of Containers and their use
      II.  Docker Containers vs Virtual Machines
      III. What is Docker?
      IV. Docker Benefits
      V.  Docker Architecture and Terminology

 I. The creation of Containers and their use

Containerization is an approach to software development in which an application or service, its dependencies, and its configuration are packaged together as a container image. You then can test the containerized application as a unit and deploy it as a container image instance to the host operating system.
Placing software into containers makes it possible for developers and IT professionals to deploy those containers across environments with little or no modification.
Containers also isolate applications from one another on a shared operating system (OS). Containerized applications run on top of a container host, which in turn runs on the OS (Linux or Windows). Thus, containers have a significantly smaller footprint than virtual machine (VM) images.
Containers offer the benefits of isolation, portability, agility, scalability, and control across the entire application life cycle workflow. The most important benefit is the isolation provided between Dev and Ops.

II. Docker Containers vs. Virtual Machines

Docker containers are lightweight because in contrast to virtual machines, they don’t need the extra load of a hypervisor, but run directly within the host machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines. You can even run Docker containers within host machines that are actually virtual machines!

Picture7

III. What is Docker?

  • An open platform for developing, shipping, and running applications
  • Enables separating your applications from your infrastructure for quick software delivery 
  • Enables managing your infrastructure in the same way you manage your applications
  • By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production
  • Uses the Docker Engine to quickly build and package apps as Docker images, created using files written in the Dockerfile format, which are then deployed and run in a layered container

IV. Docker Benefits

1.  Fast, consistent delivery of your applications

Docker streamlines the development lifecycle by allowing developers to work in standardized environments. It uses local containers to support your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.

Consider the following scenario:
  • Your developers write code locally and share their work with their colleagues using Docker containers.
  • They use Docker to push their applications into a test environment and execute automated and manual tests.
  • When developers find bugs, they can fix them in the development environment and redeploy to the test environment for testing and validation.
  • When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.

2.  Runs more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals.

Docker is perfect for high density environments and for small and medium deployments where you need to do more with fewer resources.

V. Docker Architecture and Terminology

1.  Docker Architecture Overview

The Docker Engine is a client-server application with three major components:

  • A server which is a type of long-running program called a daemon process
  • A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do
  • A command line interface (CLI) client (the Docker command)

Picture8

Docker client and daemon relation:

  • Both client and daemon can run on the same system, or you can connect a client to a remote Docker daemon
  • When using commands such as docker run, the client sends them to the Docker daemon, which carries them out
  • Client and daemon communicate via a REST API, over UNIX sockets or a network interface

Picture9

2. Docker Terminology

The following are the basic definitions anyone needs to understand before getting deeper into Docker.

Azure Container Registry

  •  A managed registry service for working with Docker images and their components in Azure
  •  This provides a registry that is close to your deployments in Azure and that gives you control over access, making it possible to use your Azure Active Directory groups and permissions.

Build

  •  The action of building a container image based on the information and context provided by its Dockerfile as well as additional files in the folder where the image is built
  •  You can build images by using the docker build command

Cluster

  •  A collection of Docker hosts exposed as if they were a single virtual Docker host so that the application can scale to multiple instances of the services spread across multiple hosts within the cluster
  •  Can be created by using Docker Swarm, Mesosphere DC/OS, Kubernetes, or Azure Service Fabric

Note: If you use Docker Swarm for managing a cluster, you typically refer to the cluster as a swarm instead of a cluster.

Compose

  •  A command-line tool and YAML file format with metadata for defining and running multi-container applications
  •  You define a single application based on multiple images with one or more .yml files that can override values depending on the environment
  •  After you have created the definitions, you can deploy the entire multi-container application by using a single command (docker-compose up) that creates a container per image on the Docker host
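For example, a minimal docker-compose.yml for the two application images used in this series might look like this (service names, image names, and port mappings are illustrative):

```yaml
version: '3.4'
services:
  apiapplication:
    image: apiapplication            # illustrative image name
    build:
      context: .
      dockerfile: APIApplication/Dockerfile
  webapplication:
    image: webapplication            # illustrative image name
    build:
      context: .
      dockerfile: WebApplication/Dockerfile
    ports:
    - "80:80"                        # expose the web app on the host
    depends_on:
    - apiapplication
```

Running `docker-compose up` against this file would build both images and start one container per service on the Docker host.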

Container
An instance of an image is called a container. The container or instance of a Docker image will contain the following components:

  1. An operating system selection (for example, a Linux distribution or Windows)
  2. Files added by the developer (for example, app binaries)
  3. Configuration (for example, environment settings and dependencies)
  4. Instructions for which processes Docker should run
    • A container represents a runtime for a single application, process, or service. It consists of the contents of a Docker image, a runtime environment, and a standard set of instructions.
    • You can create, start, stop, move, or delete a container using the Docker API or CLI.
    • When scaling a service, you create multiple instances of a container from the same image. Or, a batch job can create multiple containers from the same image, passing different parameters to each instance.

Docker client

  • Is the primary way that many Docker users interact with Docker
  •  Can communicate with more than one daemon

Docker Community Edition (CE)

  •  Provides development tools for Windows and macOS for building, running, and testing containers locally
  •  Docker CE for Windows provides development environments for both Linux and Windows Containers
  •  The Linux Docker host on Windows is based on a Hyper-V VM. The host for Windows Containers is directly based on Windows
  • Docker CE for Mac is based on the Apple Hypervisor framework and the xhyve hypervisor, which provides a Linux Docker host VM on Mac OS X
  •  Docker CE for Windows and for Mac replaces Docker Toolbox, which was based on Oracle VirtualBox

Docker daemon (dockerd)

  • Listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes
  •  Can also communicate with other daemons to manage Docker services

Docker Enterprise Edition

It is designed for enterprise development and is used by IT teams who build, ship, and run large business-critical applications in production.

Dockerfile

It is a text file that contains instructions for how to build a Docker image.
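For example, a minimal multi-stage Dockerfile for an ASP.NET Core app of this era might look like the following (image tags and the project name are illustrative, not taken from this series):

```dockerfile
# Build stage: restore and publish the app with the .NET Core SDK image.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: copy only the published output into the smaller runtime image.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "WebApplication.dll"]
```

Running `docker build -t webapplication .` in the folder containing this file produces a layered image ready to run as a container.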

Docker Hub

  • A public registry to upload images and work with them
  • Provides Docker image hosting, public or private registries, build triggers, web hooks, and integration with GitHub and Bitbucket

Docker Image

  • A package with all of the dependencies and information needed to create a container. An image includes all of the dependencies (such as frameworks) plus deployment and configuration to be used by a container runtime.
  • Usually, an image derives from multiple base images that are layers stacked one atop the other to form the container’s file system.
  • An image is immutable after it has been created. Docker image containers can run natively on Linux and Windows:

    •  Windows images can run only on Windows hosts
    •  Linux images can run only on Linux hosts, meaning a host server or a VM
    •  Developers working on Windows can create images for either Linux or Windows Containers

Docker Trusted Registry (DTR)

It is a Docker registry service (from Docker) that you can install on-premises so that it resides within the organization’s datacenter and network. It is convenient for private images that should be managed within the enterprise. Docker Trusted Registry is included as part of the Docker Datacenter product. For more information, go to https://docs.docker.com/docker-trusted-registry/overview/.

Orchestrator

  •  A tool that simplifies management of clusters and Docker hosts
  •  Used to manage images, containers, and hosts through a CLI or a graphical user interface
  •  Helps manage container networking, configurations, load balancing, service discovery, high availability, Docker host configuration, and more
  •  Responsible for running, distributing, scaling, and healing workloads across a collection of nodes
  •  Typically, orchestrator products are the same products that provide cluster infrastructure, like Mesosphere DC/OS, Kubernetes, Docker Swarm, and Azure Service Fabric

Registry

  •  A service that provides access to repositories
  •  The default registry for most public images is Docker Hub (owned by Docker as an organization)
  •  A registry usually contains repositories from multiple teams

Companies often have private registries to store and manage images that they’ve created. Azure Container Registry is another example.

Repository (also known as repo)

  • A collection of related Docker images labeled with a tag that indicates the image version
  • Some repositories contain multiple variants of a specific image, such as an image containing SDKs (heavier), an image containing only runtimes (lighter), and so on. Those variants can be marked with tags
  • A single repository can contain platform variants, such as a Linux image and a Windows image

Tag:

A mark or label that you can apply to images so that different images or versions of the same image (depending on the version number or the destination environment) can be identified
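For example, the same image can carry several tags, such as a version number and a registry-qualified name used for pushing (the image name aksdemo and the registry myregistry.azurecr.io below are illustrative):

```shell
# Tag the locally built image "aksdemo" with a version number
docker tag aksdemo aksdemo:1.0

# Add a registry-qualified tag and push it to a private registry
docker tag aksdemo:1.0 myregistry.azurecr.io/aksdemo:1.0
docker push myregistry.azurecr.io/aksdemo:1.0
```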

Setting up local and development environments for Docker applications

 

Basic Docker taxonomy: containers, images, and registries

Picture10

Introduction to the Docker application lifecycle

The lifecycle of containerized applications is a journey that starts with the developer. Developers choose containers and Docker because they eliminate friction between deployments and IT Operations, which ultimately helps teams be more agile and more productive end-to-end.

Picture1

By the very nature of the Containers and Docker technology, developers are able to easily share their software and dependencies with IT Operations and production environments while eliminating the typical “it works on my machine” excuse.

Containers solve application conflicts between different environments. Indirectly, containers and Docker bring developers and IT Ops closer together, making it easier for them to collaborate effectively.

With Docker Containers, developers own what’s inside the container (application/service and dependencies to frameworks/components) and how the containers/services behave together as an application composed by a collection of services.

The interdependencies of the multiple containers are defined with a docker-compose.yml file, or what could be called a deployment manifest.
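As a sketch, a docker-compose.yml describing a web service that depends on a SQL Server container might look like this (the service names, port mapping, and SA password are illustrative):

```yaml
version: '3'
services:
  webapp:
    build: .              # built from the Dockerfile in the current folder
    ports:
      - "8080:80"         # host port 8080 -> container port 80
    depends_on:
      - sql.data
  sql.data:
    image: microsoft/mssql-server-linux:2017-latest
    environment:
      SA_PASSWORD: "Pass@word123"
      ACCEPT_EULA: "Y"
```

Running `docker-compose up` would then start both containers and wire them onto the same network.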

Meanwhile, IT Operations teams (IT Pros and IT management) can focus on managing production environments, infrastructure, scalability, and monitoring, ultimately making sure the applications deliver correctly for end users, without having to know the contents of the various containers. Hence the name “container,” by analogy with real-life shipping containers: just as a shipping company moves contents from A to B without knowing or caring what is inside, developers own the contents within a container.

Developers, on the left of the above image, write code and run it in Docker containers locally using Docker for Windows/Linux. They define their operating environment with a dockerfile that specifies the base OS they run on and the build steps for building their code into a Docker image.

They define how one or more images will inter-operate using a deployment manifest like a docker-compose.yml file. As they complete their local development, they push their application code plus the Docker configuration files to the code repository of their choice (i.e. Git repos).

The DevOps pillar defines the build-CI-pipelines using the dockerfile provided in the code repo. The CI system pulls the base container images from the Docker registries they’ve configured and builds the Docker images. The images are then validated and pushed to the Docker registry used for the deployments to multiple environments.

Operations teams, on the right of the above image, manage deployed applications and infrastructure in production while monitoring the environment and applications, providing feedback and insights to the development team about how the application can be improved. Container apps are typically run in production using container orchestrators.

Introduction to a generic E2E Docker application lifecycle workflow

s1

Benefits from DevOps for containerized applications

The most important benefits provided by a solid DevOps workflow are:

  1. Deliver better quality software faster and with better compliance
  2. Drive continuous improvement and adjustments earlier and more economically
  3. Increase transparency and collaboration among stakeholders involved in delivering and operating software
  4. Control costs and utilize provisioned resources more effectively while minimizing security risks
  5. Plug and play well with many of your existing DevOps investments, including investments in open source

Introduction to the Microsoft platform and tools for containerized applications

s2

The above figure shows the main pillars in the lifecycle of Docker apps classified by the type of work delivered by multiple teams (app-development, DevOps infrastructure processes and IT Management and Operations).

Platform for Docker Apps

  • Microsoft technologies: Visual Studio & Visual Studio Code, .NET, Azure Kubernetes Service, Azure Service Fabric, Azure Container Registry
  • 3rd party, Azure-pluggable: any code editor (i.e. Sublime, etc.), any language (Node, Java, etc.), any orchestrator and scheduler, any Docker registry

DevOps for Docker Apps

  • Microsoft technologies: Azure DevOps Services, Team Foundation Server, Azure Kubernetes Service, Azure Service Fabric
  • 3rd party, Azure-pluggable: GitHub, Git, Subversion, etc.; Jenkins, Chef, Puppet, Velocity, CircleCI, TravisCI, etc.; on-premises Docker Datacenter, Docker Swarm, Mesos DC/OS, Kubernetes, etc.

Management & Monitoring

  • Microsoft technologies: Operations Management Suite, Application Insights
  • 3rd party, Azure-pluggable: Marathon, Chronos, etc.

The Microsoft platform and tools for containerized Docker applications, as defined in above Figure has the following components:

    • Platform for Docker Apps development. The development of a service, or collection of services, that make up an “app”. The development platform provides everything a developer requires before pushing code to a shared code repo. Developing services deployed as containers is very similar to developing the same apps or services without Docker. You continue to use your preferred language (.NET, Node.js, Go, etc.) and preferred editor or IDE, like Visual Studio or Visual Studio Code. However, rather than treating Docker as a deployment target, you develop your services in the Docker environment. You build, run, test and debug your code in containers locally, which provides the target environment at development time and drastically improves your development and operations lifecycle. Visual Studio and Visual Studio Code have extensions that integrate container build, run and test workflows for your .NET, .NET Core and Node.js applications.
    • DevOps for Docker Apps. Developers creating Docker applications can leverage Azure DevOps Services (Azure DevOps) or any other third party product like Jenkins, to build out a comprehensive automated application lifecycle management (ALM).
      With Azure DevOps, developers can create container-focused DevOps for a fast, iterative process that covers source-code control from anywhere (Azure DevOps-Git, GitHub, any remote Git repository or Subversion), continuous integration (CI), internal unit tests, inter-container/service integration tests, continuous delivery (CD), and release management (RM). Developers can also automate their Docker application releases into Azure Kubernetes Service, from development to staging and production environments.
      • IT production management and monitoring.
        Management –
        IT can manage production applications and services in several ways:

        1. Azure portal. If using OSS orchestrators, Azure Kubernetes Service (AKS) plus cluster management tools like Docker Datacenter and Mesosphere Marathon help you to set up and maintain your Docker environments. If using Azure Service Fabric, the Service Fabric Explorer tool allows you to visualize and configure your cluster
        2. Docker tools. You can manage your container applications using familiar tools. There’s no need to change your existing Docker management practices to move container workloads to the cloud. Use the application management tools you’re already familiar with and connect via the standard API endpoints for the orchestrator of your choice. You can also use other third party tools to manage your Docker applications like Docker Datacenter or even CLI Docker tools.
        3. Open source tools. Because AKS exposes the standard API endpoints for the orchestration engine, the most popular tools are compatible with Azure Kubernetes Service and, in most cases, will work out of the box—including visualizers, monitoring, command line tools, and even future tools as they become available.
        Monitoring – While running production environments, you can monitor every angle with:
        1. Operations Management Suite (OMS). The “OMS Container Solution” can manage and monitor Docker hosts and containers by showing information about where your containers and container hosts are, which containers are running or failed, and Docker daemon and container logs. It also shows performance metrics such as CPU, memory, network and storage for the container and hosts to help you troubleshoot and find noisy neighbour containers.
        2. Application Insights. You can monitor production Docker applications by simply adding its SDK to your services so you can get telemetry data from the applications.

Set up a local environment for Docker

A local development environment for Docker has the following prerequisites:

  • 64-bit Windows 10 Pro, Enterprise, or Education (1607 Anniversary Update, Build 14393 or later)
  • Hyper-V enabled (the Docker for Windows installer enables it for you, if needed)
  • Virtualization enabled in the BIOS

If your system does not meet the requirements to run Docker for Windows, you can install Docker Toolbox, which uses Oracle VirtualBox instead of Hyper-V.

  • README FIRST for Docker Toolbox and Docker Machine users: Docker for Windows requires Microsoft Hyper-V to run. The Docker for Windows installer enables Hyper-V for you, if needed, and restarts your machine. After Hyper-V is enabled, VirtualBox no longer works, but any VirtualBox VM images remain. VirtualBox VMs created with docker-machine (including the default one typically created during Toolbox install) no longer start. These VMs cannot be used side-by-side with Docker for Windows. However, you can still use docker-machine to manage remote VMs.
  • Virtualization must be enabled in the BIOS, and the CPU must be SLAT-capable. Typically, virtualization is enabled by default. This is different from having Hyper-V enabled. For more detail see Virtualization must be enabled in Troubleshooting.

Enable Hypervisor

A hypervisor enables virtualization, the foundation on which all container orchestrators operate, including Kubernetes.

This blog uses Hyper-V as the hypervisor. On many Windows 10 versions, Hyper-V is already installed—for example, on 64-bit versions of Windows Professional, Enterprise, and Education in Windows 8 and later. It is not available on Windows Home edition.

NOTE: If you’re running something other than Windows 10 on your development platforms, another hypervisor option is to use VirtualBox, a cross-platform virtualization application. For a list of hypervisors, see “Install a Hypervisor” on the Minikube page of the Kubernetes documentation.

NOTE:
Install Hyper-V on Windows 10: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v

To enable Hyper-V manually on Windows 10 and set up a virtual switch:

          1. Go to Control Panel > Programs, then click Turn Windows features on or off.
            Picture2
          2. Select the Hyper-V check boxes, then click OK.
          3. To set up a virtual switch, type hyper in the Windows Start menu, then select Hyper-V Manager.
          4. In Hyper-V Manager, select Virtual Switch Manager.
          5. Select External as the type of virtual switch.
          6. Select the Create Virtual Switch button.
          7. Ensure that the Allow management operating system to share this network adapter checkbox is selected.
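Alternatively, Hyper-V can be enabled from an elevated PowerShell prompt with a single cmdlet (a restart is required afterwards):

```powershell
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```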

The current version of Docker for Windows runs on 64-bit Windows 10 Pro, Enterprise and Education (1607 Anniversary Update, Build 14393 or later).

Containers and images created with Docker for Windows are shared between all user accounts on machines where it is installed. This is because all Windows accounts use the same VM to build and run containers.

Nested virtualization scenarios, such as running Docker for Windows on a VMWare or Parallels instance, might work, but come with no guarantees. For more information, see Running Docker for Windows in nested virtualization scenarios

Installing Docker for Windows

Docker for Windows is a Docker Community Edition (CE) app.

  • The Docker for Windows install package includes everything you need to run Docker on a Windows system.
  • Download the above file, and double click on downloaded installer file then follow the install wizard to accept the license, authorize the installer, and proceed with the install.
  • You are asked to authorize the Docker app with your system password during the install process. Privileged access is needed to install networking components, links to the Docker apps, and to manage the Hyper-V VMs.
  • Click Finish on the setup complete dialog to launch Docker.
  • The installation provides Docker Engine, Docker CLI client, Docker Compose, Docker Machine, and Kitematic.

More info:  To learn more about installing Docker for Windows, go to https://docs.docker.com/docker-for-windows/.

Note:

  1. You can develop both Docker Linux containers and Docker Windows containers with Docker for Windows.
  2. The current version of Docker for Windows runs on 64-bit Windows 10 Pro, Enterprise and Education (1607 Anniversary Update, Build 14393 or later).
  3. Virtualization must be enabled. You can verify that virtualization is enabled by checking the Performance tab on the Task Manager.
  4. The Docker for Windows installer enables Hyper-V for you.
  5. Containers and images created with Docker for Windows are shared between all user accounts on machines where it is installed. This is because all Windows accounts use the same VM to build and run containers.
  6. We can switch between Windows and Linux containers.

Test your Docker installation

  1. Open a terminal window (Command Prompt or PowerShell, but not PowerShell ISE).
  2. Run docker --version or docker version to ensure that you have a supported version of Docker.
  3. The output should tell you the basic details about your Docker environment:

docker --version

Docker version 18.05.0-ce, build f150324

docker version

Client:
Version: 18.05.0-ce
API version: 1.37
Go version: go1.9.5
Git commit: f150324
Built: Wed May 9 22:12:05 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm

Server:
Engine:
Version: 18.05.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.10.1
Git commit: f150324
Built: Wed May 9 22:20:16 2018
OS/Arch: linux/amd64
Experimental: true

Note: The OS/Arch field tells you the operating system you’re using. Docker is cross-platform, so you can manage Windows Docker servers from a Linux client and vice-versa, using the same docker commands.

Start Docker for Windows

Docker does not start automatically after installation. To start it, search for Docker, select Docker for Windows in the search results, and click it (or hit Enter).

Picture3

When the whale in the status bar stays steady, Docker is up-and-running, and accessible from any terminal window.

Picture4

If the whale is hidden in the Notifications area, click the up arrow on the taskbar to show it. To learn more, see Docker Settings.

If you just installed the app, you also get a popup success message with suggested next steps, and a link to this documentation.

Picture5

When initialization is complete, select About Docker from the notification area icon to verify that you have the latest version.

Congratulations! You are up and running with Docker for Windows.

Picture6

Important Docker Commands

  • List all images: docker images -a  (or docker image ls -a)
  • Remove an image by ID: docker rmi d62ae1319d0a
  • List all containers: docker ps -a  (or docker container ls -a)
  • Remove a container by ID: docker container rm d62ae1319d0a
  • Remove ALL containers: docker container rm -f $(docker container ls -a -q)
  • Get terminal access to a running container: docker exec -it <containername> /bin/bash (Linux) or docker exec -it <containername> cmd.exe (Windows)
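As a quick worked example combining several of these commands (the container name demo is illustrative):

```shell
docker run --name demo hello-world   # run a test container; it prints a message and exits
docker container ls -a               # the exited "demo" container appears in the list
docker container rm demo             # remove the container
docker image rm hello-world          # remove the image
```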

Set up Development environment for Docker apps

Development tools choices: IDE or editor

Whether you prefer a full, powerful IDE or a lightweight, agile editor, Microsoft has you covered when developing Docker applications.

Visual Studio Code and Docker CLI (Cross-Platform Tools for Mac, Linux and Windows). If you prefer a lightweight and cross-platform editor supporting any development language, you can use Microsoft Visual Studio Code and Docker CLI.

These products provide a simple yet robust experience which is critical for streamlining the developer workflow.

By installing “Docker for Mac” or “Docker for Windows” (development environment), Docker developers can use a single Docker CLI to build apps for either Windows or Linux (execution environment). Plus, Visual Studio Code supports extensions for Docker with IntelliSense for Dockerfiles and shortcut tasks to run Docker commands from the editor.

Download and Install Visual Studio Code

Download and Install Docker for Mac and Windows

Visual Studio with Docker Tools.

When using Visual Studio 2015 you can install the add-on tools “Docker Tools for Visual Studio”.

When using Visual Studio 2017, Docker Tools come built-in already.

In both cases you can develop, run and validate your applications directly in the target Docker environment.

F5 your application (single container or multiple containers) directly into a Docker host with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container.

This is the simplest and most powerful choice for Windows developers targeting Docker containers for Linux or Windows.

Download and Install Visual Studio Enterprise 2015/2017

Download and Install Docker for Mac and Windows

If you’re using Visual Studio 2015, you must have Update 3 or a later version plus the Visual Studio Tools for Docker.

More info: For instructions on installing Visual Studio, go to https://www.visualstudio.com/products/vs-2015-product-editions.

To see more about installing Visual Studio Tools for Docker, go to http://aka.ms/vstoolsfordocker and https://docs.microsoft.com/aspnet/core/host-and-deploy/docker/visual-studio-tools-for-docker.

If you’re using Visual Studio 2017, Docker support is already included.

Language and framework choices

You can develop Docker applications with Microsoft tools in most modern languages. The following is an initial list, but you are not limited to it.

  1. .NET Core and ASP.NET Core
  2. Node.js
  3. Go
  4. Java
  5. Ruby
  6. Python

Basically, you can use any modern language supported by Docker in Linux or Windows.

Note: In this blog, we use Visual Studio 2017 as the development IDE, with .NET Core and ASP.NET Core to develop container-based applications.