Docker – Little Big Extra
http://littlebigextra.com
A technology blog covering topics on Java, Scala, Docker, AWS, BigData, DevOps and much more to come. Do-it-yourself instructions for problems from simple to complex, for novices and experts alike.

Five points guide to become a Docker Guru for beginners


Introduction

When it comes to Docker, many people ask about courses for learning and mastering containerization and deployment. Based on my experience of using Docker, I feel the best resources for learning it are available in the official Docker Documentation. Here I will list five basic steps to help anyone who wishes to learn and master Docker.

 

5 Points Guide to become a Docker Guru

1. Installing Docker on your operating system/local machine

The first baby step is installing Docker on your local machine. Make sure the Docker recommendations for memory (at least 4GB of RAM) and the other hardware requirements are met. Docker is available for both Windows and Mac, as well as most Linux distributions.

Docker Community Edition is the open-source version: it can be used free of cost and has most of Docker's features, minus commercial support.

When the installation finishes, Docker starts automatically and you should see a whale icon in the notification area, which means Docker is running and accessible from a terminal. Open a command-line terminal and run docker version to check the version, then run docker run hello-world to verify that Docker can pull and run images from Docker Hub. You may be asked for your Docker Hub credentials when pulling images.
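For example, the first sanity checks could look like this (output will vary with your installed version):

# check that both the client and the server (daemon) respond
docker version

# pull a tiny test image from Docker Hub and run it
docker run hello-world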

2. Familiarizing yourself with docker pull and docker run commands 

The next step is to familiarize yourself with the docker pull and docker run commands. docker pull fetches an image from a Docker registry, and docker run runs the downloaded image. A running image is also called a Docker container.

If you just use the docker run command, it will first pull the image and then run it. While using docker run, make sure you familiarize yourself with the various flags it accepts. There are many, but some of the handy ones are listed below, with a short usage sketch after the list.

  • -d: run the container detached (in the background) instead of in the foreground
  • --name: assign a container name; if you do not, the daemon generates a random string name for you
  • -P: publish all exposed ports to the host interfaces
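As a minimal sketch (the image and container name here are just examples):

# download the image from Docker Hub
docker pull nginx

# run it detached, give it a fixed name, and publish its exposed ports
docker run -d --name my-nginx -P nginx

# list running containers to confirm it is up
docker ps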

At this point, I strongly advise downloading Kitematic. Kitematic's one-click install gets Docker running on your Mac and lets you control your app containers from a graphical user interface (GUI). If you missed any flags with your run command, you can fix them using Kitematic's UI. Another feature I like about Kitematic is that it makes it easy to remove unused images, get into the bash shell of a Docker container, see the logs, and so on.

3. Creating Images using Dockerfile and pushing images to the registry using docker push  

A Dockerfile is the building block of Docker images and containers. Dockerfiles use a simple DSL which allows you to automate the steps needed to create an image. A Docker image consists of read-only layers, each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer.

Once your application is ready and you want to package it into a Docker image, you will need to use the Dockerfile DSL instructions. If all is well at this point, that is, all instructions of the Dockerfile have completed and a Docker image has been created, do a sanity check using the docker run command to verify that things are working.

Use docker push to share your images to the Docker Hub registry, or to a self-hosted one, so they can be used across various environments and by various users/team members.
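An end-to-end sketch; the repository and image names below are placeholders to replace with your own:

# log in to Docker Hub (or your self-hosted registry)
docker login

# build an image from the Dockerfile in the current directory and tag it
docker build -t yourrepo/my-app:1.0 .

# sanity check: run the freshly built image
docker run -d -p 8080:80 yourrepo/my-app:1.0

# share it so other environments and team members can pull it
docker push yourrepo/my-app:1.0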

4. Docker Compose and Persistent Data Storage 

Considering that you are now a master of all the above commands and able to create Docker images, the next step is to use multiple Docker images together. With Compose, you use a YAML file to define your application's services. You can configure as many containers as you want, how they should be built and connected, and where data should be stored. When the YAML file is complete, you can run a single command to build, run, and configure all of the containers.

Since you now have multiple Docker containers running and interacting with each other, you will want to persist data too. Docker volumes are your saviour for this cause: you can use the -v or --mount flag to map data from Docker containers to disk. Since Docker containers are ephemeral, once they are killed any data stored within them is lost, so it is important to use Docker volumes so data can be persisted.
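A minimal sketch, assuming a named volume and an example image (the names and paths here are illustrative):

# create a named volume managed by Docker
docker volume create app-data

# mount it into a container; anything written to /data survives container removal
docker run -d --name app -v app-data:/data nginx

# alternatively, bind-mount an existing host directory with --mount
docker run -d --name app2 --mount type=bind,source=/srv/data,target=/data nginx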

5. Docker Swarm 

So now you have multiple containers running using docker-compose; the next step is to ensure availability and high performance for your application by distributing it over a number of Docker hosts inside a cluster. With Docker Swarm, you can control a lot of things:

  • You can have multiple instances of a single service (container) running on the same or different machines.
  • You can scale the number of containers up or down at runtime.
  • Service discovery, rolling updates and load balancing are provided by default.

If you are using Docker Swarm, make sure you familiarize yourself with the quorum of nodes and the manager/worker configuration. The managers in Docker Swarm need to maintain a quorum of managers, which in simple terms means that the number of available manager nodes should always be greater than or equal to (n+1)/2, where n is the total number of manager nodes. So if you have 3 manager nodes, 2 should always be up, and if you have 5 manager nodes, 3 should be up. It is also a good idea to have managers running in different geographic locations.

Hopefully, the above points will give novices some insight into the Docker world. I have documented my Docker journey here: Docker Archives – Little Big Extra.

The best way to learn is by doing it.   

Using Kibana and ElasticSearch for Log Analysis with Fluentd on Docker Swarm

Introduction

In my previous post, I talked about how to configure fluentd for logging from multiple Docker containers. That post explained how to create a single log file for each microservice, irrespective of how many instances it has.
However, log files have limitations: it is not easy to extract analysis from them or find any trends.

Elasticsearch and Splunk have become very popular in recent years, as they allow you to view events in real time, visualise trends and search through logs.

Elastic Search and Kibana

Elasticsearch is an open-source search engine based on Apache Lucene. It is an extremely fast search engine and is commonly used for log analytics, full-text search and much more.
Along with Kibana, which is a visualisation tool, Elasticsearch can be used for real-time analytics. With Kibana you can create intuitive charts and reports, filters, aggregations and trends based on your data.

Changing the fluent.conf

Since this post is a continuation of the previous one, I will show you how to modify fluent.conf for Elasticsearch.

All we need to do is add another "store" block, like the one below:

<store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix logstash
    logstash_dateformat %Y%m%d
    include_tag_key true
    tag_key @log_name
    flush_interval 1s
</store>

In the above config, we are saying that Elasticsearch is running on port 9200 and that the host is elasticsearch (the Docker container name). We have also defined the general date format, and flush_interval has been set to 1s, which tells fluentd to send records to Elasticsearch every second.

This is how the complete configuration looks:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match tutum>
  @type copy
   <store>
    @type file
    path /fluentd/log/tutum.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix logstash
    logstash_dateformat %Y%m%d
    include_tag_key true
    tag_key @log_name
    flush_interval 1s
  </store>
</match>
<match visualizer>
  @type copy
   <store>
    @type file
    path /fluentd/log/visualizer.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
    <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix logstash
    logstash_dateformat %Y%m%d
    include_tag_key true
    tag_key @log_name
    flush_interval 1s
  </store>
</match>

Create Dockerfile with our custom configuration

So the next step is to create a custom fluentd image containing the above configuration file.
Save the file above as fluent.conf in a folder named conf, then create a file called Dockerfile at the same level as the conf folder.

# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-rdoc", "--no-ri", "--version", "1.9.2"]
RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc

In this Dockerfile, as you can see, we replace the fluent.conf in the base image with our version and also install the Elasticsearch plugin.

Now let us create the Docker image:

docker build -t ##YourREPOname##/myfluentd:latest .

and then push it to the Docker Hub repository:

docker push ##YourREPOname##/myfluentd


Elastic Search and Kibana Docker Image

This is how the YAML configuration for Elasticsearch looks:

elasticsearch:
    image: elasticsearch
    ports:
      - "9200:9200"
    networks:
      - net
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    logging:
        driver: "json-file"
        options:
          max-size: 10M
          max-file: 1  
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
      resources:
        limits:
          memory: 1000M
    volumes:
      - ./esdata:/usr/share/elasticsearch/data

Things to note in this Elasticsearch configuration (a quick reachability check follows the list):

  • The name of the container is set to elasticsearch; this is the same as what has been referenced in fluent.conf.
  • Port 9200 has been exposed so Elasticsearch can also be accessed locally.
  • The environment variables are important here, as they restrict the maximum heap space this container can use and prevent any Elasticsearch memory from being swapped out.
  • The json-file logging driver is used because I want to restrict the log file size.
  • The Elasticsearch data is stored in a directory called "esdata" so data can persist between container restarts.
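Once the stack is up, a quick way to confirm Elasticsearch is reachable (assuming port 9200 is published on the host you are querying from):

# should return a small JSON blob with the cluster name and version
curl http://localhost:9200

# list the indices; fluentd will create one logstash-YYYYMMDD index per day
curl "http://localhost:9200/_cat/indices?v"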

This is how the YAML configuration for Kibana looks:

kibana:
    image: kibana
    ports:
      - "5601:5601"
    networks:
      - net
    logging:
        driver: "json-file"
        options:
           max-size: 10M
           max-file: 1        
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s

Things to note for this Kibana configuration file:

  • Port 5601 has been exposed so the Kibana dashboard can be accessed locally.

Complete Config file

So now we need a complete docker-compose file containing the whoami service with multiple instances, the Docker visualizer service, and the Elasticsearch, Kibana and fluentd services.
This is how the complete file looks:

version: "3"
 
services:
       
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"# Logging Driver
      options:
        tag: tutum    # TAG 
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 4
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
 
  vizualizer:
      image: dockersamples/visualizer
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock
      ports:
        - "8080:8080"
      networks:
        - net
      logging:
        driver: "fluentd"
        options:
         tag: visualizer   #TAG 
      deploy:
          restart_policy:
             condition: on-failure
             delay: 20s
             max_attempts: 3
             window: 120s
          mode: replicated # one container per manager node
          replicas: 1
          update_config:
            delay: 2s
          placement:
             constraints: [node.role == manager]
 
        
  fluentd:
    image: abhishekgaloda/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
 
  elasticsearch:
    image: elasticsearch
    ports:
      - "9200:9200"
    networks:
      - net
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    logging:
        driver: "json-file"
        options:
          max-size: 10M
          max-file: 1  
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
      resources:
        limits:
          memory: 1000M
    volumes:
      - ./esdata:/usr/share/elasticsearch/data    
      
  kibana:
    image: kibana
    ports:
      - "5601:5601"
    networks:
      - net
    logging:
        driver: "json-file"
        options:
           max-size: 10M
           max-file: 1        
    deploy:
      restart_policy:
        condition: on-failure
        delay: 20s
        max_attempts: 3
        window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s  
 
networks:
  net:

You can run the above services on Docker Swarm using the command below; make sure you save the file with the name docker-swarm.yml.

docker stack deploy -c docker-swarm.yml test

Accessing kibana and elasticsearch

Once you make sure that all services are up and running using

docker service ls

you can access Kibana at
http://##domain#Or#IP##:5601/
Once you see a screen like the one below, click on the Create button and then Discover at the top left; you should see some bars indicating logs.

Kibana Dashboard on Docker Swarm
Now, if you hit http://##domain#Or#IP##/hello, the whoami container will generate some logs, which should appear in Kibana provided the right time range has been chosen and auto-refresh is enabled. See the screenshot below.

Kibana offers many ways to create visualizations like charts and graphs for log analysis, which you can explore further.
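To generate some traffic and confirm that records actually arrived in Elasticsearch, something like this can be used (the index name assumes the logstash prefix and date format configured above, with dates in UTC):

# hit the whoami service a few times to produce log records
for i in $(seq 1 20); do curl -s http://localhost/hello > /dev/null; done

# count today's documents in the logstash index fluentd writes to
curl "http://localhost:9200/logstash-$(date -u +%Y%m%d)/_count"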

Logs in Elastic search + Kibana dashboard

Drop in suggestions or comments for any feedback.

Follow this Video for demonstration


How to maintain Session Persistence (Sticky Session) in Docker Swarm with multiple containers

Introduction

Stateless services are in vogue, and rightfully so, as they are easy to scale up and are loosely coupled. However, it is practically impossible to stay away from stateful services completely. For example, you might need a login application where user session details must be maintained across several pages.

Session state can be maintained either using

  • Session Replication
  • Session Stickiness

or a combination of both.

 

Maintaining a user session is relatively easy if you are using a typical monolithic architecture, where your application is installed on a couple of servers and you can change the server configuration to facilitate session replication using some cache mechanism, or session stickiness using a load balancer/reverse proxy.

However, in the case of microservices, where the scale can range from 10 to 10,000s of instances, session replication might slow things down, as each and every service needs to look up session information in the centralised cache.

This article looks at the other approach, session stickiness, where each subsequent request keeps going to the same server (Docker container), thus preserving the session.

Why session persistence is hard to maintain with containers

A load balancer typically works at Layer 7 of the OSI model, the application layer (where HTTP lives), and distributes requests across multiple machines; Docker's ingress routing mesh, however, works at Layer 4 of the OSI model.

An answer on Stack Overflow summarizes the solution to the above problem: to implement sticky sessions, you would need to implement a reverse proxy inside of Docker that supports sticky sessions and communicates directly with the containers by their container ID (rather than doing a DNS lookup on the service name, which would again go to the round-robin load balancer). Implementing that load balancer would also require you to implement your own service discovery tool so that it knows which containers are available.

Possible options explored

Take 1

So I tried implementing the reverse proxy with Nginx. It worked with multiple containers on a single machine, but when deployed on Docker Swarm it didn't work, probably because I was doing service discovery by name when, as suggested above, I should have been communicating by container ID, not container name.

Take 2

I read about the jwilder Nginx proxy, which seems to work for everyone, and it worked on my local machine; but when deployed on Swarm it wouldn't generate any container IPs inside the upstream { } block of the Nginx configuration.

Take 3

Desperate by this time, I was going through all the possible solutions people had to offer on the internet (Stack Overflow, Docker community forums...), and one gentleman mentioned something about Traefik. My eyes glittered when I read that it works on Swarm, and here we go.

Sticky Session with Traefik in Docker Swarm with multiple containers

Even though I was very comfortable with Nginx, I assumed that learning a new tool would again be an overhead. That wasn't the case: Traefik is simple to learn and easy to understand, and the good thing is that you need not fiddle with any conf files.

The only constraint is that Traefik should run on a manager node.

I have tested this configuration with Docker Compose file version 3, the latest at the time of writing, deployed using docker stack deploy.

To start off, you need to create a docker-compose.yml (version 3) and add the Traefik image as the load balancer. This is how it looks:

loadbalancer:
    image: traefik
    command: --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
         constraints: [node.role == manager]
    networks:
      - net

A few things to note here:

  • Traefik listens to the Docker daemon on the manager node and stays aware of new containers, so there is no need to restart it if you scale your services. This is what the volume mapping
    volumes: - /var/run/docker.sock:/var/run/docker.sock
    is for.
  • Traefik provides a dashboard for checking backend health, so port 9090 can be kept inside a firewall for monitoring purposes.
  • Also, note that
    placement: constraints: [node.role == manager]
    specifies that Traefik runs only on a manager node.

Adding the Image for sticky session

To add a Docker image whose sessions should be sticky, we need to add something like this:

whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello;"
        - "traefik.backend.loadbalancer.sticky=true"

This is a hello-world image which displays the name of the container it is running on. We define 5 replicas of this container in this file. The important section, where Traefik does its magic, is "labels" (a quick way to test the stickiness follows the list):

  • - "traefik.docker.network=test_net"
    Tells on which network this image will run on. Please note that the network name is test_net, where test is the stack name. In the load balancer service we just gave net as name.
  • - "traefik.port=80"
    This Helloworld is running on docker port 80 so lets map the traefik port to 80
  • - "traefik.frontend.rule=PathPrefix:/hello"
    All URLs starting with {domainname}/hello/ will be redirected to this container/application
  • - "traefik.backend.loadbalancer.sticky=true"
    The magic happens here, where we are telling to make sessions sticky.

The Complete Picture

Try the file below as-is and see if it works; if it does, then fiddle with it and make your changes accordingly.

You will need to create a file called docker-compose.yml on your Docker manager node and run this command:

docker stack deploy -c docker-compose.yml test

where "test" is the stack namespace.

Read Here about deploying in Swarm: How to Install Stack of services in Docker Swarm

version: "3"

services:

  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
      labels:
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello;"
        - "traefik.backend.loadbalancer.sticky=true"

  loadbalancer:
    image: traefik
    command: --docker \
      --docker.swarmmode \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 9090:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 1
      update_config:
        delay: 2s
      placement:
         constraints: [node.role == manager]
    networks:
      - net

networks:
  net:

Now you can test the service at http://{Your-Domain-name}/hello, and http://{Your-Domain-name}:9090 should show the Traefik dashboard.

Though there are 5 replicas of the "whoami" service above, it should always display the same container ID. If it does, congratulations, your session persistence is working.

This is how the Traefik dashboard looks:

Testing session stickiness on a local machine

In case you don't have a swarm node and just want to test this on your localhost machine, you can use the following docker-compose file. For it to work, create a directory called test (required for the namespace, since we have set our network name to test_net in the label

- "traefik.docker.network=test_net"

change the directory name if you use a different network) and run:

docker-compose up -d

version: "3"

services: 

  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80"  
    labels:
        - "traefik.backend.loadbalancer.sticky=true"
        - "traefik.docker.network=test_net"
        - "traefik.port=80"
        - "traefik.frontend.rule=PathPrefix:/hello"
      
  
  loadbalancer:
    image: traefik
    command: --docker \
      --docker.watch \
      --web \
      --loglevel=DEBUG
    ports:
      - 80:80
      - 25581:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net

networks:
  net:

Docker Compose should create the required services, and the whoami service should be available at http://localhost/hello.

Now scale the service to, say, 5 instances:

docker-compose scale whoami=5

and test again.

Follow this Video to see things in action


Design considerations for a microservice architecture with Docker Swarm

Introduction

When designing a microservice architecture, there are various design considerations that need to be taken care of, especially in terms of scalability, high availability, resilience and loose coupling.

Recently we went live with our application, which is based on a microservices architecture and hosted on Docker Swarm. Here are some of the key learnings and design considerations which need to be taken into account while architecting a Docker Swarm infrastructure:

  1. Loosely Coupled Microservices 
  2. Manager nodes availability  and their location
  3. Stateless vs stateful
  4. Machines configurations
  5. Number of manager/workers
  6. Restricting the container’s memory
  7. Logging
  8. Autoscaling
  9. Avoiding downtime
  10. Deployment

Loosely Coupled Microservices

This has been one of the most important design principles for ages, and it promotes resilience and better implementation. When you have a front-end application which talks to various backends or does some compute-intensive tasks, it's better to segregate the business logic into a separate microservice. The front end should just act as a view layer, and all the business logic should live in the business layer.

Deciding when to create a new microservice is important and depends on the functionality and business purpose served. This will also determine how many microservices you end up with.

For example, imagine a web application where a customer comes and checks whether they are eligible for currency conversion. The application can be broken into 3 microservices:

  1. View – the application's view layer
  2. Currency conversion
  3. Eligibility check

The advantages of this separation are as follows:

  • Say 90% of customers use the application only to check eligibility; then you can scale that service alone across multiple machines based on usage and keep the number of currency-conversion instances low.
  • If currency conversion is down for some reason, customers can still check their eligibility and use the other bits.

Stateless vs Stateful

Docker containers are by design supposed to be stateless. However, many applications need to be stateful, e.g. login functionality, where the application needs to know which user is logged on. By default, Docker Swarm uses a round-robin algorithm for traffic routing, which means each incoming request is sent to a different Docker container, losing the session information.

Session persistence might come as a built-in Docker Swarm feature in the future, but it is not available as of now. We had to implement Traefik as a load balancer to maintain sticky sessions.

Read here about  How to implement session persistence in Docker Swarm using Traefik

Manager nodes availability and their location

The managers in Docker Swarm need to maintain a quorum of managers, which in simple terms means that the number of available manager nodes should always be greater than or equal to (n+1)/2, where n is the total number of manager nodes.

So if you have 3 manager nodes, 2 should always be up, and if you have 5 manager nodes, 3 should be up. If the swarm loses the quorum of managers, it cannot perform management tasks: you cannot add new nodes or run swarm commands until the quorum is restored.

Another important attribute is the location of the manager nodes: it is advisable to place them in different geographic regions, so that an outage in one particular region won't break the quorum of managers. For example, if you have 3 manager nodes, you could choose Asia, Europe and America as their geographic locations with any cloud provider.

On the positive side, even if the quorum is lost, say due to 2 out of 3 managers being down, the Docker containers/services will keep working and serving traffic. Once the machines are available again, the quorum is restored automatically.

Machines configurations

The rosy picture painted by containerization is that it is easy to scale using cheap machines. Now, the problem with cheap machines is that they often have a poor configuration.

If a machine has only 1 CPU and the microservice happens to be CPU-intensive, running multiple containers on that machine might even make things worse, as the containers would be fighting for CPU allocation.

Similarly, if the microservices are memory-intensive, make sure the RAM is appropriate.

A Docker service is a group of containers of the same image; services make it simple to scale your application.

Autoscaling

Autoscaling is not available with Docker Swarm as of version 17.06; to add new machines to the swarm you have to use docker swarm join-token to get the join command for more managers and workers. Also, adding new nodes doesn't mean the swarm rebalances itself automatically. For example, if you have 3 machines each running 2 containers and you then add 3 more machines, you might expect only 1 container per machine; but unless and until you do a

docker stack deploy

the swarm won't be rebalanced. Another trick which works well, and which I tend to use, is

docker service scale

to scale a service down and back up; that way the swarm rebalances itself.
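A sketch of that rebalancing trick, assuming a stack named test with a whoami service:

# scale down and back up to force the scheduler to spread tasks
# across all nodes, including freshly joined ones
docker service scale test_whoami=1
docker service scale test_whoami=6

# check which node each task landed on
docker service ps test_whoami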

Logging

Eventually, at some point, services will fail or have defects, and you will need logs to debug. Having multiple services means multiple log files, and even

docker service logs

may not be helpful if the service has multiple containers running.

The best way to log in a multi-service environment is to use a log aggregator like fluentd, so logs are written in one place instead of being scattered all over. Fluentd works well with Elasticsearch and Kibana, where you can search through logs, filter and query. More can be found here:

  1. How to Collect logs from multiple containers and write to a single file
  2. Configuring Kibana and ElasticSearch for Log Analysis with Fluentd on Docker Swarm

Avoiding downtime

To avoid downtime there are a couple of things which can be done. The first is to have multiple instances of a container: any service should have at least 2 container instances running. Also, make effective use of the

update_config

attribute in Docker Compose, where you can specify the delay between two restarts. For example, the docker-compose snippet below will create 3 replicas of a container, and if you ever choose to update your service, each container will restart after a gap of 90 seconds.

deploy:
         mode: replicated
         replicas: 3
         update_config:
            delay: 90s

Optimizing the Container limits

To make sure that one Docker container/microservice doesn't end up fighting with other containers for resources like CPU, RAM and I/O, containers can be limited in how much RAM and CPU they may use. For example, the lines below in a docker-compose file will limit the container to 2GB of RAM, even if the machine has 8GB or 16GB.

resources:
         limits:
           memory: 2048M
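Once deployed, a quick way to confirm the cap took effect is a standard Docker command (shown as a sketch):

# one-shot snapshot of per-container usage; the MEM USAGE / LIMIT
# column should show the configured 2GiB ceiling
docker stats --no-stream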

Creation of Docker Swarm and Automated Deployments

Docker Cloud seems capable of creating a new swarm on Azure/AWS and can potentially implement a continuous-integration pipeline, but on the downside it creates too many resources, on Azure at least.

We found it simple and easy enough to create a swarm within a matter of minutes once Docker was installed;

docker swarm join-token

can be used to get the command that brings new machines into the swarm. Automated deployment is also easy enough through Jenkins.

We use the fabric8.io plugin to create Docker images and push them to Docker Hub. Jenkins then does the deployment by running commands on a manager node using a remote SSH plugin.

  1. How to Automate Docker Swarm Service deployment using Jenkins
  2. How To Push Docker Images To Docker Hub Repository Using Docker Maven plugin

 

Conclusion

Docker Swarm works and fits well in a microservice architecture scheme of things. Some of the features which have really caught our eye are:

  • Docker Swarm is very easy to create and can be set up in a matter of minutes. Ease of scaling up is immense; any new machine needs just a token to become a worker/manager node.
  • Scaling services is very easy: docker service scale <servicename>=10 will create 10 instances of Docker containers in no time.
  • It's open source, and the Community Edition works well in production, saving a lot of money for small enterprises.

Some of the features if added could be a good improvement

  • Session persistence in Docker Swarm could be added as a feature in new releases.
  • Autoscaling could be added as a feature too; it would be good if the swarm could add new machines from a pool on demand and run more instances of the containers that are under stress.
  • Rebalancing the services when new machines are added to the swarm would be a great addition too.

 


How to Automate service deployment to Docker Swarm using Jenkins

Introduction

Jenkins is a wonderful tool for continuous integration and continuous deployment. The plethora of plugins available makes it really powerful. In this tutorial, I will show you how to use Jenkins to automate swarm deployment.

How to do it

To do a Docker Swarm deployment, all you need is a docker-compose file containing references to Docker images, along with configuration settings like ports, network names, labels, constraints etc.; to deploy that file you run a command called docker stack deploy.

I am assuming that you have already set up a Docker Swarm, deploy the latest images from Docker Hub or other registries quite often, and are now focusing on automating this deployment process.

So all we need to do is send this docker-compose.yml file over SSH to a manager node and execute

docker stack deploy

remotely. Let's see how to achieve this.

Jenkins plugin – Publish over SSH

We need to install the Jenkins plugin "Publish over SSH"; this plugin allows us to:

  • Send files over SSH (SFTP)
  • Execute commands on a remote server

To add this plugin, go to Jenkins -> Manage Jenkins -> Manage Plugins -> Available and search for "Publish Over SSH". Install this plugin and restart Jenkins.

Adding Remote Hosts

Navigate to Jenkins -> Manage Jenkins -> Configure System and scroll down until you find Publish over SSH.

Since we need to execute the docker stack deploy command on a manager node, we need to connect to a manager node in the Docker Swarm. This plugin offers various ways to connect to remote hosts; I prefer an SSH public/private key pair. Keys can be generated with ssh-keygen (see the sketch below). The private key must be kept on the Jenkins server and the public key must be stored on the manager node.
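A minimal sketch of the key setup, with the user and host names as placeholders:

# on the Jenkins server: generate a key pair (no passphrase, for automation)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/jenkins_swarm -N ""

# install the public key into the manager node's authorized_keys
ssh-copy-id -i ~/.ssh/jenkins_swarm.pub user@manager-node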

Click on Test configuration and see if the connection is successful.
Have a look at the screenshot below (note that Remote Directory contains a swarm directory; if it doesn't exist, either leave Remote Directory blank or create the directory on the manager node; in the next step we will use this directory to publish the docker-compose file).

Adding docker Manager Node
Connecting to Manager Node

You can add multiple manager nodes if you want.

Configuring the Jenkins Job

In this step, we need to tell Jenkins from where to get our docker-compose file and how to transfer using SSH to remote server and execute subsequently.

    • Under Source Code Management, add the repository (Git/SVN) where you have checked in or stored the docker-compose file.
    • Under Build section Select “Add Build Step” -> “Send files or execute commands over SSH”

Now Under SSH server Select the manager node where you want to publish/send the docker-compose file. In the Transfers Set block,

  • In Source files, enter the path of the docker-compose file. This path is relative to your Jenkins workspace, so if the docker-compose file lives at swarm/dev/docker-compose.yml it can be written as swarm/dev/**/*
  • In Remove prefix, enter the part of the path that should not be created on the remote server.
  • In Exec command, enter the command shown below (note that cd /swarm is only needed if you added swarm as the remote directory in the configuration; the trailing stack name is required by docker stack deploy)
    cd /swarm && docker stack deploy -c docker-compose.yml #STACK-NAME#

This is how my configuration looks, for reference.

Publish over SSH settings

Run the Jenkins Job

Now run this Jenkins job using Build Now, then check the Console Output to see the output from the remote server. Hopefully, it will run fine!

In case you are using private repositories from Docker Hub, please read this article:
Installing Docker Images from private repositories in Docker Swarm


How to use Spring Profiles with Docker Containers

Introduction

Spring Profiles are an effective way of implementing environment-independent code: the properties file or @Beans can be selected dynamically at run time based on the profile injected.
Assuming you are already familiar with Spring profiles and are looking to inject profiles in a Docker environment, there are a couple of ways of doing it, namely:

 

 

  • Passing Spring Profile in Dockerfile
  • Passing Spring Profile in Docker run command
  • Passing Spring Profile in DockerCompose

In this tutorial, I will try to capture all these 3 scenarios.

Read Here: How to create Docker image of Standalone Spring MVC project

Passing Spring Profile in a Dockerfile

From your system's command prompt, any Spring Boot application can be run with the "java -jar" command. The profile needs to be passed as an argument like "-Dspring.profiles.active=dev". For Spring MVC applications, the other two methods below will work fine.

java -Djava.security.egd=file:/dev/./urandom -Dspring.profiles.active=dev -jar rest-api.jar

Similarly, when using a Dockerfile, we need to pass the profile as an argument. Have a look at this Dockerfile for creating a Spring Boot Docker image.

Below is an example of a Spring Boot project Dockerfile:

FROM java:8
ADD target/my-api.jar rest-api.jar
RUN bash -c 'touch /rest-api.jar'
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom","-Dspring.profiles.active=dev","-jar","/rest-api.jar"]

Pay attention to the last line, ENTRYPOINT: in this line we pass the java command to execute the jar file, with all arguments as comma-separated values. "-Dspring.profiles.active=dev" is where we pass the dev profile; you can replace dev with whatever profile name you want.

Passing Spring Profile in Docker run

You can also pass the Spring profile as an environment variable with the docker run command, using the -e flag. The option -e "SPRING_PROFILES_ACTIVE=dev" injects the dev profile into the Docker container.

docker run -d -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=dev" --name rest-api dockerImage:latest

 

Passing Spring Profile in DockerCompose

If you are on Docker Swarm or using a compose file for deploying Docker images, the Spring profile can be passed using the environment: tag in a docker-compose file, as shown below:

version: "3"
services:
  rest-api:
     image: rest-api:0.0.1
     ports:
       - "8080:8080" 
     environment:
       - "SPRING_PROFILES_ACTIVE=dev"

 


Installing Docker Images from private repositories in Docker Swarm

Introduction

On a Docker Swarm you can deploy only those images which come from a Docker registry, as all the manager/worker nodes need to pull them separately.
When the Docker images are pulled from a public repository like Docker Hub, they can be easily deployed using a simple command such as

docker service create nginx

and if you happen to use a compose file, you can use

docker stack deploy -c "name of the compose file"

However, in the case of private repositories, you need to provide your Docker credentials and Docker repository details too.

In this tutorial, I will list out the steps needed for deploying Docker images from a DockerHub private repository.

Login to DockerHub

Considering that you have a substantial number of images, and a docker-compose file listing the Docker images needed from private repositories, the first step is to log on to DockerHub from all nodes (manager and worker). Use this command and input your username and password:

docker login
The reason this step is important (in my experience) is that we might want to use the deploy constraint available in Docker Compose (see below) to run Docker containers only on worker nodes. I believe in this case the Docker pull is run from the worker machines only, which is why it is essential to log in on the worker nodes too; not logging in from the worker nodes in this scenario caused image pulls to fail on them for me.

deploy:
      	placement:
        	constraints: [node.role == worker]

Deploying images on Swarm

When deploying on Swarm you need to run the following command, considering that you have created a compose file

docker login -u #DockerHub Username# -p #DockerHub Password# registry.hub.docker.com/#Organization-Or-DockerHubUserName# && docker stack deploy -c docker-swarm.yml #STACK-NAME# --with-registry-auth

Breaking above command for clarification

  • -u #DockerHub Username# : The DockerHub Username
  • -p #DockerHub Password#: The DockerHub Password
  • #Organization-Or-DockerHubUserName#: in case you have created a team/organisation on DockerHub, you will be pushing images as organisation/docker-image. This is the scenario where multiple team members work on the same Docker image and push it as TEAM/DockerImage. If you are a single user and don't have any team defined in DockerHub, use your username.
  • #STACK-NAME#: The stack name, could be just a “test” or something more meaningful name suited to requirement
  • --with-registry-auth: this option sends the registry authentication details along, so the nodes can pull images from the private registry.

A simple example of the above command:

docker login -u username -p password registry.hub.docker.com/myproject && docker stack deploy -c docker-swarm.yml test --with-registry-auth


Write multiple docker container logs into a single file in Docker Swarm

Introduction

So recently I deployed scalable microservices using docker stack deploy on a Docker Swarm, which left me with multiple microservices running on multiple nodes.

To analyse any microservice, I had to log on to the manager node and find out which node (manager/worker) the service was running on. If the service was scaled beyond 1 instance, that meant logging on to more than one machine and checking each Docker container's (microservice's) logs just to get a glimpse of an exception. That is quite annoying and time-consuming.

Fluentd to the rescue

Fluentd is an open-source data collector for a unified logging layer. It can collect logs from various backends and stream them to various output mechanisms like MongoDB, Elasticsearch, files etc.
In this tutorial, I will create a single log file for each service, in a separate folder, irrespective of whether the service has 1 or more instances.

Setting the Fluent Conf

To start with, we need to override the default fluent.conf with our custom configuration. More about the config file can be read on the fluentd website.

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match tutum>
  @type copy
   <store>
    @type file
    path /fluentd/log/tutum.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>
<match visualizer>
  @type copy
   <store>
    @type file
    path /fluentd/log/visualizer.*.log
    time_slice_format %Y%m%d
    time_slice_wait 10m
    time_format %Y%m%dT%H%M%S%z
    compress gzip
    utc
    format json
  </store>
</match>

In the above config, we listen for anything forwarded on port 24224 and then match on the tag: for a log message with the tag tutum, a tutum log file is created, and for logs matching the visualizer tag, another file called visualizer.log is created.
In the path we have specified a file called tutum.*.log; the * is replaced with a date and a buffer identifier, so the final file will be something like tutum.20230630.b5532f4bcd1ec79b0..log

Create Dockerfile with our custom configuration

So the next step is to create a custom fluentd image containing the above configuration file.
Save the file above as fluent.conf in a folder named conf, then create a file called Dockerfile at the same level as the conf folder.

# fluentd/Dockerfile
FROM fluent/fluentd:v0.12-debian
RUN rm /fluentd/etc/fluent.conf
COPY ./conf/fluent.conf /fluentd/etc

In this Dockerfile, as you can see, we replace the fluent.conf in the base image with our version.

Now let us create the Docker image:

docker build -t ##YourREPOname##/myfluentd:latest .

and then push it to the Docker Hub repository:

docker push ##YourREPOname##/myfluentd

Fluentd as logging driver

So now we need to tell our Docker service to use fluentd as the logging driver.
In this case, I am using tutum/hello-world, which displays the container name on the page.

whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum      
    deploy:
      restart_policy:
        max_attempts: 5
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s

These lines define that our service should use fluentd as its logging driver:

logging:
  driver: "fluentd"

You might also have noticed

options:
  tag: tutum

this tag is used as an identifier to distinguish the various services. Remember the match tag in the config file fluent.conf.

We also need to define our fluentd image in the docker-compose file:

fluentd:
    image: ##YourRepo##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s

As you might have noticed above, we store the logs via the volume mapping

volumes:
  - ./Logs:/fluentd/log

so you need to create a "Logs" directory at the same path from which you will run the docker-compose file on your manager node.

This is how the complete file looks:

version: "3"
 
services:
       
  whoami:
    image: tutum/hello-world
    networks:
      - net
    ports:
      - "80:80"
    logging:
      driver: "fluentd"
      options:
        tag: tutum    
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 4
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
 
  vizualizer:
      image: dockersamples/visualizer
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock
      ports:
        - "8080:8080"
      networks:
        - net
      logging:
        driver: "fluentd"
        options:
         tag: visualizer    
      deploy:
          restart_policy:
             condition: on-failure
             delay: 20s
             max_attempts: 3
             window: 120s
          mode: replicated # one container per manager node
          replicas: 1
          update_config:
            delay: 2s
          placement:
             constraints: [node.role == manager]
 
        
  fluentd:
    image: ##YOUR-REPO##/myfluentd
    volumes:
      - ./Logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - net
    deploy:
      restart_policy:
           condition: on-failure
           delay: 20s
           max_attempts: 3
           window: 120s
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      update_config:
        delay: 2s
 
 
networks:
  net:


You can run the above services on the Docker swarm using the command below; make sure you save the file as docker-swarm.yml first.

docker stack deploy -c docker-swarm.yml test
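Once the stack is deployed, you can check that all the replicas have come up; the stack name test matches the last argument of the deploy command above:

docker stack services test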

Your Logs directory should now contain two log files, named something like tutum.*.*.log and visualizer.*.*.log.

Fluentd Log files in Docker Swarm

Log analysis becomes much easier when Fluentd is combined with Elasticsearch and Kibana, as that eliminates the need to log in to the machine, and searching, filtering and analysing logs can be done far more easily. I intend to cover this in my next blog.

 

Follow the video to see things in action

The post Docker Swarm : How to Collect logs from multiple containers and write to a single file appeared first on Little Big Extra.

How to Create a Docker Swarm and deploy stack of services http://littlebigextra.com/how-to-create-a-docker-swarm-and-deploy-stack-of-services/ http://littlebigextra.com/how-to-create-a-docker-swarm-and-deploy-stack-of-services/#respond Fri, 26 May 2023 15:14:17 +0000 http://littlebigextra.com/?p=1000 Create Docker Swarm and install services to Docker swarm using docker stack deploy Introduction Setting up of cluster node topology and managing nodes have always been a pain for any developer or infrastructure engineers. Docker swarm makes it ridiculously easy to create a node cluster topology and get a service up and running in a […]


Create a Docker swarm and deploy services to it using docker stack deploy

Introduction

Setting up a cluster node topology and managing nodes has always been a pain for developers and infrastructure engineers. Docker swarm makes it ridiculously easy to create a node cluster topology and get a service up and running in a matter of minutes.

Docker Swarm is a cluster of Docker nodes onto which you deploy services. In case your next question is "what is a Docker service?": a Docker service is a group of containers of the same image. So it is basically about deploying multiple containers on multiple nodes.
For this tutorial, I will demonstrate how to create a Docker swarm using one manager node and a couple of worker nodes.

Manager nodes maintain the cluster state and schedule services, whereas worker nodes just run containers.
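Roles are not fixed, by the way: from a manager node you can promote an existing worker to a manager at any time (docker-slave here is just the example hostname used later in this post):

docker node promote docker-slave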

Before proceeding, make sure that Docker has been installed on these machines. I installed Docker on Ubuntu Server VMs from the Microsoft Azure Marketplace, which come with Docker preinstalled.

Creating the manager node

To create a manager node, log on to the machine's terminal using ssh (or the bash terminal provided over the browser) and run the following command:

docker swarm init

If you get output like the following, the manager node has been created:

root@nevado-docker:~# docker swarm init
Swarm initialized: current node (21pszxtzjslvkbm62qjmf2r37) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0h02u7on5rwlknkz4u6nvyjcxi90n0jtqi4ct1p9rf67y9rfqo-eg0kbjnaru5g0wusm4im8zhwl \
    10.0.0.4:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
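If you misplace this output, you can reprint the worker join command at any time from the manager node:

docker swarm join-token worker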

Creating Worker nodes

To create a worker node, all you need to do is log on to the machine and copy and run the join command output by the manager:

docker swarm join \
    --token SWMTKN-1-0h02u7on5rwlknkz4u6nvyjcxi90n0jtqi4ct1p9rf67y9rfqo-eg0kbjnaru5g0wusm4im8zhwl \
    10.0.0.4:2377

That's all; you can add as many worker nodes as you want using the above command.

Checking Docker nodes

Use the command

docker node ls

to check that all the manager and worker nodes are part of the swarm.

ID                           HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
21pszxtzjslvkbm62qjmf2r37 *  docker-manage  Ready   Active        Leader
ma14oo7jf32gwnb4y4x5b591w    docker-slave   Ready   Active        
yhifs9of76pr0w89egpbt4333    docker-slave2  Ready   Active
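The AVAILABILITY column is also useful operationally: you can temporarily stop a node from receiving new tasks by draining it, and reactivate it later. A small sketch, using a worker hostname from the output above:

docker node update --availability drain docker-slave
docker node update --availability active docker-slave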

Deploying service to Docker swarm

Though we can deploy a Docker service using something like

docker service create

a better way is to deploy using

docker stack deploy

as the configuration can be kept in a YAML file and multiple dependent services can be added in one go.
So log on to the Docker manager node and create a file named docker-compose.yml.

For this tutorial, and for simplicity's sake, I will use the nginx image and deploy 3 containers of it. This is how my docker-compose.yml looks.

Please note that docker stack deploy only works with version 3 of the compose file format.

version: "3"

services:

  nginx:
    image: nginx
    ports:
      - 80:80
      - 443:443
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 3
        window: 120s

Now run the command

docker stack deploy --compose-file docker-compose.yml TEST

If you see output like the following, your service has been deployed:

Creating network TEST_default
Creating service TEST_nginx

Now, to check which nodes the service has been deployed to, run

docker service ps TEST_nginx

You should see output like this:

yd22w8q9bnjm  TEST_nginx.1  nginx:latest  docker-manager  Running        Running 3 minutes ago         
n7ucjsckf2yz  TEST_nginx.2  nginx:latest  docker-slave   Running        Running 3 minutes ago         
u986n00r0126  TEST_nginx.3  nginx:latest  docker-slave2  Running        Running 3 minutes ago
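If you later need more or fewer containers, you can rescale the running service without editing the YAML (keep in mind the compose file remains the source of truth on the next deploy):

docker service scale TEST_nginx=5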

There are several options available under deploy; for example, you can restrict some containers to run only on worker nodes:

constraints: [node.role == worker]

For more options, you can refer to the Docker documentation.
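Placement is not limited to roles either; for example (a sketch, reusing one of the hostnames from earlier in this post), you can pin a service to a specific node:

placement:
  constraints: [node.hostname == docker-slave2]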

To uninstall the whole stack, use

docker stack rm TEST

This removes all the services and networks that were created.

Please follow the video below to see the above steps in action.

The post How to Create a Docker Swarm and deploy stack of services appeared first on Little Big Extra.

How to install Nginx as a reverse proxy server with Docker http://littlebigextra.com/install-nginx-reverse-proxy-server-docker/ http://littlebigextra.com/install-nginx-reverse-proxy-server-docker/#comments Fri, 19 May 2023 15:17:10 +0000 http://littlebigextra.com/?p=989 How to install Nginx as a reverse proxy server with Docker Introduction On a single docker host machine, we can run 100’s of containers and each container can be accessed by exposing a port on the host machine and binding it to the docker port. This is the most standard practice which is used and […]


How to install Nginx as a reverse proxy server with Docker

Introduction

On a single Docker host machine we can run hundreds of containers, and each container can be accessed by exposing a port on the host machine and binding it to the container port.

This is the most standard practice: we use the docker run command with the -p option to bind a container port to a host machine port. If we only have to do this for a couple of services the process works well, but if we have to cater for a large number of containers, remembering port numbers and managing them becomes a herculean task.

This problem can be dealt with by installing Nginx, a reverse proxy server, which directs client requests to the appropriate Docker container.

Installing Nginx Base Image

The Nginx image can be downloaded from Docker Hub and run simply by using

docker run nginx

Nginx configuration is stored in the file /etc/nginx/nginx.conf. This file pulls in default.conf via the line

include /etc/nginx/conf.d/*.conf;

Follow the steps below to run an nginx server and have a peek around the nginx configuration:

  • Run the latest Nginx Docker image using

docker run -d --name nginx nginx

  • Open a bash console to access the Nginx configuration

bash -c "clear && docker exec -it nginx sh"

  • Navigate to /etc/nginx/conf.d, run

cat default.conf

and copy the file contents. We will use these contents in the next steps.

Creating our own custom Nginx Image

In this step we will try to modify the base nginx image, with changes to default.conf

Create a simple project in Eclipse (File -> New -> General -> Project) and create a new file called default.conf in the project directory.
In this file add a location block of the form:

location /<URL-To-BE-ACCESSED> {  
        proxy_pass http://<DOCKER_CONTAINER_NAME>:<DOCKER-PORT>;  
    }

for example,

location /app1 {  
        proxy_pass http://microservice1:8080;  
    }

where app1 is the URL path, microservice1 is the Docker container name and 8080 is the container port; this information can be found using

docker ps -a

While running a Docker container, make sure that you use the --name attribute so the container name remains consistent. If no name is given, Docker assigns a random name to the container.
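For example, the two backend containers could be started with fixed names like this (my-app-image1 and my-app-image2 are hypothetical image names, stand-ins for your own services):

docker run -d --name microservice1 my-app-image1
docker run -d --name microservice2 my-app-image2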

This is how default.conf looks for two Docker containers named microservice1 and microservice2:

server {
    listen       80;
    server_name  nginxserver;
    
    location /app1 {  
        proxy_pass http://microservice1:8080;  
    }
    
    location /app2 {  
        proxy_pass http://microservice2:8080;  
    }

    
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

   
}

Creating Docker Image

The next step is to create a Dockerfile in which we replace the default configuration file default.conf in the nginx base image with our version of default.conf.

Create a file called Dockerfile and add the contents below. Make sure the file is at the same level as default.conf, under the project root directory.

#GET the base default nginx image from docker hub
FROM nginx

#Delete the Existing default.conf
RUN rm /etc/nginx/conf.d/default.conf 

#Copy the custom default.conf to the nginx configuration
COPY default.conf /etc/nginx/conf.d

Now all we need to do is build this Docker image. Open a terminal/command prompt, navigate to the project directory and run

docker build -t mynginx .

If the above command was successful, our own custom nginx image is ready with our configuration.
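To confirm that the custom configuration was baked in, you can print it from the image without starting nginx (a simple sanity check):

docker run --rm --entrypoint cat mynginx /etc/nginx/conf.d/default.conf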

Running our own Custom Nginx Image

Each Docker container is a separate process and is unaware of other Docker containers. However, Docker has a --link attribute which can be used to create links between two containers and make them aware of each other's existence.
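As an aside, --link is nowadays considered a legacy feature; a user-defined bridge network gives you the same name-based discovery without explicit links. A minimal sketch, assuming appnet as an arbitrary network name and the hypothetical service images from above:

docker network create appnet
docker run -d --name microservice1 --network appnet my-app-image1
docker run -d --name microservice2 --network appnet my-app-image2
docker run -d --name mynginx --network appnet -p 80:80 -p 443:443 mynginx

The rest of this tutorial sticks with the --link approach.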

Before we run our image, we need to make sure that the services mentioned in the location blocks are up and running, in our case microservice1 and microservice2. Check this with

docker ps -a

Next, we need to link these two containers with our nginx container using this command:

docker run -d --name mynginx -p 80:80 -p 443:443 --link=microservice1 --link=microservice2 mynginx

Make sure that you stop the default nginx container created earlier, as it might already be bound to ports 80 and 443.

If the above command completed without errors, we have successfully installed Nginx as a reverse proxy server. It can be tested by opening a browser and accessing

http://localhost/app1

http://localhost/app2
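You can also check from a terminal with curl, if it is installed:

curl -i http://localhost/app1
curl -i http://localhost/app2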

For reference, please see the video below.


 

The post How to install Nginx as a reverse proxy server with Docker appeared first on Little Big Extra.
