# Docker

Starting a Microservices Project with NodeJS and Docker

If you are planning to build a project with the microservices pattern and a DevOps culture, you can use the source code here.

In this source code, I built two services: one for the frontend (AngularJS and Bootstrap) and one for the backend (ExpressJS and Mongoose). All services in this project are dockerized and pushed to Docker Hub. You can read the Dockerfile in each service for further details. To create a new service, you just create a new directory, write the source code for that service, and update the docker-compose.yml file.

Run

The orchestration of the project is written in the docker-compose.yml file, so it is easy to understand and run the project.
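Assuming the compose file sits at the repository root, bringing the whole stack up is a short sequence (the service name in the last command is illustrative):

```shell
# Build the images and start every service defined in docker-compose.yml
docker-compose up -d --build

# Check that all containers are running
docker-compose ps

# Tail the logs of a single service (replace "backend" with your service name)
docker-compose logs -f backend
```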

Using Vagrant and Virtualbox

We also start from the virtual machine layer with Vagrant.

After that, you can access the application via the link http://172.20.20.20.

You can read the Vagrantfile to understand what we need to prepare for the VMs.
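The Vagrant workflow boils down to a few commands, assuming the Vagrantfile already defines the 172.20.20.20 private network:

```shell
# Create and provision the VM described in the Vagrantfile
vagrant up

# SSH into the VM if you need to inspect it
vagrant ssh

# Tear the VM down when you are done
vagrant destroy -f
```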

System Architecture

With this source code, we will have the system architecture below.

Microservices DevOps Starter Project


Monitor & Logs

This starter project also fully supports monitoring using Telegraf, InfluxDB, Grafana and Kapacitor, and supports centralized logging with Fluentd, Kibana and Elasticsearch.

Contents

| Part | Title | Git Tag |
| --- | --- | --- |
| 1 | Starting with ExpressJS | 1.0 |
| 2 | Logger with Winston | 1.1 |
| 4 | Config Management with Node-Config | 1.2 |
| 5 | Building Create User API | 2.1 |
| 6 | Adding Swagger Documents | 2.2 |
| 7 | Building Login API | 2.3 |
| 8 | Building Get User List/Detail API | 2.4 |
| 9 | Authorization for all APIs | 2.5 |
| 10 | Unit Test | 2.6 |
| 11 | Building Config API | 3.0 |
| 12 | Using Cache | 3.1 |
| 13 | Using Queue | 3.2 |
| 14 | Starting AngularJS with Yeoman | 4.0 |
| 15 | Config Management for AngularJS | 4.1 |
| 16 | Building Login Page | 4.2 |
| 17 | Building List User Page | 4.2 |
| 18 | Pagination with AngularJS and Bootstrap | 4.3 |
| 19 | Multiple Languages | 4.4 |
| 20 | AngularJS Unit Test | 4.5 |
| 21 | Dockerize Application | 5.0 |
| 22 | Orchestration with Docker Compose | 5.1 |

Install Docker and Docker Swarm on CentOS7

This tutorial will show you how to install Docker and Docker Swarm on CentOS 7. It is written for Docker version 1.11, and I use Vagrant and VirtualBox to create the CentOS 7 virtual machine.

Stop worrying about what you have to lose and start focusing on what you have to gain

Create a Vagrantfile for CentOS7

Turn on your Virtual Machine with command:

After finishing creating CentOS server, you access the server via SSH by command:
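The whole VM setup can be sketched as below; the box name, IP and resource sizes are my own assumptions, not taken from the post:

```shell
# A minimal Vagrantfile for a CentOS 7 VM
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "172.20.20.30"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
    vb.cpus = 2
  end
end
EOF

# Turn on the virtual machine
vagrant up

# Access the server via SSH
vagrant ssh
```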

Docker

Go to root account:

In the next step, we will install Docker Engine following the official tutorial of Docker.

Add Docker Repository  into CentOS

Running the installation

Start Docker Service

Enable Docker run on OS Start
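The steps above can be sketched as follows; the repository definition follows Docker's official instructions for the 1.11-era packages:

```shell
# Go to the root account
sudo su -

# Add the Docker repository into CentOS
tee /etc/yum.repos.d/docker.repo <<'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

# Run the installation
yum install -y docker-engine

# Start the Docker service
systemctl start docker

# Enable Docker to run on OS start
systemctl enable docker
```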

Docker Swarm

Installing Docker Swarm via Docker Pull

Open Docker Service Configuration file

In the line containing ExecStart, we add the text below.

And now, reload Docker Service

Now we can run Swarm Node

And running Swarm Manage

Finally, verify your work
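A sketch of the Swarm steps above; the advertised IP and the `token://` cluster token are placeholders you must substitute with your own values:

```shell
# Install Docker Swarm via docker pull
docker pull swarm

# In /usr/lib/systemd/system/docker.service, extend the ExecStart line so the
# daemon also listens on TCP, e.g.:
#   ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375

# Reload and restart the Docker service
systemctl daemon-reload
systemctl restart docker

# Run a Swarm node (replace the IP with your VM's address)
docker run -d swarm join --advertise=192.168.33.10:2375 token://<cluster_token>

# Run Swarm Manage
docker run -d -p 3375:2375 swarm manage token://<cluster_token>

# Finally, verify your work by asking the manager about the cluster
docker -H tcp://127.0.0.1:3375 info
```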

Video

Thanks for reading!

Centralize Nginx Logs with FluentD, Kibana and Elasticsearch

As you know, FluentD is a great tool for collecting logs, Elasticsearch supports storing and searching the log data, and Kibana helps you view and search the logs in a web interface.

Nginx logs are a good way to monitor, debug, and troubleshoot your application. So in this post, I will show you how to centralize Nginx logs with FluentD, Kibana and Elasticsearch.

Before reading this post, please read Centralize Docker Logs with FluentD, Kibana and Elasticsearch to learn how to install FluentD, Kibana and Elasticsearch on Ubuntu 14.04.

In the next step, we add the content below to the /etc/td-agent/td-agent.conf file:
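A plausible shape for that configuration, assuming the default Nginx access log path and a local Elasticsearch; the tag and pos_file names are my own choices:

```
<source>
  type tail
  format nginx
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
</source>

<match nginx.access>
  type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```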

And restart FluentD:

For Debugging:

If you get an error:

We just need to add the td-agent user to the adm group:
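These three steps can be sketched as below; the log path is td-agent's default on Ubuntu:

```shell
# Restart FluentD
sudo service td-agent restart

# For debugging, follow the agent's own log
tail -f /var/log/td-agent/td-agent.log

# If td-agent cannot read the Nginx logs, add it to the adm group
# and restart the agent again
sudo usermod -aG adm td-agent
sudo service td-agent restart
```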

Finally, access Kibana to view the logs as image below:

Centralize Nginx Logs with FluentD, Kibana and Elasticsearch

Centralize Docker logs with FluentD, ElasticSearch and Kibana

Besides monitoring, logging is also an important issue we need to be concerned about. In this post, I describe how to centralize Docker logs using FluentD, Elasticsearch and Kibana.

Try not to become a man of success, but rather try to become a man of value

Scenario

We will install FluentD, ElasticSearch and Kibana in the same machine.

  • FluentD: collects and transfers log data to Elasticsearch
  • Elasticsearch: stores and indexes log data to support searching/filtering it
  • Kibana: a web view that lets you search/filter and visualize the log data

Prerequisites

  • We have a machine running Ubuntu 14.04 with the IP 192.168.1.191
  • Docker and wget are already installed

Now I will show you, step by step, how to get started centralizing the log data with FluentD.

Elasticsearch

Download:

You should check the latest version at https://www.elastic.co/downloads/elasticsearch

Uncompress:

Run:

Or run as daemon:

Now we have Elasticsearch running on port 9200.
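The download-and-run steps can be sketched as below; the version number is only an example, so check the Elastic downloads page for the current one:

```shell
# Download (check the latest version at elastic.co)
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz

# Uncompress
tar -xzvf elasticsearch-1.7.2.tar.gz
cd elasticsearch-1.7.2

# Run in the foreground
./bin/elasticsearch

# Or run as a daemon
./bin/elasticsearch -d
```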

FluentD

Add the lines below to the /etc/security/limits.conf file:

Open a new terminal and type the command below; make sure the output is correct:

Install FluentD (using the Treasure Data td-agent package):

For other Ubuntu versions, please read: http://docs.fluentd.org/articles/install-by-deb

Now, we need to install Elasticsearch Plugin for FluentD:
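The FluentD setup above can be sketched as follows; the limit values are the ones td-agent's own installer recommends, and the install script is the Ubuntu 14.04 (Trusty) one:

```shell
# Raise the open-file limits -- append to /etc/security/limits.conf,
# then log out and back in:
#   root soft nofile 65536
#   root hard nofile 65536
#   *    soft nofile 65536
#   *    hard nofile 65536

# In a new terminal, verify the limit; the output should be 65536
ulimit -n

# Install td-agent (Treasure Data's FluentD package) on Ubuntu 14.04
curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-trusty-td-agent2.sh | sh

# Install the Elasticsearch plugin for FluentD
sudo td-agent-gem install fluent-plugin-elasticsearch
```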

Add the content below to /etc/td-agent/td-agent.conf to set up FluentD to transfer all Docker logs to Elasticsearch:
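A minimal sketch of that configuration, assuming FluentD's forward input on its default port 24224 and Elasticsearch on the same host:

```
<source>
  type forward
  port 24224
</source>

<match docker.**>
  type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```

After saving the file, restart td-agent so the change takes effect.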

And restart FluentD:

Docker

Now we change the Docker configuration file to use FluentD as the log driver. Open /etc/default/docker and add the line below:

And restart Docker to apply the change:
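A sketch of this step; the fluentd-address below is the forward input's default port:

```shell
# In /etc/default/docker, add:
#   DOCKER_OPTS="--log-driver=fluentd --log-opt fluentd-address=localhost:24224"

# And restart Docker to apply the change
sudo service docker restart
```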

Kibana

We will run Kibana in a Docker container with the command:
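Something like the following, assuming the official Kibana image; the image tag is my own pick, and the Elasticsearch URL follows the machine IP used in this post:

```shell
# Run Kibana in a container, pointed at the local Elasticsearch
docker run -d --name kibana \
  -p 5601:5601 \
  -e ELASTICSEARCH_URL=http://192.168.1.191:9200 \
  kibana:4.6
```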

Now, you can access http://192.168.1.191:5601 to see Docker Logs in Kibana.

Tips

Delete an index in Elasticsearch:

List all Indexes in Elasticsearch:
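Both tips are single curl calls against the Elasticsearch HTTP API; the index name below is an example:

```shell
# Delete an index
curl -XDELETE 'http://localhost:9200/logstash-2016.02.16'

# List all indexes
curl 'http://localhost:9200/_cat/indices?v'
```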

Monitor Nginx with CollectD, InfluxDB and Grafana

Monitoring is the best way to know whether your system is working well or not. If your system is complicated, you will have many things that need to be monitored. In this post, I show you a simple way to monitor Nginx with CollectD, InfluxDB and Grafana.

Before reading this post, make sure that you take a look at Monitor server with CollectD, InfluxDB and Grafana to get started with CollectD, InfluxDB and Grafana.

Nginx

To monitor Nginx, your nginx must have http_stub_status_module enabled. First, check whether your Nginx includes http_stub_status_module with the command:

By default, if you are using Ubuntu 14.04 and installed nginx with apt-get, you do not need to worry about the step above.

And now, you should change the nginx config with the content below:

And restart your nginx:

Now you can get the nginx status at the URL http://127.0.0.1/nginx_status:

The output should be:
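This whole section can be sketched as below; the location block and the allow/deny rules are my own choices, and the sample output shows the standard stub_status format:

```shell
# Check whether nginx was built with the stub_status module
nginx -V 2>&1 | grep -o with-http_stub_status_module

# Expose the status page -- add a location like this inside the default
# server block (e.g. /etc/nginx/sites-enabled/default):
#   location /nginx_status {
#     stub_status on;
#     access_log  off;
#     allow 127.0.0.1;
#     deny  all;
#   }

# Restart nginx
sudo service nginx restart

# Fetch the status page; the output looks like:
#   Active connections: 1
#   server accepts handled requests
#    10 10 10
#   Reading: 0 Writing: 1 Waiting: 0
curl http://127.0.0.1/nginx_status
```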

CollectD

In the next step, we will enable Nginx plugin of CollectD in /opt/collectd/etc/collectd.conf:

Restart your CollectD:
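The relevant part of collectd.conf might look like the fragment below, assuming the status page set up in the Nginx section above; restart CollectD afterwards so it starts scraping:

```
LoadPlugin nginx

<Plugin nginx>
  URL "http://127.0.0.1/nginx_status"
</Plugin>
```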

InfluxDB

After restarting CollectD, wait a minute, then check InfluxDB to make sure that the Nginx monitoring data is being stored.

Grafana

In the last step, we create a graph panel in Grafana to monitor how many requests per second Nginx serves. Switch the editor mode to input the query below:
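The query might look roughly like this; the measurement and tag names depend on how the collectd input maps metrics into InfluxDB, so treat `nginx_value` and `nginx_requests` as plausible defaults rather than confirmed names:

```sql
SELECT derivative(mean("value"), 1s)
FROM "nginx_value"
WHERE "type" = 'nginx_requests' AND $timeFilter
GROUP BY time($interval)
```

The `derivative()` turns the plugin's monotonically increasing request counter into a requests-per-second rate.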

If you do not know how to switch the editor mode, see the image below:

Grafana – Switch editor mode

Finally, you will get the graph below:

Nginx – Request Per Second

You can also use the data stored in InfluxDB to build the output your own way.

Microservice with Docker Swarm and Consul – Part 2

In the previous part, I showed you step by step how to install Node Gateway and Node Agent-One. In this post, we will continue by installing Node Agent-Two. After finishing this step, you will understand how to scale or replicate your service in Docker Swarm, and how Nginx load-balances your services with Consul and Consul-Template.

Nginx Load-balancer, Docker Swarm Cluster

Node Agent-Two

In this node, we also need to install Consul and Swarm. Similar to the last post, we run Consul with the command:

And run Registrator:

Restart Docker Daemon:

And run Swarm node to join Swarm Cluster:

Now the node is ready for running your microservices:

After the command above, you have scaled your service from one container to two containers running in Docker Swarm.

You can check with consul members command:
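The steps for this node can be sketched as below; the IPs follow the addresses from part 1, the manager is assumed to run on Node Agent-One, and the cluster token and service image are placeholders:

```shell
# Run the Consul agent and join the existing cluster
consul agent -data-dir /tmp/consul -node=agent-two \
  -bind=172.20.20.12 -join=172.20.20.10 &

# Run Registrator so new containers are announced to Consul
docker run -d --name=registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://172.20.20.12:8500

# Restart the Docker daemon (after adding -H tcp://0.0.0.0:2375 to its options)
sudo service docker restart

# Join the Swarm cluster
docker run -d swarm join --advertise=172.20.20.12:2375 token://<cluster_token>

# Run the microservice through the Swarm manager (image name is illustrative)
docker -H tcp://172.20.20.11:3375 run -d -p 80 example/angular-admin-seed

# Verify the cluster members
consul members
```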

Scale

To scale your service (in my case the angular-admin-seed service), access either Node Agent-One or Node Agent-Two with root privileges and type the commands:
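Scaling here simply means starting more containers of the same image through the Swarm manager; the manager address and image name below are placeholders following the earlier parts:

```shell
# Start an additional container of the service; Swarm schedules it
# onto one of the agent nodes, and Registrator announces it to Consul
docker -H tcp://172.20.20.11:3375 run -d -p 80 example/angular-admin-seed

# Repeat the command to go from two containers to three
docker -H tcp://172.20.20.11:3375 run -d -p 80 example/angular-admin-seed
```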

And check with the docker ps command; the output should be:

Then check /etc/nginx/sites-available/default on the API Gateway node:

It means that Nginx is load-balancing between three running services.

Conclusions

After reading this series of posts, we know that there are many open-source tools that support implementing microservices. However, we cannot build a microservices system in one step. We need a deep understanding of the tools, the open-source projects, and Linux itself to implement it.

Microservices with Docker Swarm and Consul – Part 1

In this post, I will show you step by step how to install Node Gateway and Node Agent-One. After finishing the installation, you will understand how Consul, Swarm, Consul-Template, and Registrator work together.

Prerequisites

  • Three nodes at the addresses:
    • 172.20.20.10, named Node Gateway
    • 172.20.20.11, named Node Agent-One
    • 172.20.20.12, named Node Agent-Two
  • Three nodes with 2 GB RAM and 2 CPUs each, running Ubuntu 14.04
  • Node Agent-One and Node Agent-Two have Docker and Docker Swarm installed
  • The three nodes are connected to each other

Microservice with Docker Swarm and Consul

Node Gateway

In my scenario, we will run Consul, Consul-Template and Nginx on this node.

To download and install Consul and Consul-Template on Ubuntu 14.04, please see the link: Install Consul and Consul Template in Ubuntu 14.04

And it is easy to install Nginx:

Now you are ready to run Consul and Consul-Template on Node Gateway.

Run Consul as a master node:

Now we create a template file for Consul-Template in the /opt/consul-template directory.

And create file /opt/consul-template/nginx.ctmpl with the content:

Next step, run Consul-Template:
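The gateway setup can be sketched as below; the service name in the template and the Consul flags are my own assumptions about a typical single-server setup:

```shell
# Install Nginx
sudo apt-get update && sudo apt-get install -y nginx

# Run Consul as the master (server) node on the gateway
consul agent -server -bootstrap-expect 1 \
  -data-dir /tmp/consul -node=gateway \
  -bind=172.20.20.10 -client=0.0.0.0 &

# A minimal /opt/consul-template/nginx.ctmpl that load-balances every
# registered instance of a service (service name is an example):
#   upstream app {
#   {{range service "angular-admin-seed"}}  server {{.Address}}:{{.Port}};
#   {{end}}}
#   server {
#     listen 80;
#     location / { proxy_pass http://app; }
#   }

# Run Consul-Template: render the Nginx config and reload Nginx on changes
consul-template -consul 172.20.20.10:8500 \
  -template "/opt/consul-template/nginx.ctmpl:/etc/nginx/sites-available/default:service nginx reload"
```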

In the guide above, I just showed you how to run Consul and Consul-Template by hand. In practice, we should run them in the background and on startup. To do that, please see the link: Setup Consul, Consul-Template run in background

To verify Consul, access http://172.20.20.10:8500 to see the Consul web interface.

Now you are done with Node Gateway. Next, move on to Node Agent-One.

Node Agent-One

On Node Agent-One, we need to run Docker, Swarm Joiner, Swarm Manager, Consul Agent, and Registrator (plus cAdvisor for monitoring, if needed).

First, we install Consul on this node (the same as on Node Gateway). After that, we run the Consul agent with the command:

Next, we need to restart the Docker daemon so it listens on tcp://0.0.0.0:2375. So we run the command:

After running the Docker daemon, we should run Registrator.

And you should run cAdvisor to monitor this node:

Now it is time to run Swarm Manager and Swarm Joiner (node). But first of all, we need a token for the Swarm cluster, which you can get with the command:

So we can run Swarm Joiner:

And run Swarm Manager:

Swarm is ready to use now. If you need to run Docker Swarm in the background, you should read this post: Run Docker Swarm with Upstart

And finally, I will run a service on this node. Try command:
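All the Node Agent-One steps above can be sketched as one sequence; the cluster token and the service image are placeholders, and the flags follow the standalone Swarm and Registrator conventions of that era:

```shell
# Run the Consul agent and join the gateway
consul agent -data-dir /tmp/consul -node=agent-one \
  -bind=172.20.20.11 -join=172.20.20.10 &

# Restart the Docker daemon so it listens on tcp://0.0.0.0:2375
# (add the -H options to /etc/default/docker first)
sudo service docker restart

# Run Registrator
docker run -d --name=registrator \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://172.20.20.11:8500

# Optionally run cAdvisor to monitor this node
docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:rw \
  -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest

# Get a token for the Swarm cluster
docker run --rm swarm create

# Run Swarm Joiner with the token printed above
docker run -d swarm join --advertise=172.20.20.11:2375 token://<cluster_token>

# Run Swarm Manager
docker run -d -p 3375:2375 swarm manage token://<cluster_token>

# Finally, run a service through the manager (image name is illustrative)
docker -H tcp://172.20.20.11:3375 run -d -p 80 example/angular-admin-seed
```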

Now you can take a rest and test your work by accessing the link: http://172.20.20.10

Nginx, Consul, Consul-Template, Docker Swarm

In the next post, I will show you how to scale a service using Docker Swarm, Nginx and Consul.

Microservices with Docker Swarm and Consul

Recently, you have perhaps heard a lot about microservices: the concepts, the pros, the cons... I am sure you ask yourself how to implement it for your project. It is not easy to answer that question. In this post, I will tell you part of my story after six months of implementing a social network project.

The story I would like to tell you is how to deploy microservices with Docker Swarm and Consul (with step-by-step instructions). In my scenario, I have three nodes, each running Ubuntu 14.04 with 2 GB RAM and 2 CPUs. I used Vagrant and VirtualBox to build them.

Microservice with Docker Swarm and Consul

As the image above shows, we will install Consul on all nodes. On Node Gateway, Nginx is installed to load-balance and serve as an API gateway for your services. We also install Consul-Template to update the Nginx configuration via Consul whenever a service comes up or goes down.

Node Agent-One and Node Agent-Two will be used as the Swarm cluster, so we install Swarm and run Swarm Join on both of them. Besides that, we have to install Consul and Registrator. Consul provides service discovery, and Registrator announces to Consul whenever a service goes up or down.

There are many steps needed to build this system, so I divided my post into three parts:

I also published all the scripts to build this system on GitHub. You can check them out at microservices-swarm-consul.

So you can just download it and run:
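A minimal sketch, assuming the repository name above; the GitHub account in the URL is a placeholder for the author's:

```shell
# Clone the repository and bring the three VMs up
git clone https://github.com/<account>/microservices-swarm-consul.git
cd microservices-swarm-consul
vagrant up
```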

Note: make sure you have VirtualBox and Vagrant installed on your machine. And if there is any issue in my code, please report it to me. Thank you.

Before reading my series of posts, you should read about the concepts behind the tools we will use to deploy microservices.

Docker & Docker Swarm

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

Consul

  • Service Discovery
  • Key/Value Storage
  • Failure Detection
  • Multiple Datacenters

Consul-Template

  • Listens for and queries updates from Consul
  • Updates configuration files based on the provided templates
  • Executes commands

Registrator

Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. Registrator supports pluggable service registries, which currently includes Consul, etcd and SkyDNS 2.

In the next step, please go to Install Load Balance, API Gateway and Docker Swarm.

Run Docker Swarm with Upstart

In Ubuntu 14.04, to run a job in the background and on startup, you should use Upstart. In this post, I will show you how to run Swarm with Upstart. I think it is more complicated than the systemd used by CoreOS or Ubuntu 15.04.

First, we need to open the /etc/default/docker file and add a line:

And restart Docker:

We need this step so that the Docker daemon listens on all interfaces on port 2375. Now you are ready to run Swarm on the machine.

To run Swarm Join, create the file /etc/init/swarm-join.conf with the content:

Now, run Swarm Join job:

To run Swarm Manage, create the file /etc/init/swarm-manage.conf with the content:

Now run Swarm Manage job:
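The two Upstart jobs can be sketched as below; the advertised IP and the `token://` cluster token are placeholders you must replace with your own values:

```shell
# /etc/init/swarm-join.conf -- a sketch:
#   description "Swarm Join"
#   start on filesystem and started docker
#   stop on runlevel [!2345]
#   respawn
#   script
#     /usr/bin/docker run --rm swarm join \
#       --advertise=172.20.20.11:2375 token://<cluster_token>
#   end script

# Start the Swarm Join job
sudo start swarm-join

# /etc/init/swarm-manage.conf has the same shape, with the script running:
#   /usr/bin/docker run --rm -p 3375:2375 swarm manage token://<cluster_token>

# Start the Swarm Manage job
sudo start swarm-manage
```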

Done