#docker

Starting a Microservices Project with NodeJS and Docker

If you are planning to build a project with a microservices pattern and a DevOps culture, you can use the source code here.

In this source code, I built two services: one for the frontend (AngularJS and Bootstrap) and one for the backend (ExpressJS and Mongoose). All services in this project are dockerized and pushed to Docker Hub. You can read the Dockerfile in each service for further details. To create a new service, you just create a new directory, write the source code for that service, and update the docker-compose.yml file, as sketched below.
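For illustration, a hypothetical entry for a new service in docker-compose.yml might look like this (the service name, build path, and port are placeholders):

    newservice:
      build: ./newservice
      ports:
        - "3000:3000"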

Run

The orchestration of the project is written in the docker-compose.yml file, so it is easy to understand and run the project.
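Assuming you have Docker and Docker Compose installed, you can bring the whole stack up from the project root:

    docker-compose up -d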

Using Vagrant and Virtualbox

We can also start from the virtual machine layer with Vagrant:
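    # provisions the VM(s) defined in the Vagrantfile
    vagrant up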

After that, you can access the application at http://172.20.20.20

You can read the Vagrantfile to understand what we need to prepare for the VMs.

System Architecture

With this source code, we will have the system architecture shown below.

Microservices DevOps Starter Project


Monitoring & Logs

This starter project also fully supports monitoring, using Telegraf, InfluxDB, Grafana, and Kapacitor, and supports centralized logging with Fluentd, Kibana, and Elasticsearch.

Contents

Part | Title | Git Tag
1 | Starting with ExpressJS | 1.0
2 | Logger with Winston | 1.1
4 | Config Management with Node-Config | 1.2
5 | Building Create User API | 2.1
6 | Adding Swagger Documents | 2.2
7 | Building Login API | 2.3
8 | Building Get User List/Detail API | 2.4
9 | Authorization for all APIs | 2.5
10 | Unit Test | 2.6
11 | Building Config API | 3.0
12 | Using Cache | 3.1
13 | Using Queue | 3.2
14 | Starting AngularJS with Yeoman | 4.0
15 | Config Management for AngularJS | 4.1
16 | Building Login Page | 4.2
17 | Building List User Page | 4.2
18 | Pagination with AngularJS and Bootstrap | 4.3
19 | Multiple Languages | 4.4
20 | AngularJS Unit Test | 4.5
21 | Dockerize Application | 5.0
22 | Orchestration with Docker Compose | 5.1

Install Docker and Docker Swarm on CentOS 7

This tutorial will show you how to install Docker and Docker Swarm on CentOS 7. It targets Docker version 1.11, and I use Vagrant and VirtualBox to create the CentOS 7 virtual machine.

Stop worrying about what you have to lose and start focusing on what you have to gain

Create a Vagrantfile for CentOS 7
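A minimal sketch of such a Vagrantfile (the box name and the private-network IP are assumptions; adjust them to your setup):

    Vagrant.configure("2") do |config|
      # CentOS 7 base box (assumed; any CentOS 7 box works)
      config.vm.box = "centos/7"
      # private IP used later to reach the Docker daemon (assumed)
      config.vm.network "private_network", ip: "192.168.33.10"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 2048
      end
    end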

Turn on your virtual machine with the command:
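    vagrant up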

After the CentOS server is created, you can access it via SSH with the command:
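    vagrant ssh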

Docker

Switch to the root account:
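    sudo su -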

In the next step, we will install Docker Engine, following the official Docker tutorial.

Add the Docker repository to CentOS by creating /etc/yum.repos.d/docker.repo with the content below:
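This mirrors the official instructions from the Docker 1.11 era; verify the repository details against the current documentation:

    [dockerrepo]
    name=Docker Repository
    baseurl=https://yum.dockerproject.org/repo/main/centos/7/
    enabled=1
    gpgcheck=1
    gpgkey=https://yum.dockerproject.org/gpg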

Run the installation:
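    yum install -y docker-engine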

Start the Docker service:
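    systemctl start docker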

Enable Docker to run on OS start:
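    systemctl enable docker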

Docker Swarm

Install Docker Swarm via docker pull:
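    docker pull swarm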

Open the Docker service configuration file:
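On CentOS 7 this is typically the systemd unit file:

    vi /usr/lib/systemd/system/docker.service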

In the line containing ExecStart, we add the options below so the daemon also listens on a TCP port, which standalone Swarm needs to reach each engine.
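A sketch for Docker 1.11, where the daemon was started with docker daemon (dockerd arrived in 1.12). Exposing TCP port 2375 is unauthenticated, so only do this on a private network:

    ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock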

And now, reload and restart the Docker service:
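    systemctl daemon-reload
    systemctl restart docker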

Now we can run a Swarm node:
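A sketch using token-based discovery; the node IP and the token printed by swarm create are placeholders:

    # generate a cluster token once (prints <cluster_id>)
    docker run --rm swarm create

    # join this engine to the cluster
    docker run -d swarm join --advertise=192.168.33.10:2375 token://<cluster_id>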

And run the Swarm manager:
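    # expose the manager on port 4000 of the host
    docker run -d -p 4000:2375 swarm manage token://<cluster_id>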

Finally, verify your work:
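    # point the Docker client at the Swarm manager (IP from the Vagrantfile sketch above)
    docker -H tcp://192.168.33.10:4000 info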

Video

Thanks for reading!

Centralize Docker logs with Fluentd, Elasticsearch and Kibana

Besides monitoring, logging is also an important issue we need to address. In this post, I describe how to centralize Docker logs using Fluentd, Elasticsearch, and Kibana.


Try not to become a man of success, but rather try to become a man of value

Scenario

We will install Fluentd, Elasticsearch, and Kibana on the same machine.

  • Fluentd: collects and transfers log data to Elasticsearch
  • Elasticsearch: stores and indexes log data to support searching/filtering
  • Kibana: a web UI that lets you search, filter, and visualize the log data

Prerequisites

  • A machine running Ubuntu 14.04 with IP 192.168.1.191
  • Docker and wget already installed

Now, I will show you step by step how to get started centralizing log data with Fluentd.

Elasticsearch

Download:
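The version below is only an example from that era; check the link that follows for the latest release:

    wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.4/elasticsearch-2.3.4.tar.gz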

You should check the latest version at https://www.elastic.co/downloads/elasticsearch

Uncompress:
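    tar -xzf elasticsearch-2.3.4.tar.gz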

Run:
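    ./elasticsearch-2.3.4/bin/elasticsearch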

Or run as daemon:
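    ./elasticsearch-2.3.4/bin/elasticsearch -d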

Now we have Elasticsearch running on port 9200.

Fluentd

Add the lines below to /etc/security/limits.conf to increase the file descriptor limits (a Fluentd pre-installation step):
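    root soft nofile 65536
    root hard nofile 65536
    * soft nofile 65536
    * hard nofile 65536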

Open a new terminal and type the command below; make sure the output is 65536:
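    ulimit -n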

Install Fluentd (the Treasure Data distribution, td-agent):
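For Ubuntu 14.04 (Trusty), the official one-line installer of that era was:

    curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-trusty-td-agent2.sh | sh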

For other Ubuntu versions, please read: http://docs.fluentd.org/articles/install-by-deb

Now, we need to install the Elasticsearch plugin for Fluentd:
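    sudo td-agent-gem install fluent-plugin-elasticsearch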

Add the content below to /etc/td-agent/td-agent.conf to set up Fluentd to transfer all Docker logs to Elasticsearch:
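A minimal sketch: accept logs on the forward port used by the Docker fluentd log driver, and ship everything tagged docker.* to the local Elasticsearch:

    <source>
      type forward
      port 24224
      bind 0.0.0.0
    </source>

    <match docker.**>
      type elasticsearch
      host localhost
      port 9200
      logstash_format true
      flush_interval 5s
    </match>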

And restart Fluentd:
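    sudo service td-agent restart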

Docker

Now, we change the Docker configuration to use fluentd as the log driver. Open /etc/default/docker and add the line below:
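A sketch; on older Docker versions the tag option was named fluentd-tag. The docker. prefix must match the <match docker.**> section above:

    DOCKER_OPTS="--log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=docker.{{.Name}}"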

And restart Docker to apply the change:
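    sudo service docker restart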

Kibana

We will run Kibana in a Docker container with the command:
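Using the official kibana image and pointing it at our Elasticsearch:

    docker run -d -p 5601:5601 -e ELASTICSEARCH_URL=http://192.168.1.191:9200 kibana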

Now, you can access http://192.168.1.191:5601 to see the Docker logs in Kibana.

Tips

Delete an index in Elasticsearch:
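    # the index name is an example; logstash_format creates daily logstash-YYYY.MM.DD indexes
    curl -XDELETE 'http://localhost:9200/logstash-2016.06.01'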

List all indexes in Elasticsearch:
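    curl 'http://localhost:9200/_cat/indices?v'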

Microservices with Docker Swarm and Consul

Recently, you have probably heard a lot about microservices: the concepts, the pros, the cons... I am sure you ask yourself how to implement the pattern in your own project, and it is not an easy question to answer. In this post, I will tell you part of my story after six months implementing a social network project.

The story I would like to tell you is how to deploy microservices with Docker Swarm and Consul (step-by-step instructions). In my scenario, I have three nodes, each running Ubuntu 14.04 with 2 GB RAM and 2 CPUs. I used Vagrant and VirtualBox to build them.

Microservice with Docker Swarm and Consul


As shown in the image above, we will install Consul on all nodes. The Gateway node runs Nginx to load-balance and serve as an API gateway for your services. We also install Consul Template there, to rewrite the Nginx configuration via Consul whenever a service comes up or goes down.

The Agent-One and Agent-Two nodes form the Swarm cluster, so we install Swarm and run swarm join on both of them. Besides that, we have to install Consul and Registrator on them. Consul provides service discovery, and Registrator tells Consul whenever a service goes up or down, as sketched below.
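For illustration, this is roughly how Registrator is started on an agent node (a sketch using the official gliderlabs/registrator image and assuming a local Consul agent on port 8500):

    docker run -d --name registrator --net=host \
      -v /var/run/docker.sock:/tmp/docker.sock \
      gliderlabs/registrator consul://localhost:8500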

There are many steps we need to take to build this system, so I divided my post into three parts:

I also published all the scripts to build this system on GitHub. You can check it out: microservices-swarm-consul

So you can just download it and run:
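Assuming the repository lives under my GitHub account (the exact URL is an assumption; adjust it if it differs):

    git clone https://github.com/thanhson1085/microservices-swarm-consul.git
    cd microservices-swarm-consul
    vagrant up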

Note: make sure you have installed VirtualBox and Vagrant on your machine. And if there is any issue in my code, please report it to me. Thank you.

Before reading my series of posts, you should read about the concepts of the tools we will use to deploy the microservices.

Docker & Docker Swarm

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

Consul

Consul is a tool for service discovery and configuration. Its main features:

  • Service discovery
  • Key/value storage
  • Failure detection
  • Multi-datacenter support

Consul-Template

  • Listens for/queries updates from Consul
  • Updates configuration files based on the templates provided
  • Executes commands when a template is re-rendered

Registrator

Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. Registrator supports pluggable service registries, which currently include Consul, etcd and SkyDNS 2.

In the next step, please go to: Install Load Balance, API Gateway and Docker Swarm

Flask RabbitMQ Celery example

If you are learning how to work with RabbitMQ and Celery, this source code may help you. I wrote a small example that uploads an image to a web server and uses Celery to generate a thumbnail.

Difficult doesn’t mean impossible. It simply means that you have to work hard

This source code supports two ways to run it.

You can check out the source code on GitHub: flask-celery-rabbitmq-generate-thumbnail

And the image on Docker Hub: flask-celery-rabbitmq-example

First, clone this source code to your local machine:
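The GitHub account in the URL is an assumption based on the repository name above:

    git clone https://github.com/thanhson1085/flask-celery-rabbitmq-generate-thumbnail.git
    cd flask-celery-rabbitmq-generate-thumbnail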

Using Docker

  1. Build from the Dockerfile:
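    # the tag name is an example
    docker build -t flask-celery-rabbitmq-example .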

Or pull from the Docker repository:
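    # the Docker Hub account is an assumption based on the image name above
    docker pull thanhson1085/flask-celery-rabbitmq-example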

Run the Docker image:
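    # the app serves on port 5000 (see the test URL below)
    docker run -d -p 5000:5000 thanhson1085/flask-celery-rabbitmq-example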

After running the Docker image, you should wait for output like the one below:

Install packages manually (Ubuntu 14.04)

I will show you how to run this source code from scratch. I am using Ubuntu Server 14.04 with virtualenv and pip installed.

Install RabbitMQ server:
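    sudo apt-get install -y rabbitmq-server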

Fix the PIL (Pillow) build issue:
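A common fix is to install the libraries Pillow builds against:

    sudo apt-get install -y python-dev libjpeg-dev zlib1g-dev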

Create an environment to run the application with virtualenv:
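    # "venv" is just a directory name
    virtualenv venv
    source venv/bin/activate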

Install all required packages:
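    # assumes the repository ships a requirements.txt
    pip install -r requirements.txt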

Run the web server to upload files:
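    # the entry-point file name is an assumption; use the repository's actual entry point
    python app.py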

Run the "generate thumbnail" task worker in Celery:
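    # module and instance names are assumptions; point -A at the module defining the Celery app
    celery -A app.celery worker --loglevel=info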

Now it is ready for testing at http://localhost:5000. The page should look like the image below:


Generate thumbnail with celery rabbitmq

Install Cloudera Hadoop on Ubuntu 14.04

As you know, installing Hadoop is not easy, and it requires a machine with a high-spec configuration. In this post, I will give you the shortest way to get Hadoop on your machine: installation via Docker. I assume that you already have Docker on your Ubuntu 14.04.


If your dreams do not scare you, they are too small.

Prerequisites

  • Docker
  • Ubuntu 14.04
  • 4 GB RAM
  • 2 CPU cores

Install

To install the docker-cloudera-quickstart image from Docker Hub, simply use the following command:
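The image path below is an assumption based on the image name mentioned above; verify it on Docker Hub:

    docker pull caioquirino/docker-cloudera-quickstart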

Use

To start an instance in BACKGROUND (as daemon):
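    # the container name is an example
    docker run -d -t -i --name cloudera caioquirino/docker-cloudera-quickstart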

To start an instance in FOREGROUND:
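    docker run -t -i caioquirino/docker-cloudera-quickstart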

To open more terminal instances for the running instance:
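    # attach another shell to the running container
    docker exec -it cloudera bash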

Test MapReduce with WordCount example

Refer to: http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
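A minimal sketch of the classic WordCount run (the examples jar path is typical for CDH; adjust the paths to your distribution):

    hadoop fs -mkdir -p /user/root/input
    hadoop fs -put /etc/hosts /user/root/input
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/root/input /user/root/output
    hadoop fs -cat /user/root/output/part-r-00000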


Build an Ubuntu Image with Docker

In the previous article, I showed you how to install Docker and pull your first image to the local machine. Now I will show you how to build and push an Ubuntu image to your Docker repository.

It’s not about “having” time. It’s about making time

 

First, please make sure that you have an account at docker.com (if not, please create one; it is free).

Let's start with the docker login command on the local machine:
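    docker login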

For example, suppose you need to build an Ubuntu image that contains the Apache2 web server. First, create a Docker build file (a Dockerfile).

Add the content below to the Docker build file:
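A minimal sketch of such a Dockerfile:

    FROM ubuntu:14.04
    # install the Apache2 web server
    RUN apt-get update && apt-get install -y apache2
    EXPOSE 80
    CMD ["apachectl", "-D", "FOREGROUND"]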

Build the Docker Ubuntu image:
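    # the repository name matches the note below
    docker build -t thanhson1085/webserver .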

Then test it, and install more packages if you want:
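    # start an interactive shell in the new image
    docker run -t -i thanhson1085/webserver /bin/bash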

Finally, commit and push your Ubuntu image to your Docker repository:
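    # <container_id> is the container you modified in the previous step
    docker commit <container_id> thanhson1085/webserver
    docker push thanhson1085/webserver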

Note: thanhson1085/webserver is the repository created under your account at docker.com

That is all. Love Docker at first sight.

 

Install Docker on Ubuntu 14.04

Docker is a platform for developers and sysadmins. With Docker, you can package your whole environment into images and push them to the cloud, so you can easily use it anywhere, anytime, or share it with other people. I will share with you the details of installing Docker on Ubuntu 14.04.

Install Docker via apt-get:
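On Ubuntu 14.04 the distribution package was docker.io:

    sudo apt-get update
    sudo apt-get install -y docker.io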

Enable bash completion for Docker:
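For the docker.io package of that era:

    source /etc/bash_completion.d/docker.io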

Install apt-transport-https:
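    sudo apt-get install -y apt-transport-https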

Then, add the Docker repository key to your local keychain:
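The key ID below is the historical one used by the get.docker.com repository; verify it against the documentation:

    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9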

Open /etc/apt/sources.list.d/docker.list and add the line:
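    deb https://get.docker.com/ubuntu docker main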

And now, install the lxc-docker package:
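    sudo apt-get update
    sudo apt-get install -y lxc-docker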

Test:
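    # run an interactive Ubuntu container to confirm everything works
    sudo docker run -i -t ubuntu /bin/bash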