#nginx

MQTT, HTTP Haproxy Configuration

If you are building a system that serves both HTTP and MQTT, I am sure you will have to use HAProxy. So in this post, I will share with you how to set up HAProxy to serve both Nginx and an MQTT broker.

Your haproxy.cfg file should contain the source code below:

Haproxy MQTT Configuration
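A minimal sketch of the MQTT section; the frontend/backend names are my own, and it assumes your broker listens on the default port 1883:

```
frontend mqtt_frontend
    bind *:1883
    mode tcp
    option tcplog
    default_backend mqtt_backend

backend mqtt_backend
    mode tcp
    # "broker" is the MQTT broker's hostname (see the note below)
    server mqtt1 broker:1883 check
```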

Note: broker is the MQTT broker's hostname.

Haproxy Nginx
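A sketch of the HTTP section; here I assume Nginx is reachable under the hostname nginx on port 80 (change the hostname and port to match your setup):

```
frontend http_frontend
    bind *:80
    mode http
    option httplog
    option forwardfor
    default_backend nginx_backend

backend nginx_backend
    mode http
    server nginx1 nginx:80 check
```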

For monitoring HAProxy, you can add the HAProxy stats configuration:
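For example (the port, URI and credentials below are placeholders; pick your own):

```
listen stats
    bind *:8080
    mode http
    stats enable
    stats uri /haproxy?stats
    stats auth admin:admin
    stats refresh 10s
```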

Final Result haproxy.cfg
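Roughly, the complete file combines a global/defaults section with the MQTT, HTTP and stats sections above. A possible skeleton (the timeouts are only suggestions; long-lived MQTT connections need generous client/server timeouts):

```
global
    log /dev/log local0
    maxconn 4096
    daemon

defaults
    log     global
    option  dontlognull
    timeout connect 5s
    timeout client  120s
    timeout server  120s

# ...followed by the mqtt_frontend/mqtt_backend, http_frontend/nginx_backend
# and stats sections shown above
```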


Monitor Nginx Response Time with FluentD, Kibana and Elasticsearch

In my previous post, I showed you how to centralize Nginx logs. Now I will use FluentD, Kibana and Elasticsearch to collect the Nginx response time.

To implement it, we have to change the Nginx log format, because by default Nginx does not write the response time to the access.log file. So we change nginx.conf as below:
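A sketch of the change, placed inside the http {} block of nginx.conf; the format name rt_combined is my own, while $request_time and $upstream_response_time are the standard Nginx timing variables:

```nginx
log_format rt_combined '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       '$request_time $upstream_response_time';

access_log /var/log/nginx/access.log rt_combined;
```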

After reloading Nginx, you can tail access.log. The result should look like the line below:
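With the format above, a line would look roughly like this (the values are made up for illustration):

```
192.168.1.10 - - [26/Feb/2016:18:00:01 +0700] "GET /api/users HTTP/1.1" 200 612 "-" "curl/7.35.0" 0.052 0.050
```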

Now, we will create a regex that matches the log format above:
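A regex with named captures that matches the rt_combined format sketched above (if you used a different format, adjust the fields accordingly):

```
/^(?<remote>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^\"]*) \S+" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_time>[^ ]*) (?<upstream_response_time>[^ ]*)$/
```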

And insert it into td-agent.conf as below:
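A sketch of the source section; the tag, pos_file path and type conversions are my own choices, and the &lt;match&gt; block from the previous post stays as it is:

```
<source>
  type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  format /^(?<remote>[^ ]*) - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^\"]*) \S+" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_time>[^ ]*) (?<upstream_response_time>[^ ]*)$/
  time_format %d/%b/%Y:%H:%M:%S %z
  types code:integer,size:integer,request_time:float,upstream_response_time:float
</source>
```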

After restarting td-agent, wait a minute and then open Kibana. The output should look like this:

(Screenshot: Kibana displaying the Nginx response time data)


For basic installation, please refer to https://sonnguyen.ws/centralize-docker-logs-with-fluentd-elasticsearch-and-kibana/

Centralize Nginx Logs with FluentD, Kibana and Elasticsearch

As you know, FluentD is a great tool for collecting logs, Elasticsearch stores and searches the log data, and Kibana helps you view and search the logs in a web interface.

Nginx logs are a great help when you monitor, debug, and troubleshoot your application. So in this post, I will show you how to centralize Nginx logs with FluentD, Kibana and Elasticsearch.

Before reading this post, please read Centralize Docker Logs with FluentD, Kibana and Elasticsearch to learn how to install FluentD, Kibana and Elasticsearch on Ubuntu 14.04.

In the next step, we add the content below to the /etc/td-agent/td-agent.conf file:
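A sketch of the two sections; it assumes the fluent-plugin-elasticsearch plugin from the referenced post is installed and Elasticsearch runs on localhost:9200 (adjust the host, tag and paths to your setup):

```
<source>
  type tail
  path /var/log/nginx/access.log
  pos_file /var/log/td-agent/nginx-access.pos
  tag nginx.access
  format nginx
</source>

<match nginx.access>
  type elasticsearch
  host localhost
  port 9200
  logstash_format true
  flush_interval 10s
</match>
```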

And Restart FluentD:
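On Ubuntu 14.04 the FluentD service is called td-agent:

```bash
sudo service td-agent restart
```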

For Debugging:
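The td-agent log file is the first place to look:

```bash
tail -f /var/log/td-agent/td-agent.log
```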

If you get a permission error:
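Typically the td-agent log shows a warning that the access log is unreadable, roughly like this (the exact wording depends on your FluentD version):

```
[warn]: /var/log/nginx/access.log unreadable. It is excluded and would be examined next time.
```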

We just need to add the td-agent user to the adm group:
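On Ubuntu the Nginx log files belong to the adm group, so:

```bash
sudo usermod -aG adm td-agent
sudo service td-agent restart
```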

Finally, access Kibana to view the logs as in the image below:

(Screenshot: Kibana displaying the centralized Nginx logs)



Microservice with Docker Swarm and Consul – Part 2

In the previous part, I showed you step by step how to install the Gateway and Agent-One nodes. In this post, we will continue by installing the Agent-Two node. After finishing this step, you will understand how to scale or replicate your service in Docker Swarm, and how Nginx load-balances your services with Consul and Consul-Template.

(Diagram: Nginx Load-balancer, Docker Swarm Cluster)

Node Agent-Two

On this node, we also need to install Consul and Swarm. As in the last post, we run Consul with the command:
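A sketch with placeholder addresses; here 10.0.0.3 stands for Agent-Two and 10.0.0.1 for the Gateway node that already runs the Consul server (use your real IPs):

```bash
consul agent -data-dir /tmp/consul -node=agent-two \
  -bind=10.0.0.3 -client=0.0.0.0 -join=10.0.0.1 &
```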

And run Registrator:
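For example, using the gliderlabs/registrator image and the local Consul agent started above:

```bash
docker run -d --name=registrator --net=host \
  --volume=/var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator:latest consul://localhost:8500
```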

Restart the Docker daemon:
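This assumes /etc/default/docker was already edited (as in part 1) so the daemon listens on tcp://0.0.0.0:2375.

```bash
sudo service docker restart
```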

And run a Swarm node to join the Swarm cluster:
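With classic Docker Swarm, the node advertises its own Docker API address and registers itself in Consul (again, the IPs are placeholders):

```bash
docker run -d swarm join --advertise=10.0.0.3:2375 consul://10.0.0.1:8500
```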

Now, the node is ready to run your microservices:
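For example, start a second container of the service through the Swarm manager; I assume the manager from part 1 listens on tcp://10.0.0.1:4000 and the image is called angular-admin-seed (replace both with your own):

```bash
docker -H tcp://10.0.0.1:4000 run -d -P --name angular-admin-seed-2 angular-admin-seed
```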

After the command above, you have scaled your service from one container to two containers running in Docker Swarm.

You can check with the consul members command:
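Something like this, with your own node names and addresses:

```bash
consul members
# Node       Address        Status  Type    Build  Protocol  DC
# gateway    10.0.0.1:8301  alive   server  0.6.3  2         dc1
# agent-one  10.0.0.2:8301  alive   client  0.6.3  2         dc1
# agent-two  10.0.0.3:8301  alive   client  0.6.3  2         dc1
```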

Scale

To scale your service (in my case, the angular-admin-seed service), access either the Agent-One or Agent-Two node with root privileges and run the commands below:
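A sketch, again assuming the Swarm manager address and image name used above:

```bash
docker -H tcp://10.0.0.1:4000 run -d -P --name angular-admin-seed-3 angular-admin-seed
```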

Then check with the docker ps command; the output should look like this:

And check /etc/nginx/sites-available/default on the API Gateway node:
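Consul-Template should have rendered an upstream with one entry per registered container, roughly like this (the IPs and mapped ports are placeholders):

```nginx
upstream angular-admin-seed {
    least_conn;
    server 10.0.0.2:32768;
    server 10.0.0.3:32768;
    server 10.0.0.3:32769;
}

server {
    listen 80;

    location / {
        proxy_pass http://angular-admin-seed;
    }
}
```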

It means that Nginx is load balancing across the three running containers.

Conclusions

After reading this series of posts, we know that there are many open-source tools that help us implement microservices. However, we cannot build a microservices system in a single step. We also need to deeply understand the tools, the open-source components and Linux to implement it.

Run a Flask Application with Nginx and uWSGI

As you know, uWSGI is one of the most popular ways to deploy a Python application, and Nginx is a powerful web server for running websites in production. So this post will show a simple way to run your Flask application with uWSGI and Nginx.


Peace. It does not mean to be in a place where there is no noise, trouble or hard work. It means to be in the midst of those things and still be calm in your heart.

Create wsgi.py in the root directory of your application:
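A minimal sketch; it assumes your Flask application object is named app and lives in app.py (rename the import to match your project):

```python
# wsgi.py
from app import app as application  # uWSGI looks for a callable named "application"

if __name__ == "__main__":
    application.run()
```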

(Please make sure you understand the meaning of the source code above, so you can modify it to match your situation.)

And create a config file for uWSGI – wsgi.ini:
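A sketch of wsgi.ini; the process count is arbitrary, and the socket name matches the one used in the Nginx config further down:

```ini
[uwsgi]
module = wsgi:application
master = true
processes = 4

socket = your_app_name_here.sock
chmod-socket = 660
vacuum = true

die-on-term = true
```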

Install uWSGI with pip:
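Inside your virtualenv (or system-wide):

```bash
pip install uwsgi
```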

Test your uWSGI config with the command below:
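One way to smoke-test is to serve the app over plain HTTP on a spare port (8000 here) before switching to the socket:

```bash
uwsgi --socket 0.0.0.0:8000 --protocol=http -w wsgi:application
```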

Make sure that uWSGI works with your application properly. Now you can configure Nginx to proxy requests over the socket file (your_app_name_here.sock). Please create an Nginx config file with the content below:
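A sketch; the server_name, the project path and the config file name (/etc/nginx/sites-available/your_app) are placeholders for your own values:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/user/your_app/your_app_name_here.sock;
    }
}
```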

Enable your site in Nginx:
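Assuming the file was created as /etc/nginx/sites-available/your_app:

```bash
sudo ln -s /etc/nginx/sites-available/your_app /etc/nginx/sites-enabled/
```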

And restart Nginx to apply your changes:
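On Ubuntu 14.04:

```bash
sudo service nginx restart
```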

Test your site now!!!

Create an Upstart script:

With content:
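A sketch of /etc/init/your_app.conf; the user, group and project path are assumptions you should adapt (use the full path to uwsgi if it lives in a virtualenv):

```
description "uWSGI server for your Flask application"

start on runlevel [2345]
stop on runlevel [!2345]

setuid www-data
setgid www-data

chdir /home/user/your_app
exec uwsgi --ini wsgi.ini
```

You can then start it with sudo start your_app; by default Upstart writes the job's output to /var/log/upstart/your_app.log.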


Logrotate Nginx Custom Logs

As you know, when you create a site on the Nginx web server, you will want to add custom logs for that site. So this post is a small tip to help you do it properly.

In your Nginx site config (e.g. /etc/nginx/sites-available/example.com), you add custom logs as below:
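For example, inside the server block (the log file names are just a convention I picked):

```nginx
server {
    # ...your existing directives...

    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log;
}
```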

From now on, your log files will grow bigger day by day, and you would have to remember to delete them if you do not want your hard disk to fill up. What should you do? Fortunately, logrotate will compress, clean and back them up daily.

You just have to append the content below to /etc/logrotate.d/nginx:
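A sketch that rotates the custom logs above daily and keeps two weeks of compressed archives; the file pattern and retention are my own choices:

```
/var/log/nginx/example.com.*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
```

The postrotate block signals Nginx to reopen its log files after rotation.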

Finally, run the command below to force a rotation and check that it works.
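For example:

```bash
sudo logrotate -f /etc/logrotate.d/nginx
```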

It is done.

LEMP (Linux, Nginx, MySQL, PHP) on Ubuntu 14.04

In this post, I will show the steps to run a simple site on an Nginx server using PHP-FPM.

Install all the packages with apt-get:
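On Ubuntu 14.04 the PHP 5 packages are the ones available:

```bash
sudo apt-get update
sudo apt-get install nginx mysql-server php5-fpm php5-mysql
```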

Create an example.com file in /etc/nginx/sites-available/:

Add the content below to the example.com file:
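A sketch; the web root /var/www/example.com is an assumption, and the fastcgi_pass socket is the default php5-fpm socket on Ubuntu 14.04:

```nginx
server {
    listen 80;
    server_name example.com;

    root /var/www/example.com;
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```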

And enable the site you created:
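For example:

```bash
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
sudo service nginx reload
```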

Now for the test:
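A quick check, assuming the web root from the config above and that you are testing from the server itself:

```bash
echo "<?php phpinfo();" | sudo tee /var/www/example.com/index.php
curl -H "Host: example.com" http://localhost/
```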