How to Configure Elasticsearch, Logstash & Kibana on Ubuntu 15.04

This tutorial covers an open-source stack that indexes and searches your logs to extract valuable information for visualization. We will guide you through a simple ELK setup so that you can collect, manage and visualize big data and logs with Elasticsearch 1.5.2, Logstash 1.5.0 and Kibana 4.0.2 on one centralized log server. This lets you quickly identify problems on your servers, or with the multiple applications running on them, from a single central location.

System Resources

The basic system resources for a centralized log server depend on the environment and the volume of logs that it needs to manage.

Elasticsearch, Logstash and Kibana
Base Operating System: Ubuntu 15.04 (GNU/Linux 3.19.0-15-generic x86_64)
Java Version: OpenJDK "1.7.0_79"
RAM and CPU: 2 GB, 1.0 GHz
Hard Disk: 30 GB

Basic Setup

Before starting the installation, make sure to perform all steps as the root user and to update your system.
We don't need any extra software packages other than Java and the ELK stack.

Step 1: Switch to the root user

kashif@ubuntu-15:~$ sudo -i
[sudo] password for kashif:

Step 2: System Update

root@ubuntu-15:~# apt-get update

Step 3: Java Installation

root@ubuntu-15:~# apt-get install default-jre-headless

Starting ELK Setup

Let's get started with the installation of Elasticsearch and Logstash. First we need to add their repositories from the official website and download and install the public signing key.

Step 1: Create a new folder and get the repository

root@ubuntu-15:~# mkdir /backup
root@ubuntu-15:~# cd /backup/
root@ubuntu-15:/backup# wget -O - | sudo apt-key add -

Step 2: Add repositories to end of sources.list file

root@ubuntu-15:/backup# vi /etc/apt/sources.list
deb stable main
deb stable main

Step 3: Run update after adding new repositories

root@ubuntu-15:/backup# apt-get update

Install Elasticsearch

We are now ready to install Elasticsearch and configure it for real-time data and real-time analytics.

Step 1: Run apt-get command to install the package

root@ubuntu-15:/backup# apt-get install elasticsearch=1.5.2

Step 2: Start Service and Enable Start at Boot by default

root@ubuntu-15:/backup# service elasticsearch start
root@ubuntu-15:/backup# update-rc.d elasticsearch defaults 95 10

Common Error when Starting and Stopping a Single Elasticsearch Instance



If the Elasticsearch service shows a Failed status because JAVA_HOME is not set, as shown, then follow these simple steps to resolve it.

Step 1: Open .bashrc file in home directory

root@ubuntu-15:/backup# cd
root@ubuntu-15:~# ls -a
. .. .aptitude .bashrc .profile .viminfo

Step 2: Edit .bashrc and add following lines at the end of file

root@ubuntu-15:~# vi .bashrc
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # adjust if your JVM lives elsewhere
root@ubuntu-15:~# source ~/.bashrc
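
After sourcing .bashrc, it is worth confirming the variable is actually set before retrying the service. A small sketch (the JVM path is an assumption for OpenJDK 7 on Ubuntu amd64; adjust to your installation):

```shell
# fall back to the usual OpenJDK 7 location if JAVA_HOME is unset (assumed path)
export JAVA_HOME="${JAVA_HOME:-/usr/lib/jvm/java-7-openjdk-amd64}"
# confirm it is non-empty before restarting Elasticsearch
if [ -n "$JAVA_HOME" ]; then
    echo "JAVA_HOME=$JAVA_HOME"
else
    echo "JAVA_HOME is still empty" >&2
fi
```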

Step 3: Now uncomment the paths in the default elasticsearch file as follows

root@ubuntu-15:~# vi /etc/default/elasticsearch
# Run Elasticsearch as this user ID and group ID

# Heap Size (defaults to 256m min, 1g max)
# Heap new generation
# max direct memory
# Maximum number of open files, defaults to 65535.
# Maximum locked memory size. Set to "unlimited" if you use the
# bootstrap.mlockall option in elasticsearch.yml. You must also set
# Maximum number of VMA (Virtual Memory Areas) a process can own
# Elasticsearch log directory
LOG_DIR=/var/log/elasticsearch
# Elasticsearch data directory
DATA_DIR=/var/lib/elasticsearch
# Elasticsearch work directory
WORK_DIR=/tmp/elasticsearch
# Elasticsearch configuration directory
CONF_DIR=/etc/elasticsearch
# Elasticsearch configuration file (elasticsearch.yml)
CONF_FILE=/etc/elasticsearch/elasticsearch.yml
# Additional Java OPTS
# Configure restart on package upgrade (true, every other setting will lead to not restarting)

Step 4: Now restart the Elasticsearch service and check its status

root@ubuntu-15:~# service elasticsearch restart
root@ubuntu-15:~# service elasticsearch status

Elasticsearch Status

Elasticsearch Configurations

Let's configure elasticsearch.yml if you want to allow or restrict access to the Elasticsearch instance.

Step 1: To allow access from clients on different IPs

root@ubuntu-15:~# vi /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"
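
Conversely, if you want to restrict the instance to local clients only, a minimal sketch (assuming the stock elasticsearch.yml) is to bind it to the loopback interface and disable cross-origin requests:

```yaml
# bind only to localhost so remote clients cannot reach the HTTP API
network.host: 127.0.0.1
# disable cross-origin requests entirely
http.cors.enabled: false
```

As with any change to elasticsearch.yml, restart the Elasticsearch service afterwards.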

Step 2: Run the following commands to get Elasticsearch test results

root@ubuntu-15:~# curl http://localhost:9200
root@ubuntu-15:~# curl 'http://localhost:9200/_search?pretty'
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
"hits" : {
"total" : 0,
"max_score" : 0.0,
"hits" : [ ]

Elasticsearch Curl Test

Elasticsearch Plugins Installation

The installation of plugins is simple. Plugins provide an admin graphical user interface for Elasticsearch that helps in debugging and managing clusters and nodes.

Step 1: Install Plugin

root@ubuntu-15:~# /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk/2.4.0

Step 2: Open Dashboard on Web

The installed plugin can be accessed at the following URL:

Elasticsearch Web Interface

Installation of Logstash

Now we will start the installation of Logstash, which will be used to centralize the processing of logs and other events from various sources.

Step 1: Get the Logstash Installation package from the Source

root@ubuntu-15:/backup# cd /var/cache/apt/archives/
root@ubuntu-15:/var/cache/apt/archives# wget

Step 2: Install Logstash with dpkg command

root@ubuntu-15:/var/cache/apt/archives# dpkg -i logstash_1.5.0-1_all.deb

Step 3: Start the Logstash Service and Enable it to Start at Boot by default

root@ubuntu-15:/var/cache/apt/archives# service logstash start
root@ubuntu-15:/var/cache/apt/archives# update-rc.d logstash defaults

Logstash Status

Configure Logstash

By default, Logstash runs its filters in a single thread only, so to raise this limit we will edit the Logstash defaults file and set its parameters as shown.

root@ubuntu-15:~# vi /etc/default/logstash
# Arguments to pass to logstash agent
LS_OPTS="-w 2"
# Arguments to pass to java

Changes made in the Logstash configuration file will be effective after restart of its service.

root@ubuntu-15:~# systemctl restart logstash.service
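
Logstash does nothing useful until you give it a pipeline. As a minimal sketch (the file name /etc/logstash/conf.d/10-syslog.conf is our own choice; the Debian package's init script reads configs from /etc/logstash/conf.d), a pipeline that follows the local syslog file and ships events to the Elasticsearch instance installed above could look like:

```conf
input {
  file {
    path => "/var/log/syslog"          # follow the local syslog file
    start_position => "beginning"      # read existing lines on first run
  }
}
filter {
  grok {
    # parse the standard syslog line layout into structured fields
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    host => "localhost"                # the Elasticsearch node installed earlier
  }
}
```

Restart the Logstash service after adding the file, then re-run the curl search from the Elasticsearch test step to confirm events are being indexed.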

Kibana Installation Setup

Let's start the installation of Kibana. We also need a web server to host Kibana, so we will install the Nginx web server as part of our setup.

Nginx Installation
Let's start the installation of the Nginx web server to access data and host Kibana.

root@ubuntu-15:~# apt-get install nginx
root@ubuntu-15:/backup# vi /etc/nginx/sites-available/default
# Default server configuration
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        # SSL configuration
        # listen 443 ssl default_server;
        # listen [::]:443 ssl default_server;
        # Self signed certs generated by the ssl-cert package
        # Don't use them in a production server!
        # include snippets/snakeoil.conf;

        root /srv/www;

        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;

        server_name _ localhost;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }

        # pass the PHP scripts to FastCGI server listening on
}

root@ubuntu-15:/backup# service nginx reload
root@ubuntu-15:/backup# service nginx status

Nginx Status
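
Kibana 4 serves its own web interface on port 5601 (see the Kibana configuration further down). If you prefer to reach it on port 80 through Nginx rather than exposing port 5601 directly, one option, sketched here under the assumption that you replace the existing `location /` block in the default server above, is a simple reverse proxy:

```nginx
# proxy all requests to the local Kibana server on port 5601
location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
}
```

Reload Nginx after the change; note this only makes sense once Kibana (installed below) is running.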

Step 1: Get the Kibana package from the source

root@ubuntu-15:/backup# wget

Extract the Kibana into /opt directory

root@ubuntu-15:/backup# tar xf kibana-4.0.2-linux-x64.tar.gz -C /opt
root@ubuntu-15:/backup# cd /opt/kibana-4.0.2-linux-x64
root@ubuntu-15:/opt/kibana-4.0.2-linux-x64# ls
bin config LICENSE.txt node plugins README.txt src
root@ubuntu-15:/opt/kibana-4.0.2-linux-x64# cd /opt
root@ubuntu-15:/opt# ln -s kibana-4.0.2-linux-x64 kibana
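
The symlink gives a version-independent path (/opt/kibana), so later commands and scripts don't have to hard-code the release number. A self-contained sketch of the same pattern, using a throwaway temp directory instead of /opt:

```shell
# demonstrate the versioned-dir + stable-symlink pattern in a temp dir
demo="$(mktemp -d)"
mkdir -p "$demo/kibana-4.0.2-linux-x64"
cd "$demo"
ln -sfn kibana-4.0.2-linux-x64 kibana   # stable name pointing at the release
readlink kibana                         # prints kibana-4.0.2-linux-x64
```

Upgrading later is then just repointing the symlink at the new release directory.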

Kibana Configurations

Now edit the Kibana configuration file and review the following settings.

root@ubuntu-15:/opt# vi /opt/kibana-4.0.2-linux-x64/config/kibana.yml

# Kibana is served by a back end server. This controls which port to use.
port: 5601

# The host to bind the server to.
host: ""

# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
elasticsearch_preserve_host: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
kibana_index: ".kibana"

# If your Elasticsearch is protected with basic auth, these are the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# kibana_elasticsearch_username: user
# kibana_elasticsearch_password: pass

# If your Elasticsearch requires client certificate and key
# kibana_elasticsearch_client_crt: /path/to/your/client.crt
# kibana_elasticsearch_client_key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# ca: /path/to/your/CA.pem
# The default application to load.
default_app_id: "discover"
# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
request_timeout: 300000
# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
shard_timeout: 0

Elasticsearch KOPF

Start the Kibana Service Manually

root@ubuntu-15:/opt# ./kibana/bin/kibana
{"@timestamp":"2015-06-05T06:41:43.998Z","level":"info","message":"Found kibana index","node_env":"production"}
{"@timestamp":"2015-06-05T06:41:44.014Z","level":"info","message":"Listening on","node_env":"production"}
{"@timestamp":"2015-06-05T06:41:58.801Z","level":"info","message":"GET / 304 - 7ms","node_env":"production","request":{"method":"GET","url":"/","headers":{"host":"","connection":"keep-alive","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8","user-agent":"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.89 Safari/537.36","accept-encoding":"gzip, deflate, sdch","accept-language":"en-US,en;q=0.8","if-none-match":"W/\"717-1535301999\"","if-modified-since":"Fri, 05 Jun 2015 06:05:05 GMT"},"remoteAddress":"","remotePort":64958},"response":{"statusCode":304,"responseTime":7,"contentLength":0}}

Browse Kibana in your web browser; it listens on port 5601 as configured above.


Kibana Interface


The centralized log server has now been set up successfully using Elasticsearch and Logstash, which gather and index logs from different sources into one central location, where we can visualize them and keep a manageable amount of log history through the Kibana dashboard. There are still a number of plugins available that we can choose and install as we like. The journey with the centralized log server has only just begun: there is still plenty to do in gathering and filtering specific logs and creating customized dashboards.
Let's enjoy the ELK stack and manage your valuable logs.

Kashif Siddique 3:00 am

