How to Set Up the ELK Stack to Centralize Logs on Ubuntu 16.04

The ELK stack consists of Elasticsearch, Logstash, and Kibana, and is used to centralize log data. ELK is mainly used for log analysis in IT environments. The ELK stack makes it easier and faster to search and analyze large volumes of data and to make real-time decisions.

In this tutorial we will use the following versions of the ELK stack components:

Elasticsearch 2.3.4
Logstash 2.3.4
Kibana 4.5.3
Oracle Java version 1.8.0_91
Filebeat version 1.2.3 (amd64)

Before you start installing the ELK stack, check the LSB release of your Ubuntu server.

# lsb_release -a
Ubuntu LSB release

1. Install Java

Both Elasticsearch and Logstash require Java, so install it first. We will install Oracle Java, since Elasticsearch recommends it; however, it also works with OpenJDK.

Add the Oracle Java PPA to apt:

# sudo add-apt-repository -y ppa:webupd8team/java
Add Oracle JAVA to apt repository

Update the apt database

# sudo apt-get update

Now install the latest stable version of Oracle Java 8 using the following command.

# sudo apt-get -y install oracle-java8-installer
Accept JAVA licence

Java 8 is now installed. Check the version of Java using the command java -version

Check JAVA Version

2. Install Elasticsearch

To install Elasticsearch, first import its public GPG key into the apt keyring:

# wget -qO - | sudo apt-key add -

Now create the Elasticsearch source list

# echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
Create elasticsearch source list

Update the apt database

# sudo apt-get update

Now install Elasticsearch using the following command

# sudo apt-get -y install elasticsearch
Install elasticsearch using apt get

Next, edit the Elasticsearch configuration file

# sudo vi /etc/elasticsearch/elasticsearch.yml

To restrict outside access to the Elasticsearch instance (port 9200), uncomment the line and set its value to localhost.
Edit elasticsearch network host

Now start Elasticsearch

# sudo service elasticsearch restart

To start Elasticsearch on boot up, execute the following command.

# sudo update-rc.d elasticsearch defaults 95 10

Test Elasticsearch using the following command.

# curl localhost:9200
Test Elasticsearch using CURL
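If Elasticsearch is up, the curl check returns a JSON document describing the node. As a quick sanity check you can pull the version number out of that response with standard shell tools; the response below is a trimmed, illustrative sample, not live output:

```shell
# Trimmed sample of the JSON that `curl localhost:9200` returns.
response='{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "2.3.4" },
  "tagline" : "You Know, for Search"
}'

# Extract the version number without extra tooling like jq.
version=$(printf '%s\n' "$response" | grep '"number"' | sed 's/.*"number" *: *"\([^"]*\)".*/\1/')
echo "Elasticsearch version: $version"
```

If the version printed matches the package you installed (2.3.4 here), the node is answering correctly.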

3. Install Logstash

Download the Logstash deb package and install it with dpkg. We have already imported the public key, as Logstash and Elasticsearch come from the same repository.

# wget
Download Logstash using WGET
# dpkg -i logstash_2.3.4-1_all.deb
Install Logstash using dpkg
# sudo update-rc.d logstash defaults 97 8

# sudo service logstash start

To check the status of Logstash, execute the following command in the terminal.

# sudo service logstash status
Start Logstash

You may find that Logstash is active but that you cannot stop or restart it properly using the service or systemctl command. In that case you have to configure the systemd Logstash unit yourself. First, back up the Logstash startup scripts inside /etc/init.d/ and /etc/systemd/system and remove them from there. Then install the "pleaserun" script. The prerequisite for installing this script is Ruby.

Install Ruby

# sudo apt install ruby
Install Ruby

Now install the pleaserun gem

# gem install pleaserun
Install Pleaserun

You are now ready to create the systemd unit file for Logstash. Use the following command to do this.

# pleaserun -p systemd -v default --install /opt/logstash/bin/logstash agent -f /etc/logstash/logstash.conf

Now that the systemd unit for Logstash has been created, start it and check its status.

# sudo systemctl start logstash

# sudo systemctl status logstash
Logstash Status

4. Configure Logstash

Let us now configure Logstash. The Logstash configuration files reside in /etc/logstash/conf.d and are written in a JSON-like format. The configuration consists of three parts: inputs, filters, and outputs. First, create a directory for storing the certificate and key for Logstash.

# mkdir -p /var/lib/logstash/private

# sudo chown logstash:logstash /var/lib/logstash/private

# sudo chmod go-rwx /var/lib/logstash/private
Change ownership of logstash dir

Now create a certificate and key for Logstash

# openssl req -config /etc/ssl/openssl.cnf -x509  -batch -nodes -newkey rsa:2048 -keyout /var/lib/logstash/private/logstash-forwarder.key -out /var/lib/logstash/private/logstash-forwarder.crt -subj /CN=

Change /CN= to your server's private IP address. To avoid a "TLS handshake error", add the following lines to /etc/ssl/openssl.cnf.

[v3_ca]
subjectAltName = IP:
Create SSL certificate for ELK server
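Instead of editing the global openssl.cnf, you can pass a one-off config file that carries the subjectAltName. The sketch below writes everything under /tmp for illustration (on the real server, use /var/lib/logstash/private), and 10.0.0.5 is a placeholder for your ELK server's private IP:

```shell
# Placeholder IP; substitute your ELK server's private address.
ELK_IP=10.0.0.5

# One-off OpenSSL config that adds the IP as a subjectAltName,
# which avoids the "cannot validate certificate ... IP SANs" error.
cat > /tmp/logstash-ssl.cnf <<EOF
[req]
distinguished_name = dn
x509_extensions = v3_ca
prompt = no
[dn]
CN = $ELK_IP
[v3_ca]
subjectAltName = IP:$ELK_IP
EOF

openssl req -config /tmp/logstash-ssl.cnf -x509 -batch -nodes -days 365 \
  -newkey rsa:2048 \
  -keyout /tmp/logstash-forwarder.key \
  -out /tmp/logstash-forwarder.crt

# Confirm the SAN made it into the certificate.
openssl x509 -in /tmp/logstash-forwarder.crt -noout -text | grep "IP Address"
```

The grep at the end should show the IP SAN; if it prints nothing, filebeat clients will fail TLS validation.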

Keep in mind that you have to copy this certificate to every client whose logs you want to send to the ELK server through filebeat.

Next, create the "filebeat" input configuration, named 02-beats-input.conf

# sudo vi /etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/var/lib/logstash/private/logstash-forwarder.crt"
    ssl_key => "/var/lib/logstash/private/logstash-forwarder.key"
  }
}
Filebeat input section

Now create a filter for syslog messages, named 10-syslog-filter.conf

# sudo vi /etc/logstash/conf.d/10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Filebeat Filter section
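The grok pattern splits each syslog line into timestamp, hostname, program, PID, and message. You can see roughly the same decomposition on a sample line with sed; the log line below is made up for illustration, and the sed patterns only approximate the grok pattern:

```shell
# A made-up syslog line in the format the grok pattern expects.
line='Aug  3 10:15:42 webserver sshd[1234]: Accepted password for root'

# Skip timestamp and hostname, then capture the program name
# (everything up to the first space or "[").
program=$(printf '%s\n' "$line" | sed 's/^[A-Za-z]* *[0-9]* [0-9:]* [^ ]* \([^ []*\).*/\1/')

# Capture the PID between the square brackets, if present.
pid=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9]*\)\]:.*/\1/p')

echo "program=$program pid=$pid"
```

For the sample line this yields the same syslog_program and syslog_pid fields that Logstash will attach to the event.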

Finally, create the Elasticsearch output configuration, named 30-elasticsearch-output.conf

# sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filebeat output section
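The index directive creates one index per beat per day: for filebeat events the %{[@metadata][beat]} field resolves to "filebeat" and %{+YYYY.MM.dd} is the event date in Joda format. For events timestamped today, you can preview the index name Logstash will write to with date:

```shell
# %{+YYYY.MM.dd} (Joda format) corresponds to %Y.%m.%d in strftime,
# so today's filebeat index looks like this.
index_name="filebeat-$(date +%Y.%m.%d)"
echo "$index_name"
```

Knowing this naming scheme is what lets the filebeat-* index pattern in Kibana match all daily indices at once.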

Test your Logstash configuration with the following command.

# sudo service logstash configtest

It will display Configuration OK if there are no syntax errors; otherwise, check the Logstash log files in /var/log/logstash

Logstash configuration test

To test Logstash, execute the following command from the terminal.

# cd /opt/logstash/bin && ./logstash -f /etc/logstash/conf.d/02-beats-input.conf

You will find that Logstash has started a pipeline and is processing the syslogs. Once you are sure that Logstash is processing them, combine 02-beats-input.conf, 10-syslog-filter.conf, and 30-elasticsearch-output.conf into a single Logstash conf file in the directory /etc/logstash/conf.d
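Since Logstash reads the files in /etc/logstash/conf.d in lexical order, combining them is plain concatenation; the numeric prefixes already guarantee input, then filter, then output. A sketch using stand-in files under /tmp (the one-line contents are placeholders, not the real configs):

```shell
# Stand-in copies; on the real server the sources live in /etc/logstash/conf.d.
mkdir -p /tmp/conf.d
echo 'input { beats { port => 5044 } }'  > /tmp/conf.d/02-beats-input.conf
echo 'filter { }'                        > /tmp/conf.d/10-syslog-filter.conf
echo 'output { elasticsearch { } }'      > /tmp/conf.d/30-elasticsearch-output.conf

# Concatenate in the same order Logstash would read them.
cat /tmp/conf.d/02-beats-input.conf \
    /tmp/conf.d/10-syslog-filter.conf \
    /tmp/conf.d/30-elasticsearch-output.conf > /tmp/conf.d/logstash.conf
```

After combining on the real server, remove the three source files so the sections are not loaded twice.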

Restart Logstash to reload the new configuration.

# sudo systemctl restart logstash

5. Install sample dashboard

Download the sample Kibana dashboards and Beats index patterns. We are not going to use these dashboards, but we will load them so that we can use the filebeat index pattern. Download the sample dashboards and unzip them.

# curl -L -O

# unzip
Download sample beat dashboard

Load the sample dashboards, visualizations, and Beats index patterns into Elasticsearch by running the load script inside the extracted directory.

# cd beats-dashboards-1.1.0
# ./

You will find the following index patterns in the Kibana dashboard's left sidebar. We will use only the filebeat index pattern.


Since we will use filebeat to forward logs to Elasticsearch, we will load a filebeat index template into Elasticsearch.

First, download the filebeat index template

# curl -O raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Now load the template with the following curl command.

# curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see a message like this:

"acknowledged" : true

xput JSON template using CURL
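Elasticsearch will reject the PUT if the template file is not valid JSON, so it is worth linting the file first. python3's built-in json.tool module works as a quick validator; the template below is a heavily trimmed stand-in for the real filebeat-index-template.json, just to show the check:

```shell
# A trimmed stand-in for filebeat-index-template.json.
cat > /tmp/filebeat-index-template.json <<'EOF'
{
  "template": "filebeat-*",
  "mappings": {
    "_default_": {
      "_all": { "norms": false }
    }
  }
}
EOF

# json.tool exits non-zero (and prints the parse error) if the JSON is malformed.
python3 -m json.tool < /tmp/filebeat-index-template.json > /dev/null && echo "template JSON OK"
```

Running this against the downloaded template before the curl -XPUT saves a round-trip when the download was truncated or corrupted.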

The ELK server is now ready to receive filebeat data, so let's configure filebeat on the client servers. For more information about loading Beats dashboards, check this link

6. Install filebeat in clients

Create the Beats source list on the clients whose logs you want to send to the ELK server. Then update the apt database and install filebeat using apt-get

# echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
# sudo apt-get update && sudo apt-get install filebeat
Install filebeat using apt-get

Start filebeat

# /etc/init.d/filebeat start
Start filebeat

Now edit the file /etc/filebeat/filebeat.yml. Modify the existing prospector to send syslog to Logstash. In the paths section, comment out the - /var/log/*.log entry and add a new entry for syslog, - /var/log/syslog.

Edit filebeat syslog path

Next, specify that the logs in this prospector are of type syslog.

Edit filebeat document type

Uncomment the logstash: output section and the hosts: ["SERVER_PRIVATE_IP:5044"] line, replacing the placeholder with the private IP address or hostname of your ELK server. Then uncomment the line that says certificate_authorities and set its value to /var/lib/logstash/private/logstash-forwarder.crt, the certificate we created on the ELK server earlier; you must copy that certificate to every client machine.

Restart filebeat and check its status.

# sudo /etc/init.d/filebeat restart
# sudo service filebeat status
Check filebeat status

To test filebeat, execute the following command from the terminal.

# filebeat -c /etc/filebeat/filebeat.yml -e -v
Test filebeat from terminal

Filebeat will now send the logs to Logstash for indexing. Enable filebeat to start on every boot.

# sudo update-rc.d filebeat defaults 95 10

Now open your favorite browser and point it to http://ELK-SERVER-IP:5601 or http://ELK-SERVER-DOMAIN-NAME:5601; you will find the syslogs when you click filebeat-* in the left sidebar.

This is our final filebeat configuration:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      document_type: syslog
  registry_file: /var/lib/filebeat/registry

  logstash:
    hosts: [""]
    bulk_max_size: 1024
      certificate_authorities: ["/var/lib/logstash/private/logstash-forwarder.crt"]

  files:
    rotateeverybytes: 10485760
Final filebeat configuration

7. Configure firewall

Add firewall rules to allow traffic to the following ports: 5601 (Kibana), 9200 (Elasticsearch), 80 (HTTP), and 5044 (Beats input).

The four iptables commands are:

# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5601 -j ACCEPT
# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5044 -j ACCEPT
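The four rules differ only in the port number, so they can be generated from a list. A dry-run sketch that writes the commands to a file instead of executing them (drop the echo and run as root to apply them for real):

```shell
# Ports used in this tutorial: Kibana (5601), Elasticsearch (9200),
# HTTP (80), and the Beats input (5044).
for port in 5601 9200 80 5044; do
  echo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport "$port" -j ACCEPT
done > /tmp/elk-fw-rules.txt

# Review the generated rules before applying them.
cat /tmp/elk-fw-rules.txt
```

Reviewing a generated rule file first is a cheap way to avoid locking yourself out with a typo.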

Save the rules so they persist across reboots. Stock Ubuntu 16.04 does not ship an iptables service; install the iptables-persistent package and save the rules with it.

# sudo apt-get install iptables-persistent

# sudo netfilter-persistent save

For UFW users:

# sudo ufw allow 5601/tcp

# sudo ufw allow 9200/tcp

# sudo ufw allow 80/tcp

# sudo ufw allow 5044/tcp

# sudo ufw reload

8. Install / Configure Kibana

Download the latest Kibana archive.

# cd /opt

# wget

# tar -xzf kibana-4.5.3-linux-x64.tar.gz
# mv kibana-4.5.3-linux-x64 kibana
# cd /opt/kibana/config
# vi kibana.yml

Now change these parameters in /opt/kibana/config/kibana.yml. Set to the IP address Kibana should listen on.

server.port: 5601 ""
elasticsearch.url: "http://localhost:9200"
Edit Kibana configuration file

For testing purposes you can run Kibana using the following commands.

# cd /opt/kibana/bin

# ./kibana &
# netstat -pltn
Start kibana from terminal

Now we will create a systemd unit for Kibana using "pleaserun", in the same way we did for Logstash.

# pleaserun -p systemd -v default --install /opt/kibana/bin/kibana -p 5601 -H -e http://localhost:9200

-p specifies the port number that Kibana will bind to
-H specifies the host IP address where Kibana will listen
-e specifies the Elasticsearch URL

Start the kibana

# systemctl start kibana

Check the status of kibana

# systemctl status kibana

Check whether port 5601 is now occupied by Kibana

# netstat -pltn| grep '5601'
Create systemd daemon script for kibana
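Rather than polling netstat by hand, a small helper can wait until a service starts listening, using bash's built-in /dev/tcp pseudo-device. The helper name, port, and timeout below are illustrative, not part of Kibana:

```shell
# Return 0 as soon as something listens on localhost:$1,
# or 1 after $2 seconds (default 30) have elapsed.
wait_for_port() {
  local port=$1 timeout=${2:-30} i=0
  while [ "$i" -lt "$timeout" ]; do
    # bash opens a TCP connection via /dev/tcp; the subshell's
    # file descriptor is closed automatically when it exits.
    if (exec 3<>"/dev/tcp/") 2>/dev/null; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Example: wait up to 30 seconds for Kibana to come up.
# wait_for_port 5601 30 && echo "Kibana is listening"
```

This is handy in provisioning scripts, where Kibana takes a few seconds after systemctl start before port 5601 actually accepts connections.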

9. Install/Configure NGINX

Since Kibana is configured to listen on localhost, we need to set up a reverse proxy to allow external access to it. We will use NGINX as the reverse proxy. Install NGINX, apache2-utils, and PHP-FPM using the following command.

# sudo apt-get install nginx apache2-utils php-fpm
Install NGINX and Apache utils

Edit the php-fpm pool configuration file www.conf inside /etc/php/7.0/fpm/pool.d

listen.allowed_clients =,
Configure php fpm

Restart php-fpm

# sudo service php7.0-fpm restart

Using htpasswd, create an admin user named "kibana" to access the Kibana web interface.

# sudo htpasswd -c /etc/nginx/htpasswd.users kibana
Create kibana http user

Enter a password at the prompt. Remember this password; we will use it to access the Kibana web interface.
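If you want to script this step instead of typing at a prompt, note that an htpasswd entry is just user:hash, and openssl passwd -apr1 produces the Apache MD5 hash that NGINX understands. The username, password, and /tmp path below are examples only:

```shell
# Example credentials; use your own, and write to /etc/nginx/htpasswd.users
# on the real server.
USER=kibana
PASS='s3cret-example'

# openssl passwd -apr1 emits an Apache-style MD5 hash ($apr1$...),
# which nginx's auth_basic module accepts.
printf '%s:%s\n' "$USER" "$(openssl passwd -apr1 "$PASS")" > /tmp/htpasswd.users

# Confirm the entry exists.
grep "^$USER:" /tmp/htpasswd.users
```

This avoids a dependency on apache2-utils on machines where you only need to generate the credential file.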
Create a certificate for NGINX

# sudo openssl req -x509 -batch -nodes -days 365 -newkey rsa:2048  -out /etc/ssl/certs/nginx.crt -keyout /etc/ssl/private/nginx.key -subj /
Create NGINX SSL certificate and key

Edit the NGINX default server block.

# sudo vi /etc/nginx/sites-available/default

Delete the file's contents, and paste the following configuration into the file.

server_tokens off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

server {
    listen 443 ssl;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;
    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        add_header Strict-Transport-Security "max-age=31536000;";
    }
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}
NGINX configuration file

We are not using the server_name directive because the domain name is already configured in /etc/hosts and /etc/hostname, and we have edited the NGINX default server block (/etc/nginx/sites-available/default). Therefore, once NGINX starts, the site will be available in the browser.

Save and exit. From now on, NGINX will direct the server's web traffic to the Kibana application listening on port 5601.
Now restart NGINX to put our changes into effect:

# sudo service nginx restart

Now you can access Kibana by visiting the FQDN or the public IP address of your ELK server, i.e. http://elk_server_public_ip/. Enter the "kibana" credentials that you created earlier; you will be redirected to the Kibana welcome page, which will ask you to configure an index pattern.

Login to Kibana dashboard

Click filebeat-* in the top left sidebar and you will see the logs from the clients flowing into the dashboard.

View Kibana Dashboard

You can also check the status of the ELK server.

ELK Server Status


That's all for the ELK server. Install filebeat on any number of client systems and ship their logs to the ELK server for analysis. To make unstructured log data more useful, parse it with grok so that it becomes structured. There are also a few awesome plug-ins available for use with Kibana to visualize the logs in a systematic way.
