In this post I will install the ELK stack, that is Elasticsearch 5.3.0, Logstash 5.4.3 and Kibana 5.4.3, on my macOS machine. We will also configure the whole stack together, using Filebeat 5.4.3 to ship the logs, so that they can be visualized in a single place.
What is ELK stack?
The ELK stack is a combination of three services: Elasticsearch, Logstash and Kibana. Elasticsearch is an open source, distributed, RESTful full-text search engine. In the ELK stack it is used to store logs so that they can be easily searched and retrieved. Logstash is an open source tool for collecting, parsing, and storing logs for future use. Kibana is a web interface that can be used to search and view the logs that Logstash has indexed.
ELK Stack (Photo Courtesy - digitalocean.com)
Why ELK stack?
Centralised logging is useful when you have a critical website running on multiple servers. Manually searching logs on different servers takes a lot of time when debugging a problem. The ELK stack lets you search through all server logs in one place, which makes debugging easier and faster. With the ELK stack you can also identify issues that span multiple servers by correlating their logs during a specific time frame.
Usage of tools
- Elasticsearch: Stores all the logs.
- Logstash: Processes incoming logs from different sources. We will use log files here.
- Kibana: Web interface for searching and visualizing logs, which will be proxied through the Nginx web server.
- Filebeat: Log shipping agent installed on the source servers to send logs to Logstash. For simplicity I will be installing Filebeat on my local machine.
Prerequisites:
- Any server machine running Windows, macOS or Ubuntu. I am using my local macOS machine.
- At least 2 GB RAM and 2 CPUs.
- Java 8 or later.
Downloads:
As I am installing the ELK stack on my macOS machine, I am downloading the compressed binaries. On Ubuntu you can use the Debian packages instead, and on macOS you may also install via Homebrew. Please download the binaries from the locations below; you need the one specific to your OS.
- Elasticsearch 5.3.0 - https://www.elastic.co/downloads/elasticsearch
- Kibana 5.4.3 - https://www.elastic.co/downloads/kibana
- Logstash 5.4.3 - https://www.elastic.co/downloads/logstash
- Filebeat 5.4.3 - https://www.elastic.co/downloads/beats/filebeat
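For example, on macOS the tarballs can be fetched directly with curl. The URLs below follow Elastic's artifact naming scheme for these versions, so treat them as a sketch and adjust if you pick different versions:
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.tar.gz
curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.4.3-darwin-x86_64.tar.gz
curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.4.3.tar.gz
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.3-darwin-x86_64.tar.gz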
After downloading, extract each package and move it to /opt/. In addition, I have also changed the owner of the folders to the current user, as sketched below.
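A minimal sketch for Elasticsearch, assuming the tarball names from the downloads above (repeat the same steps for Kibana, Logstash and Filebeat):
tar -xzf elasticsearch-5.3.0.tar.gz
sudo mv elasticsearch-5.3.0 /opt/elasticsearch
sudo chown -R $(whoami) /opt/elasticsearch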
Configure Elasticsearch
After installation, make the following change to the configuration file /opt/elasticsearch/config/elasticsearch.yml. Find the line that specifies network.host, uncomment it, and replace its value with localhost so it looks like this:
network.host: localhost
You can now start Elasticsearch from /opt/elasticsearch/bin.
./elasticsearch
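To verify that Elasticsearch is up, query it on its default port, 9200:
curl http://localhost:9200
It should respond with a small JSON document containing the node name and version.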
Configure Kibana
After installation, make the following change to the configuration file /opt/kibana/config/kibana.yml. Find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with localhost. This setting makes Kibana accessible from localhost only, which is fine because we will use an Nginx reverse proxy to allow external access.
server.host: "localhost"
You can start Kibana from /opt/kibana/bin.
./kibana
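Kibana listens on port 5601 by default; once it has started, you can check that it responds:
curl -I http://localhost:5601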
Install Nginx
Since we configured Kibana to listen on localhost, we will set up a reverse proxy via Nginx to allow external access to it. You could also do the same with Apache httpd. Here I will be using Homebrew to install Nginx.
brew install nginx
After installation, run the following command to start the Nginx server:
sudo nginx
Then open http://localhost:8080 in your browser to check the installation (Homebrew's default nginx.conf listens on port 8080). The default path of the configuration file nginx.conf on macOS is /usr/local/etc/nginx/nginx.conf.
You may replace the server block of the configuration file with the below.
server {
    listen 80;
    server_name localhost;

    auth_basic "Restricted Access";
    auth_basic_user_file htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
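Note that the auth_basic_user_file directive references an htpasswd.users file (resolved relative to the Nginx configuration directory) which does not exist yet. One way to create it, using a sample user kibanaadmin (the name is just an example), is with openssl, which will prompt you for a password:
printf "kibanaadmin:$(openssl passwd -apr1)\n" | sudo tee /usr/local/etc/nginx/htpasswd.users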
Now reload Nginx with the following command and hit http://localhost to open the Kibana UI.
sudo nginx -s reload
Configure Logstash
After installation, create a new file logstash.conf in /opt/logstash/config and add the following configuration.
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
You may create individual files for each of these three sections (input, filter, output). At runtime, all the configuration files are merged into a single configuration, as sketched below.
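A hypothetical layout (the file names are only examples; Logstash reads every file in the directory passed to -f):
mkdir /opt/logstash/config/conf.d
# 02-beats-input.conf           -> the input { ... } section
# 10-syslog-filter.conf         -> the filter { ... } section
# 30-elasticsearch-output.conf  -> the output { ... } section
./logstash -f ../config/conf.d/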
You can go to /opt/logstash/bin and run the following command to start the Logstash process.
./logstash -f ../config/logstash.conf
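Logstash 5.x can also validate a configuration without starting the pipeline, which is useful before going live:
./logstash -f ../config/logstash.conf --config.test_and_exit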
Load Kibana Dashboard
Here, we will load the Filebeat index pattern into the Kibana dashboard. For this, download the sample Beats dashboards package into your home directory.
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.2.2.zip
Extract the downloaded package (installing the unzip utility first if your system lacks it) and run the loader script:
unzip beats-dashboards-*.zip
cd beats-dashboards-*
./load.sh
It will load four index patterns, which are as follows:
- packetbeat-*
- topbeat-*
- filebeat-*
- winlogbeat-*
When we start using Kibana,
we will select the Filebeat index pattern as our default.
Load Filebeat index template in Elasticsearch
Since we will be using Filebeat to ship logs to Elasticsearch, we need to load the Filebeat index template. For this, download the Filebeat index template into your home directory.
cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
Then load this template:
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
If everything went fine, you will see "acknowledged" : true in the output.
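You can also confirm that the template was stored by fetching it back from Elasticsearch:
curl -XGET 'http://localhost:9200/_template/filebeat?pretty'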
The ELK server is now all set up. Next, we need to set up Filebeat on the source servers to send logs to the ELK server. I will use my local machine for this, but in practice you would install the Filebeat agent on every server from which you want to pull logs for analysis.
Configure Filebeat
After installation, uncomment and adjust the following lines in the configuration file /opt/filebeat/filebeat.yml so that Filebeat reads the system logs and ships them to Logstash:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
output.logstash:
  hosts: ["localhost:5044"]
Change the owner of filebeat.yml to root; Filebeat refuses to load a config file that is not owned by the user it runs as (root in our case, since we start it with sudo).
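On macOS this can be done with chown (the path assumes the /opt layout used earlier in this post):
sudo chown root /opt/filebeat/filebeat.yml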
From /opt/filebeat, start Filebeat using the following command.
sudo ./filebeat -e -c filebeat.yml
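If Filebeat refuses to start, you can sanity-check the configuration first; Filebeat 5.x accepts a -configtest flag for this:
sudo ./filebeat -configtest -c filebeat.yml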
To test our Filebeat installation, run this command on the ELK server:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
Since Filebeat on the client server is sending logs to our ELK server, you should get log data in the output. If your output shows 0 total hits, something is wrong with your configuration; check and correct it, then continue to the next step.
Setup Kibana dashboard
Browse to your ELK server's address in your browser (http://localhost if you followed along locally). You will see the Kibana dashboard, prompting you to select a default index pattern. Go ahead and select filebeat-* from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the Filebeat index as the default.
Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram of log events, along with the log messages themselves. Now you have all the logs in one place. Congrats, you have successfully set up the ELK 5 stack!
Apart from Filebeat, there are other Beats products that ship different kinds of data (for example, Packetbeat for network traffic). We can configure them to get a real-time view of the infrastructure.