MAC OS X, Linux, Windows and other IT Tips and Tricks

29 Aug 17 Installing Filebeat, Logstash, ElasticSearch and Kibana in Ubuntu 14.04

PREPARATIONS

#Ref: https://www.elastic.co/guide/en/logstash/current/installing-logstash.html
First install Java 8 in Ubuntu 14.04

# Ref: https://www.liquidweb.com/kb/how-to-install-oracle-java-8-on-ubuntu-14-04-lts/
apt-get install python-software-properties software-properties-common
apt-add-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-installer
java -version

Result:
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Facilitate updating of all packages via APT repositories

apt-get install apt-transport-https
Save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
apt-get update

FILEBEAT

Installing filebeat

Filebeat reads lines from the defined logs, formats them properly and forwards them to Logstash while maintaining a non-clogging pipeline stream.
Ref: https://github.com/elastic/beats/tree/master/filebeat
Ref: https://www.elastic.co/guide/en/beats/filebeat/5.5/filebeat-getting-started.html
Ref: https://www.elastic.co/products/beats/filebeat

apt-get install filebeat
mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.orig
touch /etc/filebeat/filebeat.yml
mcedit /etc/filebeat/filebeat.yml

(content)
————————

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log
output.logstash:
  hosts: ["localhost:5044"]

————————
service filebeat restart

LOGSTASH

Download logstash debian install package and configure it

# Ref: https://www.elastic.co/downloads/logstash
apt-get install logstash

# Result:
.......
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Preparing Logstash

mcedit /etc/logstash/startup.options
(add the following line at the beginning)
LS_CONFIGS_DIR=/etc/logstash/conf.d/

(Then adjust the following line as follows)
from:
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
to:
LS_OPTS="--path.settings ${LS_SETTINGS_DIR} --path.config ${LS_CONFIGS_DIR}"

Start/Stop/Restart logstash
service logstash {start|stop|restart}

Testing logstash

cd /etc/logstash/ ; /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Type: test 1
and press ENTER.

(Logstash adds a timestamp and host information to the message. Exit Logstash by pressing CTRL-D in the shell where it is running.)

Results:

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console
11:22:59.822 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
11:22:59.847 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
2017-08-23T09:22:59.878Z h270746.stratoserver.net test 1
11:22:59.946 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
11:23:02.861 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

The errors and warnings are OK for now. The significant result line above is:
2017-08-23T09:22:59.878Z h270746.stratoserver.net test 1
which adds a timestamp and server name to the input string (test 1)

Configuring logstash
# Note: this test configuration takes input from Filebeat and writes it to a log file which can be watched with tail -f
mcedit /etc/logstash/conf.d/apache2.conf
(content)
input {
  beats {
    port => 5044
    type => "apache"
  }
}
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  file {
    path => "/var/log/logstash_output.log"
  }
}
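As a rough illustration of what the %{COMBINEDAPACHELOG} grok pattern does, the sketch below pulls two of its fields (clientip and the request string) out of a made-up combined-format line with plain shell tools; grok itself extracts many more fields:

```shell
#!/bin/bash
# Illustration only: extract the fields grok calls "clientip" and the
# quoted request from a combined-format Apache line (made-up sample data).
line='203.0.113.9 - - [23/Aug/2017:11:22:59 +0200] "GET /index.html HTTP/1.1" 200 1043 "-" "Mozilla/5.0"'

clientip=$(echo "$line" | awk '{print $1}')                    # grok field "clientip"
request=$(echo "$line" | sed -n 's/.*"\(GET [^"]*\)".*/\1/p')  # grok fields verb/request/httpversion

echo "clientip=$clientip"
echo "request=$request"
```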

In order to have the output sent to Elasticsearch instead, use this output configuration:
———————————-
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
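The index line above builds a per-day index name from the beat name and the event date. As a quick sketch (the beat name "filebeat" is just an example), the resulting name can be previewed with date(1):

```shell
#!/bin/bash
# Sketch: "%{[@metadata][beat]}-%{+YYYY.MM.dd}" resolves to something like
# "filebeat-2017.08.29" (beat name plus event date). Reproduce with date(1):
beat="filebeat"
index="${beat}-$(date +%Y.%m.%d)"
echo "$index"
```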

Securing Filebeat => Logstash with SSL

Ref: https://www.elastic.co/guide/en/beats/filebeat/current/configuring-ssl-logstash.html#configuring-ssl-logstash
Note: Typing by hand below is shown in bold.

Prepare the certificates directories:

mkdir -p /etc/logstash/certs/Logstash/ /etc/logstash/certs/Beats/
Create client certificates for FileBeat:
/usr/share/elasticsearch/bin/x-pack/certgen

........
Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/certificate-bundle.zip]: /etc/logstash/certs/Beats/certificate-bundle_Beats.zip
Enter instance name: Beats
Enter name for directories and files : Beats
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Beats/certificate-bundle_Beats.zip

Create client certificates for Logstash:
/usr/share/elasticsearch/bin/x-pack/certgen

........
Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/certificate-bundle.zip]: /etc/logstash/certs/Logstash/certificate-bundle_Logstash.zip
Enter instance name: Logstash
Enter name for directories and files : Logstash
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Logstash/certificate-bundle_Logstash.zip

This file should be properly secured as it contains the private keys for all
instances and the certificate authority.

After unzipping the file, there will be a directory for each instance containing
the certificate and private key. Copy the certificate, key, and CA certificate
to the configuration directory of the Elastic product that they will be used for
and follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

Extract certificates:
unzip /etc/logstash/certs/Beats/certificate-bundle_Beats.zip -d /etc/logstash/certs/Beats/
unzip /etc/logstash/certs/Logstash/certificate-bundle_Logstash.zip -d /etc/logstash/certs/Logstash/

Convert the Logstash key Logstash.key from PKCS#1 to PKCS#8 format:
Reason: the following error message appeared in the logstash log when the key was in PKCS#1 format:
[ERROR][logstash.inputs.beats ] Looks like you either have an invalid key or your private key was not in PKCS8 format. {:exception=>java.lang.IllegalArgumentException: File does not contain valid private key: /etc/logstash/certs/Logstash/Logstash/Logstash.key}

See: https://github.com/spujadas/elk-docker/issues/112

Command:
openssl pkcs8 -in /etc/logstash/certs/Logstash/Logstash/Logstash.key -topk8 -nocrypt -out /etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8
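A quick way to verify the conversion is to look at the PEM header: a PKCS#1 key starts with "BEGIN RSA PRIVATE KEY", a PKCS#8 key with "BEGIN PRIVATE KEY". Here is a self-contained sketch using a throwaway key (the real key stays under /etc/logstash/certs/):

```shell
#!/bin/bash
# Throwaway demo of the same PKCS#1 -> PKCS#8 conversion; temporary paths only.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/demo.key" 2048 2>/dev/null
openssl pkcs8 -in "$tmp/demo.key" -topk8 -nocrypt -out "$tmp/demo.key.PKCS8"
# A PKCS#8 key announces itself in the first PEM line:
header=$(head -1 "$tmp/demo.key.PKCS8")
echo "$header"
rm -rf "$tmp"
```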

Configure Beats for SSL

Content of /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log
output.logstash:
  hosts: ["localhost:5044"]
  ssl.certificate_authorities: ["/etc/logstash/certs/Logstash/ca/ca.crt"]
  ssl.certificate: "/etc/logstash/certs/Beats/Beats/Beats.crt"
  ssl.key: "/etc/logstash/certs/Beats/Beats/Beats.key"

Content of /etc/logstash/conf.d/apache.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/Logstash/ca/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/Logstash/Logstash/Logstash.crt"
    ssl_key => "/etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8"
    ssl_verify_mode => "force_peer"
  }
}

Restart both Logstash and Filebeat
service logstash restart
service filebeat restart

NOTE: I'm still having problems with the SSL connection from Filebeat to Logstash; Logstash logs this error in /var/log/logstash/logstash-plain.log:
TLS internal error.
The following URL discusses similar problems, but for lack of time I haven't figured it out yet.
https://discuss.elastic.co/t/mutual-tls-filebeat-to-logstash-fails-with-remote-error-tls-internal-error/85271/3

X-Pack for Logstash

INSTALL X-Pack for logstash

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.
X-Pack also provides a monitoring UI for Logstash.

/usr/share/logstash/bin/logstash-plugin install x-pack

Result:

Downloading file: https://artifacts.elastic.co/downloads/logstash-plugins/x-pack/x-pack-5.5.2.zip
Downloading [=============================================================] 100%
Installing file: /tmp/studtmp-bc1c884de6d90f1aaa462364e5895b6b08b050f0b64587b4f5e0a8ec5300/x-pack-5.5.2.zip
Install successful

Configuring X-Pack in Logstash:

The default settings created during the installation work well for most cases. For more information see:
https://www.elastic.co/guide/en/logstash/5.5/settings-xpack.html

To prevent generation of monitoring error messages in logstash.log, edit /etc/logstash/logstash.yml and add the following line at the end:
(Ref: https://discuss.elastic.co/t/logstash-breaks-when-disabling-certain-x-pack-features/89511)

xpack.monitoring.enabled: false

ElasticSearch

Installation:
apt-get install elasticsearch

Start/Stop/Restart Elastic search:
/etc/init.d/elasticsearch {start|stop|restart}

To check if elasticsearch has been started:
ps aux | grep $(cat /var/run/elasticsearch/elasticsearch.pid)

Example of result(truncated):
elastic+ 10978 3.2 55.2 4622152 2319168 pts/3 Sl 15:44 0:10 /usr/lib/jvm/java-8-oracle/bin/java ........

Then check the Elasticsearch log file:
tail -f /var/log/elasticsearch/elasticsearch.log

NOTE 1:
If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
and the result of the following command is empty,

grep vm.max_map_count /etc/sysctl.conf

Solution:
Raise the max virtual memory areas vm.max_map_count to 262144 as follows:
Add the following line in the file /etc/sysctl.conf

vm.max_map_count=262144

And run the command:
sysctl -w vm.max_map_count=262144
OR
echo 262144 > /proc/sys/vm/max_map_count
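Both steps can be wrapped in a small idempotent helper; this sketch (the function name is mine) appends the line only when it is missing, and takes the file name as a parameter so it can be tried on a scratch file first:

```shell
#!/bin/bash
# Idempotent sketch: add the vm.max_map_count line to a sysctl config file
# only if it is not already there, so re-running it is harmless.
ensure_max_map_count() {
    local conf=$1
    grep -q '^vm.max_map_count' "$conf" 2>/dev/null \
        || echo 'vm.max_map_count=262144' >> "$conf"
}

# Dry run on a scratch file; for real use call it on /etc/sysctl.conf
# (as root) and then run: sysctl -p
scratch=$(mktemp)
ensure_max_map_count "$scratch"
ensure_max_map_count "$scratch"      # second call must not duplicate the line
lines=$(grep -c '^vm.max_map_count' "$scratch")
cat "$scratch"
rm -f "$scratch"
```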

ALSO make sure the Elasticsearch JVM options file (/etc/elasticsearch/jvm.options) has the following entries:
-Xms2g
-Xmx2g

IMPORTANT:
If the following commands fail, it may be because some virtual servers do not allow such changes to the kernel:
e.g.
sysctl -w vm.max_map_count=262144
sysctl: permission denied on key 'vm.max_map_count'
echo 262144 > /proc/sys/vm/max_map_count
-bash: /proc/sys/vm/max_map_count: Permission denied

Elasticsearch should be able to run anyway, but might be limited in performance and may have other issues because of these limitations.
There are no known remedies for this on Strato VM servers.

NOTE 2:
If you see the line:
[WARN ][i.n.u.i.MacAddressUtil ] Failed to find a usable hardware address from the network interfaces; using random bytes: ……..

Solution:
No need to worry, the accuracy of the MAC address is not so important in this installation.

NOTE 3:
If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
If this problem occurs, Elasticsearch will start but will not be initialised properly and will most likely not function properly.

Solution:
If Elasticsearch is accessed only in a protected environment, disabling the installation of system call filters should be no problem.
Edit the file /etc/elasticsearch/elasticsearch.yml and add the following line:
bootstrap.system_call_filter: false
Restart elasticsearch:
service elasticsearch restart

————————————————————————

X-Pack for elasticsearch

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.

Installation:
/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack

Results:
-> Downloading x-pack from elastic
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
* javax.net.ssl.SSLPermission setHostnameVerifier
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin forks a native controller @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
-> Installed x-pack

KIBANA

Install kibana package
apt install kibana
Install X-Pack for Kibana
X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.
/usr/share/kibana/bin/kibana-plugin install x-pack
Change built-in users password
Ref: https://www.elastic.co/guide/en/x-pack/5.5/setting-up-authentication.html#reset-built-in-user-passwords
Change the passwords:

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "elasticpassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "kibanapassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
"password": "logstashpassword"
}
'

Update the Kibana server with the new password /etc/kibana/kibana.yml
elasticsearch.password: kibanapassword
Update the Logstash configuration with the new password /etc/logstash/logstash.yml
xpack.monitoring.elasticsearch.password: logstashpassword
Disable Default Password Functionality /etc/elasticsearch/elasticsearch.yml
xpack.security.authc.accept_default_password: false

Start/Stop/Restart kibana
service kibana {start|stop|restart}

30 Oct 16 Monitoring Linux server with iPhone/iPad

Introduction:

Although Apple doesn't have too many apps that support Linux admins, here is one that came back on the market on 26 Oct. 2016 with a new look, new features and bug fixes: the iStat3 Server for Linux and iStat3 for iOS, made by Bjango PTY Ltd. This app displays the following characteristics of a Linux server live:
– Uptime
– CPU usage
– System Load
– Disk space and disk activity
– Network traffic load
– Processes list(top)
– Sensors: Memory and CPU temperature

Read more about it on https://bjango.com/ios/istat/

In order for the iOS app to get this information from the Linux servers, it needs a connection to its counterpart, the iStat3 server, an agent running on each targeted Linux server. The agent is a daemon which runs in the background and listens on port 5109 by default (configurable). Since there are so many different Linux distributions, the agent needs to be compiled on each targeted Linux server. I wrote this article to facilitate that process.

Note: I only mention the steps for Debian 6/7/8 and Ubuntu 12.x/14.x/16.x

Steps:

Installing the needed packages:
apt-get update && apt-get install build-essential g++ autoconf libxml2-dev libssl-dev libsqlite3-dev fancontrol libsensors4:amd64 libsensors4-dev lm-sensors
Download the software:
wget https://download.bjango.com/istatserverlinux -O istatserver-linux_3.0.tar.gz
or if changed address or not available
wget http://public.itmatrix.eu/istatserver-linux_3.0.tar.gz
Compiling and installing the software:
tar fvxz istatserver-linux_3.0.tar.gz
cd istatserver-3.01
./configure && make && make install

Configuring the istatserver:
Here you mostly need to modify the 5 digit server_code.
vim /usr/local/etc/istatserver/istatserver.conf

Extra preparations for Debian 6/7 or Ubuntu 12.x/14.x which are using the SysV init

Getting the start script from my repos:
wget http://public.itmatrix.eu/istatserver -O /etc/init.d/istatserver
chmod 755 /etc/init.d/istatserver
update-rc.d istatserver defaults
service istatserver start ; sleep 1 ; ps aux | grep -v grep | grep istat

Result should be:
istat 17891 0.0 0.2 42108 2332 ? R 18:39 0:00 /usr/local/bin/istatserver -d

Extra preparations for Debian 8 or Ubuntu 16.x which are using the Systemd init

vim /etc/systemd/system/istatserver.service
istatserver.service file content:
[Unit]
Description=istatserver server daemon
Documentation=man:istatserver(8)
After=network.target
#
[Service]
Type=simple
EnvironmentFile=/etc/default/istatserver
ExecStart=/usr/local/bin/istatserver $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=control-group
Restart=on-failure
RestartSec=30s
#
[Install]
WantedBy=multi-user.target

touch /etc/default/istatserver
systemctl daemon-reload
systemctl enable istatserver.service
service istatserver start ; sleep 1 ; ps aux | grep -v grep | grep istat

Result should be:
istat 1507 43.0 0.0 118844 7120 ? Ssl 19:02 0:00 /usr/local/bin/istatserver

General Note:

Make sure your firewall allows in port 5109 (or whatever port you are using).
I’m using ufw, so for example the command would be:
ufw allow from any to any port 5109
Result:
Rule added
Rule added (v6)

UPGRADING from ISTATD to ISTATSERVER:

In case you already had the older version of this agent (istatd) running, here are the steps to stop using it:
ps aux | grep istat
killall istatd ; sleep 2 ; killall istatd
update-rc.d -f istatd remove

Getting the iPad/iPhone APP:

Concerning the iOS app, you need to buy it on the Apple App Store; its name is iStat 3, from Bjango PTY Ltd.
This app lets you monitor multiple Linux servers with very pretty graphs.
If you have a Mac you can also buy the similar app called iStat from the Mac App Store. It displays the same things as on the iPad and adds a few small extra features.

15 Apr 16 Installing Webmin in Debian 8(Jessie)

These instructions are adapted from the site:
http://www.christophe-casalegno.com/2015/07/14/how-to-install-webmin-on-debian-8/

To install Webmin on Debian 8, just follow these instructions:
cd /root
wget http://www.webmin.com/jcameron-key.asc
apt-key add jcameron-key.asc
echo "deb http://download.webmin.com/download/repository sarge contrib" >> /etc/apt/sources.list
echo "deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib" >> /etc/apt/sources.list
apt-get update
apt-get -y install webmin

If it's too long for you, you can also just do this:
wget http://www.christophe-casalegno.com/tools/install_webmin.sh
chmod +x install_webmin.sh
./install_webmin.sh

13 Feb 16 Verifying the validity of an NFS mount

Introduction:
Every now and then, when an NFS mount loses its connection to the server or something else goes wrong with the NFS connection, running 'ls mountpoint' hangs the terminal until I press CTRL-C. So I put together a script, run as a cron job, that tells me when an NFS mount has gone wrong. I had to resort to unorthodox tricks, since a simple 'stat mountpoint &' inside the script would also hang the script. Instead I use 'at now', which runs the command independently of the script that initiated it. Here is an example of such a script.

#!/bin/bash
# Name: MOUNT_CHECK.sh
# Purpose: Checks the health of the NFS mountpoint given as argument.
# It kills the at/stat process and exits with exit code 2 if the timeout has expired.
#-------------------------------------------------------------------
startdelay=3
timeout=10
count=0
# processes to be excluded in the 'ps | grep' test
excludes="openvpn|istatd|rpc.statd"
if [ $# -ne 1 ]; then
    echo "ERROR: Needs mountpoint as argument"
    echo "Usage: MOUNT_CHECK.sh MountPoint"
    exit 2
fi
#
echo "/usr/bin/stat $1" | /usr/bin/at now
sleep $startdelay
while (ps ax | egrep -v "grep|$excludes" | grep -q stat); do
    let count=${count}+1
    sleep 1
    if [ $count -ge $timeout ]; then
        kill $(pidof stat)
        #echo "Mountpoint $1 : FAILED to connect before timeout of $timeout sec."
        exit 2
    fi
done
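A cron entry (in /etc/crontab syntax) to run the check every five minutes and mail on failure might look like this; the script path, mountpoint and mail address are placeholders:

```
*/5 * * * * root /usr/local/bin/MOUNT_CHECK.sh /mnt/nfsdata || echo "NFS mount /mnt/nfsdata not healthy" | mail -s "NFS alert" admin@example.com
```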

18 Jan 16 Reporting SMART status of RAID disks

Reference site: http://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers/

Note: Although hardware RAID controllers are made by various manufacturers, here I use Adaptec as an example:

Install the software:
apt-get install smartmontools
Find out which RAID controller you have:
lspci | grep 'RAID'
Result: 01:00.0 RAID bus controller: Adaptec Device 028b (rev 01)
# Check out if the controller is supported and which devices it sees:
smartctl --scan
Example output:
/dev/sda -d scsi [SCSI]
/dev/sdb -d scsi [SCSI]

Check the SMART overall-health test of the drives :
smartctl -d scsi -H /dev/sda | grep 'SMART'
smartctl -d scsi -H /dev/sdb | grep 'SMART'

Result example:
/dev/sda: SMART Health Status: OK
/dev/sdb: SMART Health Status: OK

Checking Individual drives behind the RAID controller
The individual drives behind the controller are usually named sequentially according to the order of the simulated drives:
eg.
/dev/sda (2 drives behind controller): /dev/sg1 /dev/sg2
/dev/sdb (2 drives behind controller): /dev/sg3 /dev/sg4

Commands for doing those checks:
smartctl -d scsi --all -T permissive /dev/sg1
smartctl -d scsi --all -T permissive /dev/sg2
smartctl -d scsi --all -T permissive /dev/sg3
smartctl -d scsi --all -T permissive /dev/sg4

Create a script that will be run by cron regularly and send the results by email:
Script:
#!/bin/bash
# Name: SMART-report.sh
# Purpose: Sends report of SMART status of RAID hard disks
# Syntax: SMART-report.sh
#--------------------------------------------------------
(. ~/.bashrc
echo -n "/dev/sda: "
smartctl -d scsi -H /dev/sda | grep 'SMART'
echo -n "/dev/sdb: "
smartctl -d scsi -H /dev/sdb | grep SMART
echo "Individual drives behind the RAID controller";echo
echo "============== /dev/sda ===> /dev/sg1 ============="
smartctl -d scsi --all -T permissive /dev/sg1 | grep 'SMART';echo
echo "============== /dev/sda ===> /dev/sg2 ============="
smartctl -d scsi --all -T permissive /dev/sg2 | grep 'SMART';echo
echo "============== /dev/sdb ===> /dev/sg3 ============="
smartctl -d scsi --all -T permissive /dev/sg3 | grep 'SMART';echo
echo "============== /dev/sdb ===> /dev/sg4 ============="
smartctl -d scsi --all -T permissive /dev/sg4 | grep 'SMART'
) | mail -s "SMART Result of $(hostname -f)" user@my-email.com
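To have the report mailed regularly, an /etc/crontab entry along these lines will do; the schedule and script path are examples:

```
30 6 * * 1 root /usr/local/bin/SMART-report.sh
```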

14 Jan 16 Preventing a bash script from running concurrently

Introduction: In order to prevent a bash script instance from running more than once concurrently, here is a small tip on how to write the script.

Script template:
#!/bin/bash
# Prevents an instance of the script from starting while another instance of it is still running
scriptname=$(basename $0)
lockfile="/tmp/${scriptname}.lock"
if [ -e $lockfile ]; then exit 1 ; fi
touch $lockfile
# Delete the lock file if CTRL-C is typed at the keyboard
trap 'rm $lockfile ; exit' SIGINT SIGQUIT
#----------------------------------
# ############ Put your script code here #####################
#----------------------------------
# delete the lock file
rm $lockfile
# .eof
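To see the locking in action, here is a throwaway demonstration: the template is copied to a scratch script whose "work" is just a sleep, and a second instance is started while the first one still holds the lock (paths are examples):

```shell
#!/bin/bash
# Demo of the template above: a second concurrent instance must exit with code 1.
cat > /tmp/demo_lock.sh <<'EOF'
#!/bin/bash
scriptname=$(basename $0)
lockfile="/tmp/${scriptname}.lock"
if [ -e $lockfile ]; then exit 1 ; fi
touch $lockfile
trap 'rm -f $lockfile ; exit' SIGINT SIGQUIT
sleep 2        # stands in for the real script code
rm $lockfile
EOF
chmod +x /tmp/demo_lock.sh
rm -f /tmp/demo_lock.sh.lock      # make sure no stale lock is lying around

/tmp/demo_lock.sh &     # first instance acquires the lock
sleep 1
/tmp/demo_lock.sh       # second instance finds the lock file ...
second_rc=$?            # ... and exits immediately with 1
wait
echo "second instance exit code: $second_rc"
rm -f /tmp/demo_lock.sh
```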

16 Jun 15 Install TeamViewer in Debian Wheezy

TeamViewer is a very good and stable remote desktop solution with client software for almost any platform. Here I explain how I got TeamViewer to run on a headless Debian Wheezy server.
Reference: https://www.teamviewer.com/en/help/363-Wie-installiere-ich-TeamViewer-auf-meiner-Linux-Distribution.aspx#multiarch

Steps:
– Install the VNC desktop on the Debian server for a particular user as per the instructions shown here:
//tipstricks.itmatrix.eu/installing-linux-remote-terminal-using-vnc-on-a-debian-server/
Note: this VNC server will not be connected to directly via VNC as a remote desktop here; it serves as an X-server-based virtual desktop for the TeamViewer server to mirror to TeamViewer clients. The VNC connection is only needed for the first start of TeamViewer, and afterwards no more. Therefore port 5901 should afterwards be blocked by a firewall.

– Install TeamViewer i386 (32-bit) in Wheezy. Because of dependency problems, the 32-bit version of TeamViewer needs to be installed as follows:

– Install the i386 MultiArch environment and dependencies packages:
dpkg --add-architecture i386
apt-get update
apt-get install links:i386
apt-get install libasound2-plugins:i386 glibc-doc:i386 locales:i386 ia32-libs lib32z1 lib32asound2 libc6-i386 ia32-libs-i386

– Install TeamViewer
wget http://download.teamviewer.com/download/teamviewer_i386.deb
dpkg -i teamviewer_i386.deb
teamviewer daemon enable

…. not finished!! To be continued :-)

04 May 15 Using CURL for sending crafted HTTP POST authenticated queries

CHALLENGE:
I came across a situation where I needed to send an HTTP request using the POST method with some POST data but after I have authenticated with name and password.

SOLUTION:(using curl tool)
The trick here is to preserve the SESSIONID of the authenticated response for the second POST request.

EXAMPLE:
I needed to go into my account in domain-hoster.net and request the CSV file which lists all my registered domains.

COMMANDS:
curl -v --user-agent "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:37.0) Gecko/20100101 Firefox/37.0" -c cookies.txt -d "username=myuser&password={html_encoded_password}" http://login.domain-hoster.net/index/login
curl -v --user-agent "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:37.0) Gecko/20100101 Firefox/37.0" -b cookies.txt -d "orderField=&orderDir=&name=&state=&owner=&sedo=&lock=&date_expire=&renewal=&itemsPerPage=&csv=CSV" http://login.domain-hoster.net/domain

In the above example I simulate a Firefox Browser (–user-agent), save the cookies (includes the SESSIONID) in the file cookies.txt and use it in my second POST request to get the content of the requested CSV file into the terminal.

IMPORTANT NOTE: The password must be in proper URL-encoded (percent-encoded) format to be accepted. This applies to any character that is not a-z or A-Z. There are many ways to convert the password to URL-encoded format. The most reliable way I found is to manually log in with a proper browser and look at the request headers using a browser plugin that shows the header contents. The password will then be shown properly encoded in the header.
Examples of password characters and their URL-encoded equivalents:
& = %26
! = %21, etc.
So a password like: Tw&Ui8vH!
would look like this: Tw%26Ui8vH%21
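If you would rather compute the encoding than fish it out of a browser header, a one-liner does it; this sketch shells out to python3 (assumed to be installed) and uses the example password from above:

```shell
#!/bin/bash
# URL-encode (percent-encode) a password; python3 is assumed to be available.
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' 'Tw&Ui8vH!')
echo "$encoded"    # Tw%26Ui8vH%21
```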

31 Mar 15 Monitoring latency time of http requests

Here is a simple but useful command which shows the latency time of http requests. You can adjust the delay between repeats as well as the URL being queried.
Reference: http://www.shellhacks.com/en/Check-a-Website-Response-Time-from-the-Linux-Command-Line

host="www.google.de"; delay=5; while true ; do echo -n "Response time for http://$host:" ;curl -s -w %{time_total}\\n -o /dev/null http://$host ;sleep $delay; done

Results:
Response time for http://www.google.de:0,025
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,026
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,024
Response time for http://www.google.de:0,024
.......

ADVANCED:
Here is a more advanced version which performs more timing tests:

host="www.google.de"; delay=5; while true ; do echo "------"; curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null http://$host; sleep $delay; done

Results:
Lookup time: 0,002
Connect time: 0,011
PreXfer time: 0,011
StartXfer time: 0,022

Total time: 0,023
------

Lookup time: 0,001
Connect time: 0,012
PreXfer time: 0,013
StartXfer time: 0,023

Total time: 0,023
------
.......

Meanings:
Lookup time: The time, in seconds, it took from the start until the name resolving was completed.
Connect time: The time, in seconds, it took from the start until the TCP connect to the remote host was completed.
PreXfer time: The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all ‘pre-transfer’ commands and negotiations that are specific to the particular protocol(s) involved.
StartXfer time: The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes ‘time_pretransfer’ and also the time the server needed to calculate the result.
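To turn the loop into a simple alert, the measured total time can be compared against a threshold; awk handles the floating-point comparison that plain shell cannot (the helper name and threshold are examples). Note that curl prints the time with a comma decimal separator in some locales, as in the results above; run it with LC_NUMERIC=C to get a dot:

```shell
#!/bin/bash
# Returns success (0) when the measured time is at or under the threshold.
latency_ok() {
    # $1 = measured total time in seconds, $2 = threshold in seconds
    awk -v t="$1" -v max="$2" 'BEGIN { exit (t > max) ? 1 : 0 }'
}

latency_ok 0.024 0.5 && echo "latency OK" || echo "latency HIGH"   # -> latency OK
latency_ok 0.900 0.5 && echo "latency OK" || echo "latency HIGH"   # -> latency HIGH
```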

13 Sep 14 Installing Shinken in Debian Wheezy

Debian Wheezy does offer the installation of the full (a bit old) Shinken, BUT it doesn't offer the installation of the WebUI.
Here is a better way to install everything, including pnp4nagios and check_mk, in one go:

STEPS:
Install Shinken
wget http://www.shinken-monitoring.org/install -O /tmp/install_shinken.sh
cd /tmp && sh install_shinken.sh

Configure shinken
vim /usr/local/shinken/etc/shinken-specific.cfg
Change the http://YOURSERVERNAME/ in:
define module {
module_name GRAPHITE_UI
module_type graphite_webui
uri http://YOURSERVERNAME/
templates_path /usr/local/shinken/share/templates/graphite/
}

Because of a bug in the stop arbiter script we need to make some tiny changes:
Edit /usr/local/shinken/bin/stop_arbiter.sh
change the line
kill $(cat "$DIR"/../var/arbiterd.pid)
to
kill $(cat "$DIR"/../var/arbiter.pid)

Unfortunately the skonf process doesn't stop when running the script /usr/local/shinken/bin/stop_all.sh,
so we need to doctor the script:
vim /usr/local/shinken/bin/stop_skonf.sh
Change the following:
FROM:
kill $(cat "$DIR"/../var/skonfd.pid)
TO:
kill -9 $(ps aux | grep skonf.cfg | grep -v 'grep' | awk '{print $2}')

Start the whole shinken
cd /usr/local/shinken/bin
./launch_all.sh

WebUI


Change the password of the admin user:
cd /usr/local/shinken/etc/
htpasswd htpasswd.users admin


Now connect to the WebUI

In browser:
http://yourserver.name:7727/
To Stop all the services
/usr/local/shinken/bin/stop_all.sh
To de-install shinken
cd /usr/local/shinken/
./install -u