MAC OS X, Linux, Windows and other IT Tips and Tricks

20 Nov 17 Some Zabbix tools

Here are some tools I have gathered to help debug Zabbix problems.

Install the package zabbix-get on the monitoring server:
apt-get install zabbix-get
Install the package zabbix-agent on the monitored hosts:
apt-get install zabbix-agent

TIP: To programmatically monitor anything on remote hosts (using bash scripts, for example):
– Install the package zabbix-agent on the watched hosts
– Configure /etc/zabbix/zabbix_agentd.conf to accept requests from the monitoring host (eg. Directive: ‘’)
– Restart the zabbix agent (service zabbix-agent restart)
– Open their firewall on port 10050
– Install the package zabbix-get on the monitoring host (apt-get install zabbix-get)
– Use commands like the ones below inside your scripts to pull the required information from the monitored hosts.

The following commands are run on the Zabbix server; the monitored host is eg. ‘’


Verify the availability of the zabbix agent on the monitored host:
zabbix_get -s -k
Show the number of running processes on the monitored host:
zabbix_get -s -k proc.num[,,,]
Show the number of running daemons called ‘apache2’:
zabbix_get -s -k proc.num[,,,apache2]
Show the free disk space mounted on ‘/’:
zabbix_get -s -k vfs.fs.size[/,free]
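These queries can drive simple custom checks from a bash script. Below is a minimal sketch built around the apache2 check above; the host 192.0.2.10, the minimum count and the item key are placeholder assumptions to adapt to your setup:

```shell
#!/bin/sh
# Sketch: warn when fewer than MIN apache2 processes run on HOST.
# HOST, MIN and the item key are examples -- adapt them to your setup.
HOST=${HOST:-192.0.2.10}
MIN=${MIN:-1}
# zabbix_get prints a bare number on success; fall back to 0 on any failure
count=$(zabbix_get -s "$HOST" -k "proc.num[,,,apache2]" 2>/dev/null || echo 0)
if [ "$count" -lt "$MIN" ]; then
    echo "WARNING: only $count apache2 processes on $HOST"
else
    echo "OK: $count apache2 processes on $HOST"
fi
```

Scheduled from cron, such a script can then mail or log the WARNING lines.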

29 Aug 17 Installing Filebeat, Logstash, ElasticSearch and Kibana in Ubuntu 14.04


#Ref: //
First install Java 8 in Ubuntu 14.04

# Ref: //
apt-get install python-software-properties software-properties-common
apt-add-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-installer
java -version

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Facilitate updating of all packages via APT repositories

apt-get install apt-transport-https
Save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:
echo "deb // stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
wget -qO - // | sudo apt-key add -
apt-get update


Installing filebeat

Filebeat reads lines from the configured logs, formats them properly and forwards them to Logstash, applying back-pressure so the pipeline does not clog.
Ref: //
Ref: //
Ref: //

apt-get install filebeat
mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.orig
touch /etc/filebeat/filebeat.yml
mcedit /etc/filebeat/filebeat.yml


filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log
output.logstash:
  hosts: ["localhost:5044"]

service filebeat restart


Download logstash debian install package and configure it

# Ref: //
apt-get install logstash

# Result:
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Preparing Logstash

mcedit /etc/logstash/startup.options
(add the following line at the beginning)

(Then change the line:)
LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
(into:)
LS_OPTS="--path.settings ${LS_SETTINGS_DIR} --path.config ${LS_CONFIGS_DIR}"

Start/Stop/Restart logstash
service logstash {start|stop|restart}

Testing logstash

cd /etc/logstash/ ; /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Type: hello world
and press CTRL-D

(Logstash adds timestamp and IP address information to the message. Exit Logstash by issuing a CTRL-D command in the shell where Logstash is running.)


ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/ Using default config which logs to console
11:22:59.822 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
11:22:59.847 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
2017-08-23T09:22:59.878Z test 1
11:22:59.946 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
11:23:02.861 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

The errors and warnings are OK for now. The significant result line above is:
2017-08-23T09:22:59.878Z test 1
which shows the timestamp and server name that Logstash prepends to the input string (test 1)

Configuring logstash
# Note: this test configuration will get input from filebeat and output into a log file which can be watched with tail -f …..
mcedit /etc/logstash/conf.d/apache2.conf
input {
  beats {
    port => 5044
    type => "apache"
  }
}
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  file {
    path => "/var/log/logstash_output.log"
  }
}

In order to have the proper output sent to elasticsearch then use this output configuration instead:
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Securing Filebeat => Logstash with SSL

Ref: //
Note: Typing by hand below is shown in bold.

Prepare the certificates directories:

mkdir -p /etc/logstash/certs/Logstash/ /etc/logstash/certs/Beats/
Create client certificates for FileBeat:

Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/]: /etc/logstash/certs/Beats/
Enter instance name: Beats
Enter name for directories and files : Beats
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Beats/

Create client certificates for Logstash:

Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/]: /etc/logstash/certs/Logstash/
Enter instance name: Logstash
Enter name for directories and files : Logstash
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Logstash/

This file should be properly secured as it contains the private keys for all
instances and the certificate authority.

After unzipping the file, there will be a directory for each instance containing
the certificate and private key. Copy the certificate, key, and CA certificate
to the configuration directory of the Elastic product that they will be used for
and follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

Extract certificates:
unzip /etc/logstash/certs/Beats/ -d /etc/logstash/certs/Beats/
unzip /etc/logstash/certs/Logstash/ -d /etc/logstash/certs/Logstash/

Convert the Logstash key Logstash.key from PKCS#1 to PKCS#8 format:
Reason: the following error message appeared in the logstash.log when using the PKCS#1 format:
[ERROR][ ] Looks like you either have an invalid key or your private key was not in PKCS8 format. {:exception=>java.lang.IllegalArgumentException: File does not contain valid private key: /etc/logstash/certs/Logstash/Logstash/Logstash.key}

See: //

openssl pkcs8 -in /etc/logstash/certs/Logstash/Logstash/Logstash.key -topk8 -nocrypt -out /etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8
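The conversion can be rehearsed end to end with a throwaway key before touching the real Logstash key; the temporary paths below are examples only:

```shell
# Generate a scratch RSA key, convert it to PKCS#8, and inspect the header.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/test.key" 2048 2>/dev/null
openssl pkcs8 -in "$tmp/test.key" -topk8 -nocrypt -out "$tmp/test.key.PKCS8"
# an unencrypted PKCS#8 PEM key starts with -----BEGIN PRIVATE KEY-----
hdr=$(head -1 "$tmp/test.key.PKCS8")
echo "$hdr"
rm -r "$tmp"
```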

Configure Beats for SSL

Content of /etc/filebeat/filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log
output.logstash:
  hosts: ["localhost:5044"]
  ssl.certificate_authorities: ["/etc/logstash/certs/Logstash/ca/ca.crt"]
  ssl.certificate: "/etc/logstash/certs/Beats/Beats/Beats.crt"
  ssl.key: "/etc/logstash/certs/Beats/Beats/Beats.key"

Content of /etc/logstash/conf.d/apache.conf:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/Logstash/ca/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/Logstash/Logstash/Logstash.crt"
    ssl_key => "/etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8"
    ssl_verify_mode => "force_peer"
  }
}

Restart both Logstash and Filebeat
service logstash restart
service filebeat restart

NOTE: I’m still having problems with the SSL connection from Filebeat to Logstash; the connection fails with this error in the Logstash log (/var/log/logstash/logstash-plain.log):
TLS internal error.
The following URL seems to describe similar problems, but for lack of time I haven’t figured it out yet.

X-Pack for Logstash

INSTALL X-Pack for logstash

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.
X-Pack also provides a monitoring UI for Logstash.

/usr/share/logstash/bin/logstash-plugin install x-pack


Downloading file: //
Downloading [=============================================================] 100%
Installing file: /tmp/studtmp-bc1c884de6d90f1aaa462364e5895b6b08b050f0b64587b4f5e0a8ec5300/
Install successful

Configuring X-Pack in Logstash:

The default settings created during the installation work well for most cases. For more information see:

To prevent generation of monitoring error messages in logstash.log, edit /etc/logstash/logstash.yml and add the following line at the end:
(Ref: //

xpack.monitoring.enabled: false


Installing elasticsearch

apt-get install elasticsearch

Start/Stop/Restart Elastic search:
/etc/init.d/elasticsearch {start|stop|restart}

To check if elasticsearch has been started:
ps aux | grep $(cat /var/run/elasticsearch/

Example of result(truncated):
elastic+ 10978 3.2 55.2 4622152 2319168 pts/3 Sl 15:44 0:10 /usr/lib/jvm/java-8-oracle/bin/java ........

Then check the Elasticsearch log file:
tail -f /var/log/elasticsearch/elasticsearch.log

If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
and the result of the following command is empty,

grep vm.max_map_count /etc/sysctl.conf

Raise the max virtual memory areas vm.max_map_count to 262144 as follows:
Add the following line to the file /etc/sysctl.conf:

vm.max_map_count=262144

And run the commands:
sysctl -w vm.max_map_count=262144
echo 262144 > /proc/sys/vm/max_map_count
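A quick sanity check of whether the running kernel already meets the minimum from the warning above:

```shell
# Compare the live vm.max_map_count against Elasticsearch's minimum of 262144.
need=262144
have=$(cat /proc/sys/vm/max_map_count)
if [ "$have" -ge "$need" ]; then
    echo "vm.max_map_count=$have: OK"
else
    echo "vm.max_map_count=$have: too low, raise it to $need"
fi
```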

ALSO make sure the Elasticsearch JVM options file (/etc/elasticsearch/jvm.options) has the following entries:

If the following commands fail, it might be because some virtual servers do not allow such changes to the kernel:
sysctl -w vm.max_map_count=262144
sysctl: permission denied on key 'vm.max_map_count'
echo 262144 > /proc/sys/vm/max_map_count
-bash: /proc/sys/vm/max_map_count: Permission denied

Elasticsearch should still run, but may be limited in performance and have other issues because of these limitations.
There are no known remedies for this on Strato VM servers.

If you see the line:
[WARN ][i.n.u.i.MacAddressUtil ] Failed to find a usable hardware address from the network interfaces; using random bytes: ……..

No need to worry, the accuracy of the MAC address is not so important in this installation.

If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
If this problem occurs, elasticsearch will start but will not be initialised properly and will most likely not function properly.

If elasticsearch is accessed only in a protected environment, disabling these system call filters should be no problem:
edit the file /etc/elasticsearch/elasticsearch.yml and add the following line:
bootstrap.system_call_filter: false
Restart elasticsearch:
service elasticsearch restart


X-Pack for elasticsearch

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.

/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack

-> Downloading x-pack from elastic
[=================================================] 100%
@ WARNING: plugin requires additional permissions @
* \\.\pipe\* read,write
* java.lang.RuntimePermission
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* createPolicy.JavaPolicy
* getPolicy
* putProviderProperty.BC
* setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission write
* setHostnameVerifier
See //
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
@ WARNING: plugin forks a native controller @
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
-> Installed x-pack


Install kibana package
apt install kibana
Install X-Pack for Kibana
X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.
/usr/share/kibana/bin/kibana-plugin install x-pack
Change built-in users password
Ref: //
change passwords

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "elasticpassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "kibanapassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "logstashpassword"
}
'

Update the Kibana server with the new password in /etc/kibana/kibana.yml:
elasticsearch.password: kibanapassword
Update the Logstash configuration with the new password in /etc/logstash/logstash.yml:
xpack.monitoring.elasticsearch.password: logstashpassword
Disable the default password functionality in /etc/elasticsearch/elasticsearch.yml:
xpack.security.authc.accept_default_password: false

Start/Stop/Restart kibana
service kibana {start|stop|restart}

30 Oct 16 Monitoring Linux server with iPhone/iPad


Although Apple doesn’t have many apps that support Linux admins, here is one that just came back on the market on 26 Oct. 2016 with a new look, new features and bug fixes: the iStat3 Server for Linux and iStat3 for iOS, made by Bjango PTY Ltd. This app displays, live, the following characteristics of a Linux server:
– Uptime
– CPU usage
– System Load
– Disk space and disk activity
– Network traffic load
– Processes list(top)
– Sensors: Memory and CPU temperature

Read more about it on //

In order for the iOS app to get this information from the Linux servers, it needs a connection to its counterpart, the iStat3 server, an agent running on each targeted Linux server. The agent is a daemon that runs in the background and listens by default on port 5109 (configurable). Since there are so many different Linux distributions, the agent needs to be compiled on each targeted Linux server. I wrote this article to facilitate that process.

Note: I only mention the steps for Debian 6/7/8 and Ubuntu 12.x/14.x/16.x


Installing the needed packages:
apt-get update && apt-get install build-essential g++ autoconf libxml2-dev libssl-dev libsqlite3-dev fancontrol libsensors4:amd64 libsensors4-dev lm-sensors libssl1.0-dev
Download the software:
wget // -O istatserver-linux_3.02.tar.gz
or if changed address or not available
wget //
Compiling and installing the software:
tar fvxz istatserver-linux_3.02.tar.gz
cd istatserver-3.02
./configure && make && make install

Configuring the istatserver:
Here you mostly need to modify the 5 digit server_code.
vim /usr/local/etc/istatserver/istatserver.conf

Extra preparations for Debian 6/7 or Ubuntu 12.x/14.x which are using the SysV init

Getting the start script from my repos:
wget // -O /etc/init.d/istatserver
chmod 755 /etc/init.d/istatserver
update-rc.d istatserver defaults
service istatserver start ; sleep 1 ; ps aux | grep -v grep | grep istat

Result should be:
istat 17891 0.0 0.2 42108 2332 ? R 18:39 0:00 /usr/local/bin/istatserver -d

Extra preparations for Debian 8 or Ubuntu 16.x which are using the Systemd init

vim /etc/systemd/system/istatserver.service
istatserver.service file content:

[Unit]
Description=istatserver server daemon

[Service]
EnvironmentFile=/etc/default/istatserver
ExecStart=/usr/local/bin/istatserver $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Make sure the environment file exists, even if it’s empty, otherwise the service will not want to start!!
touch /etc/default/istatserver
systemctl daemon-reload
systemctl enable istatserver.service
service istatserver start ; sleep 1 ; ps aux | grep -v grep | grep istat

Result should be:
istat 1507 43.0 0.0 118844 7120 ? Ssl 19:02 0:00 /usr/local/bin/istatserver

General Note:

Make sure your firewall allows in port 5109 (or whatever port you are using).
I’m using ufw, so for example the command would be:
ufw allow from any to any port 5109
Rule added
Rule added (v6)


In case you already had the older version of this agent (istatd) running, here are the steps to stop using it:
ps aux | grep istat
killall istatd ; sleep 2 ; killall istatd
update-rc.d -f istatd remove

Getting the iPad/iPhone APP:

Concerning the iOS app, you need to buy it on Apple store and its name is: iStat 3 from Bjango PTY Ltd.
This app lets you monitor multiple Linux servers with very pretty graphs.
If you have a Mac you can also buy the similar app, called iStat, from the Apple Store. It displays the same things as on the iPad and adds a few small extra features.

15 Apr 16 Installing Webmin in Debian 8(Jessie)

These instructions are copied from the site:

To install Webmin on Debian 8, just follow these instructions:
cd /root
wget //
apt-key add jcameron-key.asc
echo "deb // sarge contrib" >> /etc/apt/sources.list
echo "deb // sarge contrib" >> /etc/apt/sources.list
apt-get update
apt-get -y install webmin

If it’s too long for you, you can also just do this :
wget //
chmod +x

13 Feb 16 Verifying the validity of an NFS mount

Every now and then, when an NFS mount loses its connection to the server or something else goes wrong with the NFS connection, running ‘ls mountpoint’ hangs the terminal until I press CTRL-C. So I wrote a script, run as a cron job, that tells me when an NFS mount has gone bad. I had to resort to unorthodox tricks, since a simple ‘stat mountpoint &’ within the script would also hang the script. Instead I use ‘at now’, which runs the command independently of the script that initiated it. Here is an example of such a script.

#!/bin/bash
# Name:
# Purpose: Checks the health of the NFS mountpoint given as argument;
# it kills the at/stat process and exits with exit code 2 if the timeout has expired.
startdelay=2    # seconds to wait before starting to poll (adjust as needed)
timeout=20      # seconds before declaring the mountpoint dead (adjust as needed)
count=0
# processes to be excluded in the 'ps | grep' test (example value, adjust as needed)
excludes="rpc.statd"
if [ $# -ne 1 ]; then
   echo "ERROR: Needs mountpoint as argument"
   echo "Usage: $(basename $0) MountPoint"
   exit 2
fi
echo "/usr/bin/stat $1" | /usr/bin/at now
sleep $startdelay
while (ps ax | egrep -v "grep|$excludes" | grep -q stat); do
   let count=${count}+1
   sleep 1
   if [ $count -ge $timeout ]; then
      kill $(pidof stat)
      #echo "Mountpoint $1 : FAILED to connect before timeout of $timeout sec."
      exit 2
   fi
done
exit 0
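On systems with GNU coreutils, timeout(1) allows a simpler variant of the same guard without the ‘at now’ trick. A sketch, using /tmp as a stand-in for a real NFS mountpoint:

```shell
# Replace /tmp with your NFS mountpoint, e.g. /mnt/backup.
# stat either answers within 10 seconds or is killed by timeout.
mp=/tmp
if timeout 10 stat "$mp" > /dev/null 2>&1; then
    status=OK
else
    status=FAILED
fi
echo "Mountpoint $mp : $status"
```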

18 Jan 16 Reporting SMART status of RAID disks

Reference site: //

Note: Although hardware RAID controllers are made by several manufacturers, here I use Adaptec as an example:

Install the software:
apt-get install smartmontools
Curious which company made your RAID controller? Find out which one you have:
lspci | grep 'RAID'
Result: 01:00.0 RAID bus controller: Adaptec Device 028b (rev 01)
# Check out if the controller is supported and which devices it sees:
smartctl --scan
Example output:
/dev/sda -d scsi [SCSI]
/dev/sdb -d scsi [SCSI]

Check the SMART overall-health test of the drives :
smartctl -d scsi -H /dev/sda | grep 'SMART'
smartctl -d scsi -H /dev/sdb | grep 'SMART'

Result example:
/dev/sda: SMART Health Status: OK
/dev/sdb: SMART Health Status: OK

Checking Individual drives behind the RAID controller
The individual drives behind the controller are usually named sequentially according to the order of the simulated drives:
/dev/sda (2 drives behind controller): /dev/sg1 /dev/sg2
/dev/sdb (2 drives behind controller): /dev/sg3 /dev/sg4

Commands for doing those checks:
smartctl -d scsi --all -T permissive /dev/sg1
smartctl -d scsi --all -T permissive /dev/sg2
smartctl -d scsi --all -T permissive /dev/sg3
smartctl -d scsi --all -T permissive /dev/sg4

Create a script that will be run by cron regularly and send the results by email:
# Name:
# Purpose: Sends report of SMART status of RAID hard disks
# Syntax:
(. ~/.bashrc
echo -n "/dev/sda: "
smartctl -d scsi -H /dev/sda | grep 'SMART'
echo -n "/dev/sdb: "
smartctl -d scsi -H /dev/sdb | grep SMART
echo "Individual drives behind the RAID controller";echo
echo "============== /dev/sda ===> /dev/sg1 ============="
smartctl -d scsi --all -T permissive /dev/sg1 | grep 'SMART';echo
echo "============== /dev/sda ===> /dev/sg2 ============="
smartctl -d scsi --all -T permissive /dev/sg2 | grep 'SMART';echo
echo "============== /dev/sdb ===> /dev/sg3 ============="
smartctl -d scsi --all -T permissive /dev/sg3 | grep 'SMART';echo
echo "============== /dev/sdb ===> /dev/sg4 ============="
smartctl -d scsi --all -T permissive /dev/sg4 | grep 'SMART'
) | mail -s "SMART Result of $(hostname -f)"
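Such a report script can then be scheduled with cron; the script path and the schedule below (every Monday at 07:00) are only examples:

```
0 7 * * 1 /root/bin/smart_report.sh
```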

14 Jan 16 Preventing a bash script from running concurrently

Introduction: To prevent more than one instance of a bash script from running concurrently, here is a small tip on how to write the script.

Script template:
#!/bin/bash
# Prevents an instance of the script from starting while another instance is still running
scriptname=$(basename $0)
lockfile=/tmp/${scriptname}.lock
if [ -e $lockfile ]; then exit 1 ; fi
touch $lockfile
# Delete the lock file if CTRL-C is typed at the keyboard
trap 'rm -f $lockfile ; exit' SIGINT SIGQUIT
# ############ Put your script code here #####################
# delete the lock file
rm $lockfile
# .eof
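An alternative sketch using flock(1) from util-linux: the lock is tied to an open file descriptor and vanishes with the process, so a crash cannot leave a stale lock file behind. The lock path is an example:

```shell
# Take an exclusive lock on fd 9; -n fails immediately instead of waiting.
lock=/tmp/myscript.demo.lock
exec 9>"$lock"
if flock -n 9; then
    result="got the lock"
    # ############ Put your script code here #####################
else
    result="another instance is already running"
fi
echo "$result"
```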

16 Jun 15 Install TeamViewer in Debian Wheezy

TeamViewer is a very good and stable remote desktop solution with client software for almost any platform. Here I explain how I got TeamViewer to run on a headless Debian Wheezy server.
Reference: //

– Install the VNC desktop on the Debian server for a particular user as per the instructions shown here:
Note: this VNC server will not be connected to directly via VNC as a remote desktop here; it serves as an X-server-based virtual desktop for the TeamViewer server to mirror to TeamViewer clients. We only need the VNC connection for the first start of TeamViewer; afterwards, port 5901 should be blocked by a firewall.

– Install TeamViewer i386 (32-bit) in Wheezy. Because of dependency problems, the 32-bit version of TeamViewer needs to be installed as follows:

– Install the i386 MultiArch environment and dependencies packages:
dpkg --add-architecture i386
apt-get update
apt-get install links:i386
apt-get install libasound2-plugins:i386 glibc-doc:i386 locales:i386 ia32-libs lib32z1 lib32asound2 libc6-i386 ia32-libs-i386

– Install TeamViewer
wget //
dpkg -i teamviewer_i386.deb
teamviewer daemon enable

…. not finished!! To be continued 🙂

04 May 15 Using CURL for sending crafted HTTP POST authenticated queries

I came across a situation where I needed to send an HTTP request using the POST method with some POST data, but only after authenticating with a name and password.

SOLUTION (using the curl tool):
The trick is to preserve the SESSIONID of the authenticated response for the second POST request.

I needed to go into my account in and request the CSV file which lists all my registered domains.

curl -v --user-agent "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:37.0) Gecko/20100101 Firefox/37.0" -c cookies.txt -d "username=myuser&password={html_encoded_password}" //
curl -v --user-agent "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:37.0) Gecko/20100101 Firefox/37.0" -b cookies.txt -d "orderField=&orderDir=&name=&state=&owner=&sedo=&lock=&date_expire=&renewal=&itemsPerPage=&csv=CSV" //

In the above example I simulate a Firefox Browser (–user-agent), save the cookies (includes the SESSIONID) in the file cookies.txt and use it in my second POST request to get the content of the requested CSV file into the terminal.

IMPORTANT NOTE: The password must be properly URL-encoded (percent-encoded) to be accepted. This applies to any character that is not a-z, A-Z or 0-9. There are many ways to convert the password into URL-encoded form. The most reliable way I found is to manually log in with a proper browser, then inspect the request headers with a browser plugin that shows their contents: the password appears there properly encoded.
Examples of password characters and their URL-encoded equivalents:
& = %26
! = %21, etc.
So a password like: Tw&Ui8vH!
would look like this: Tw%26Ui8vH%21
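For scripting, curl can do this encoding itself via its --data-urlencode option. If you want to see the encoding, here is a small POSIX-shell sketch (ASCII characters only) that percent-encodes a string the same way:

```shell
# Percent-encode a string as a browser does with form fields (ASCII only).
urlencode() {
    s=$1 out=""
    while [ -n "$s" ]; do
        c=${s%"${s#?}"}      # first character of the remaining string
        s=${s#?}             # drop it from the remainder
        case $c in
            [a-zA-Z0-9.~_-]) out="$out$c" ;;                  # unreserved: keep
            *) out="$out$(printf '%%%02X' "'$c")" ;;          # else: hex-escape
        esac
    done
    printf '%s\n' "$out"
}
urlencode 'Tw&Ui8vH!'   # prints Tw%26Ui8vH%21
```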

31 Mar 15 Monitoring latency time of http requests

Here is a simple but useful command which shows the latency of HTTP requests. You can adjust the delay between repeats as well as the URL being queried.
Reference: //

host=""; delay=5; while true ; do echo -n "Response time for //$host:" ;curl -s -w %{time_total}\\n -o /dev/null //$host ;sleep $delay; done

Response time for //,025
Response time for //,024
Response time for //,024
Response time for //,024
Response time for //,024
Response time for //,026
Response time for //,024
Response time for //,024
Response time for //,024

Here is a more advanced version which performs more timing tests:

host=""; delay=5; while true ; do echo "------"; curl -s -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -o /dev/null //$host; sleep $delay; done

Lookup time: 0,002
Connect time: 0,011
PreXfer time: 0,011
StartXfer time: 0,022
Total time: 0,023
Lookup time: 0,001
Connect time: 0,012
PreXfer time: 0,013
StartXfer time: 0,023
Total time: 0,023

Lookup time: The time, in seconds, it took from the start until the name resolving was completed.
Connect time: The time, in seconds, it took from the start until the TCP connect to the remote host was completed.
PreXfer time: The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all ‘pre-transfer’ commands and negotiations that are specific to the particular protocol(s) involved.
StartXfer time: The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes ‘time_pretransfer’ and also the time the server needed to calculate the result.