MAC OS X, Linux, Windows and other IT Tips and Tricks

21 Dec 17 Blocking reception of full TLDs

Lately I was receiving a lot of spam from ‘.date’ TLD sources and wanted to block all such emails using Postfix.
Here is a solution found at: //

Install the Postfix PCRE dictionary
apt-get install postfix-pcre
Configure postfix
postconf -e 'smtpd_sender_restrictions = pcre:/etc/postfix/rejected_domains'
# Note: reject_unauth_destination is a restriction keyword for restriction lists,
# not a parameter; if smtpd_sender_restrictions is already set, append the map
# to the existing list instead of overwriting it.

Edit the new file /etc/postfix/rejected_domains with the following content:
/\.date$/ REJECT All Date Domains
Reload Postfix
service postfix reload
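Before reloading, the map can be sanity-checked. `postmap -q` queries it the way Postfix will, and the regex itself can be emulated with `grep -E` (the addresses below are made up for illustration):

```shell
# Query the map directly (requires the map file and postfix-pcre installed):
#   postmap -q "someone@spam.date" pcre:/etc/postfix/rejected_domains
# Emulate the PCRE with grep to see which addresses would match:
for addr in "someone@spam.date" "friend@example.com"; do
  if printf '%s\n' "$addr" | grep -Eq '\.date$'; then
    echo "$addr REJECTED"
  else
    echo "$addr accepted"
  fi
done
```

Only the ‘.date’ address should come back as REJECTED.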

11 Dec 17 OpenDKIM doesn’t start after Upgrade from Jessie to Stretch

After having done a dist-upgrade from Jessie to Stretch, OpenDKIM no longer started.
After some research I found the answer that worked for me on this site:

I’m using the ‘inet’ socket on port 12345 for the communication between Postfix and OpenDKIM.
E.g. my OpenDKIM-related configuration in Postfix:
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:localhost:12345
non_smtpd_milters = inet:localhost:12345

Solution found on the above web site:
systemctl daemon-reload
service opendkim restart

Note: The same site offers other solutions for setups that use other kinds of sockets for the communication between Postfix and OpenDKIM.
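For reference, the matching socket setting on the OpenDKIM side looks like this (assuming the default Debian config file path):

```
# /etc/opendkim.conf
Socket                  inet:12345@localhost
```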


29 Nov 17 Verifying PHP syntax

After an upgrade from PHP 5.6 to 7.0/7.1, many PHP scripts gave me trouble. So I looked for a way to test the PHP syntax before errors showed up later on the live sites. I found this one-liner, which is quite helpful:
find . -name "*.php" -exec php -l {} \; 1>/dev/null
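The one-liner above hides the per-file success messages by redirecting stdout. A small wrapper that prints only the failing files can be sketched like this (the function name `lint_php` is my own):

```shell
# lint_php: recursively syntax-check .php files, printing only failures.
# "php -l" reports "No syntax errors detected in <file>" on success,
# so those lines are filtered out; parse errors pass through.
lint_php() {
  find "${1:-.}" -name '*.php' -exec php -l {} \; 2>&1 \
    | grep -v '^No syntax errors' || true
}
```

Usage: `lint_php /var/www` (defaults to the current directory when called without an argument).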

20 Nov 17 Some Zabbix tools

In order to debug some Zabbix problems here are some tools I gathered to help.

Installation of the package zabbix-get in the monitoring server
apt-get install zabbix-get
Installation of the package zabbix-agent in the monitored hosts.
apt-get install zabbix-agent

TIP: You can programmatically (using bash, for example) create scripts that monitor anything on remote hosts. To do so:
– Install the package zabbix-agent on the watched hosts
– Configure /etc/zabbix/zabbix_agentd.conf to accept requests from the monitoring host (e.g. via the ‘Server=’ directive)
– Restart the Zabbix agent (service zabbix-agent restart)
– Open their firewall on port 10050
– Install the package zabbix-get on the monitoring host (apt-get install zabbix-get)
– Then use commands like the ones below inside your scripts to fetch the required information from the monitored hosts.
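The relevant agent directives might look like this (the monitoring server’s IP address is a made-up example):

```
# /etc/zabbix/zabbix_agentd.conf
# Allow queries only from the monitoring host (example IP):
Server=192.0.2.10
# Default agent listening port:
ListenPort=10050
```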

The following commands are run on the Zabbix server; the monitored host is the one given with -s (shown here as the placeholder <monitored-host>):

Verify the availability of the zabbix agent on the monitored host:
zabbix_get -s <monitored-host> -k agent.ping
Show the total number of running processes on the monitored host:
zabbix_get -s <monitored-host> -k 'proc.num[,,,]'
Show the number of daemons up and running called ‘apache2’:
zabbix_get -s <monitored-host> -k 'proc.num[,,,apache2]'
Show free disk space on the filesystem mounted on ‘/’:
zabbix_get -s <monitored-host> -k 'vfs.fs.size[/,free]'
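Following the TIP above, a monitoring script can wrap such a call in a threshold check. A minimal sketch (the host name and threshold are made up; the zabbix_get call is shown commented so the logic can be followed on its own):

```shell
#!/bin/sh
# In a real script the value would come from the agent, e.g.:
#   procs=$(zabbix_get -s <monitored-host> -k 'proc.num[,,,apache2]')
check_procs() {
  procs=$1; min=$2
  if [ "$procs" -lt "$min" ]; then
    echo "ALERT: only $procs apache2 process(es) running (minimum $min)"
  else
    echo "OK: $procs apache2 process(es) running"
  fi
}

check_procs 0 1   # prints the ALERT line
check_procs 5 1   # prints the OK line
```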

09 Nov 17 piwik: Could not open input file: ./console

In order to know the location of the visits your website received before you started using Piwik with GeoIP, you need to run a command.
The reference to this command is at: //

Unfortunately, after having logged in as root on the server, this command gave me the following error:
Could not open input file: ./console
After doing some research and using my own Linux experience, here is a (the?) solution:
Ref: //

# Make temporarily the www-data user login possible
usermod -s /bin/bash www-data
# Login as www-data
sudo su - www-data
# Change to the htdocs directory of the installed Piwik.
cd /var/www/
# Run the command
php ./console usercountry:attribute 2012-01-01,2013-01-01
Re-attribution for date range: 2012-01-01 to 2013-01-01. 0 visits to process with provider "ip2location".
Completed. Time elapsed: 0.819s

# Get out of the www-data login and back to the root login
exit
# Prevent login of the www-data user again (as it was originally)
usermod -s /usr/sbin/nologin www-data
Important Note:
In the command given you need to give the exact date range (e.g. 2012-01-01,2017-11-01) that needs to be evaluated in your Piwik reports.

08 Sep 17 Prepare Debian Stretch for Installing GlusterFS 3.12

In order to install this version of GlusterFS we need to add the repositories:
Ref: //
echo deb [arch=amd64] // stretch main > /etc/apt/sources.list.d/gluster.list
wget -O - // | apt-key add -
apt-get update
apt-get install glusterfs-server xfsprogs

Format the dedicated partition for GlusterFS synchronized data:
eg. /dev/xvda3
mkfs.xfs -f -i size=512 /dev/xvda3
Example of result:
meta-data=/dev/xvda3             isize=512    agcount=4, agsize=655360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
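After formatting, the partition is typically mounted permanently so it can serve as the GlusterFS brick. A sketch of the corresponding /etc/fstab entry (the mount point is my own example):

```
# /etc/fstab -- mount the XFS partition that will hold the brick data
/dev/xvda3  /data/glusterfs/brick1  xfs  defaults  0  2
```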

Regarding the configuration steps see:
OR better:

02 Sep 17 Transferring IMAP account mails and folders to another IMAP account on another server

The other day I was asked to install a completely new email server and transfer all the email accounts from the old mail server to the new one. Since the new mail server was using a different mail INBOX format, I had to do some research, and I found a really good tool that does exactly what I needed: imapsync.

Installing the tool:
This tool is programmed in Perl and is not free. It can be bought at: //
Note: It does a great job, and it’s really worth its price when you think of the time and hassle saved by using it.

Using the tool:
Example 1: Copying all the mails in folder INBOX from the jim account on localhost to another server with the same credentials.
– First we do a dry run to see what will be transferred when it is run normally:

imapsync --dry \
--host1 localhost --user1 jim --password1 'secret1' --folder INBOX --tls2 \
--host2 --user2 jim --password2 'secret1' --nofoldersizes --nofoldersizesatend

Example 2: Copying all the mails and folders (no dry run) from an account on localhost to a new account on another server with different credentials:
imapsync \
--host1 localhost --user1 --password1 secret1 \
--host2 --user2 --password2 secret2
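When migrating many accounts, the imapsync call can be driven from a simple credentials file. A sketch (the file format, server name and the `echo` stand-in are my own; drop the `echo` to actually run imapsync):

```shell
# Sample credentials file, one "user;password" pair per line:
printf 'jim;secret1\nann;secret2\n' > accounts.csv

# Loop over the accounts; echo only prints the command that would run.
while IFS=';' read -r user pass; do
  echo imapsync \
    --host1 localhost --user1 "$user" --password1 "$pass" \
    --host2 newmail.example.com --user2 "$user" --password2 "$pass"
done < accounts.csv
```

This assumes the same credentials on both servers; for differing credentials, add extra columns to the file.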

29 Aug 17 Installing Filebeat, Logstash, ElasticSearch and Kibana in Ubuntu 14.04


#Ref: //
First install Java 8 in Ubuntu 14.04

# Ref: //
apt-get install python-software-properties software-properties-common
apt-add-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java8-installer
java -version

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Facilitate updating of all packages via APT repositories

apt-get install apt-transport-https
Save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:
echo "deb // stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
wget -qO - // | sudo apt-key add -
apt-get update


Installing filebeat

Filebeat reads lines from the configured log files, formats them properly, and forwards them to Logstash while maintaining a non-clogging pipeline stream.
Ref: //
Ref: //
Ref: //

apt-get install filebeat
mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.orig
touch /etc/filebeat/filebeat.yml
mcedit /etc/filebeat/filebeat.yml


filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log

output.logstash:
  hosts: ["localhost:5044"]

service filebeat restart


Download logstash debian install package and configure it

# Ref: //
apt-get install logstash

# Result:
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Preparing Logstash

mcedit /etc/logstash/startup.options
(add the following line at the beginning; the conf.d path is the Debian default)
LS_CONFIGS_DIR=/etc/logstash/conf.d

(then adjust the following line)
from: LS_OPTS="--path.settings ${LS_SETTINGS_DIR}"
to:   LS_OPTS="--path.settings ${LS_SETTINGS_DIR} --path.config ${LS_CONFIGS_DIR}"

Start/Stop/Restart logstash
service logstash {start|stop|restart}

Testing logstash

cd /etc/logstash/ ; /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Type: test 1 (or any text)
and press CTRL-D.

(Logstash adds timestamp and IP address information to the message. Exit Logstash by issuing a CTRL-D command in the shell where Logstash is running.)


ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/ Using default config which logs to console
11:22:59.822 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
11:22:59.847 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
2017-08-23T09:22:59.878Z test 1
11:22:59.946 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9601}
11:23:02.861 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}

The errors and warnings are OK for now. The significant result line above is:
2017-08-23T09:22:59.878Z test 1
which adds a timestamp and server name to the input string (test 1).

Configuring logstash
# Note: this test configuration will get input from filebeat and output into a log file which can be watched with tail -f …..
mcedit /etc/logstash/conf.d/apache2.conf
input {
  beats {
    port => 5044
    type => "apache"
  }
}

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
}

output {
  file {
    path => "/var/log/logstash_output.log"
  }
}

In order to have the proper output sent to elasticsearch then use this output configuration instead:
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Securing Filebeat => Logstash with SSL

Ref: //

Prepare the certificates directories:

mkdir -p /etc/logstash/certs/Logstash/ /etc/logstash/certs/Beats/
Create client certificates for Filebeat:

Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/]: /etc/logstash/certs/Beats/
Enter instance name: Beats
Enter name for directories and files : Beats
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Beats/

Create client certificates for Logstash:

Let's get started...

Please enter the desired output file [/etc/elasticsearch/x-pack/]: /etc/logstash/certs/Logstash/
Enter instance name: Logstash
Enter name for directories and files : Logstash
Enter IP Addresses for instance (comma-separated if more than one) []:
Enter DNS names for instance (comma-separated if more than one) []: localhost
Certificates written to /etc/logstash/certs/Logstash/

This file should be properly secured as it contains the private keys for all
instances and the certificate authority.

After unzipping the file, there will be a directory for each instance containing
the certificate and private key. Copy the certificate, key, and CA certificate
to the configuration directory of the Elastic product that they will be used for
and follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.

Extract certificates:
unzip /etc/logstash/certs/Beats/ -d /etc/logstash/certs/Beats/
unzip /etc/logstash/certs/Logstash/ -d /etc/logstash/certs/Logstash/

Convert the Logstash key Logstash.key from PKCS#1 to PKCS#8 format:
Reason: the following error message occurred in logstash.log when using the PKCS#1 format:
[ERROR][ ] Looks like you either have an invalid key or your private key was not in PKCS8 format. {:exception=>java.lang.IllegalArgumentException: File does not contain valid private key: /etc/logstash/certs/Logstash/Logstash/Logstash.key}

See: //

openssl pkcs8 -in /etc/logstash/certs/Logstash/Logstash/Logstash.key -topk8 -nocrypt -out /etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8
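As a quick sanity check of the conversion, the same commands can be exercised on a throwaway key (the filenames here are made up); a PKCS#8 PEM file starts with "-----BEGIN PRIVATE KEY-----" instead of "-----BEGIN RSA PRIVATE KEY-----":

```shell
# Generate a throwaway RSA key and convert it, as done above for Logstash.key
openssl genrsa -out demo.key 2048 2>/dev/null
openssl pkcs8 -in demo.key -topk8 -nocrypt -out demo.key.PKCS8

# The PKCS#8 header confirms the conversion worked:
grep 'BEGIN PRIVATE KEY' demo.key.PKCS8
```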

Configure Beats for SSL

Content of /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/apache2/access.log

output.logstash:
  hosts: ["localhost:5044"]
  ssl.certificate_authorities: ["/etc/logstash/certs/Logstash/ca/ca.crt"]
  ssl.certificate: "/etc/logstash/certs/Beats/Beats/Beats.crt"
  ssl.key: "/etc/logstash/certs/Beats/Beats/Beats.key"

Content of /etc/logstash/conf.d/apache.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/certs/Logstash/ca/ca.crt"]
    ssl_certificate => "/etc/logstash/certs/Logstash/Logstash/Logstash.crt"
    ssl_key => "/etc/logstash/certs/Logstash/Logstash/Logstash.key.PKCS8"
    ssl_verify_mode => "force_peer"
  }
}

Restart both Logstash and Filebeat
service logstash restart
service filebeat restart

NOTE: I’m still having problems with the SSL connection from Filebeat to Logstash; this error shows up in /var/log/logstash/logstash-plain.log:
TLS internal error.
The following URL seems to cover some similar problems, but for lack of time I haven’t figured it out yet.

X-Pack for Logstash

INSTALL X-Pack for logstash

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.
X-Pack also provides a monitoring UI for Logstash.

/usr/share/logstash/bin/logstash-plugin install x-pack


Downloading file: //
Downloading [=============================================================] 100%
Installing file: /tmp/studtmp-bc1c884de6d90f1aaa462364e5895b6b08b050f0b64587b4f5e0a8ec5300/
Install successful

Configuring X-Pack in Logstash:

The default settings created during the installation work well for most cases. For more information see:

To prevent the generation of monitoring error messages in logstash.log, edit /etc/logstash/logstash.yml and add the following line at the end:
(Ref: //

xpack.monitoring.enabled: false


Install the elasticsearch package
apt-get install elasticsearch

Start/Stop/Restart Elastic search:
/etc/init.d/elasticsearch {start|stop|restart}

To check if Elasticsearch has been started:
ps aux | grep "$(cat /var/run/elasticsearch/elasticsearch.pid)"

Example of result(truncated):
elastic+ 10978 3.2 55.2 4622152 2319168 pts/3 Sl 15:44 0:10 /usr/lib/jvm/java-8-oracle/bin/java ........

Then check the Elasticsearch log file:
tail -f /var/log/elasticsearch/elasticsearch.log

If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
and the result of the following command is empty,

grep vm.max_map_count /etc/sysctl.conf

Raise the max virtual memory areas vm.max_map_count to 262144 as follows:
Add the following line to the file /etc/sysctl.conf:

vm.max_map_count=262144

And apply it immediately by running:
sysctl -w vm.max_map_count=262144
echo 262144 > /proc/sys/vm/max_map_count

ALSO make sure the Elasticsearch JVM options file (/etc/elasticsearch/jvm.options) has the following entries:

If the following commands fail, it might be because some virtual servers do not allow such kernel changes:
sysctl -w vm.max_map_count=262144
sysctl: permission denied on key 'vm.max_map_count'
echo 262144 > /proc/sys/vm/max_map_count
-bash: /proc/sys/vm/max_map_count: Permission denied

Elasticsearch should still run, but it may be limited in performance and may have other issues because of these limitations.
There are no known remedies for this on Strato VM servers.

If you see the line:
[WARN ][i.n.u.i.MacAddressUtil ] Failed to find a usable hardware address from the network interfaces; using random bytes: ……..

No need to worry, the accuracy of the MAC address is not so important in this installation.

If you see the line:
[WARN ][o.e.b.BootstrapChecks ] [wJdCtOd] system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
If this problem occurs, Elasticsearch will start but will not get initialised properly and will most likely not function properly.

If Elasticsearch is accessed only in a protected environment, disabling system call filters should be no problem. Edit the file /etc/elasticsearch/elasticsearch.yml and add the following line:
bootstrap.system_call_filter: false
Restart elasticsearch:
service elasticsearch restart


X-Pack for elasticsearch

X-Pack is an Elastic Stack extension that bundles security, alerting, monitoring, reporting, machine learning, and graph capabilities into one easy-to-install package.

/usr/share/elasticsearch/bin/elasticsearch-plugin install x-pack

-> Downloading x-pack from elastic
[=================================================] 100%
@ WARNING: plugin requires additional permissions @
* \\.\pipe\* read,write
* java.lang.RuntimePermission
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* createPolicy.JavaPolicy
* getPolicy
* putProviderProperty.BC
* setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission write
* setHostnameVerifier
See //
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
@ WARNING: plugin forks a native controller @
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
-> Installed x-pack


Install kibana package
apt install kibana
Install X-Pack for Kibana
/usr/share/kibana/bin/kibana-plugin install x-pack
Change the built-in users’ passwords
Ref: //

curl -XPUT 'localhost:9200/_xpack/security/user/elastic/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "elasticpassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/kibana/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "kibanapassword"
}
'

curl -XPUT 'localhost:9200/_xpack/security/user/logstash_system/_password?pretty' -H 'Content-Type: application/json' -d'
{
  "password": "logstashpassword"
}
'

Update the Kibana server with the new password in /etc/kibana/kibana.yml:
elasticsearch.password: kibanapassword
Update the Logstash configuration with the new password in /etc/logstash/logstash.yml:
xpack.monitoring.elasticsearch.password: logstashpassword
Disable the default password functionality in /etc/elasticsearch/elasticsearch.yml: false

Start/Stop/Restart kibana
service kibana {start|stop|restart}

04 Jul 17 TCP Proxying using socat

Lately I’ve had to create a pure bidirectional TCP proxy for a project. There are lots of alternatives for this, such as haproxy, nginx, nc (netcat), socat and others. Because of the simplicity of the command I decided to use socat, but I will show the nc variant as well.

The NC method:
The following commands use a named pipe (FIFO) to transport the data in both directions. Only one client can be connected at a time.
cd /var/tmp
mkfifo fifo &>/dev/null
/bin/nc -l -p $frontend_port -s $frontend_addr <fifo | /bin/nc $backend_addr $backend_port >fifo

The SOCAT method (best!):
Note: this method runs the command in a screen session, but that isn’t needed if the process only has to run temporarily.
/usr/bin/screen -d -m /usr/bin/socat -d -d -lmlocal2 \
TCP4-LISTEN:$frontend_port,bind=$frontend_addr,reuseaddr,fork,su=daemon \
TCP4:$backend_addr:$backend_port

23 May 17 Disabling the admin security password confirmation in Jira and Confluence

In Jira and Confluence, WebSudo (requesting confirmation of the administrator’s password) is a neat security feature if you work in a company where the chances of someone fiddling around with your computer are high. BUT in a very small company, where this risk is almost none, this feature has proven very annoying for me. So I did some research on how to disable it in both Jira and Confluence.

Jira Version: 7.x
Confluence: 6.x


In Jira:

– Edit the file /opt/atlassian/jira/atlassian-jira/WEB-INF/classes/jpm.xml
– Look for the property and set all its values to true as follows:

In Confluence

– Edit the file /opt/atlassian/confluence/bin/
– Close to the end, where there is a list of multiple components of the CATALINA_OPTS variable

– Add the following line after this list but before the line: export CATALINA_OPTS
CATALINA_OPTS="-Dpassword.confirmation.disabled=true ${CATALINA_OPTS}"

Note: After these changes Jira and Confluence need to be restarted as follows:
service jira stop
service confluence stop
service jira start
service confluence start