MAC OS X, Linux, Windows and other IT Tips and Tricks

27 Apr 16 Enabling SPDY and Strict-Transport-Security in NginX on Ubuntu 14.04

In Ubuntu 14.04, NginX is compiled with SPDY support. To use it, enable it inside the server {…} block of each virtual host.
eg.
server {
server_name mprofi.com www.mprofi.com;
root /var/www/mprofi.com;
index index.php;
#
# Added to handle HTTP and HTTPS and SPDY
listen 80;
listen 443 ssl spdy;
ssl_certificate /etc/letsencrypt/live/www.mysite.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.mysite.com/privkey.pem;
#
# ENABLE STRICT TRANSPORT SECURITY and X-Frame-Options
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options "DENY";
}

Restart NginX
service nginx restart
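To quickly verify that the new headers are actually being sent, something like the following can be used (a minimal sketch; replace www.mysite.com with your own host name):
# Fetch only the response headers over HTTPS and look for the two added headers
curl -sI https://www.mysite.com/ | egrep -i 'strict-transport-security|x-frame-options'
# Optionally check whether SPDY is advertised via NPN (OpenSSL 1.0.x syntax)
echo | openssl s_client -connect www.mysite.com:443 -nextprotoneg '' 2>/dev/null | grep -i spdy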

21 Mar 16 Redirecting HTTP to HTTPS in NginX

Here is a working method of redirecting any requested HTTP URL to HTTPS in an NginX virtual host that handles both HTTP and HTTPS.
For example, to have a single vhost support both HTTP and HTTPS you would normally have the following directives:
# Support for HTTP and HTTPS
listen 80;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/www.myserver.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/www.myserver.com/privkey.pem;

Then to redirect all HTTP requests to HTTPS within this vhost without creating infinite loops you add the following redirect:
if ($scheme != "https") {rewrite ^ https://$host$request_uri permanent;}
Other methods can be seen here:
http://serverfault.com/questions/67316/in-nginx-how-can-i-rewrite-all-http-requests-to-https-while-maintaining-sub-dom
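To confirm that the redirect behaves as expected and does not loop, a quick test from the command line can help (a sketch, assuming the vhost above answers for www.myserver.com):
# The plain HTTP request should return a 301 with a Location: header pointing to HTTPS
curl -sI http://www.myserver.com/some/page | egrep -i '^HTTP|^Location'
# The same request over HTTPS should be answered directly, without a further redirect
curl -skI https://www.myserver.com/some/page | head -1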

17 Dec 15 Issue free and CA signed SSL certificates for web servers from LetsEncrypt

Introduction:
SSL Certificates provide two functions:
1. Authentication
2. Encryption

Encryption can be achieved without authentication but, for some reason, the two were joined together in one certificate. It seems to make sense for banks and serious e-commerce sites, which need to be properly authenticated. So when the HTTPS protocol was developed it was not possible to merely encrypt the HTTP stream, and we became dependent on Certificate Authorities to obtain a certificate even when we only wanted encryption. Now a group of clever people at https://letsencrypt.org/ has finally made it possible to obtain certificates through a simple domain-validation check: their server calls a URL on your site, expects a specific response, and if successful issues a free, CA-signed SSL certificate valid for 90 days. For system administrators the process of requesting and installing such a free certificate has therefore become quite simple. Here is one method of doing just that on a Debian/Ubuntu web server.
References: http://www.admin-magazine.com/Articles/Getting-a-free-TLS-certificate-from-Let-s-Encrypt?utm_source=ADMIN+Newsletter&utm_campaign=ADMIN_Update_Free_Certificates_with_Let%27s_Encrypt_2016-20-07&utm_medium=email

STEPS:

Installing LetsEncrypt

apt-get update && apt-get install git
cd /usr/local/lib/
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto --email user@mydomain.com --agree-tos --help
echo "export PATH=$PATH:/usr/local/lib/letsencrypt" >> /root/.bashrc
. /root/.bashrc

NOTE: Make sure the web site you want to add HTTPS to is already configured and live on your web server.
The reason is that during the certificate request LetsEncrypt creates an extra sub-directory ({htdocs}/.well-known/acme-challenge/) and a special temporary file in the htdocs of the site (pointed to by the DocumentRoot directive in Apache), then calls that file on the site from the LetsEncrypt server to authenticate the URL. If the URL called is invalid, no certificate is issued. For this reason the site needs to be live and you need to give the path of its htdocs. After the authentication process the temporary file is erased, but the sub-directories stay behind, empty.
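Since the challenge is nothing more than an HTTP request to a file below the htdocs, it can be worth testing beforehand that such a file is reachable from outside. A minimal sketch (the paths and the file name test.txt are only illustrations):
# Create a test file in the location LetsEncrypt will use for its challenge
mkdir -p /var/www/mysite/htdocs/.well-known/acme-challenge
echo "ok" > /var/www/mysite/htdocs/.well-known/acme-challenge/test.txt
# This must return "ok" over plain HTTP, otherwise the challenge will fail as well
curl -s http://www.mysite.com/.well-known/acme-challenge/test.txt
# Remove the test file again
rm /var/www/mysite/htdocs/.well-known/acme-challenge/test.txt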

Troubleshooting:

InsecurePlatformWarning
If you get the following error message, then in Debian Wheezy you can solve it by importing SSL into Python once, as shown below.
InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning

Importing Python SSL support:
python
>>> import ssl
>>> (CTRL-D)

Upgrading LetsEncrypt client program

rm -rf /usr/local/lib/letsencrypt.old &>/dev/null
mv /usr/local/lib/letsencrypt /usr/local/lib/letsencrypt.old
cd /usr/local/lib/
git clone https://github.com/letsencrypt/letsencrypt

Requesting the certificate

Eg. for the domain blog.mydomain.com
NOTE: on the first request the script will ask you for a contact email address and to accept the terms and conditions of using this tool. It will not ask these questions again afterwards.
letsencrypt-auto certonly --webroot -w /www/clients/blog.mydomain.com/htdocs -d blog.mydomain.com
The certificates and key will be stored in /etc/letsencrypt/live/blog.mydomain.com/ as:
cert.pem : Certificate
chain.pem : CA Certificate
privkey.pem : Private key
fullchain.pem : Combination of the certificate and the CA Certificate

Instead of moving the certificates, just configure Apache (or whichever web server you use) to point to the certificate files where they are.
This way a cron job can be created to renew the certificate regularly without manual intervention.
The certificate will be valid for 90 days only; no exceptions.
This means that the same command above will need to be run every 3 months or earlier, with the addition of the option --renew-by-default.
The current limit per domain is 5 certificates per 7 days.

Renewing single certificate:

In order to renew the certificate automatically it is suggested to use a cron job, adding the option --renew-by-default to the command, e.g. as follows:
letsencrypt-auto certonly --renew-by-default --webroot -w /www/clients/blog.mydomain.com/htdocs -d blog.mydomain.com

Renewing all installed Letsencrypt certificates:

letsencrypt-auto renew
Note: It is recommended to send the output of the command by email to verify if the process was successful.
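A possible cron entry for this (only a sketch; it assumes the client lives in /usr/local/lib/letsencrypt as installed above and that admin@mydomain.com is a valid mailbox):
# /etc/cron.d/letsencrypt-renew: runs twice a month; cron mails the output to MAILTO
MAILTO=admin@mydomain.com
30 3 1,15 * * root /usr/local/lib/letsencrypt/letsencrypt-auto renew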

Extra Info

The LetsEncrypt certificates are stored under /etc/letsencrypt/ in several directories. It is simply NOT recommended to delete individual certificates, files or symlinks in these directories, because the files in the 'keys' and 'csr' directories are not labelled in a way that ties them to a specific certificate. Deleting some files but not others belonging to the same certificate can confuse the client command, after which you cannot request any more certificates. The error message from the client program is then something like:
letsencrypt TypeError: coercing to Unicode: need string or buffer, NoneType found
If you ever reach that point of no return, just delete all the directories archive, csr, keys, live and renewal, BUT not accounts. Then re-issue certificate requests for the already existing sites. Those certificates will be renewed and you can then also request new ones.

For more information of the subject see:
https://letsencrypt.readthedocs.org/en/latest/using.html

A convenient script

If you want to issue a certificate and have it renew itself after 80 days, this script might be of some use.
#!/bin/bash
# Purpose: Issue or renew a certificate from LetsEncrypt
# It also schedules an 'at' job which renews the certificate automatically
# in RENEW_DAYS days (about 3 months, depending on the settings below); that job re-schedules itself the same way
# Syntax: cert_request.sh -s SITE_NAME -d SITE_HTDOCS
# Changes: 30.12.2015 First implementation of the script
# 10.01.2016 Took out the read of wpinstall.cfg config file. Added checks for the letsencrypt and at programs
#--------------------------------------------------------------
. /root/.bashrc
RENEW_DAYS="80"
# Absolute path to this script.
SCRIPT=$(readlink -f $0)
CERTS_DIR="/etc/letsencrypt/live"
# Absolute path this script is in.
scriptdir=$(dirname $SCRIPT)
encryptprgm="/usr/local/lib/letsencrypt/letsencrypt-auto"
atprgm="/usr/bin/at"
EMAIL="admin@itmatrix.eu"

# Check the syntax
function usage () {
echo "Usage: cert_request.sh -s SITE_NAME -d SITE_HTDOCS"
echo "-s SITE_NAME Full web site address WITHOUT the 'http://' eg.: www.myblog.com"
echo "-d SITE_HTDOCS The absolute path where WordPress will be installed. eg. /www/sites/www.mysite.com/htdocs"
exit 1
}

if [ $# -ne 4 ]; then
echo "ERROR: Wrong number of given argunents."
usage
fi

# Make sure the letsencrypt client prgm is installed
if ! [ -e $encryptprgm ] ; then
echo "ERROR: the letsencrypt program isn not installed. Install it and retry."
echo "See instructions at: //tipstricks.itmatrix.eu/install-new-and-signed-ssl-certificate-for-web-servers"
exit 1
fi

# Make sure the at is installed
if ! [ -e $atprgm ] ; then
echo "ERROR: the 'AT' program isn not installed. Install it and retry."
echo "apt-get install at"
exit 1
fi

# Everything looks good so far. Let's start.

# get the command options
while getopts "s:d:" OPTION
do
case $OPTION in
s) SITE_NAME=$OPTARG
;;
d) SITE_HTDOCS=$OPTARG
;;
h|?|*)
echo "ERROR: argument(s) unknown."
usage
;;
esac
done

echo "Requesting certificate at LetsEncrypt"
# If the certificate already exists, only renew it; otherwise request a new one
if [ -d $CERTS_DIR/${SITE_NAME} ] ; then
echo "The certificate already exists. Requesting a renewal"
RENEW="--renew-by-default"
else
RENEW=""
fi

if ($encryptprgm certonly $RENEW --webroot -w $SITE_HTDOCS -d ${SITE_NAME} &>/dev/null); then
# Enable the Apache SSL configuration and restart Apache
(echo "Certificate request successful."
echo "Issuing a renewal of the certificate in 80 days using 'at' command"
service apache2 restart
echo "$SCRIPT -s $SITE_NAME -d $SITE_HTDOCS" | $atprgm now + $RENEW_DAYS days)| tee /tmp/cert_request.sh.log \
| mail -s "Request/Renewal of Certificate for $SITE_NAME" admin@itmatrix.eu
echo -e "------- SITES LIST --------\n$(ls -1 /etc/apache2/sites-enabled/ | egrep -v '^00|^wptest1')\n\n--------- CERTIFICATES LIST ---------$(ls -l /etc/letsencrypt/live/ | cut -c29-)\n\n------- AT Jobs LIST -------\n$(/root/bin/atlist.sh)" | mail -s "Request/Renewal of Certificate request LIST" $EMAIL
cat /tmp/cert_request.sh.log
exit 0
else
(echo "ERROR: The certificate request/renewal FAILED.")| tee /tmp/cert_request.sh.log \
| mail -s "Request/Renewal of Certificate for $SITE_NAME" $EMAIL
cat /tmp/cert_request.sh.log
exit 2
fi
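A typical invocation, assuming the script was saved as /root/bin/cert_request.sh (the site name and path are only examples):
# Request (or renew) a certificate and let the script schedule its own next run via 'at'
/root/bin/cert_request.sh -s blog.mydomain.com -d /www/clients/blog.mydomain.com/htdocs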

Status of the certificates

Here is a useful script that will display the following information:
– List of AT Jobs ready to start at the required time
– List of present certificates and their file timestamps
Since each LetsEncrypt certificate is only valid for 90 days, this gives you an overview of how old the present certificates are and when the next certificate requests will be made.
#!/bin/bash
# Description: Displays all 'at' jobs and their respective commands
# Syntax: atlist.sh
# Changes: 05.11.2016 First implementation
########################################################################
# Get the short jobs list and expand from there
echo "================ AT Jobs ready to start at the required times ==============="
atq | while read line ; do
jobnr=$(echo $line | awk '{print $1}')
echo $line
# Pickup all the command lines after first line matching '}'.
# This excludes all the environment variables and the at exit line.
# Exclude the matching '}' line and empty lines
# Add an offset of 8 chars to each command line.
# at -c $jobnr | grep -A100 -m1 -e '^\}' | grep -v '^\}' | sed -e '/^$/d' -e 's/^/ /'
at -c $jobnr | sed -e '1,/^\}/d' -e '/^$/d' -e 's/^/ /'
done
echo ; echo
echo "=============== Age of present certificates ====================="
ls -l /etc/letsencrypt/live/*/cert.pem | awk '{print $6" "$7" "$8" "$9}' | sed -e 's|/etc/letsencrypt/live/||' -e 's|/cert.pem||'

10 Dec 15 Creating a web certificate CSR file.

The process of buying an SSL certificate for a web site is usually as follows:
– You create a secret key and a CSR file using the method shown in this post.
– You cut and paste the content of the CSR file into a field on the SSL vendor's web site.
– The SSL vendor produces a certificate based on the CSR you provided and sends it to you.
– You download the CA certificate from the SSL provider's site.
– You install the private key file, the CA certificate and the certificate in the web server, and Bob's your uncle.

The following procedure is an extract from the site:
https://support.globalsign.com/customer/portal/articles/1221018-generate-csr—openssl
Generate a CSR & Private Key:
openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout privatekey.key

Fill out the following fields as prompted:
Note: The following characters can not be accepted: <> ~ ! @ # $ % ^ * / \ ( ) ?.,&


Field Example
============ ==========================================
Country Name US (2 Letter Code)
State or Province New Hampshire (Full State Name)
Locality Portsmouth (Full City name)
Organization GMO GlobalSign Inc (Entity's Legal Name)
Organizational Unit Support (Optional, e.g. a department)
Common Name         www.globalsign.com (Domain or Entity name)
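Before pasting the CSR into the vendor's form it is worth double-checking it (a small sketch using the file names from the command above):
# Show the subject that will end up in the certificate
openssl req -in CSR.csr -noout -subject
# Verify that the private key and the CSR belong together (both hashes must be identical)
openssl rsa -in privatekey.key -noout -modulus | openssl md5
openssl req -in CSR.csr -noout -modulus | openssl md5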

06 Nov 15 Configuring HAproxy load balancer in Ubuntu 14.04

Goal:
In this example, HTTP requests are proxied directly as HTTP requests to the HTTP web servers. HTTPS requests are terminated by HAproxy, which holds the certificates, and are then proxied to the web servers as plain HTTP requests.

SSL Certificates:
The certificates for all virtual hosts being proxied are stored as one PEM-format file per certificate/key combination in the directory:
/etc/ssl/private/
The CAs are also stored as one PEM format file per CA in the directory:
/etc/ssl/certs/

Steps:
Install HAproxy:
apt-get update && apt-get install haproxy

Configure HAproxy for HTTP and HTTPS load-balancing:

Edit the file /etc/haproxy/haproxy.cfg
Content:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
#
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
#
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3
tune.ssl.default-dh-param 2048
#
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
# Added to create separate error and access logs
option log-separate-errors
#
# ------- HTTP Frontend --------------
frontend http_in
bind *:80
mode http
reqadd X-Forwarded-Proto:\ http
default_backend http_out
#
# ------- HTTPS Frontend --------------
frontend https_glwp-in
bind *:443 ssl crt /etc/ssl/haproxy_certs/
mode http
reqadd X-Forwarded-Proto:\ https
default_backend http_out
#
#------------------------------------
listen stats :2000
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /stats
stats auth admin:mypasswd
#
# ------- HTTP Backend --------------
backend http_out
balance roundrobin
stick-table type ip size 200k expire 60m
stick on src
option forwardfor
option httpclose
http-request set-header X-Forwarded-Port %[dst_port]
option httpchk HEAD /
server web1 webserv1.mynet.net:80 check
server web2 webserv2.mynet.net:80 check
server web3 webserv3.mynet.net:80 check
server web4 webserv4.mynet.net:80 check
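After editing haproxy.cfg, the configuration can be validated and activated as follows (the -c flag only checks the syntax and exits):
haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy restart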

Preserving the source IP of client in TCP Proxying

In the above examples the protocols being load-balanced are application protocols, where the source IP can be retained by reading it from the X-Forwarded-For: HTTP header (added by the option 'option forwardfor'). If you use HAProxy as a TCP-layer load balancer, however, retaining the source IP (the client's IP) requires the approach described in this article: http://blog.haproxy.com/2012/06/05/preserve-source-ip-address-despite-reverse-proxies/
It is a tiny bit complex to understand and implement, especially on the backend server. I have not tried it yet, so I can't guarantee its validity and therefore can't give any examples. From what I understand, the only changes needed to the TCP proxying directives (not explained here) are the following two requirements:
1) The HAProxy backend configuration includes the extra entry: source 0.0.0.0 usesrc clientip
2) The backend server's network settings need to be configured with the HAProxy host's IP address as the default gateway.

This way the backend server sees the source IP of the client as if the client connected directly to the backend server and the responses from the backend server are returned via the HAProxy Host.
To be continued soon with practical examples …..

Happy load-balancing πŸ™‚

17 Oct 15 Installing pure-ftpd in Debian/Ubuntu

Difficulty with FTP servers and firewall:
If you configure a firewall for a host which runs an FTP server, you normally need to leave the port range 1024-65535 open, since you never know which port the FTP server will use to send data to the FTP client. This is quite critical if the host has sensitive ports above 1024 which need to be closed to the Internet. Of course you can select each such port and close it in the firewall individually, but I definitely prefer the firewall approach that closes everything and opens only the ports that need to be reachable from the Internet. This is where pure-ftpd comes to the rescue: it can restrict the range of ports used for transferring data to the FTP client, which makes the firewall configuration much easier.

In the following example, pure-ftpd has the following configuration:
– provides FTP and FTPS with jailed users (users are confined to their home directory)
– no anonymous clients
– IP Version 4 only
– ports for data transfer are limited to the range 20000-20099
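With the data ports pinned to 20000-20099 the matching firewall rules stay short. A minimal iptables sketch (it assumes a default-deny INPUT policy is managed elsewhere):
# FTP control channel
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
# Passive data channel, restricted to the range configured in the steps below
iptables -A INPUT -p tcp --dport 20000:20099 -j ACCEPT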

STEPS:
apt-get install pure-ftpd
echo '20000 20099' > /etc/pure-ftpd/conf/PassivePortRange
echo "yes" > /etc/pure-ftpd/conf/NoAnonymous
echo "yes" > /etc/pure-ftpd/conf/ChrootEveryone
echo "yes" > /etc/pure-ftpd/conf/IPV4Only
echo "1" > /etc/pure-ftpd/conf/TLS

If you want to force clients to use TLS only for FTP connections then use the command
echo "3" > /etc/pure-ftpd/conf/TLS

Exceptions to chroot
If you want to confine all users to their home directories EXCEPT some trusted users, you need to:
– create a new system group and add the trusted users to it
– instead of using the above command 'echo "yes" > /etc/pure-ftpd/conf/ChrootEveryone',
insert the GID of the trusted group into the file /etc/pure-ftpd/conf/TrustedGID.
Example: we want chroot for all users except 'martin' and 'jannine', meaning martin and jannine will be able to navigate to other parts of the system, while all other users stay confined to their home directories:
groupadd ftptrusted
usermod -aG ftptrusted martin
usermod -aG ftptrusted jannine
GID=$(grep ftptrusted /etc/group | cut -d: -f3)
echo "$GID" > /etc/pure-ftpd/conf/TrustedGID
rm /etc/pure-ftpd/conf/ChrootEveryone

NOTE: To use a properly CA-signed certificate with pure-ftpd, make sure the file /etc/ssl/private/pure-ftpd.pem contains both of the following components:
– Private key (in PEM format)
– Certificate (in PEM format)

If instead you want to run it with a self-signed certificate then run the following commands:
mkdir -p /etc/ssl/private/
openssl req -x509 -nodes -days 97300 -newkey rsa:2048 -keyout /etc/ssl/private/pure-ftpd.pem -out /etc/ssl/private/pure-ftpd.pem
chmod 600 /etc/ssl/private/pure-ftpd.pem

Restart pure-ftpd to register the new configuration and certificate.
service pure-ftpd restart

26 Aug 15 Fine tune Ubuntu TCP stack for web server

The following tips, taken from the site below, will help reduce the TCP latency of Ubuntu as a web server:
http://www.cyberciti.biz/faq/linux-tcp-tuning/

13 Jul 15 Installing NginX 1.9.2 in Ubuntu server 14.04.2 LTS

Since the version of NginX in Ubuntu Server 14.04.2 is only 1.4.6, we need to tell APT to install the more recent version of nginx directly from the NginX maintainer.
Steps:
Add the following lines to /etc/apt/sources.list:
deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx

From your server, download the signing key and add it to the APT keyring with the following commands:
cd /etc/apt
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key

Install NginX:
apt-get update
apt-get install nginx

Verify its version:
/usr/sbin/nginx -V
nginx version: nginx/1.9.2
You’re off and running

07 May 15 TCP Load balancing email/web servers with NginX

I have two synchronized email servers running and, in order to avoid having to change the server name in my mail clients should one server go down, I was looking for a straight TCP-layer load balancer. There are a few software packages on the market that can do this, e.g. LVS-KISS. I found the NginX solution quite interesting, since its load balancing seems better built, especially regarding the return route, which gave me some headaches with LVS-KISS. Here is an example of using NginX TCP load balancing in front of 2 IMAPS/SMTPS/Webmail servers.

Note: Unfortunately, although my IMAPS/SMTPS servers use legitimate certificates, the email clients connect to the address of an extra host (the load balancer), so they declare the certificate invalid. This TCP load balancing operates at a lower layer (TCP) than the application layer where SSL certificates are used for authentication, so it is not possible to attach a certificate to the load balancer itself. For production use of mail load balancing it is therefore recommended to use a wildcard certificate on both back-end mail servers. For HTTP and HTTPS there is no need for a wildcard certificate; normal certificates installed on both web servers will do.

NginX shortcomings


NginX used to offer the TCP load-balancing feature, together with extras such as backend health checks, only in its commercial version.
Since version 1.9 a limited version of the TCP load-balancing feature has been included in the community version.
Unfortunately, if you also need commercial features like backend health checks you are out of luck.
Fortunately some third parties have created a patch for earlier versions which implements the necessary features of a good low-level TCP load balancer. In this tutorial I will show how to compile and patch an earlier version of NginX to obtain such a TCP load balancer.

Steps


Login as root and run the following commands:
apt-get remove nginx
apt-get install libssl-dev build-essential git
mkdir ~/build ; cd ~/build
wget -O - http://nginx.org/download/nginx-1.6.2.tar.gz | tar xfvz -
git clone git://github.com/yaoweibin/nginx_tcp_proxy_module
cd nginx-1.6.2/
patch -p1 < ../nginx_tcp_proxy_module/tcp.patch
./configure --add-module=../nginx_tcp_proxy_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_rewrite_module --without-http_charset_module --without-http_gzip_module --without-http_ssi_module --without-http_userid_module --without-http_access_module --without-http_auth_basic_module --without-http_autoindex_module --without-http_geo_module --without-http_map_module --without-http_split_clients_module --without-http_referer_module --without-http_proxy_module --without-http_fastcgi_module --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-http_limit_conn_module --without-http_limit_req_module --without-http_empty_gif_module --without-http_browser_module --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
make && make install
mkdir -p /usr/share/nginx/logs
mkdir /var/log/nginx
mkdir -p /var/lib/nginx/body
touch /var/log/nginx/error.log /var/log/nginx/access.log
touch /usr/share/nginx/logs/tcp_access.log
chown -R www-data: /var/{lib,log}/nginx /usr/share/nginx/logs

Check its version:
/usr/share/nginx/sbin/nginx -V
Create an init start/stop script
touch /etc/init.d/nginx
chmod 755 /etc/init.d/nginx
mcedit /etc/init.d/nginx

Content:
#!/bin/sh
### BEGIN INIT INFO
# Provides: nginx
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts the nginx web server
# Description: starts nginx using start-stop-daemon
### END INIT INFO
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/share/nginx/sbin/nginx
NAME=nginx
DESC=nginx
# Include nginx defaults if available
if [ -f /etc/default/nginx ]; then
. /etc/default/nginx
fi
test -x $DAEMON || exit 0
set -e
. /lib/lsb/init-functions
test_nginx_config() {
if $DAEMON -t $DAEMON_OPTS >/dev/null 2>&1; then
return 0
else
$DAEMON -t $DAEMON_OPTS
return $?
fi
}
case "$1" in
start)
echo -n "Starting $DESC: "
test_nginx_config
# Check if the ULIMIT is set in /etc/default/nginx
if [ -n "$ULIMIT" ]; then
# Set the ulimits
ulimit $ULIMIT
fi
start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
;;
restart|force-reload)
echo -n "Restarting $DESC: "
start-stop-daemon --stop --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON || true
sleep 1
test_nginx_config
# Check if the ULIMIT is set in /etc/default/nginx
if [ -n "$ULIMIT" ]; then
# Set the ulimits
ulimit $ULIMIT
fi
start-stop-daemon --start --quiet --pidfile \
/var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
test_nginx_config
start-stop-daemon --stop --signal HUP --quiet --pidfile /var/run/$NAME.pid \
--exec $DAEMON || true
echo "$NAME."
;;
configtest|testconfig)
echo -n "Testing $DESC configuration: "
if test_nginx_config; then
echo "$NAME."
else
exit $?
fi
;;
status)
status_of_proc -p /var/run/$NAME.pid "$DAEMON" nginx && exit 0 || exit $?
;;
*)
echo "Usage: $NAME {start|stop|restart|reload|force-reload|status|configtest}" >&2
exit 1
;;
esac
exit 0
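To have NginX started automatically at boot, register the init script (assuming it was saved as /etc/init.d/nginx as above):
update-rc.d nginx defaults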

The NginX configuration below TCP-load-balances the following ports: 143, 993, 587, 465, 80 & 443.

Configuration:
To use only the TCP load balancing feature of NginX we configure the strict minimum:
Rename the created configuration file
mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
Create the new configuration file with the following content:
mcedit /etc/nginx/nginx.conf
Content:
user www-data;
worker_processes 1;
events {
worker_connections 1024;
}
# ---------- TCP Load balancer for IMAPs, IMAP and SMTPS -----------------
tcp {
# ----------------- IMAPs ------------------
upstream cluster_imaps {
server mail2.itmatrix.eu:993;
server mail3.itmatrix.eu:993;
check interval=5000 rise=2 fall=5 timeout=2000 type=tcp;
ip_hash;
}
server {
listen 993;
proxy_pass cluster_imaps;
}
# ----------------- IMAP -------------------
upstream cluster_imap {
server mail2.itmatrix.eu:143;
server mail3.itmatrix.eu:143;
check interval=5000 rise=2 fall=5 timeout=2000 type=imap;
ip_hash;
}
server {
listen 143;
proxy_pass cluster_imap;
}
# ----------------- SMTP -------------------
upstream cluster_smtp {
server mail2.itmatrix.eu:587;
server mail3.itmatrix.eu:587;
check interval=5000 rise=2 fall=5 timeout=2000 type=smtp;
ip_hash;
}
server {
listen 587;
proxy_pass cluster_smtp;
}
# ----------------- SMTP -------------------
upstream cluster_smtps {
server mail2.itmatrix.eu:465;
server mail3.itmatrix.eu:465;
check interval=5000 rise=2 fall=5 timeout=2000 type=smtp;
ip_hash;
}
server {
listen 465;
proxy_pass cluster_smtps;
}
# ----------------- HTTP -------------------
upstream cluster_http {
server mail2.itmatrix.eu:80;
server mail3.itmatrix.eu:80;
check interval=5000 rise=2 fall=5 timeout=2000 type=tcp;
ip_hash;
}
server {
listen 80;
proxy_pass cluster_http;
}
# ----------------- HTTPS -------------------
upstream cluster_https {
server mail2.itmatrix.eu:443;
server mail3.itmatrix.eu:443;
check interval=5000 rise=2 fall=5 timeout=2000 type=tcp;
ip_hash;
}
server {
listen 443;
proxy_pass cluster_https;
}
#--------------- End of TCP Block --------------
}

Start NginX server:
service nginx start
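A quick way to check that the load balancer is listening on all the expected ports (a sketch; netstat may require the net-tools package):
netstat -tlnp | egrep ':(143|993|587|465|80|443) '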

Some explanations of this NginX configuration


UPSTREAM OPTION: check
========================
interval=5000 # Interval in ms between alive-checks of the back-end servers (here 5 s)
rise=2 # How many alive-checks must succeed before the server is considered ON-line
fall=5 # How many alive-checks must fail before the server is considered OFF-line
timeout=2000 # Timeout in ms for alive-check responses (here 2 s)
type=imap; # Type of alive-check

STICKY Sessions
ip_hash; # Based on client IP

An overview of the check types, taken from the module's Git repository:
1. tcp: a simple TCP socket connect that peeks one byte.
2. ssl_hello: sends a client SSL hello packet and expects the server SSL hello packet in return.
3. http: sends an HTTP request, then receives and parses the HTTP response to diagnose whether the upstream server is alive.
4. smtp: sends an SMTP request, then receives and parses the SMTP response; a response beginning with '2' is considered OK.
5. mysql: connects to the MySQL server and receives the greeting response to diagnose whether the upstream server is alive.
6. pop3: receives and parses the POP3 response; a response beginning with '+' is considered OK.
7. imap: connects to the IMAP server and receives the greeting response to diagnose whether the upstream server is alive.

12 Dec 14 Verifying an SSL certificate chain

In order to see if an SSL web site has the proper SSL Certificate chain, this simple command can help:
echo "" | openssl s_client -showcerts -servername web.site.com -connect web.site.com:443 -CApath /etc/ssl/certs/
Example:
echo " " | openssl s_client -showcerts -servername tipstricks.itmatrix.eu -connect tipstricks.itmatrix.eu:443 -CApath /etc/ssl/certs
Result (most important extract from the full output):
CONNECTED(00000003)
depth=2 C = SE, O = AddTrust AB, OU = AddTrust External TTP Network, CN = AddTrust External CA Root
verify return:1
depth=1 C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN = PositiveSSL CA 2
verify return:1
depth=0 OU = Domain Control Validated, OU = PositiveSSL Wildcard, CN = *.itmatrix.eu
verify return:1
---
Certificate chain
0 s:/OU=Domain Control Validated/OU=PositiveSSL Wildcard/CN=*.itmatrix.eu
i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=PositiveSSL CA 2
1 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=PositiveSSL CA 2
i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
2 s:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
---
...............

Explanation:
As seen in the above chain-check result, each certificate in the chain has an Issuer (i:) line and a Subject (s:) line. For a full, valid certificate chain, the Issuer (i:) line of each certificate must match the Subject (s:) line of the next certificate in the list (one depth higher), and the last one (the root certificate) has identical Issuer and Subject lines.
Same example again:
0 s:/OU=Domain Control Validated/OU=PositiveSSL Wildcard/CN=*.itmatrix.eu
i:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=PositiveSSL CA 2
.
1 s:/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=PositiveSSL CA 2
i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
.
2 s:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root
i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root

Verifying the certificate chain using certificate files only (no web request)


Commands using openssl with the certificate and CA files stored locally can also be used to verify the certificate chain. One possibility is the openssl 'verify' command, as follows:
openssl verify -verbose -purpose sslserver -CAfile {CA_bundlefile.pem} {signed_certificate.pem}

Example:
openssl verify -verbose -purpose sslserver -CAfile Symantec_CA_G4_Bundle.pem my.certificate.com.CRT.pem
The results will be:
If it FAILS:
my.certificate.com.CRT.pem: C = DE, ST = Berlin, L = Berlin, O = my-company, OU = IT DEP, CN = my.company.com
error 20 at 0 depth lookup:unable to get local issuer certificate

OR if it succeeds:
my.certificate.com.CRT.pem: OK

Important note:
In case the CA bundle file contains more than one CA certificate, they must be ordered from the issuing (intermediate) CA down to the root CA, i.e. with the root CA at the bottom of the file:
eg.
-----BEGIN CERTIFICATE-----
......Intermediate CA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
......Root CA
-----END CERTIFICATE-----
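Building such a bundle in the right order is then just a matter of concatenation (a sketch with illustrative file names):
# Issuing (intermediate) CA first, root CA last
cat intermediate_CA.pem root_CA.pem > CA_bundlefile.pem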

Another method is to run this script:


CA_Chain_check.sh {CA_bundlefile.pem} {signed_certificate.pem}
eg.
CA_Chain_check.sh Symantec_CA_G4_Bundle.pem my.certificate.com.CRT.pem
Content of the script CA_Chain_check.sh:
#!/bin/bash
# Displays the issuers and Subject of each certificate file
for file in $@ ; do
openssl x509 -in $file -noout -text | egrep 'Issuer:|Subject:'
done

In order for the CA chain validation to succeed, the Subject: line of the first certificate (from the CA bundle) must match the Issuer: line of the second (the web certificate).
eg.
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 2006 VeriSign, Inc. - For authorized use only, CN=VeriSign Class 3 Public Primary Certification Authority - G5
Subject: C=US, O=Symantec Corporation, OU=Symantec Trust Network, CN=Symantec Class 3 Secure Server CA - G4
Issuer: C=US, O=Symantec Corporation, OU=Symantec Trust Network, CN=Symantec Class 3 Secure Server CA - G4
Subject: C=DE, ST=Berlin, L=Berlin, O = my-company, OU = IT DEP, CN = my.company.com