MAC OS X, Linux, Windows and other IT Tips and Tricks

09 Aug 19 Working with UFW and Docker


I’m not really sure how widely known this DANGEROUS situation is, but I will try to explain it here a bit, even though I’m a specialist in neither Docker, nor iptables, nor UFW.

If you install Docker on a Linux system, for example Ubuntu 18.x, and want to use UFW as a firewall to block Internet access to internal ports exposed by Docker containers, you’re in for a bad surprise. Docker inserts its own firewall rules, which are evaluated before the normal INPUT filter chains of iptables. The result is that all the ports published by containers running in Docker are reachable from the Internet, even if you blocked them in the UFW firewall. That is a very dangerous situation in which not-so-well-protected ports of Docker containers could easily get hacked.

Half Workaround:
Although the issue is discussed in depth in the following forum:
the quick and dirty solution is the following.
Note: Please read that forum first, since doing this might break some of the inner workings of your Docker containers.

Set DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw
Set DOCKER_OPTS="--iptables=false" in /etc/default/docker

On Ubuntu 18.04 things are different: Docker is started by systemd, so /etc/default/docker is ignored. The solution described here creates the file /etc/systemd/system/ with this content:
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=false

and issue systemctl daemon-reload afterwards.
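On Ubuntu 18.04 this can also be done with a systemd drop-in, which survives package upgrades better than editing the unit file directly. A minimal sketch, assuming the usual Docker unit name; the drop-in file name is my own choice, not from the original article:

```shell
# Create a drop-in directory for the Docker unit (the path follows the
# standard systemd convention; the file name 'noiptables.conf' is arbitrary):
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/noiptables.conf <<'EOF'
[Service]
# An override must clear ExecStart before redefining it:
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=false
EOF

# Reload systemd and restart Docker so the override takes effect:
systemctl daemon-reload
systemctl restart docker
```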

Note: If you know of a more elegant solution, please send it as a comment and I will be glad to include it here.

07 Aug 19 Installing ‘glances’ system overview

Although top, htop, etc. give a good overview of the system, here is another tool which is quite good: glances.
Install pip, upgrade it if needed, and install glances:

# apt install python-pip python3-pip
# pip install --upgrade pip
# pip install glances

25 Jul 19 Solution for MySQL/MariaDB Field ‘xxx’ doesn’t have a default value

Using phpMyAdmin I was trying to add a user, and this error kept coming up so that I could not add the user. After some research, here are two solutions:

The following articles are so good and helpful that as soon as I saw them on the Internet I wanted to copy them here. The first solution was taken from the following site.

Thanks for providing this short but great article.


In the process of migrating from MySQL 5.5 to MariaDB 5.5, I made a configuration mistake.

When I installed MariaDB with the default options, it had enabled the sql mode, “STRICT_TRANS_TABLES”.

It took me a while to realize that I needed to change the my.ini / my.cnf value for sql-mode to be the following:


My application is not designed for the STRICT_TRANS_TABLES behavior, which seems to require that a default value be explicitly set on every column, or insert/update/replace statements will fail. Rather than update the schema of every table, I chose to just change the sql-mode back to the behavior my app was designed for. The MySQL docs do say that some of the strict modes can decrease MySQL’s performance, so it doesn’t seem like much would be gained from trying to enable this – though you could have it on in the test environment but off in production.

I couldn’t find any resources online that posted this as a fix.

The mysql docs for STRICT_TRANS_TABLES state:

Strict mode controls how MySQL handles input values that are invalid or missing. A value can be invalid for several reasons. For example, it might have the wrong data type for the column, or it might be out of range. A value is missing when a new row to be inserted does not contain a value for a non-NULL column that has no explicit DEFAULT clause in its definition. (For a NULL column, NULL is inserted if the value is missing.)

This will also fix the same error when using “triggers”. I did find someone on the MariaDB mailing list citing this as a “bug”, but it seems like it’s just a matter of configuration / schema design.

I hope this helps someone else!
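As an illustration of that change (a sketch only; the exact sql-mode value the author used is not shown above), checking and relaxing the mode might look like this. The value below is an assumption, keeping NO_ENGINE_SUBSTITUTION while dropping STRICT_TRANS_TABLES:

```shell
# Show which SQL modes are currently active (requires a running server):
mysql -u root -p -e "SELECT @@GLOBAL.sql_mode;"

# Then, in my.ini / my.cnf under the [mysqld] section, a non-strict setting
# could look like this (assumption - adjust to the modes your app expects):
#   sql-mode = "NO_ENGINE_SUBSTITUTION"
# Restart MySQL/MariaDB afterwards for the change to take effect.
```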

Here is another solution taken from the following site:

The ‘SSL_cypher’ column in your table structure has been marked NOT NULL, but your ‘INSERT’ query isn’t providing a value for it. MySQL will try to assign the default value in these circumstances, but your column hasn’t been given one.

You need either to set a default value for ‘ssl_cypher’ or alter the table structure so that the ‘ssl_cypher’, ‘x509_issuer’ and ‘x509_subject’ columns are marked as NULL-able.
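A sketch of those two options as SQL, using a hypothetical table name (‘mytable’) and column type; only the column names come from the error discussed above, so adjust everything else to your own schema:

```shell
# Option 1: give the column an explicit default value (hypothetical table/type):
mysql -u root -p -e "ALTER TABLE mytable MODIFY ssl_cypher VARCHAR(255) NOT NULL DEFAULT '';"

# Option 2: allow NULL on the columns the INSERT doesn't fill:
mysql -u root -p -e "ALTER TABLE mytable MODIFY ssl_cypher VARCHAR(255) NULL;"
```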

22 Jul 19 GRUB on UEFI capable System

New PCs are often equipped with UEFI (Unified EFI) capable booting, and some SSD drives will not even boot Linux in legacy MBR mode. So in order to make the PC boot properly we need to:
– Create an EFI partition
– Install GRUB (the boot loader) on it

Create the EFI Partition:
Create an EFI capable boot partition of 200MB as the first partition, before installing the Debian/Ubuntu system:

# gdisk /dev/(boot disk)
From inside gdisk:

  • Accept to convert the MBR into a GPT partition table if asked
  • Create a new partition of 200MB
  • Set the type of the partition to EF00
  • Set the attribute of the partition with the ‘a’ command in the expert menu and enter the number 2 (legacy BIOS bootable)
  • Write the partition table with ‘w’ and exit gdisk

Then format the new partition:
# mkfs.vfat /dev/(EFI Partition)

  • Start a live system
  • Do the full installation, but the GRUB boot loader step might not succeed. Skip this step and do the following to properly install GRUB.

Installing GRUB on a EFI system

Start a live system from a live CD or a USB drive.
From inside the live system:

  • Login as root
  • Mount the installed system on /mnt
  • Chroot to /mnt:
    # for i in /dev /dev/pts /proc /sys /run; do mount -B $i /mnt$i; done; chroot /mnt
  • # . /root/.bashrc
  • # apt-get install grub-efi
  • # mkdir /boot/efi
  • # mount /dev/(EFI Partition) /boot/efi
  • # grub-install --target=x86_64-efi /dev/nvme0n1
  • # update-grub


22 Jul 19 Unassigning a software RAID member volume so it becomes a normal volume again

I had assigned the drive partition (/dev/sdb1) to a Linux software RAID disk group. Now I want to take this drive out of the software RAID group and use it as a normal drive. Since the drive has been assigned to a group, simply trying to use it as a normal drive by re-partitioning and formatting it won’t work. The disk first has to be taken out of the RAID pool; then it can be seen as a normal drive by Linux.

Take a look at the content of the software RAID drive pool to see if the drive is part of it:
# mdadm --detail /dev/md0
Make sure the RAID drive is not mounted:
# umount -l /dev/md0
Stop the RAID drive:
# mdadm --stop /dev/md0
Remove the RAID metadata from the desired drive:
# mdadm --zero-superblock /dev/sdb1
Take a look at the remaining volumes in the RAID pool:
# mdadm --detail /dev/md0

Now the volume /dev/sdb1 can be formatted and mounted as a normal volume.

Important note:
In the process of ‘gluing’ physical partitions into one logical volume, around 4% of the free space is lost to LVM management of the volumes. On top of that, when the logical drive gets formatted with ext4, an extra 5% of the space is lost to filesystem management. So, comparing the final usable free space for files and directories with the original raw space of the partitions, we get around 9% space loss (actually used for management).
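The note’s arithmetic can be checked quickly: losing about 4% to LVM and then about 5% of the remainder to the ext4 format compounds to roughly 9%:

```shell
# Compound the two overheads mentioned above: 4% (LVM), then 5% (ext4):
awk 'BEGIN {
    usable = 1.0 * 0.96 * 0.95
    printf "usable fraction: %.3f (about %.1f%% lost)\n", usable, (1 - usable) * 100
}'
# -> usable fraction: 0.912 (about 8.8% lost)
```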

11 May 19 Solving the ‘Running /scripts/local-block’ loop while booting Linux

Linux boots and takes a long time while looping on the script:

The Linux boot needs to know the UUID of the swap partition it tries to mount.

Run the command:
and get the UUID of the swap partition.
Then run:
nano /etc/initramfs-tools/conf.d/resume
If this file doesn’t exist yet, it will be created.
Add the following line (example with a UUID):
Save the file
Run the following command:
update-initramfs -u
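Putting the steps above together as one sketch (the UUID shown is a made-up example, so replace it with the one blkid reports for your swap partition):

```shell
# Find the UUID of the swap partition (look for TYPE="swap" in the output):
blkid

# Write it into the initramfs resume file (example UUID - replace with yours):
echo 'RESUME=UUID=01234567-89ab-cdef-0123-456789abcdef' > /etc/initramfs-tools/conf.d/resume

# Rebuild the initramfs:
update-initramfs -u
```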

17 Apr 19 Find used iNodes in System

I was working on a system that suddenly showed no more space available, even though ‘df -h‘ showed that there was still lots of space left. The command ‘df -hi‘ showed me that there were no inodes left in the system: all used. Some sites show how to find the directories that use the most inodes, but in this case they didn’t help, since some components of those commands need temporary disk space.

One of the suggested commands which worked very well in this case is the following found at this site in Post No. 20:

du /* --inodes -S | sort -rh | sed -n '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1...\2/;p}'
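When even the du-based pipeline is too heavy, a lighter sketch with only find, sort and uniq can point at the directories holding the most entries (GNU find’s -printf is assumed; directories with huge numbers of small files are the usual inode hogs):

```shell
# Print the parent directory of every object under /var, then count repeats;
# the directories with the highest counts are using the most inodes:
find /var -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -n 20
```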

08 Apr 19 Creating new Software RAID Disks


Let’s say you rented a server at Hetzner that offers a whopping 4 x 10TB of disks. You want the following disk configuration:

Disks: 1 and 2 (in RAID 1) total 10TB in Mirrored RAID 1 mode
Disks: 3 and 4 (In RAID 1) total 10TB in Mirrored RAID 1 mode

1 – During the configuration of the Hetzner Linux image, disable (comment out with ‘#’) disk 3 and disk 4 in the configuration and set the RAIDLEVEL to 1. You can also decide here what kind of partitions the first RAID disks will be set up with (in the section which starts with the ‘PART’ lines), but this HOWTO doesn’t cover that.

2 – Save and let the system be built.

3 – After reboot, run the following command (the output below is in lsblk format):

Example Output:

sda                  9.1T                   disk
|-sda1                 8G linux_raid_member part
| `-md0                8G swap              raid1 [SWAP]
|-sda2               512M linux_raid_member part
| `-md1             511.4M ext3             raid1 /boot
|-sda3               9.1T linux_raid_member part
| `-md2              9.1T ext4              raid1 /
`-sda4                 1M                   part
sdb                  9.1T                   disk
|-sdb1                 8G linux_raid_member part
| `-md0                8G swap              raid1 [SWAP]
|-sdb2               512M linux_raid_member part
| `-md1             511.4M ext3             raid1 /boot
|-sdb3               9.1T linux_raid_member part
| `-md2              9.1T ext4              raid1 /
`-sdb4                 1M                   part
sdc                  9.1T                   disk
sdd                  9.1T                   disk

You can see that the first partitions have been set up under the md0, md1 and md2 RAID devices, BUT the drives sdc and sdd are not assigned to any RAID group. We will assign them to md3 with the command:

mdadm --create --verbose /dev/md3 --level=1 --raid-devices=2 /dev/sdc /dev/sdd


mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
mdadm: size set to 9766305792K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.

To verify, we run the same command again:



sda                  9.1T                   disk
|-sda1                 8G linux_raid_member part
| `-md0                8G swap              raid1 [SWAP]
|-sda2               512M linux_raid_member part
| `-md1             511.4M ext3             raid1 /boot
|-sda3               9.1T linux_raid_member part
| `-md2              9.1T ext4              raid1 /
`-sda4                 1M                   part
sdb                  9.1T                   disk
|-sdb1                 8G linux_raid_member part
| `-md0                8G swap              raid1 [SWAP]
|-sdb2               512M linux_raid_member part
| `-md1             511.4M ext3             raid1 /boot
|-sdb3               9.1T linux_raid_member part
| `-md2              9.1T ext4              raid1 /
`-sdb4                 1M                   part
sdc                  9.1T linux_raid_member disk
`-md3                9.1T                   raid1
sdd                  9.1T linux_raid_member disk
`-md3                9.1T                   raid1
Now we can format the md3 disk array with ext4:
mkfs.ext4 /dev/md3

Create a mountpoint in the filesystem:
mkdir /DATA

Configure the /etc/fstab for mounting it at boot time:

mcedit /etc/fstab


proc /proc proc defaults 0 0
/dev/md/0 none swap sw 0 0
/dev/md/1 /boot ext3 defaults 0 0
/dev/md/2 / ext4 defaults 0 0
/dev/md3 /DATA ext4 defaults 0 0

Mount all the drives listed in /etc/fstab:

mount -a

Verify the space in system:
df -h


Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 588K 3.2G 1% /run
/dev/md2 9.1T 1.1G 8.6T 1% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/md1 488M 40M 423M 9% /boot
tmpfs 3.2G 0 3.2G 0% /run/user/0
/dev/md3 9.1T 80M 8.6T 1% /DATA

23 Mar 18 Listing all subscribers in mailman mailing list

As far as my experience with Mailman is concerned, if I create a list of all subscribers of a mailing list using the web interface, I get the list with the word ‘at’ instead of ‘@’ in each email address. In order to get a normal list of all addresses of the subscribers of a mailing list, here is the command line for doing this:

/usr/lib/mailman/bin/list_members ListName
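If only the web-interface export is at hand, the obfuscation can also be undone with sed. A small sketch, assuming the export writes addresses as ‘user at example.com’:

```shell
# Turn "user at example.com" back into "user@example.com":
printf 'john at example.com\n' | sed 's/ at /@/'
# -> john@example.com
```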

16 Oct 15 Installing Debian backports in Debian Wheezy

Login as root and run the following commands:
cd /etc/apt
echo "deb // wheezy-backports main" > /etc/apt/sources.list.d/backport.list
apt-get update
gpg --keyserver --recv-key 7638D0442B90D010
gpg -a --export 7638D0442B90D010 | apt-key add -

(you should get ‘OK’ as the answer)

Installing a single package from backports:
apt-get -t wheezy-backports install {package-name}