MAC OS X, Linux, Windows and other IT Tips and Tricks

21 Jan 17 Mounting a remote directory using SSHFS in Debian Jessie

If you want to mount a directory on a remote server over the Internet, NFS can be quite a challenge to secure. A good alternative is to use SSHFS. Here is a short Howto for Debian Jessie.

Note: In Wheezy, and in Jessie before I did an upgrade to the kernel 3.16.0-4-amd64, an /etc/fstab entry of the old form was working: sshfs#user@host:/remote_dir /local_dir fuse defaults 0 0
BUT, as soon as I upgraded Jessie to the kernel 3.16.0-4-amd64, I could not boot any more: the system went into emergency mode, telling me to give the root password or press Ctrl-D to continue. Ctrl-D led nowhere and the system just hung. It also suggested that, after giving the root password, I should run the command ‘journalctl -xb’ to find out what was wrong. That command reported that ‘process /bin/plymouth could not be executed’. The message is quite misleading: the actual error was that the new kernel no longer supported the older method of mounting a filesystem via SSHFS in /etc/fstab. Commenting out this entry in /etc/fstab allowed me to boot, and later to replace it with a new entry that works, which follows.
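Since a broken fstab line can leave the system unbootable, it is worth sanity-checking /etc/fstab before rebooting. Here is a small sketch of mine (not part of the original Howto) that flags non-comment lines lacking the six whitespace-separated fields every entry in this Howto uses (in general the last two fields, dump and pass, are optional):

```shell
# check_fstab FILE: print any non-comment, non-empty line of FILE that
# does not have exactly 6 whitespace-separated fields
# (device, mountpoint, type, options, dump, pass).
check_fstab() {
    awk 'NF && $1 !~ /^#/ && NF != 6 { print "line " FNR ": " $0 }' "$1"
}

# Example with a line whose device field is missing:
printf '%s\n' \
  '# static file system information' \
  'proc /proc proc defaults 0 0' \
  '/local_dir fuse defaults 0 0' > /tmp/fstab.example

check_fstab /tmp/fstab.example   # -> line 3: /local_dir fuse defaults 0 0
```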

First install the needed package:
apt-get install sshfs
Then considering the two scenarios:
1 – User mount: Mounting a remote directory belonging to user ‘media’ using SSHFS and the ssh keys. User ‘media’ was configured in both servers to have the same UID.
2 – Root mount: Mounting a remote directory belonging to root using SSHFS and the ssh keys.

Scenario 1 (user mount)

On the remote server run the commands:
useradd -d /home/media/ -u 2017 -s /bin/bash media
passwd media (give any password; it will be locked again later anyway)
mkdir -p /home/media/share1
chown -R media: /home/media/share1

On local server run the commands:
useradd -d /home/media/ -u 2017 -s /bin/bash media
mkdir -p /home/media/share1
chown -R media: /home/media/share1
su - media
ssh-keygen -t rsa (press <Enter> to all questions)
ssh-copy-id media@remote_server (replace remote_server with the remote host's name; enter the media user's temporary password when asked)

Enter in /etc/fstab (all on one line; replace remote_server with the remote host's name): media@remote_server:/home/media/share1 /home/media/share1 fuse.sshfs noauto,x-systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/home/media/.ssh/id_rsa,allow_other,default_permissions,uid=2017,gid=2017 0 0
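For reference, here is my reading of each field and option in this fstab entry (remote_server is a placeholder for the remote host; summaries based on the sshfs and systemd.mount man pages):

```text
media@remote_server:/home/media/share1  # remote directory (device field)
/home/media/share1                      # local mount point
fuse.sshfs                              # filesystem type, handled by sshfs
noauto                                  # do not mount at boot time
x-systemd.automount                     # systemd mounts it on first access
_netdev                                 # wait until the network is up
user                                    # a non-root user may mount it
idmap=user                              # map the remote UID to the local user
follow_symlinks                         # follow symlinks on the remote side
identityfile=/home/media/.ssh/id_rsa    # ssh key used for the connection
allow_other                             # other local users may access it
default_permissions                     # kernel enforces permission checks
uid=2017,gid=2017                       # show files as this local UID/GID
0 0                                     # dump / fsck pass (unused)
```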
Back on remote server, disable the user’s password using the command:
passwd -l media
———- End scenario 1 ———–

Scenario 2 (root mount)

ssh-copy-id root@remote_server (replace remote_server with the remote host's name; enter the root password of the remote server when asked)
Enter in /etc/fstab (all on one line): root@remote_server:/share2 /share2 fuse.sshfs noauto,x-systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/root/.ssh/id_rsa,allow_other,default_permissions,uid=0,gid=0 0 0
———- End scenario 2 ———–
Then reboot the system.
After the reboot, the command ‘mount’ will not yet show any corresponding entry; it only appears after the first attempt to access the mount point on the local server. This mount is governed by systemd, so with this new method you cannot fully control the mounting and unmounting manually. I’m still looking for ways to manually mount/unmount this systemd-controlled mount. Any suggestions are welcome.
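One way I would expect to control such a mount manually (an assumption based on how systemd names mount units, not something I have verified on Jessie): systemd creates a .automount and a .mount unit named after the escaped mount path, and those can be driven with systemctl:

```shell
# The unit name is derived from the mount path, with '/' escaped to '-';
# systemd-escape can compute it for you:
systemd-escape -p --suffix=mount /home/media/share1   # -> home-media-share1.mount

# Unmount manually (the automount unit will remount on next access):
systemctl stop home-media-share1.mount

# Stop the on-demand mounting entirely:
systemctl stop home-media-share1.automount

# Re-enable on-demand mounting:
systemctl start home-media-share1.automount
```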

13 Nov 16 Ubuntu 16.10 : xenconsole: Could not read tty from store: Success


After having had some stability problems running Xen DOMUs under Ubuntu 16.04/Xen 4.6, I decided to upgrade to Ubuntu 16.10/Xen 4.7.
Unfortunately, when I tried to start any of the DOMUs with the option -c to see the console, the following error message was displayed, I was kicked out, and got no console:
xenconsole: Could not read tty from store: Success
I searched the Internet for hours to find a solution. This morning I found an article that dealt with a much earlier version of Xen, but the problem was the same.


The daemon xenconsoled was not running. Starting this daemon beforehand seems to have solved the issue, which had got me into real trouble, with clients screaming about the long downtime of their servers. For some reason the DOMUs just hung as well.


Start the daemon with the command:
/usr/lib/xen-4.7/bin/xenconsoled --pid-file=/var/run/
You can make sure that this daemon will start automatically by using one of the following 2 methods:
Start the daemon using the @reboot cron job as follows:
crontab -e
@reboot /bin/sleep 15; /usr/lib/xen-4.7/bin/xenconsoled
Start the daemon using Systemd start method.
touch /etc/systemd/system/xenconsoled.service
vim /etc/systemd/system/xenconsoled.service

Content of xenconsoled.service:
[Unit]
Description=Xen Console Daemon service

[Service]
ExecStart=/usr/lib/xen-4.7/bin/xenconsoled --pid-file=/var/run/
ExecStop=/usr/bin/killall xenconsoled

[Install]
WantedBy=multi-user.target

Execute these commands to register the service for starting at boot and to start it now manually:
systemctl daemon-reload
systemctl enable xenconsoled
service xenconsoled start
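To verify after a reboot that the daemon is actually up, a quick check along these lines can be used (my own sketch, not from the article; pgrep comes with the procps package):

```shell
# Report whether the xenconsoled daemon appears in the process list.
check_xenconsoled() {
    if pgrep -x xenconsoled >/dev/null 2>&1; then
        echo "xenconsoled is running"
    else
        echo "xenconsoled is NOT running; start it with /usr/lib/xen-4.7/bin/xenconsoled"
    fi
}

check_xenconsoled
```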

30 Oct 16 Resolving Mysql error: Too many open files

As I upgraded from Mysql 5.5 to 5.6, some sites suddenly showed the following error:
...... Too many open files
The issue has to do with the limits currently imposed by the system and PAM, which allow a maximum of 1024 open files. After doing some research I found a site (in German) in which the following is explained:

Check the open-files limit of the running Mysql server:
mysql -p -u root
mysql> SHOW VARIABLES LIKE 'open%';

The very possible output:
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 1024  |
+------------------+-------+
1 row in set (0.00 sec)

That means that the Mysql server may open a maximum of 1024 files, which seems too little for Mysql 5.6.

Raising this limit
Edit the file /etc/security/limits.conf and add the following lines:
mysql hard nofile 65535
mysql soft nofile 65535

This will raise the limit of open files to 65535 for the user mysql only.
If you want to raise this limit for all users, then replace the word mysql with *:
* hard nofile 65535
* soft nofile 65535
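Note that limits.conf is only applied at login (through PAM), so the new values take effect in new sessions. In a fresh shell of the user in question you can verify the effective limits with ulimit:

```shell
# Show the current shell's soft and hard limits on open file descriptors.
echo "soft open-files limit: $(ulimit -Sn)"
echo "hard open-files limit: $(ulimit -Hn)"
```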

And according to this site, edit the file /etc/pam.d/common-session and add this line at the end:
session required pam_limits.so
Note: I’m not sure this step is really needed, though. Some people tried without it and it also worked. For me, on Debian Wheezy, I had to do it, otherwise I was still getting the error.

For systems that run systemd instead of SysV init, do the following:
Edit the file /usr/lib/systemd/system/mysqld.service
Add these 2 lines at the end, in the [Service] section:
LimitNOFILE=65535
LimitMEMLOCK=65535

Restart Mysql server and test it again
service mysql restart
mysql -p -u root
mysql> SHOW VARIABLES LIKE 'open%';

The hopeful output:
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| open_files_limit | 65535 |
+------------------+-------+
1 row in set (0.00 sec)

This error should no longer appear.

10 Sep 16 Adding a new service to Linux systemd

Since System V init is slowly being phased out, one most likely needs to learn how to get along with Systemd, which is much more powerful. For example, one useful feature is the automatic restart of services that stop on their own. Such features are otherwise found in watchdog tools like BluePill; with Systemd there is no need for such an extra watchdog. Here is some very basic information about how to create a new service, called a ‘unit’, under Systemd in Linux.
Systemd has its configuration files in: /etc/systemd/
In this example I will create a Systemd configuration file for a simple service called istatd, which starts its daemon with the command: /usr/local/bin/istatd -d
In order to create a service that only root can operate, its new configuration file should be created as: /etc/systemd/system/istatd.service
touch /etc/systemd/system/istatd.service
chmod 644 /etc/systemd/system/istatd.service

[Unit]
Description=IStatd iPhone monitoring service

[Service]
ExecStart=/usr/local/bin/istatd -d
ExecStop=/usr/bin/killall istatd
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target

This configuration file for the unit istatd will start/stop the daemon, and will restart it 3 seconds after systemd’s watchdog detects its disappearance from the process list.
To activate the new configuration and start the service run:
systemctl daemon-reload
service istatd start

Possible commands for start/stop/restart/status and debugging are:
systemctl {start|stop|restart|status} istatd
service istatd {start|stop|restart|status}

For Systemd debugging use the command:
journalctl -xn
After any change to a Systemd configuration file you should run the command:
systemctl daemon-reload
To have the unit start at boot, enable it:
systemctl enable istatd

For more information on how Systemd works and how to build its configuration files, see the systemd documentation (man systemd.unit and man systemd.service).

Some other useful commands:

Completely delete a service:
systemctl stop [servicename]
systemctl disable [servicename]
systemctl daemon-reload
systemctl reset-failed