Solving MySQL / MariaDB startup / connection problems when running ZoneMinder in a Docker image

After a few years away from ZoneMinder, I decided to reinstall, but this time using a Docker image. I selected dlandon’s very helpful packaging rather than attempting to create my own image. I created the necessary directories (for the mapped-in recording cache and for the MySQL database) and started the image with the parameters described in dlandon’s documentation.

However, the image kept refusing to start for me. I would get:

Starting MariaDB database server mysqld,
...fail!

First thing to do was to figure out what the problem was for mysqld. You can’t just run a shell inside the container, though, as it has already exited by the time you’re looking at it. So let’s commit a modified copy of the image that starts with a bash shell (while keeping its state, so that we can check the logs, etc.) and poke around inside. I recommend storing all this in a script so you can repeat it! Here’s the script I used to commit a new copy with bash as the entry point, start it up, and then delete it afterwards (while it’s running, you can exec bash in it from another terminal if you want multiple terminals poking around, which can be helpful):

YOURCONFIGPATH="your config path goes here"
YOURCACHEPATH="your cache path goes here"
YOURUSERNAME="your user name goes here"

# Get the original container
CONTAINERID=`docker ps -a | grep zoneminder | awk '{print $1}'`

# Clone it
docker commit ${CONTAINERID} ${YOURUSERNAME}/test

# Start it with bash as the modified entry point
docker run -it --entrypoint=bash \
--name="Zoneminder2" \
--privileged="true" \
-p 8443:443/tcp \
-p 9000:9000/tcp \
-e TZ="America/Chicago" \
-e SHMEM="30%" \
-e PUID="99" \
-e PGID="100" \
-v "${YOURCONFIGPATH}":"/config":rw \
-v "${YOURCACHEPATH}":"/var/cache/zoneminder":rw \
${YOURUSERNAME}/test

# Clean up afterwards ...
docker images -a | grep "${YOURUSERNAME}/test" | awk '{print $3}' | xargs docker rmi -f
docker ps -a | grep "Zoneminder2" | awk '{print $1}' | xargs docker rm
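
While the test container is running, you can attach additional shells to it from other terminals:

# Open another interactive shell inside the running test container
docker exec -it Zoneminder2 bash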

Now you can poke around inside the container to figure out why mysqld isn’t happy running. You’ll find your key log file here:

/var/log/mysql/error.log

You may want to clean out the log file (it probably has some goop inherited from the original docker build) and re-run the service to get a clean and easy view:

rm /var/log/mysql/error.log
service mysql start

OK, so what you’re probably seeing in the log file is a warning about not being able to write a test file, followed by some MariaDB errors about bad permissions (I’m no longer getting them, so this is from memory and from Googling others):

[Warning] Can't create test file

This indicates that MySQL can’t write to the data directory you’ve pointed it to.

First, ensure that there are no basic permissions problems. I won’t go too deeply into this, but make sure that wherever you put the database (it’s going to be in ${CONFIG}/mysql) is fully accessible: every directory along the path can be read and traversed, and the directory itself is writable.
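
If you want a quick sanity check, something like this covers the usual suspects (a sketch: it assumes the database landed in ${YOURCONFIGPATH}/mysql, and that 99/100 are the PUID/PGID you pass to the container, as in the script above):

# Show ownership and permissions for every component of the path
namei -l "${YOURCONFIGPATH}/mysql"

# If ownership is off, match it to the IDs the container runs under
chown -R 99:100 "${YOURCONFIGPATH}/mysql"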

I’m assuming you have what look to be fine permissions — I sure did.

At this point, you probably have one of three culprits:

  1. If you located the CONFIG outside of the normal spots (let’s say, anywhere outside of /var), then it’s possible you’re running into MariaDB’s own protection: it has settings to prevent writing to /usr, /etc, /home, and so forth. This wasn’t my problem, but I found it while trying to figure out what mine was. If it’s yours, the easiest fix is to relocate the directory; if you can’t, you may find some help in tinkering with the ProtectHome and ProtectSystem settings in the MySQL service configuration (inside the docker image, which introduces its own long-term complexities!).
  2. If you’re running SELinux on your host system, it may be blocking access to your data directory. I’m not, so I can’t give you details, but you may find SELinux and MySQL by Jeremy Smyth to be of use.
  3. For me, the problem was that AppArmor (on my host system) was blocking access. I could see this very clearly (once I started looking) by checking /var/log/syslog for apparmor lines. You can find the fix for this in AppArmor and MySQL by Jeremy Smyth. In brief, the fix is to add two lines to the bottom of /etc/apparmor.d/local/usr.sbin.mysqld (customized for your data location, of course):
/YOURDATAPATH/ r,
/YOURDATAPATH/** rwk,
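
After adding those lines (on the host, since that’s where AppArmor is enforcing), reload the profile so the change takes effect:

# Reload the modified mysqld profile so AppArmor picks up the new rules,
# then restart your ZoneMinder container
apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld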

Et voila, everything started working!

Fix ZoneMinder API Problems (found with HomeAssistant)

I was having difficulty getting the ZoneMinder module to work with HomeAssistant, running on Ubuntu 16.04.  I was getting an error about invalid credentials (sorry, I didn’t record the exact message).  It turned out that the problem was the APC cache not working properly in the ZoneMinder API.  While I didn’t carefully record all of my steps, here are some general pointers to help you run it down.

First, the best way to test whether the ZoneMinder API is working is just to fetch the URL http://{my.server.com}/zm/api/host/getVersion.json.  Note that you should first get your authentication cookie in your browser (by logging into ZoneMinder normally through the browser).  Until you get a good version message back from ZoneMinder, your overall API isn’t working.
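
If you’d rather test from the command line, something like this should work (a sketch: my.server.com and the credentials are placeholders, and the login field names follow ZoneMinder’s classic form-based auth, which may differ by version):

# Log in to capture a session cookie, then call the API with it
curl -s -c /tmp/zm-cookies.txt \
  -d "username=admin&password=yourpassword&action=login&view=console" \
  "http://my.server.com/zm/index.php"
curl -s -b /tmp/zm-cookies.txt "http://my.server.com/zm/api/host/getVersion.json"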

You can find general information about the ZoneMinder API in the ZoneMinder documentation.

To troubleshoot what’s going on in the API, pay attention to your Apache logs.  And when you change something in general (e.g., with PHP, which we’re going to get to) be sure to restart all the appropriate services!  (Apache and ZoneMinder at least).
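
On a stock Ubuntu install, that means at least:

# Restart both services so configuration changes actually take effect
service apache2 restart
service zoneminder restart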

In my case, the problem appeared to be that my system went from PHP 5.5 to PHP 7.0 as the base PHP after I’d installed ZoneMinder.  As far as I understand it, PHP 5.5 supported APC as its cache, but that support was removed (by default) in PHP 7.0.  CakePHP, the framework underlying the ZoneMinder API, is set to use APC as its cache by default.

To fix this, you can either edit the default cache for Cake (look in your ZoneMinder PHP files under the API directory; in one of the initialization files you’ll find the default set to APC, and I think setting it to “file” instead will disable caching).  Or, as I did, you can enable backwards-compatible APC caching in Apache/PHP 7.0.  In general, this means enabling APCu caching (the current implementation), then enabling backwards compatibility to support APC.  I don’t remember exactly which sites I used to work through this, but a search for “enable apcu php 7” should get you on the right track.  I believe you have to install a module or two, and then make sure you’ve got both apc and apcu enabled for Apache.
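
On Ubuntu 16.04 with PHP 7.0, I believe the package side of it boils down to something like this (a sketch; exact package names can vary by release):

# Install APCu plus the APC backwards-compatibility shim for PHP 7
apt-get install php-apcu php-apcu-bc

# Restart Apache so PHP picks up the new extensions
service apache2 restart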

One last thing I did, though I’m not sure whether it was relevant, was to precreate some temporary directories per a ZoneMinder forum post.  That may have been a red herring … but it was part of my overall solution.

At the end of all this, ZoneMinder works great with HomeAssistant, and the ZoneMinder API is back up and running!

Fixing iPerf3 Permission Denied Problem on Windows

Installed iPerf3 on Windows (10) to c:\program files\iPerf.  When attempting to connect to a server, I was receiving the error: “iperf3: error - unable to create a new stream: Permission denied”.

Couldn’t find anything Windows-related by searching.

The problem was that running from a protected location (“c:\program files”) blocked the program from creating the temporary files it needs.  Relocating it to a non-protected location and running it from there fixed the problem.

Squid3 Proxy Problems on Ubuntu Linux to Yahoo, Google, Facebook, YouTube and so on

I set up a Squid3 (Squid) proxy as part of my DansGuardian setup on Ubuntu to filter the kids’ web traffic.  Overall, the proxy worked fine … but I was getting strange connection failures to some of the largest web properties, such as Yahoo, Google, YouTube and Facebook, whereas all smaller properties worked just fine.

The general error I was receiving was “The system returned: (110) Connection timed out”.

It turned out the problem was that Squid was using IPv6 to access any site that returned a legitimate IPv6 address.  As my system wasn’t properly configured for IPv6, the request was failing.

The right answer, of course, is to get on board and configure properly for IPv6.  It’s the future, it’s faster, etc.

The short answer is to add to your squid.conf file:  dns_v4_first on

This will force Squid to look up a valid IPv4 DNS entry first, and use that.  Fixed a day-long problem for me like … snap!
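
For completeness, the whole fix was just (my squid.conf lived under /etc/squid3; adjust if yours differs):

# Append the directive and restart Squid to pick it up
echo "dns_v4_first on" >> /etc/squid3/squid.conf
service squid3 restart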

Setting up Ubuntu Firewall (UFW) for NFS

I use ufw as my firewall in Ubuntu.  I was recently trying to hook two Ubuntu servers together with NFS, and running into firewall problems.  Here’s how to get it working, in case you’re encountering the same problem.

1.  Start by ensuring that you have the basic NFS ports open.  These are going to be 2049 (udp/tcp) for NFS, and 111 (udp/tcp) for “sunrpc”.  You can add both of these with a straightforward ufw rule, relying on /etc/services to identify the ports.  For example, assuming that you have LCL_NET set to your local network, and only want to allow access to machines in that network:

ufw allow from ${LCL_NET} to any port nfs

ufw allow from ${LCL_NET} to any port sunrpc

2.  The next problem is that the rpc.mountd port is assigned randomly unless you pin it down.  So, first, edit /etc/default/nfs-kernel-server and change the RPCMOUNTDOPTS line to:

RPCMOUNTDOPTS="-p 4001 -g"

Then go back to ufw and allow this port for both udp and tcp.  (There are a few different ways to do it; I do it in a way that’s simpler in the end but more complex to explain, so a minimal version is sketched below.)
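
A minimal version that covers both protocols (when no protocol is given, ufw applies the rule to both tcp and udp):

ufw allow from ${LCL_NET} to any port 4001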

Finally, of course, restart ufw and nfs.
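
For example:

# Reload the firewall rules and restart the NFS server
ufw reload
service nfs-kernel-server restart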


Configuring ZoneMinder on Ubuntu – Buttons Don’t Work (Javascript Errors)

If you’re configuring ZoneMinder (a great IP camera control application) on Ubuntu, and the console doesn’t work (buttons aren’t active when you click them), your problem is probably that the Javascript path isn’t working.

To test this, view source on the console page, and check the path to MooTools.

MooTools should be installed in /usr/share/javascript/mootools.  The configuration for shared javascript should be in /etc/apache2/conf-enabled.  For me, for whatever reason, conf-enabled wasn’t actually being read.  Copying javascript-common.conf to /etc/apache2/conf.d and then reloading apache (/etc/init.d/apache2 force-reload) fixed the problem.
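
In command form, that workaround looks like this (a sketch; it assumes javascript-common.conf is sitting in /etc/apache2/conf-enabled as described):

# Copy the shared-javascript config somewhere Apache definitely reads, then reload
cp /etc/apache2/conf-enabled/javascript-common.conf /etc/apache2/conf.d/
/etc/init.d/apache2 force-reload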

Tunneling SMTP (TCP port 25) through a VPN

I’ve recently switched providers (having moved countries), and am now reconstructing my services in a new location, using AT&T UVerse.  I continue to have an account with StrongVPN that I use (I originally acquired it to give me a US IP for use when out of the US … side note, I’m really happy with StrongVPN).

The problem is that UVerse blocks outbound SMTP (port 25) traffic, and doesn’t provide their own relay (or, more accurately, won’t relay mail that’s neither to nor from an AT&T address).  I don’t have much mail to send (just what the kids generate, and the occasional system alert), so I don’t feel that I’m much of a threat to anyone’s traffic.  It took me a while to figure this out (I’m not the world’s greatest IP routing guru), so I figured it might be of use to you.

Many thanks to Lekensteyn and the other contributors to the post “iptables – Target to Route Packet to Specific Interface” on serverfault.com for key pointers.

Objective:  Running a postfix SMTP server on Ubuntu Linux, route all outbound SMTP through a VPN tunnel.  We’re going to wind up not using a relay server, and just directly connecting to the target hosts.

We’ll assume that your tunnel interface is tun0 … replace this below as you see fit.

1. Remove any relays from your postfix configuration

If you’re coming from a previous configuration, you were probably configured with a relay server.  You need to remove it, so that you connect directly to your target hosts.  (If you had a usable relay server, you probably wouldn’t need to force the traffic anywhere!  On the other hand, if you’re trying to route port 25 for some other reason, then skip this step.)

a. Edit /etc/postfix/main.cf

Edit this file to remove or comment out the line that established your relay host:

# relayhost = smtp.mypreviousprovider.nl
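
Then reload postfix so it picks up the change:

# Re-read main.cf without dropping existing connections
postfix reload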

2. Force SMTP Port 25 through the VPN

I have a very complex routing scheme, with a ton of subnets that I use for all sorts of things — for example, forcing a wifi AP out through the VPN (so that you can connect to a specific AP to auto-use the VPN), keeping the kids on a different subnet so that I can force them out a different Internet connection, and blackholing unknown devices so that the kids can’t hook anything up without me verifying the MAC address … drop me a comment if you’re interested in any of these things … so the long and short of it is that I already have a routing script that I use to set up all my routing tables.

If you already have such a script, just add the additions below to that script.  If not, then create a new script for these commands.  It’s a somewhat separate exercise to hook it up so that it’s properly invoked whenever you bring an interface up or down (which I leave as a Google exercise for you, as it’s not fresh on my mind) … but in the worst case, you can just run it manually whenever your routing tables get rebuilt.

Note that we’re going to use “7” as the mark for our rules.  This is arbitrary.

a.  If you haven’t already, create a routing table for the VPN

I already had a routing table set up … if you do then just use that routing table, below.  But if you don’t, here’s a quick and dirty routing table to push over your VPN (please note, I’m just typing this, and haven’t actually tested the exact lines below, as I don’t need them in my setup!!):

# One-time setup: give vpn_table a number in the kernel's table list, if you haven't already
grep -q vpn_table /etc/iproute2/rt_tables || echo "100 vpn_table" >> /etc/iproute2/rt_tables
# For clarity, clear anything that might have accumulated there.  Ignore any error here.
ip route flush table vpn_table
# Push all traffic that goes to this table out the VPN.  Substitute your VPN's gateway for 99.99.99.99 below.
ip route add default via 99.99.99.99 table vpn_table
# And be sure to flush to pick it up
ip route flush cache
b. Now mark all SMTP port 25 packets with 7
iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 7
c. Set the source IP to our address on the VPN (substitute yours for 88.88.88.88) rather than our local network address.  Remember to use your correct interface if not tun0, below.
iptables -t nat -A POSTROUTING -o tun0 -j SNAT --to-source 88.88.88.88
d. Send everything marked with 7 to the VPN table (to force out the VPN)
ip rule add fwmark 7 table vpn_table
e. Relax the reverse path source validation

(See the post for a discussion.)

sysctl -w net.ipv4.conf.tun0.rp_filter=2
f. And flush for good measure
ip route flush cache

That should do it!  Run your script, and all your port 25 traffic should be running out your VPN.  Obviously, you can adapt the concepts here for other applications.
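
One way to sanity-check the routing without sending real mail is to ask the kernel which route a packet carrying our mark would take (8.8.8.8 is just an arbitrary outside address here):

# Should print a default route via your VPN gateway on tun0, courtesy of vpn_table
ip route get 8.8.8.8 mark 7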

Upgrading from Ubuntu 12.04 (Precise Pangolin) to 12.10 (Quantal Quetzal)

Just some notes on some of the issues I hit in updating from Ubuntu 12.04 (Precise Pangolin) to 12.10 (Quantal Quetzal), in the event they’re of use to someone.

Problems with PHP Initialization

I was receiving a ton of errors from cron, with the text:

PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/pam_auth.so' - /usr/lib/php5/20100525/pam_auth.so: cannot open shared object file: No such file or directory in Unknown on line 0

While there appears to be a line of thought (via Google) that one needs to install php5-auth-pam or php5-pam … I didn’t find either package with a simple apt-get install, and didn’t bother investigating further.

The actual fix was to check /etc/php5/conf.d.  In that folder was a legacy (years-old) pam_auth.ini file attempting to load pam_auth.so.  Removing that file let PHP start cleanly.
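
That is, something along these lines (moving it aside rather than deleting, just in case):

# Move the stale ini out of PHP's config directory, keeping it as a backup
mv /etc/php5/conf.d/pam_auth.ini /root/pam_auth.ini.disabled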

WordPress Hiccough

WordPress sites didn’t want to start up for me, claiming a missing wp-config.php.  When I checked /etc/wordpress, I found that my old wp-config.php had been copied to the typical wp-config.php.dpkg-bak, but no new file had been installed.  I just put my old copy back.
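
In command form (assuming the files live in /etc/wordpress as above; the exact backup filename may differ on your system):

# Put the preserved config back in place
cp /etc/wordpress/wp-config.php.dpkg-bak /etc/wordpress/wp-config.php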


Firewall Port 9933 for My Singing Monsters

Just recording this, as I was only able to find an unconfirmed hint on a forum.

You need to open up port 9933 (not sure whether TCP or UDP … I opened both, and didn’t bother going back to experiment) on your firewall/router in order to enable the Big Blue Bubble iOS game My Singing Monsters to connect to its server. Otherwise, you’ll get the “Failed to Connect to Server” error from the game.
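
If your firewall happens to be a Linux box running ufw (as elsewhere in these notes), it’s a one-liner (with no protocol specified, the rule covers both tcp and udp):

ufw allow 9933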

Hope this helps someone …

Holy cow … restore overwritten files!

I just had one of those “gulp!” moments … it’s 3am, and I’d been editing a PowerPoint for maybe 5-6 hours.  I’d used a previous deck as a template, partly because I thought I might want to sample from some of its slides.  I clicked “save”.  Uh oh … I never remembered to save the PPT under a new name; I’d just overwritten what turned out to be my only copy of the previous deck!  And it was a critical deck.  This is not the first time this has happened to me.

Thanks to a post on restoring overwritten PowerPoint files, I found ShadowExplorer, an awesome utility … man, it’s my new best friend!  I was immediately able to go to the directory, find my previous copy of the file, and restore it … sanity and hours of work saved!  Later, I determined that I might have been able to just go into Windows Explorer, right-click the file, and select “Previous Versions”.