Fix ZoneMinder API Problems (found with HomeAssistant)

I was having difficulty getting the ZoneMinder module to work with HomeAssistant, running on Ubuntu 16.04.  I was getting an error about invalid credentials (sorry, I didn’t record the exact message).  It turned out that the problem was the APC cache not working properly in the ZoneMinder API.  While I didn’t carefully record all of my steps, here are some general pointers to help you run it down.

First, the best way to test whether the ZoneMinder API is working is simply to fetch the URL http://{my.server.com}/zm/api/host/getVersion.json.  Note that you should first get your authentication cookie in your browser (by logging into ZoneMinder normally through the browser).  Until you get a good version message back from ZoneMinder, the API as a whole isn’t working.
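If you’d rather test from the command line, a curl equivalent might look like the sketch below.  The form field names are assumptions based on the classic ZoneMinder login page, and my.server.com and the credentials are placeholders; adjust all of these for your setup.

```shell
# Grab a session cookie by logging in (form field names are an assumption;
# check your ZoneMinder version's login page if this fails)
curl -c /tmp/zm-cookies.txt \
  -d "username=admin" -d "password=mypassword" \
  -d "action=login" -d "view=console" \
  "http://my.server.com/zm/index.php"

# Now fetch the version through the API using that cookie
curl -b /tmp/zm-cookies.txt "http://my.server.com/zm/api/host/getVersion.json"
```

A healthy API returns a small JSON blob containing the version; anything else (an HTML page, a 401, a PHP stack trace) means the API layer itself is broken.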

You can find general information about the ZoneMinder API here.

To troubleshoot what’s going on in the API, pay attention to your Apache logs.  And when you change something (e.g., with PHP, which we’re going to get to), be sure to restart all the appropriate services!  (Apache and ZoneMinder at least.)

In my case, the problem appears to be that my system went from PHP 5.5 to PHP 7.0 as the base PHP after I’d installed ZoneMinder.  As far as I understand it, PHP 5.5 supported APC as its cache, but that support was removed (by default) in PHP 7.0.  The Cake module, which is used as the basis of the ZoneMinder API, is set to use APC by default as its cache.

To fix this, you can either change the default cache engine for Cake (look in your ZoneMinder PHP files under the API directory; in one of the initialization files you’ll find the default set to APC, and I think setting it to “File” instead disables caching), or, as I did, you can enable backwards-compatible APC caching in Apache/PHP 7.0.  In general, this means enabling APCu caching (the current implementation), then enabling its backwards-compatibility layer to support the old APC API.  I don’t remember exactly which sites I used to work through this, but a Google search for “enable apcu php 7” should get you on the right track.  I believe you have to install a module or two, and then make sure you’ve got both apc and apcu enabled for Apache.
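For what it’s worth, on Ubuntu 16.04 the relevant packages look something like the sketch below.  This is a reconstruction, not a tested recipe; the phpenmod module names in particular are assumptions, so check /etc/php/7.0/mods-available on your system.

```shell
# APCu is the current userland cache; php-apcu-bc layers the old APC API on top of it
sudo apt-get install php-apcu php-apcu-bc

# Enable the modules for PHP (module names are an assumption; verify locally)
sudo phpenmod apcu apcu_bc

# Restart everything that matters
sudo systemctl restart apache2
sudo systemctl restart zoneminder
```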

One last thing I did — and I’m not sure if it was relevant or not — was to precreate some temporary directories according to this ZoneMinder forum post.  That may have been a red herring … but it was part of my overall solution.

At the end of all this, ZoneMinder works great with HomeAssistant, and the ZoneMinder API is back up and running!

Squid3 Proxy Problems on Ubuntu Linux to Yahoo, Google, Facebook, YouTube and so on

I set up a Squid3 (Squid) proxy as part of my DansGuardian setup on Ubuntu to filter the kids’ web traffic.  Overall, the proxy worked fine … but I was getting strange connection failures to some of the largest web properties, such as Yahoo, Google, YouTube and Facebook, whereas all smaller properties worked just fine.

The general error I was receiving was “The system returned: (110) Connection timed out”.

It turned out the problem was that Squid was using IPv6 to access any site that returned a legitimate IPv6 address.  As my system wasn’t properly configured for IPv6, the request was failing.

The right answer, of course, is to get on board and configure properly for IPv6.  It’s the future, it’s faster, etc.

The short answer is to add to your squid.conf file:  dns_v4_first on

This forces Squid to check for a valid IPv4 DNS entry and use that first.  Fixed a day-long problem for me like … snap!
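In other words, something like this (whether the service and config path are squid3 or squid depends on your Ubuntu release):

```shell
# Prefer IPv4 answers when a host publishes both A and AAAA records
echo "dns_v4_first on" | sudo tee -a /etc/squid3/squid.conf

# Restart so the new directive takes effect
sudo service squid3 restart
```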

Setting up Ubuntu Firewall (UFW) for NFS

I use ufw as my firewall in Ubuntu.  I was recently trying to hook two Ubuntu servers together with NFS, and running into firewall problems.  Here’s how to get it working, in case you’re encountering the same problem.

1.  Start by ensuring that you have the basic NFS ports open.  These are going to be 2049 (udp/tcp) for NFS, and 111 (udp/tcp) for “sunrpc”.  You can add both of these with a straightforward ufw rule, relying on /etc/services to identify the ports.  For example, assuming that you have LCL_NET set to your local network, and only want to allow access to machines in that network:

ufw allow from ${LCL_NET} to any port nfs

ufw allow from ${LCL_NET} to any port sunrpc

2.  The next problem is that the rpc.mountd port is assigned randomly unless you pin it.  So, first, edit /etc/default/nfs-kernel-server and change the line for RPCMOUNTDOPTS to be:

RPCMOUNTDOPTS="-p 4001 -g"

Then go back to ufw and allow this port for both udp and tcp.  (I’m not including the command, as there are a few different ways to do it, and I do it in a way that’s simpler in the end, but more complex to explain at the moment.)
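For what it’s worth, one straightforward way to open the pinned mountd port (using the same LCL_NET convention as above) would be:

```shell
# Open the fixed rpc.mountd port for both protocols
sudo ufw allow from ${LCL_NET} to any port 4001 proto tcp
sudo ufw allow from ${LCL_NET} to any port 4001 proto udp
```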

Finally, of course, restart ufw and nfs.


Configuring ZoneMinder on Ubuntu – Buttons Don’t Work (Javascript Errors)

If you’re configuring ZoneMinder (a great IP camera control application) on Ubuntu, and the console doesn’t work (buttons aren’t active when you click them), your problem is probably that the Javascript path isn’t working.

To test this, view source on the console page, and check the path to MooTools.

MooTools should be installed in /usr/share/javascript/mootools.  The configuration for shared javascript should be in /etc/apache2/conf-enabled.  For me, for whatever reason, conf-enabled wasn’t actually enabled.  Copying javascript-common.conf to /etc/apache2/conf.d and then restarting Apache (/etc/init.d/apache2 force-reload) fixed the problem.
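A few quick checks along these lines can confirm the diagnosis.  Paths are from the Ubuntu packages, and a2enconf only exists on newer Apache layouts; treat this as a sketch.

```shell
# Is MooTools actually installed where the page expects it?
ls /usr/share/javascript/mootools/

# Is the shared-javascript configuration actually enabled?
ls /etc/apache2/conf-enabled/ | grep javascript

# On newer Apache layouts, enable it properly instead of copying the file
sudo a2enconf javascript-common
sudo /etc/init.d/apache2 force-reload
```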

Tunneling SMTP (TCP port 25) through a VPN

I’ve recently switched providers (having moved countries), and am now reconstructing my services in a new location, using AT&T UVerse.  I continue to have an account with StrongVPN that I use (I originally acquired it to give me a US IP for use when out of the US … side note, I’m really happy with StrongVPN).

The problem is that UVerse blocks outbound SMTP (port 25) traffic, and doesn’t provide their own relay (or, more accurately, won’t relay mail that’s neither to nor from an AT&T address).  I don’t have much mail to send (just what the kids generate, and the occasional system alert), so I don’t feel that I’m much of a threat to anyone’s traffic.  It took me a while to figure this out (I’m not the world’s greatest IP routing guru), so I figured it might be of use to you.

Many thanks to Lekensteyn and the other contributors to the post “iptables – Target to Route Packet to Specific Interface” on serverfault.com for key pointers.

Objective:  Running a postfix SMTP server on Ubuntu Linux, route all outbound SMTP through a VPN tunnel.  We’re going to wind up not using a relay server, and just directly connecting to the target hosts.

We’ll assume that your tunnel interface is tun0 … replace this below as you see fit.

1. Remove any relays from your postfix configuration

If you’re coming from a previous configuration, you were probably set up with a relay server.  You need to remove it, so that you connect directly to your target hosts.  (If you still had a working relay server, you probably wouldn’t need to force the traffic anywhere!  And if you’re trying to route port 25 for some other reason, skip this step.)

a. Edit /etc/postfix/main.cf

Edit this file to remove or comment out the line that established your relay host:

# relayhost = smtp.mypreviousprovider.nl
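After editing main.cf, you can verify and apply the change (postconf and postfix reload are standard postfix tools):

```shell
# Verify that no relayhost remains configured (should print just "relayhost =")
postconf relayhost

# Tell postfix to pick up the change
sudo postfix reload
```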

2. Force SMTP Port 25 through the VPN

I have a very complex routing scheme, with a ton of subnets that I use for all sorts of things — for example, forcing a wifi AP out through the VPN (so that you can connect to a specific AP to auto-use the VPN), keeping the kids on a different subnet so that I can force them out a different Internet connection, and blackholing unknown devices so that the kids can’t hook anything up without me verifying the MAC address … drop me a comment if you’re interested in any of these things … so the long and short of it is that I already have a routing script that I use to set up all my routing tables.

If you already have such a script, just add the additions below to that script.  If not, then create a new script for these commands.  It’s a somewhat separate exercise to hook it up so that it’s properly invoked whenever you bring an interface up or down (which I leave as a Google exercise for you, as it’s not fresh on my mind) … but in the worst case, you can just run it manually whenever your routing tables get rebuilt.

Note that we’re going to use “7” as the mark for our rules.  This is arbitrary.

a.  If you haven’t already, create a routing table for the VPN

I already had a routing table set up … if you do then just use that routing table, below.  But if you don’t, here’s a quick and dirty routing table to push over your VPN (please note, I’m just typing this, and haven’t actually tested the exact lines below, as I don’t need them in my setup!!):

# For clarity, clear anything that might have accumulated there.  Ignore any error, here.
ip route flush table vpn_table
# Push all traffic that goes to this table out the VPN.  Substitute your VPN's gateway for 99.99.99.99 below.
ip route add default via 99.99.99.99 table vpn_table
# And be sure to flush to pick it up
ip route flush cache
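One note: for the ip command to accept vpn_table by name, the table has to be declared in /etc/iproute2/rt_tables.  If you haven’t already done so, something like this (the number 100 is arbitrary; just pick one not already in use in that file):

```shell
# Give the custom routing table a name the ip command will recognize
echo "100 vpn_table" | sudo tee -a /etc/iproute2/rt_tables
```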
b. Now mark all SMTP port 25 packets with 7
iptables -t mangle -A OUTPUT -p tcp --dport 25 -j MARK --set-mark 7
c. Set the source IP to our ID on the VPN (substitute for 88.88.88.88) rather than our local network ID.  Remember to use your correct interface if not tun0, below.
iptables -t nat -A POSTROUTING -o tun0 -j SNAT --to-source 88.88.88.88
d. Send everything marked with 7 to the VPN table (to force out the VPN)
ip rule add fwmark 7 table vpn_table
e. Relax the reverse path source validation

(See the post for a discussion.)

sysctl -w net.ipv4.conf.tun0.rp_filter=2
f. And flush for good measure
ip route flush cache

That should do it!  Run your script, and all your port 25 traffic should be running out your VPN.  Obviously, you can adapt the concepts here for other applications.

Upgrading from Ubuntu 12.04 (Precise Pangolin) to 12.10 (Quantal Quetzal)

Just some notes on some of the issues I hit in updating from Ubuntu 12.04 (Precise Pangolin) to 12.10 (Quantal Quetzal), in the event they’re of use to someone.

Problems with PHP Initialization

I was receiving a ton of errors from cron, with the text:

PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/pam_auth.so' - /usr/lib/php5/20100525/pam_auth.so: cannot open shared object file: No such file or directory in Unknown on line 0

While there appears to be a line of thought (via Google) that one needs to install php5-auth-pam or php5-pam … I didn’t find either package with a simple apt-get install, and didn’t bother investigating further.

The actual fix was to check /etc/php5/conf.d.  In that folder was a legacy (few years old) pam_auth.ini file which was attempting to load pam_auth.so.  Removing this file enabled PHP to start.
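If you want to track down the offending fragment rather than guessing, something along these lines works (moving the file aside is a bit safer than deleting it outright):

```shell
# Find any leftover ini fragments still referencing pam_auth
grep -l pam_auth /etc/php5/conf.d/*.ini

# Then move the stale file out of the way
sudo mv /etc/php5/conf.d/pam_auth.ini /root/pam_auth.ini.bak
```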

WordPress Hiccough

WordPress sites didn’t want to start up for me, claiming a missing wp-config.php.  When I checked /etc/wordpress, I found that my old wp-config.php had been copied to a typical wp-config.php.dpkg-bak … but no new file had been installed.  I just restored my old copy.
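Restoring it is just a copy; the exact .dpkg-bak filename will be whatever dpkg left in /etc/wordpress on your system:

```shell
# Put the preserved config back where WordPress expects it
sudo cp /etc/wordpress/wp-config.php.dpkg-bak /etc/wordpress/wp-config.php
```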


Updating from Hardy Heron to Jaunty Jackalope

Finally got around to updating Ubuntu this evening.  Had to go from Hardy to Intrepid first (purely as a waypoint), then to Jaunty Jackalope.

No major problems, although HAL quit for me at some point and had to be restarted remotely (stopped recognizing my mouse and keyboard when I kvm’d back from another machine).

After booting into Jaunty, I started getting the error “Tracker Applet: There was an error while performing indexing: Index corrupted.”  This appears to be a pretty common error, and there are a number of forum threads discussing it.  You probably want to read those to draw your own conclusions, but what worked for me was to install the tracker-utils package, and then wipe out Tracker’s databases.

# Install the tracker utilities
sudo apt-get install tracker-utils
# Kill all the processes and wipe out the database
sudo tracker-processes -r
# Remove the cache
rm -rf ~/.cache/tracker
# Clear local settings
rm -rf ~/.local/share/tracker
# Remake the directory
mkdir ~/.local/share/tracker
# Restart the daemon
sudo /usr/lib/tracker/trackerd &

Now … I suspect that’s not permanent, but we’ll see what happens after the next reboot.

Followup:  That worked fine.  After logging out and logging back in, the tracker applet restarted, indexing restarted, and finished pretty quickly.

Building MP3 support into Sox

I rip all my old CDs as MP3s now.  I used to rip as Ogg Vorbis, both for the higher quality, and because Ogg isn’t a patent-encumbered format, but there are just too many music applications (both software and embedded systems, such as the Netgear MP-101 wireless music streamers I picked up cheap to scatter around the house) that don’t understand Ogg, so I gave up.  MP3 it is.

Now, I’m trying to put together a simple script to merge an MP3 playlist into a single MP3.  Sox should do the trick … but it doesn’t come with built-in MP3 support (for obvious reasons).  Here’s what I did to build MP3 support into Sox — many thanks to a good post by Michael Walma on the Ubuntu Forums.

First, make sure you have what you’ll need.  LibLAME is the library for encoding MP3s — your specific version may vary from what’s below (use the Synaptic Package Manager to check what’s available with a search on “lame” if in doubt).  Be sure that you have universe, multiverse and restricted enabled in your package sources (/etc/apt/sources.list), as anything MP3-related is going to come from the non-standard locations.

sudo apt-get install sox liblame0 liblame-dev
sudo apt-get build-dep sox

Switch to where you’re going to build it (e.g., cd /usr/local/src).

sudo apt-get source sox

Unpack the source and switch to the source directory (your version is likely to vary):

sudo dpkg-source -x sox_14.0.0-5.dsc
cd sox-14.0.0

Now you need to enable LAME support:

sudo vi debian/rules

And comment out the line that reads:

DEB_CONFIGURE_EXTRA_FLAGS := --disable-lame

Build it:

sudo dpkg-buildpackage -b

Back up to the parent directory (cd ..) and install the newly-built packages.  For me, only new Sox packages were in the directory, so I just swept them up together.  Otherwise, you’d need to do them one by one, or with a filter such as *sox*.deb:

sudo dpkg -i *.deb

And, now, you should be able to confirm that you have support with:

sox -h

Success!
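And the original goal, merging a playlist into a single MP3, is now a one-liner (the filenames here are just placeholders):

```shell
# Concatenate several MP3s into one output file
sox track1.mp3 track2.mp3 track3.mp3 merged.mp3
```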