Duplicity fails on 3GB /tmp

The backup application Duplicity has just filled up my /tmp, which lives on the / partition with 3GB of free space, during verification of a full backup and, finally, reported ‘failure’!
After quitting Duplicity, I have 1GB less free space on my / partition.

I wonder how much bigger the /tmp space needs to be…

----

Update [2013/11/19]:

I’ve found the ‘lost’ GB in the ~/.cache/deja-dup directory [naturally] (or /root/.cache/deja-dup if Duplicity runs under gksudo), but I still don’t know why 3GB of temporary space is not enough.
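In the meantime, a workaround for a cramped / partition is to point the temporary space somewhere roomier. As far as I can tell, duplicity honours the TMPDIR environment variable and also offers a --tempdir option; the paths and URL below are placeholders for your own setup:

mkdir -p /mnt/bigdisk/tmp
duplicity verify --tempdir /mnt/bigdisk/tmp file:///mnt/backup /home/user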



CRON error: grandchild failed with exit status 1 [Solution]

Just after I upgraded my Ubuntu to version 10.10, I noticed some cron-related error messages in my /var/log/syslog file, like these:

Jul  5 13:40:01 agriope2 CRON[971]: (CRON) error (grandchild #974 failed with exit status 1)
Jul  5 13:50:01 agriope2 CRON[7775]: (CRON) error (grandchild #7778 failed with exit status 1)
Jul  5 14:00:01 agriope2 CRON[14520]: (CRON) error (grandchild #14526 failed with exit status 1)
Jul  5 14:00:01 agriope2 CRON[14521]: (CRON) error (grandchild #14531 failed with exit status 1)

At first, I thought they were generated by one of my scripts, which runs via cron every 10 minutes, and I wondered what had happened during the upgrade to turn its exit status from 0 to 1. Unfortunately, I didn’t have the time to delve deeper back then, so I ignored it, since everything seemed to work as expected.

Until today, when (after one more successful system upgrade in the meantime) I decided to examine it further.
Given that nothing had changed in my script, I thought it would be a good idea to search the /etc/cron* directories. There I found the script /etc/cron.d/update-motd, which was scheduled to run every 10 minutes and tried to run /usr/sbin/update-motd. The problem was that /usr/sbin/update-motd was no longer present, probably because “the functionality formerly provided by update-motd package is now integrated into pam_motd, in libpam-modules” (as the description of the update-motd package says).

So, I moved /etc/cron.d/update-motd to another location, in case I need it again some time in the future.
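For the record, the move was nothing fancier than the following; the destination directory is an arbitrary choice of mine:

mkdir -p /root/disabled-cron.d
mv /etc/cron.d/update-motd /root/disabled-cron.d/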

Update [2011/12/25]:
In case you are running fetchmail from cron, you’ll have messages like the above whenever you have no new mail. For this situation, the solution is already in the man page of fetchmail:

If you do not want “no mail” to be an error condition (for instance, for cron jobs), use a POSIX-compliant shell and add

    || [ $? -eq 1 ]

to the end of the fetchmail command line; note that this leaves 0 untouched, maps 1 to 0, and maps all other codes to 1. See also item #C8 in the FAQ.
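Putting it together, a crontab entry along these lines (the schedule and flags are just an illustration) keeps the “no mail” case from being logged as an error:

*/10 * * * * fetchmail --silent || [ $? -eq 1 ]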



System upgrade to Ubuntu 9.10

I know that I’m some months late on upgrading, but for a production system I prefer to let things cool down a little before attempting the system upgrade. However, even with this methodology, there may still be some problems afterwards, and I found two of them today, after upgrading to the Karmic Koala version of Ubuntu (9.10 for number-centric fans).

The first one was a problem with mysql-client, which remained at version 5.0 for some reason. I was able to resolve this issue by manually installing version 5.1:

aptitude install mysql-client-5.1

The second problem had to do with updating vnstat’s database (‘vnstat -u’). After testing various ideas, I came up with no other solution than deleting the previous databases and recreating new ones with the following commands:

rm /var/lib/vnstat/eth* /var/lib/vnstat/.eth*
vnstat -u -i eth0
vnstat -u -i eth1

I hope that there will be no other surprises (until the next upgrade, of course).



Send mail to correct local host

After replacing my previous server with a new one, I ran a lot of migration scripts and update procedures to make sure that everything had transferred OK and worked as expected. However, a little thing kept bugging me until today.

Usually, when you want to send an email message to a local user, you either send it to user@localhost or just to user, and the mail service makes sure that the local hostname is added after the ‘@’ (if there is nothing there, of course). But the problem for me was that, after the upgrade, messages to local users were relayed through my external mailgate.

The /etc/hosts file and the configuration files of postfix were already filled with the correct hostnames, and I could not find anything until I tried to search all the files in the /etc hierarchy for the old hostname.
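A recursive grep does the trick here; ‘old-server’ stands in for whatever the previous hostname was:

grep -rl old-server /etc 2>/dev/null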

To my surprise, I found that the old hostname was still in /etc/mailname which, according to its man page, is a plain ASCII configuration file, which on a Debian system contains the visible mail name of the system.

I don’t know if the upgrade kept it intact or it was the restoration of /etc data files that caused this discrepancy. The good thing is that I found it easily by searching with grep.
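The fix itself is a one-liner; mail.example.com is, of course, a placeholder for the real visible mail name:

echo mail.example.com > /etc/mailname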



Sample data and how to produce them

Sometimes a programmer needs to test/debug an application by feeding it with data, random or not. In a situation like this, there are some utilities one can use to produce as much data as she needs.

Let’s start with a simple example of feeding our test application (we will call it “test_data”) with zeroes, say 100 bytes with a value of zero.

head --bytes=100 /dev/zero | ./test_data

The above command has two parts. The first one (before the ‘|’ delimiter), ‘head’, reads the /dev/zero device, which provides us with zero-valued bytes as a stream, up to a count of 100 bytes, and sends them to the second one, ‘test_data’, as input. If we wanted more, say 100 KBytes, we would replace the value of the ‘bytes’ argument with “100k”, as follows:

head --bytes=100k /dev/zero | ./test_data

Now, let’s say we want random data. It’s as simple as replacing the ‘/dev/zero’ device with ‘/dev/random’:

head --bytes=100k /dev/random | ./test_data

With the above command combination, we produce 100 KBytes of random data as input to our test_data application.
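One caveat: on Linux, /dev/random may block when the kernel’s entropy pool runs low, which makes it painfully slow for large amounts of data. If the quality of the randomness is not critical for the test, the non-blocking /dev/urandom is a drop-in replacement:

head --bytes=100k /dev/urandom | ./test_data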

And what if we wanted some specific data, like “testing with sample data”, repeated 40 times?

yes "testing with sample data" | head -40 | ./test_data

The ‘yes’ command just outputs its argument repeatedly; ‘head’ keeps only the first 40 lines; ‘test_data’ works with these 40 lines of the sample string.

Programming life can be very simple (and entertaining) sometimes!



Use the source, Luke… even for HTML pages!

These days I’m experimenting with a few desktop clients for reading/updating status messages from/to some social networks I participate in and want to be more involved with from now on. But there is a small catch!

Most desktop clients nowadays are built using Adobe’s AIR. So, at first, I downloaded and installed the AIR package for Linux and all went well.

After that, I had to visit each client’s homepage to install the client. To make things easier for the end user, the installation is automated through the webpage. The only problem is that this procedure fails to recognize that I have already installed AIR and doesn’t let me proceed with the download/installation.

So, what can a poor developer/user do in such a situation?
Simple… “use the source, Luke!” Look at the HTML code of the webpage, find the link reference to the “.air” file, download it manually, and continue the installation from the command line using AIR’s “Adobe AIR Application Installer” (located in the /usr/bin directory).

From Firefox, while viewing the desktop client’s webpage, press Ctrl+U to view the source, search for “.air” (without the quotes) and copy the URL from the ‘a href=’ attribute to the clipboard. Paste the copied URL into the location bar and save the file to your hard disk. Then run

"Adobe AIR Application Installer"

and guide it to the just-downloaded file to start the installation.
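For the terminally inclined, the link can also be fished out of the page without a browser; client.example.com is a placeholder for the desktop client’s homepage:

curl -s http://client.example.com/ | grep -o 'href="[^"]*\.air"'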

As usual, “the devil is in the details”, right?  🙂



Backup files before editing

It’s a fact that many text editors (or word processors) have the ability to keep a backup copy of every file you edit, just to assure you that you can always restore the previous version of the file. This is usually implemented by creating a copy of the file with a file extension of .bak or by appending a “~” character to the end of the current file extension, like .c~

Although it is a useful option for a programmer (or a writer), it lacks the capability of keeping more backup copies, like a version control system such as CVS (Concurrent Versions System), where the author can go back in time and find the text file as it was, e.g., a month ago (very handy if you want to restore something you deleted just a week ago).

In order to avoid the complexity of a version control system, yet keep the option to “travel” back in time, I wrote a small shell script, which I run just before my editing sessions. The script looks like this:

#!/bin/bash
# timestamp used as a prefix for this session's backup copies
datetime=`date +%Y%m%d_%H%M`
bdir=".backup"

mkdir "$bdir" 2> /dev/null

# keep a copy of every file given on the command line
for a in "$@"; do
    cp -av "$a" "$bdir/$datetime.$a"
done
gzip -9 "$bdir/$datetime".*

All it does is keep a compressed copy of every file you want to edit in a directory named “.backup”; hence I named it “backup2.backup” and I run it as:

backup2.backup *.php *.html

It doesn’t check many things, but I’ve been using it for some time now. Feel free to enhance it as you like.
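Restoring is equally simple: pick the timestamped copy you want and decompress it; the timestamp and filename below are placeholders, of course:

zcat .backup/20091105_1030.index.php.gz > index.php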



Always check your fingers while being root

Yesterday, while I was reading some very old magazine articles, I remembered a “horror” story that happened to me a long time ago, when I was learning to administer my first Sun Solaris system. It goes like this…

I was following the instructions to install some new application and I had to add a new user to the /etc/passwd file. I kept a backup copy and started editing it with vi. What I didn’t know at that time was that the cursor (arrow) keys did not move the cursor; they produced “~” characters instead. OK, I thought, back to the H-J-K-L keys. I added the new user at the end of the file and saved it. I also logged out from my ‘root’ account, as my job was finished.

What I didn’t notice, while I was fiddling with the cursor keys, was that the first two letters of the username of the first account in the file had changed to capitals.

“No big deal”, you might say, except that the first account in the /etc/passwd file was ‘root’! It had become ‘ROot’ without my noticing it!

Imagine my frustration when I tried to log back in to the root account, without knowing what had actually happened! 🙁

I don’t remember exactly what error messages I saw, but I ended up rebooting the machine in single-user mode, mounting the root partition in a directory, and restoring the root account.

What this little story taught me was to always double-check what I’m doing as root, especially when the keys I’m pressing don’t have the expected result.



Can’t eject cdrom? Try fuser

Just a quick note to myself (and anyone else who might be in a similar situation)…

Sometimes, after I’ve played some video file from a CD-ROM, I can’t eject the disc because the file is locked/used by another application/daemon (like dbus-launch or dbus-daemon).

The quickest way I’ve found to unlock/free the file (and eject the disc) is to use “fuser -k” with the full path of the file as an argument, like this:

fuser -k /cdrom/file.avi

If, on the other hand, you just want to see which processes are using a certain file, try these commands from any terminal:

fuser -v /var/spool/mail/$LOGNAME
lsof /var/spool/mail/$LOGNAME
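And if the whole mount point is busy rather than a single file, fuser’s -m flag reports every process using any file on the given mounted filesystem (here /cdrom is assumed to be the mount point):

fuser -vm /cdrom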

