How I solved my memory problems (for free)

For the last week I had been experiencing low-memory problems, both in RAM and in swap space. I admit that I use Firefox with a lot of add-ons that I find useful, so at first I tried uninstalling some of them (those I don’t use too often). I also had to close, when not needed, some “always running” utilities, like gkrellm, amarok, kopete and korganizer.

I have 512 MB of RAM in my system and a separate swap partition of the same size on my hard disk. Although the system worked most of the time (even when I used the gcc compiler), it was constantly swapping applications in and out, and yesterday evening my swap space reached 0 bytes of free space.

I thought that increasing the size of the swap partition might solve the problem, but I didn’t want to fiddle with my disk, so I called this solution “plan B”.

As “plan A”, I decided to add more swap space by creating a swap file. The commands I used are the following:

# dd if=/dev/zero of=/home/swapfile bs=1024 count=1048576
# mkswap /home/swapfile

Setting up swapspace version 1, size = 1073737 kB
no label, UUID=267d82f3-0f53-42d6-bf72-6fc4bf36b8f4

# swapon /home/swapfile
# vim /etc/fstab

where I added the following line:
/home/swapfile none swap sw 0 0

After executing these commands, my swap space increased instantly to 1 GB, without the need to reboot and without any cost (except for 1 GB of disk space, of course).
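
To verify the change without rebooting, the usual reporting commands can be used (a quick check; both are standard on Linux systems):

# swapon -s
# free -m

The first lists every active swap area (the partition plus the new file) and the second shows the new totals in megabytes. Since the swap file lives on a world-readable filesystem, a chmod 600 /home/swapfile is also a good idea, so that other users cannot read swapped-out memory.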

So far the system is healthy again, with a lot of usable memory, and all my favorite applications are running together as before.


Edit your directories fast

Suppose you need to rename many files in a directory and you can’t remember any nifty way to quickly rename them. In such a case, try vidir.
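
By the way, vidir is part of the moreutils package, so if it isn’t installed already, on a Debian-based system (an assumption about your distribution) something like this should fetch it:

$ sudo apt-get install moreutils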

All it does is create a temporary file with the name of each file, properly numbered, on a separate line, and load it into the text editor, where you can edit it. Every change you make to the name or the number of a file is applied to the corresponding directory entry when you save the file.

Let’s say, for example, you want to exchange the names of two files, file1 and file2. When you run vidir, you’ll see something like this:

1        file1
2        file2

change it to

2        file1
1        file2

and save the file. That’s it!

Although its name suggests the vi editor, don’t worry. You can use any text editor you like, as long as you declare it as your EDITOR:

$ EDITOR=kate vidir


Daily distribution of emails

If you have a busy mailbox, the following method will give you some more information about your e-mail traffic on a daily basis. It is very useful if you are subscribed to many mailing lists (professional or not) and you want a clean inbox.

  1. You have to use procmail to distribute your incoming e-mail messages into the appropriate mailboxes (a minimal recipe sketch follows this list).
  2. You must enable procmail’s logging feature by including a line like

    LOGFILE=$HOME/mail-from

    into your ‘~/.procmailrc’ file (see man procmailrc for more details).

  3. Download the statmail Perl script and install it in your ‘~/bin’ directory.
  4. Set up a cron job like the following (everything goes on one line; remember to edit the pathnames):

    00 00 * * * /home/username/bin/statmail < /home/username/mail-from | /usr/bin/mailx -s "daily mail statistics" username@localhost && rm /home/username/mail-from
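
Regarding step 1, here is a minimal ‘~/.procmailrc’ sketch (the mailing-list address and mailbox name are made up for illustration; see man procmailex for real-world recipes):

LOGFILE=$HOME/mail-from
MAILDIR=$HOME/Mail

# file messages from the example list into a mailbox of their own
:0:
* ^List-Id:.*lamp-users\.example\.org
lamp-users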

From now on (and every midnight), you’ll receive a new e-mail message containing an analytical report of how many messages you’ve received today (or should I write, yesterday?) and the mailboxes they were stored in.


Imprisoned or not?

Sometimes it is imperative for an application to run in a protected environment, especially if it provides a service (like, for example, the Apache server). For this reason, an administrator can use the chroot system call to force a process (or process group) to run under a subset of the file system, denying access to any other part of it.

Another common use of this mechanism is creating a sandbox for a user, even root, in order to test something without the fear of accidentally destroying the system (although this is not entirely true, since the chroot mechanism cannot, by itself, block low-level access to system devices).

So, how can we find out whether an application is running in a chroot jail or not?

One way is by running

ls -id /

to check the inode number of the root directory. On most filesystems (ext2/3/4, for example) the real root directory has inode number 2, so if we see a large number instead, our application is jailed in a chrooted environment.
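
The same check in script form (a minimal sketch, assuming GNU stat and an ext2/3/4 root filesystem, where the real root directory is inode 2):

#!/bin/sh
# the real root directory has inode 2 on ext2/3/4 filesystems
if [ "$(stat -c '%i' /)" -ne 2 ]; then
    echo "probably running inside a chroot jail"
else
    echo "probably the real root"
fi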


GParted saves the day!

Due to an installation misconfiguration, one of my machines had a very small /boot partition, which I had always wanted to enlarge, but without going into too much trouble, like repartitioning/reformatting the disk or reinstalling the whole system, since it was already working as it should.

Since I hadn’t tried anything similar before, I decided to use the GNOME Partition Editor (GParted), which is available as a package (in the usual repositories), as a LiveCD and as a LiveUSB image. Jobs like this can’t be done on a running system (the partitions involved must be unmounted), so I downloaded the LiveCD version (size: 50 MBytes) and burned it to a rewritable CD.

I rebooted the system from the CD (which boots a minimal Gentoo distribution with the Fluxbox window manager) and, after a while, the main screen of the partition editor appeared. Changing the partition sizes is very easy: all I had to do was select the partition and press the “Resize/Move” button. After rebooting, it proved reliable too (although I already had a backup, just in case something happened).

Since I was happy with it, I also tested it on a friend’s system; he wanted to enlarge his NTFS-formatted C: drive (for the last month, his operating system had insisted that it had no room available to download and install the OS updates). Boy, aren’t 4 GBytes enough for a system partition these days?

Needless to say, GParted worked flawlessly both times.

Disclaimer: always have a working backup before doing things like this, or you’ll be reminded of Murphy’s law!


RSS feeds to my mailbox

I have written before about how I prefer to read, organize and archive (almost) everything with my mail reader (hey, I just love my Mutt), and that is also true for RSS feeds (from the blogs I read frequently and other news sites).

Until now, I used to fire up a GUI news reader, gather all the new posts from my feeds, export them to a mailbox-formatted file and import that into my mailbox for reading. I have even tried some online (i.e. browser-based) aggregators/readers, but I was not very happy with the whole procedure!

RSS2email is a very simple program. All it does, when it runs, is check whether there are any new items in the feeds it knows about and, if there are, deliver them to a (configurable) e-mail address. I installed the (small) package, added some feeds to test, and set up a cron job (my preference is to run it hourly, like this: 11 08-23 * * * /usr/bin/r2e run) to ensure frequent updates.

One minor problem, however, is that it doesn’t read OPML files (special files for exchanging RSS feed lists between readers) and, since I have many (>100) feeds to read from, I used a bash command to import all of them into RSS2email’s database.

egrep -o '"http://[^"]*"' newsfeeds.opml | xargs r2e add

Explanation:

  • newsfeeds.opml is the file where I have my RSS feeds (exported by the other program I used)
  • egrep is GNU grep with extended regular expressions support
  • -o '"http://[^"]*"' parses each input line and outputs just the (quoted) URL of each feed
  • xargs runs “r2e add” with every URL previously extracted, adding them to the database file of RSS2email
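
In case your version of r2e accepts only one feed per invocation (I haven’t checked every release), xargs can pass the URLs one at a time; note that xargs also strips the surrounding double quotes before handing each URL over:

egrep -o '"http://[^"]*"' newsfeeds.opml | xargs -n 1 r2e add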

OK, I admit it! I’m a hard-core computer-dinosaur looking for ways to convert all those fantastic GUI programs to “simple” CLI ones! Guilty as charged! 🙂


Wikipedia to my mailbox

I’m a frequent reader of Wikipedia. I’m even subscribed to the “Article of the day” mailing list, and every day I receive a random article in my mailbox. But when I wanted to find information about something, I had to open a new tab in my browser, visit Wikipedia, find and read the article and, if it was of value to me, save it for future reference… until yesterday!

Yesterday, I discovered (and installed) the wikipedia2text package and, to make things easier for me, I wrote a little wrapper function, which I promptly inserted into my ~/.bashrc:

wp() {
    what="$*"
    wikipedia2text "$what" | mailx -s "wikipedia article: $what" myemail
}

From now on, while working on a terminal, each time I issue a

wp title-terms

a new article will be delivered to my mailbox.
Example: wp context switch


Split this e-mail [digest] message

Sometimes we receive e-mail messages in digest format, either from mailing lists or from other informational newsletters. Occasionally, I want to keep just one paragraph or one message from such a digest and throw away all the rest.

So I wrote a little Perl script, called “splitdigest.pl”, to do just that, and I’m using it from inside Mutt by pressing ‘Z’. It works like this:

  1. open your ~/.muttrc file and append the following two lines after its last line:

    macro index Z "|~/bin/splitdigest.pl\nd"
    macro pager Z "|~/bin/splitdigest.pl\nd"

    (the above lines remap the key Z to (1) filter the body of the message through the splitdigest script, which is located in my ~/bin directory, and (2) delete the message)

  2. create a new file with your favorite editor and insert the following lines into it (just remember to replace username with your login name):
    #!/usr/bin/perl -w
    use strict;
    use diagnostics;
    my ($outfile, $line, $i, $k);
    my (@header, @body);
    $outfile = "/var/spool/mail/username";
    open PF, ">> $outfile" or die "Couldn't open $outfile for writing: $!\n";
    push @header, "From username\@localhost ", scalar(localtime()), "\n";
    $i = 0; $k = 0;
    foreach $line (<STDIN>) {
        if (($i == 0) && ($line !~ /^$/)) { # read the header
            if ($line =~ /^Content-Type: multipart\/alternative; /) {
                $line = "Content-Type: text/plain; charset=utf-8\n";
            }
            push @header, $line if
                ($line =~ /^(From|Date|Subject|To|Message-ID|Content-Type|Content-Transfer-Encoding): /i);
        }
        # read the message body
        if ($i > 0) { push @body, $line if ($line !~ /^$/); }
        # flush @body and restart with a new message
        if ($line =~ /^$/) {
            $i = 1;
            if ($#body > 0) {
                $k++;
                print PF @header, $line; # first print the header
                print PF @body, $line;   # then the body
                @body = ();              # finally flush the body
            }
        }
    }
    # in case the message ends with a non-blank line, write the rest
    if ($#body > 0) { print PF @header, "\n", @body, "\n"; }
    close PF;
    print "$k messages found and written to file [$outfile]\n";
    __END__

  3. Save the file as “splitdigest.pl” in your ~/bin directory (or wherever you like) and make it executable with chmod:

    chmod +x ~/bin/splitdigest.pl
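
You can also test the script outside Mutt by feeding it a saved digest on standard input (the filename here is just an example):

~/bin/splitdigest.pl < saved-digest.txt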

That’s all!

Additional information:

  • The code above splits the digest wherever there is a blank line, i.e. the separator is the blank line. With a little tweaking you can use a different separator.
  • I know that one can edit the whole message in Mutt by ‘e’diting it. I’m using this method for its speed and convenience.

Memory effects of a (K)Ubuntu upgrade

A few days after upgrading Kubuntu on my laptop (from version 6.10 to 7.04), I noticed that the free memory indication decreased much more rapidly than before when I loaded some applications. Additionally, when it reached 50-60 megabytes left, the whole system fell into continuous hard-disk writing/swapping, which left me no option but to power-cycle it the hard way! 🙁

At first, I thought it was something to do with the new kernel (2.6.20, in place of the 2.6.17 I had been using until then), so I changed my GRUB menu.lst file to load the 2.6.17 kernel by default. Although the situation got a little better, the problem remained, and I had to close Firefox in order to be able to run Opera and vice versa (a very annoying situation for someone who must test his LAMP projects with the most popular browsers).

Today I made another change to my system by purging the mdadm package (responsible for software RAID arrays, which I don’t use on this computer) and, so far, the system is more stable; at least I can use 2-3 browsers (don’t forget Konqueror) simultaneously.

The next days will show if the mdadm package was the culprit!

UPDATE:
I think I found it (finally)… my swap partition was disabled!!!
The upgrade process changed the relevant line in /etc/fstab file from

/dev/sda5 none swap sw 0 0

to

UUID=fac6f2c3-34a6-48c6-83f2-eb59c90cb944 none swap sw 0 0

which didn’t work as expected. After I restored the first line in place of the second, I got back my 500 MB of swap space.

Furthermore, in order to be consistent with the new UUID method, I used the blkid command and found that the correct UUID for the swap partition was “1e93c994-8da0-4666-838f-4dd1452f9a15”, so I changed the above line to:

UUID=1e93c994-8da0-4666-838f-4dd1452f9a15 none swap sw 0 0
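
For reference, blkid reports something like the following (a sketch; the exact field order may differ between blkid versions):

# blkid /dev/sda5
/dev/sda5: TYPE="swap" UUID="1e93c994-8da0-4666-838f-4dd1452f9a15"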

The problem now is that the free command shows that I have 1 GB of swap space, whereas with the first method I had 500 MB. I have to investigate this further (hoping that there will be no data corruption in the meantime)…


Watch my tails

When in charge of server monitoring, one has to pay close attention to the server’s log files. Be they web logs, mail logs or other applications’ logs, they sometimes offer us too much information: enough to help locate the origin of a certain problem, or enough to confuse us with a flood of informational but useless messages!

Monitoring log files in real time is very easy using the tail command in its follow (“-f”) mode. For example:

tail -f /var/log/syslog

will show on screen every line that is appended to the end of the syslog. Additionally, filtering out any undesired lines requires a grep filter, like:

tail -f /var/log/syslog | grep -v -i "smtp"

But what if you want to watch more than one log file or if you want to highlight certain parts, such as IP addresses or error codes?

MultiTail to the rescue!
MultiTail is a very versatile and highly configurable utility. It can monitor many log files in parallel, either in a window of their own or in a single one by merging them. It can also display the (differential or not) output of other commands, such as “ping -c 1 my.host.ip” or “ls /tmp”, and, of course, it can colorize or filter certain fields or lines the way we want.
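
A couple of invocations as a sketch (the option behavior is as I understand it from the documentation; check the multitail man page for your version). To watch two logs, each in its own window:

multitail /var/log/syslog /var/log/apache2/error.log

and to merge a second log into the previous window:

multitail /var/log/syslog -I /var/log/auth.log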

When I discovered it (via DebianPackageADay), installed it and ran it, I felt very happy, because I could have all the running information I needed in just one window, with the most useful bits highlighted and easily spotted!

A very nice work indeed!

