Backup files before editing

Many text editors (and word processors) can keep a backup copy of every file you edit, so that you can always restore the previous version of the file. This is usually implemented by creating a copy of the file with a file extension of .bak, or by appending a “~” character to the current file extension, as in .c~

Although that is a useful option for a programmer (or a writer), it cannot keep more than one backup copy, unlike a version control system such as CVS (Concurrent Versions System), where the author can go back in time and see a text file as it was, say, a month ago (very handy if you want to restore something you deleted a week before).

In order to avoid the complexity of a CVS, yet having the option to “travel” back in time, I wrote a small shell script, which I’m running just before my editing sessions. The script looks like this:

bdir=.backup
datetime=`date +%Y%m%d_%H%M`

mkdir $bdir 2> /dev/null

for a in $*; do
    cp -av $a $bdir/$datetime.$a
done

gzip -9 $bdir/$datetime.*

All it does is keep a compressed copy of every file you want to edit in a directory named “.backup” (hence the script’s name, “backup2.backup”), and I run it as:

backup2.backup *.php *.html

It doesn’t check many things, but I’ve been using it for some time now. Feel free to enhance it as you like.
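As a starting point for such enhancements, here is a lightly hardened variant, sketched as a shell function (the extra checks and the use of "$@" instead of $* are my own additions; the .backup directory and timestamp scheme are the same as in the script above):

```shell
backup_files() {
    bdir=.backup
    datetime=$(date +%Y%m%d_%H%M)
    mkdir -p "$bdir" || return 1      # fail early if we cannot create it
    for a in "$@"; do
        [ -f "$a" ] || continue       # only back up regular files
        cp -a "$a" "$bdir/$datetime.$a"
    done
    gzip -9 "$bdir/$datetime".* 2> /dev/null
}
```

Quoting "$@" and "$a" lets the function cope with filenames containing spaces, which the original one-liner would split apart.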

Visit The Light of the LAMP blog for more…

Can’t eject cdrom? Try fuser

Just a quick note to myself (and anyone else who might be in a similar situation)…

Sometimes, after I’ve played some video file from a cd-rom, I can’t eject the disc because the file is still held open by another application or daemon (like dbus-launch or dbus-daemon).

The quickest way I’ve found to free the file (and eject the disc) is to run “fuser -k” with the full path of the file as an argument, like this:

fuser -k /cdrom/file.avi

If, on the other hand, you just want to see which processes are using a certain file, try these commands from any terminal:

fuser -v /var/spool/mail/$LOGNAME
lsof /var/spool/mail/$LOGNAME


No more ~/.bashrc running?

After upgrading Kubuntu 7.04 to 7.10 a few days ago, I noticed that my ~/.bashrc startup file no longer ran when I opened a new terminal, be it konsole, xterm or (my favorite) gnome-terminal!

Today I decided to debug this strange behavior to find what was wrong…

I started my research by inserting a line at the start of /etc/bash.bashrc:

echo running /etc/bash.bashrc

and I noticed that it was displayed twice when I was starting a new terminal. Then I looked into my ~/.bashrc file (which was working as it should before the upgrade) and found these 3 lines:

if [ -f /etc/bash.bashrc ]; then
    . /etc/bash.bashrc
fi

After commenting out these lines, everything worked as expected again and I reinstated the original /etc/bash.bashrc file (without the debugging echo command).
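Instead of commenting the lines out entirely, one could also guard against double sourcing. A minimal sketch for ~/.bashrc (the BASHRCSOURCED variable name is my own choice, not a standard one):

```shell
# source the system-wide file only once per shell
if [ -z "$BASHRCSOURCED" ] && [ -f /etc/bash.bashrc ]; then
    BASHRCSOURCED=Y
    . /etc/bash.bashrc
fi
```

This keeps working even if a future upgrade stops sourcing /etc/bash.bashrc before ~/.bashrc again.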


Refresh the memory of bash

Suppose you have the same utility in two different places in your filesystem: one installed by a package you no longer need, since you have just installed the same program from source (but into a different directory). You promptly uninstall the packaged version, but when you try to run the utility, you get something like this:

bash: /usr/bin/utility: No such file or directory

The shell is still looking for the old version, because bash caches (hashes) the full path of every command it has already run.

To refresh the memory of bash, just run the command:

hash -r

and after that, when you rerun the utility, the new version will appear.
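The same builtin can also inspect or trim the cache selectively; a short sketch (the -d option is bash-specific):

```shell
hash -r              # flush the entire lookup cache
ls / > /dev/null     # run a command so bash caches its path
hash                 # list cached commands and their hit counts
hash -d ls           # bash only: forget just this one entry
```

“hash -d utility” is handy when only one command moved and you don’t want to throw away the whole table.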


RSS feeds to my mailbox

I wrote before about how I prefer to read, organize and archive (almost) everything with my mail reader (hey, I just love my Mutt), and that is also true for RSS feeds (from the blogs I read frequently and other news sites).

Until now, I used to fire up a GUI news reader, gather all new posts from my feeds, export them in a mailbox-formatted file and import them to my mailbox to read. I have even tried some online (i.e. browser-based) aggregators/readers, but I was not very happy with the whole procedure!

RSS2email is a very simple program. All it does, when it runs, is check whether there are any new items in the feeds it knows about and, if there are, deliver them to a (configurable) email address. I installed the (small) package, added some feeds to test and added a cronjob (my preference is to run it hourly like this: 11 08-23 * * * /usr/bin/r2e run) to ensure frequent updates.

One minor problem, however, is that it doesn’t read OPML files (special files for exchanging RSS feeds between readers), and since I have many (>100) feeds to read from, I used a bash command to import all of them into RSS2email’s database.

egrep -o '"http://[^"]*"' newsfeeds.opml | xargs r2e add


  • newsfeeds.opml is the file where I have my RSS feeds (exported by the other program I used)
  • egrep is GNU grep with extended regular expressions support
  • -o '"http://[^"]*"' parses each input line and outputs just the quoted URL of each feed
  • xargs runs “r2e add” with every URL previously extracted, adding them to the database file of RSS2email
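If the pattern above ever matches quoted URLs that are not feeds, a slightly more targeted variant is possible. This is a hypothetical helper of my own (it assumes the feed addresses sit in the xmlUrl attributes that OPML files normally use):

```shell
# extract_feeds: print one feed URL per line, taken from the
# xmlUrl="..." attributes of the OPML file given as argument
extract_feeds() {
    egrep -o 'xmlUrl="[^"]*"' "$1" | sed -e 's/^xmlUrl="//' -e 's/"$//'
}
# usage: extract_feeds newsfeeds.opml | xargs -n 1 r2e add
```

The sed stage strips the attribute name and quotes, so only bare URLs reach “r2e add”.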

OK, I admit it! I’m a hard-core computer-dinosaur looking for ways to convert all those fantastic GUI programs to “simple” CLUE ones! Guilty as charged! 🙂


Replace in place

What can you do when you want to replace certain text strings with their “equivalents” or “substitutes” very fast?

Of course you can fire up your favorite text editor and start the well-known and frequently used ‘search-and-replace’ procedure.

Or you can do the same thing using sed, like this:

$ sed -i -e 's/this/that/g' -e 's/foo/bar/g' filename

But, if you have MySQL installed, you can find the replace utility very handy, like this:

$ replace this that foo bar -- filename

So? Except for the fewer characters typed (in the case of replace), both commands had the same result, right? Right!

But replace can do the following in just one step:

$ replace this that that this -- filename

or even

$ replace foo bar bar baz baz foo -- filename1 filename2 filename3

replace is a utility written just to replace text strings, and it does that very well.
sed, on the other hand, can do a lot (and then some) more, but it is harder for less experienced users.
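For comparison, a one-step swap with sed needs a temporary placeholder. A minimal sketch (the swap_words name and the @TMP@ marker are my own, and the placeholder is assumed not to occur in the file):

```shell
# swap_words A B FILE: exchange every A and B in FILE in one pass,
# going through a placeholder so the two substitutions don't collide
swap_words() {
    sed -i -e "s/$1/@TMP@/g" -e "s/$2/$1/g" -e "s/@TMP@/$2/g" "$3"
}
```

This is exactly the extra bookkeeping that “replace this that that this” spares you from.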


Environmental issues (part 2)

When working a lot with command line utilities, I often find myself retyping the same commands. That is a good reason to use aliases, which save typing time and prevent errors.

First, we have to find our most loved/typed commands. This can be done using:

history | cut -c 8- | sort | uniq -c | sort -nr | head -20

in order to show the 20 most frequently used commands [1]. Of course, the above command combination should become an alias too [2].

After analyzing the output, we can decide which commands could be substituted by aliases, insert them in the ~/.alias file and use them after we “source” it [3].

[1] Explanation of the commands used:

  • history: show the last commands entered
  • cut -c 8-: cut the first 7 characters from each line of “history” output (cut off the number)
  • sort: sort the commands alphabetically
  • uniq -c: count same commands in a frequency table
  • sort -nr: sort the frequency table by descending counts
  • head -20: show only the first 20 lines

[2] echo "alias hist_top20='history | cut -c 8- | sort | uniq -c | sort -nr | head -20'" >> ~/.alias
[3] after editing ~/.alias, `source ~/.alias`
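Putting the three footnotes together, a minimal ~/.alias could look like this (the sample aliases here are just illustrations of my own, not recommendations):

```shell
# ~/.alias: sample entries
alias ll='ls -l'
alias hist_top20='history | cut -c 8- | sort | uniq -c | sort -nr | head -20'

# and in ~/.bashrc, source it if present:
[ -f ~/.alias ] && . ~/.alias
```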


Environmental issues (part 1)

As a developer, I prefer working with command line utilities. That’s why I have devoted my first workspace area to an almost full-screen terminal window (I prefer gnome-terminal, although I’m working with KDE, and konsole is just as good to work with) comprising 3 or more tabs.

I want the first tab to always open mutt (the e-mail client that does almost everything), the second one to change to my most recent project’s directory (where I can edit the source files with vim — tabbed, using “vim -p”) and make a backup of the project’s database, the third to check some log-file, and the list goes on.

So, what’s the best way to automate these procedures?

Currently I’m using the output of the “tty” command and a case statement at the end of my ~/.bashrc file (comments included):

# show me from where I logged in
echo Logged in from $(tty)
# (the pts numbers below depend on the order the tabs open)
case "$(tty)" in
    /dev/pts/0)
        # run mutt
        mutt
        ;;
    /dev/pts/1)
        # first change directory
        cd ~/projects/projectName
        # then perform a quick database backup
        make back
        ;;
    /dev/pts/2)
        # show me web visits, ignoring some of them based on certain criteria
        tail -f /var/log/apache2/access.log | egrep -v "localhost|127.0.0|/(Thumb|images|Photo)/|favicon"
        ;;
esac

With arrangements like the above, one can “feel like home” by just logging in!


bash: quickly rename files

Since one cannot always have the tools she likes, here are some one-liners to rename certain files using only bash.
Rename all ‘jpeg’ files to ‘jpg’:

  • for a in *.jpeg; do mv "$a" "${a%jpeg}jpg"; done

Remove the ‘photo-‘ prefix:

  • for a in photo-*; do mv "$a" "${a/photo-}"; done

Rename ‘dsc’ prefix to ‘photo-‘:

  • for a in dsc*; do mv "$a" "${a/dsc/photo-}"; done

Where can I find these recipes?
man bash 🙂
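Two small safeguards are worth adding when trying these: nullglob keeps the loop from running on the literal pattern when nothing matches, and prefixing mv with echo previews the renames first. A sketch of the first recipe with both:

```shell
shopt -s nullglob                     # no matches: skip the loop entirely
for a in *.jpeg; do
    echo mv -- "$a" "${a%jpeg}jpg"    # remove "echo" to actually rename
done
```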
