Many text editors (and word processors) can keep a backup copy of every file you edit, so that you can always restore the previous version of the file. This is usually implemented by creating a copy of the file with a .bak extension, or by appending a “~” character to the current file extension, like .c~.
Although this is a useful option for a programmer (or a writer), it keeps only one backup copy. A version control system like CVS (Concurrent Versions System) lets the author go back in time and find the text file as it was, say, a month ago (very handy if you want to restore something you deleted a week ago).
To avoid the complexity of a CVS setup, yet keep the option to “travel” back in time, I wrote a small shell script, which I run just before my editing sessions. The script looks like this:
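A minimal sketch of such a script, assuming it keeps timestamped copies under a per-directory .backups folder (the directory name and naming scheme are illustrative assumptions, not necessarily what the original used):

```shell
#!/bin/sh
# Stash a timestamped copy of each file given on the command line, so
# several generations of backups accumulate over time.  The ".backups"
# directory and the naming scheme are assumptions for illustration.
backup() {
    for f in "$@"; do
        dir="$(dirname "$f")/.backups"
        mkdir -p "$dir"
        cp -p "$f" "$dir/$(basename "$f").$(date +%Y%m%d-%H%M%S)"
    done
}

# Demo: create a scratch file and back it up once.
echo 'int main(void) { return 0; }' > demo.c
backup demo.c
```

Run before each editing session, this leaves a dated copy per session, so any earlier state can be recovered by picking the right timestamp.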
Suppose you have the same utility in two different places in your filesystem, one installed by a package you don’t really need since you have just installed the same files from source (but in a different directory). You promptly uninstall the packaged version, but when you try to run the utility, you get something like this:
bash: /usr/bin/utility: No such file or directory
The shell is still looking for the old version!
To refresh the memory of bash, just run the command:
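Bash remembers the full path of each command after the first time it runs it; the built-in hash -r throws that table away, forcing a fresh search of PATH:

```shell
# Clear bash's table of remembered command locations; the next
# invocation of each command triggers a fresh $PATH lookup.
hash -r
```

Running hash with no arguments shows which commands are currently cached, which is a quick way to confirm the stale entry is gone.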
and after that, when you rerun the utility, the new version will appear.
I wrote before about how I prefer to read, organize and archive (almost) everything with my mail reader (hey, I just love my Mutt), and that is also true for RSS feeds (from the blogs I read frequently and other news sites).
Until now, I used to fire up a GUI news reader, gather all the new posts from my feeds, export them to a mailbox-formatted file, and import that file into my mailbox to read. I have even tried some online (i.e. browser-based) aggregators/readers, but I was never very happy with the whole procedure!
RSS2email is a very simple program. All it does when it runs is check whether there are any new items in the feeds it knows about and, if there are, deliver them to a (configurable) email address. I installed the (small) package, added some feeds to test, and set up a cronjob (my preference is to run it hourly, like this: 11 08-23 * * * /usr/bin/r2e run) to ensure frequent updates.
One minor problem, however, is that it doesn’t read OPML files (special files for exchanging RSS feed lists between readers), and since I have many (>100) feeds to read from, I used a bash command to import all of them into RSS2email’s database.
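One way to do this in bash: pull every xmlUrl attribute out of the OPML export and turn each URL into an r2e add invocation. Here feeds.opml is a hypothetical filename, and the one-argument “r2e add URL” form is assumed (newer rss2email versions may also want a feed name):

```shell
# A tiny OPML export to demonstrate on (replace feeds.opml with your
# reader's real export file):
cat > feeds.opml <<'EOF'
<opml version="1.1"><body>
<outline text="Example blog" xmlUrl="http://example.com/rss.xml"/>
<outline text="Another feed" xmlUrl="http://example.org/atom.xml"/>
</body></opml>
EOF

# Extract every xmlUrl attribute, strip the attribute syntax, and
# prefix each URL with "r2e add":
grep -o 'xmlUrl="[^"]*"' feeds.opml |
  sed 's/^xmlUrl="//; s/"$//; s/^/r2e add /'
# Output:
# r2e add http://example.com/rss.xml
# r2e add http://example.org/atom.xml
```

Once the printed commands look right, pipe the pipeline’s output through sh to actually register the feeds.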
As a developer, I prefer working with command-line utilities. That’s why I have devoted my first workspace to an almost full-screen terminal window (I prefer gnome-terminal, although I work in KDE and konsole is just as good), consisting of 3 or more tabs.
I want the first tab to always open mutt (the e-mail client that does almost everything), the second to change to my most recent project’s directory (where I can edit the source files with vim, tabbed, using “vim -p”) and make a backup of the project’s database, the third to check some log file, and the list goes on.
So, what’s the best way to automate these procedures?
Currently I’m using the output of the “tty” command and a case statement at the end of my ~/.bashrc file (comments included):
# show me from where I logged in
echo Logged in from $(tty)

case "$(tty)" in
"/dev/pts/1")
    # run mutt
    mutt
    ;;
"/dev/pts/2")
    # first change directory
    cd ~/projects/projectName
    # then perform a quick database backup
    make back
    ;;
"/dev/pts/3")
    # show me web visits, ignoring some of them based on certain criteria
    tail -f /var/log/apache2/access.log | egrep -v "localhost|127.0.0|/(Thumb|images|Photo)/|favicon"
    ;;
esac
With arrangements like the above, one can “feel at home” just by logging in!