DevLog –

In the end I got tired of publishing this developer blog manually, so I moved to Blogger. You'll find the continuation of this linked from my home page.


Just added (plagiarised) an example of the difference between left, right and full outer joins to the Oracle pages.
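The gist of the difference can be sketched in plain code (hypothetical tables and columns, not from the Oracle pages): a left outer join keeps every row from the left table whether or not it matches, a right join is just the same thing with the tables swapped, and a full join keeps unmatched rows from both sides.

```javascript
// Hypothetical data: employees and departments, joined on deptId.
const emps = [
  { name: "Alice", deptId: 1 },
  { name: "Bob", deptId: 3 }, // no matching department
];
const depts = [
  { deptId: 1, dept: "Sales" },
  { deptId: 2, dept: "HR" }, // no matching employee
];

// Left outer join: every left row, merged with its match if one exists.
function leftJoin(left, right, key) {
  return left.map(l => {
    const r = right.find(x => x[key] === l[key]);
    return { ...l, ...(r || {}) };
  });
}

// Full outer join: the left join, plus right rows that matched nothing.
function fullJoin(left, right, key) {
  const joined = leftJoin(left, right, key);
  const unmatched = right.filter(r => !left.some(l => l[key] === r[key]));
  return joined.concat(unmatched);
}

console.log(leftJoin(emps, depts, "deptId").length); // 2: Alice+Sales, Bob alone
console.log(fullJoin(emps, depts, "deptId").length); // 3: adds the HR row
```

A right join here is just `leftJoin(depts, emps, "deptId")` - all departments, with or without staff.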


Published a few notes from Tom Kyte's lectures (of interest to Oracle developers and DBAs). You'll find new features of Oracle 10g, tuning tips and design & development tips. They're intended as a summary whose keywords can be Googled for more detail.

Started (as in not fit for publication, but I'll forget where it is otherwise) an intro to tsearch here. I'll probably pad it out with other stuff such as PostgreSQL installation and using it with nsd.


Added notes on using external tables in Oracle 9i and up to my data load page. Handy for accessing CSV files and the like. More to come soon on handling dates.
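To give a flavour of what's on the data load page, here's a rough sketch of the kind of DDL involved - the directory path, names and CSV layout are made up for illustration, so check the real page before copying anything:

```sql
-- Hypothetical: a directory object pointing at the folder holding the CSV.
CREATE DIRECTORY ext_dir AS '/data/loads';

-- An external table over a CSV file; querying it reads the file directly,
-- no SQL*Loader run needed.
CREATE TABLE emp_ext (
  emp_id   NUMBER,
  emp_name VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('emps.csv')
);

-- Then query it like any other table:
-- SELECT * FROM emp_ext;
```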


I have reams of stuff to add to this developer blog, but if I wait till they're all typed up I'll never post anything new!

I'm currently working on a task in Quest to populate an AIMS database with realistic data from 2006-2008 for volume testing. In the process I'll learn a thing or two and put it on this data load page. Other Oracle stuff I've accumulated can be found here.
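One of the fiddly bits of this kind of job is spreading generated rows believably across the date range. A quick sketch of the idea (names are illustrative, nothing here is from AIMS):

```javascript
// Hypothetical helper for volume testing: timestamps spread uniformly
// across 2006-2008 inclusive.
function randomDate(startYear, endYear) {
  const start = Date.UTC(startYear, 0, 1);
  const end = Date.UTC(endYear + 1, 0, 1); // exclusive upper bound
  return new Date(start + Math.random() * (end - start));
}

// Generate a small batch of rows with plausible created dates.
const rows = Array.from({ length: 5 }, (_, i) => ({
  id: i + 1,
  created: randomDate(2006, 2008),
}));
rows.forEach(r => console.log(r.id, r.created.toISOString()));
```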


Our documentation for logging into client sites in Quest is all linked from our bug tracker, but stored on a network drive. These links won't open in Firefox because the default security settings block local links. It's been bugging me for a while, so I finally got around to fixing it. It's surprisingly simple: just enter "about:config" in the address bar and double click on "security.checkloaduri" to change it to false. Restart Firefox and local links will work.
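If you'd rather not click through about:config on every machine, the same preference can go in a user.js file in your Firefox profile folder, which is applied at every startup. (This is the standard prefs mechanism rather than anything specific to this fix.)

```
// user.js in the Firefox profile folder - applied on startup.
user_pref("security.checkloaduri", false);
```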

There are lots of configurable settings in about:config. You'll find a good guide to them here.


My colleague Paul Fitzsimons @ iQ Content, who knows I'm taking an interest in the recent developments in rich web interfaces, pointed out this article which summarises the technique Google is using on its new services. The article calls the technique Ajax - Asynchronous JavaScript and XML. I'm also watching what's happening on the Laszlo front, a Flash-based UI that has its own XML programming language. It's quite heavy on the server since it parses the XML and generates Flash on the fly. Last time I tried it on my PC some fairly simple scripts (e.g. an animated logo) were taking 10 seconds to compile! There are some slick demos here.


I just noticed a recent posting on the OpenACS forums which talks about XMLHTTP. This is a technology that's been around for a while, but not so well supported until lately. It allows JavaScript to request a URI and play around with whatever is returned. Originally this was intended to be XML which JS could then parse, but in fact any content type can be returned. This seems to be catching on as a good idea since the Google team demonstrated its usefulness with their Suggest and Gmail products. For more ideas and a simple example, click here.
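The basic shape of it looks something like this. It's a sketch rather than production code: the request object is created via an injected factory so the logic can be exercised outside a browser - in a real page you'd pass `function () { return new XMLHttpRequest(); }` (or `new ActiveXObject("Microsoft.XMLHTTP")` on old IE).

```javascript
// True when the request has finished successfully.
function isDone(req) {
  return req.readyState === 4 && req.status === 200;
}

// Fetch a URI asynchronously and hand the response text to a callback.
// makeRequest is a factory returning an XMLHttpRequest-like object.
function getUrl(uri, callback, makeRequest) {
  var req = makeRequest();
  req.onreadystatechange = function () {
    if (isDone(req)) {
      callback(req.responseText); // any content type, not just XML
    }
  };
  req.open("GET", uri, true); // true = asynchronous
  req.send(null);
}

// Usage in a page (hypothetical ids and endpoint):
// getUrl("/suggest?q=foo", function (text) {
//   document.getElementById("out").innerHTML = text;
// }, function () { return new XMLHttpRequest(); });
```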


Wrote a handy little web page for OpenACS to display the date on the database and the web server, and tick away the seconds in real-time. I might post it up soon. It's useful for us because none of our servers are synchronised (except when the web server is the database server) and no one can be arsed fixing it.
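The ticking part is simple enough to sketch here (the names are illustrative - the actual page isn't posted yet): take the server-reported time once, then advance it locally every second rather than re-querying the server.

```javascript
// Advance a date by one second.
function tick(date) {
  return new Date(date.getTime() + 1000);
}

// Render the server time immediately, then keep it ticking locally.
function startClock(serverDate, render) {
  let now = serverDate;
  render(now);
  return setInterval(() => {
    now = tick(now);
    render(now);
  }, 1000);
}

// Usage in a page (hypothetical element id and server value):
// startClock(new Date(serverTimeFromDb), d =>
//   document.getElementById("dbclock").textContent = d.toISOString());
```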


Nixer note

If you happen to be asked to make a change to a static web site for someone, and you're the obliging sort who always says yes, watch out for this little gotcha. You'll ask for their FTP details (because if their hosts are anything like as bad as a crowd I know, they'll be afraid of SSH). Let's say by some stroke of luck they remember their login and password, and their FTP server is their web server, so you're able to get in.

Being a conscientious developer, you'll grab the page(s) in question and keep them in a safe place in case of mishaps or altercations. You'll make copies of them and work on the copies. Being a lazy developer, you'll pick one of the pages, maybe the home page if you're cocky, and transfer the new version back to the web site. And when you do this you might get a message like this:

552 Transfer aborted. Disc quota exceeded.

The new version of the page never made it to the web server because there isn't enough space - but the existing file was emptied out before the failed transfer was attempted. So now the web page is gone and the only way to fix it is to start deleting files - not so easy when it's not your site!

Yes, I made this mistake. And I'd tried to update the home page, so the whole web site was inaccessible. I was doing the work out of office hours, so I couldn't ring support to get the quota temporarily extended (which they probably wouldn't do anyway). I did find a directory called "origanaljpgs" (ouch) which seemed to be old images, so I made copies of these and deleted them all. There were a few Photoshop files in there so this cleared a decent amount of space, but still no joy.

The FTP server wasn't giving anything away as to the disk quota or usage, so I tried SSH. Encouragingly this was open and displayed a nice policy message. Unfortunately it also closed the connection again before I could say "ISPs are pants".

I could have searched around the various folders using FTP, but that's no fun. Very luckily I was able to contact a friend of the site owner who was familiar with their hosting arrangements and said he could try deleting a few mails from their account. After doing so I was able to transfer the new home page and everything was cool again. I was a bit upset I didn't solve the problem technically, but the relief more than made up for that!

Obviously the moral of this tale is to always transfer a test file first, or even better create a sub-directory where you can put the new work and get it approved before making it live. Of course, this can be a bit fiddly if relative paths are used a lot and if the site owner is clueless it's probably a waste of time.


I had a job to do for a static site made using an authoring tool (probably Dreamweaver). The task was to change a few of the standard navigation links that appear on the top of each page. Being a static site, this meant going into the HTML of each page and making the change. I wanted an easy way to crawl the site to see how many pages there were, since there could be old unlinked HTML files hanging around that I didn't need to bother with. A quick Google turned up a free Java program called WebSPHINX, based on Sphinx, written by a guy at Carnegie Mellon Uni, Pittsburgh, and a guy at Digital in Palo Alto. This is a very interesting tool for mapping the content of web sites, allowing you to view your site as a spider would. The application lets you specify a starting URL and a depth to crawl to, then displays a graphical representation of the site showing pages as nodes in a tree-like structure. You can also view the URLs, titles, etc. of pages found. For Java programmers there is also an API to let you use the crawling engine in your own applications.
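The core of what a crawler like this does is just a breadth-first walk from the start URL down to the chosen depth. A toy version, with the page fetcher injected so the sketch can run against a made-up site rather than the network (this is my illustration, not WebSPHINX's API):

```javascript
// Breadth-first crawl: collect every page reachable from startUrl in at
// most maxDepth hops. getLinks(url) returns the links found on a page.
function crawl(startUrl, maxDepth, getLinks) {
  const seen = new Set([startUrl]);
  let frontier = [startUrl];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next = [];
    for (const url of frontier) {
      for (const link of getLinks(url)) {
        if (!seen.has(link)) {
          seen.add(link);
          next.push(link);
        }
      }
    }
    frontier = next;
  }
  return [...seen];
}

// Fake three-level site for illustration:
const site = {
  "/": ["/about", "/products"],
  "/about": ["/team"],
  "/products": [],
  "/team": [],
};
console.log(crawl("/", 2, u => site[u] || []));
```

Note that an unlinked page - the kind I was hunting for - simply never shows up in the result, which is the point.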


After the fun with the Firefox Googlebar, I noticed a few other extensions available. I downloaded the developer one and decided it was the best thing since, oh, erm, the electric blanket. With all the accessibility work we're doing now, the nicest thing is the DIV outliner, which is like putting style="border: 1px dotted #f00" into all your DIVs. It also displays various info inline in the page, such as link paths, form info, comments, widths and heights of block elements. You can turn images on/off and highlight missing alt attributes. Lots of very cool features for anyone doing front end work.
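If you want the outliner effect without the extension, a one-line stylesheet rule gets you most of the way there (just a sketch of the idea, not how the extension actually does it):

```
/* Temporarily outline every div to see the page's block structure.
   outline, unlike border, takes up no space, so the layout doesn't shift. */
div { outline: 1px dotted #f00; }
```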

I've sacrificed navigability for accessibility for now, just until I get a chunk of time to fix up my standard page includes. The old pages use JavaScript to display a standard header and footer, but they use tables for layout, have formatting in the HTML (it should be in CSS really) and aren't valid XHTML. Hopefully any new content will be valid, as indicated by the W3C link in the footer of this page.