Like every self-respecting geek blogger, I spent my New Year's holiday crunching the numbers from my 2002 access logs. I only have logs starting in February; before then I was using another hosting provider that didn't provide raw access logs. Note to all geek bloggers: if your hosting provider does not provide raw access logs, dump them immediately and switch to Cornerhost (which I use) or some other provider with half a clue. You'll thank me one day.
Mark Pilgrim analyzes his 2002 server logs. Unfortunately my hosting provider doesn't have half a clue (or maybe wants to minimise disk usage):
I can only get logging by using a Server Side Include (SSI) file to exec a Perl script. The SSI file must have a .shtml extension, which means that until now I've only been able to log requests for http://www.cookcomputing.com/, which defaults to index.shtml.
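For reference, the SSI hook looks something like this (the script name and path are assumptions, since the post doesn't show them): index.shtml contains an exec directive, written as an HTML comment, which the server expands on every request by running the CGI script.

```html
<!-- In index.shtml: the server runs the CGI and inlines its output
     each time the page is requested. The script path is assumed. -->
<!--#exec cgi="/cgi-bin/log.pl"-->
```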
However, I've devised a method of logging access to individual archive pages. Each of these pages now contains a script element whose src attribute points to a file called log.shtml. That file is empty except for the exec comment line, which invokes the Perl script to update the log. To ensure that log.shtml is fetched on every request (updating the log each time) rather than served from a copy cached by the client or a proxy, the Perl script touches log.shtml, invalidating any cached instances.
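A minimal sketch of what such a script might look like, assuming hypothetical file locations and a made-up log format (the actual script isn't shown in this post): it appends one line per request using the CGI environment variables the server sets when SSI exec's it, then touches log.shtml so cached copies go stale.

```perl
#!/usr/bin/perl
# Hypothetical sketch of the logging script; paths and log format
# are assumptions, not the real setup.
use strict;
use warnings;

my $logfile = '/home/site/access.log';    # assumed location
my $trigger = '/home/site/www/log.shtml'; # the empty SSI file

# Record one line per request from the CGI environment.
open my $fh, '>>', $logfile or die "cannot open $logfile: $!";
printf {$fh} "%s %s %s\n",
    scalar localtime,
    $ENV{REMOTE_ADDR}  || '-',
    $ENV{HTTP_REFERER} || '-';
close $fh;

# "Touch" log.shtml: setting its modification time to now makes
# any cached copy stale, so the next request fetches it afresh
# and the log gets updated again.
my $now = time;
utime $now, $now, $trigger;
```

Note that the referrer recorded here is whatever the browser sends when fetching log.shtml, which is the archive page itself.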
The big drawback with this technique is that I don't see the real referrer, because the referrer reported for log.shtml is the individual archive page itself. But for the first time I can see individual pages being accessed, which will have to do until I decide upon another provider.