The CMS (call it that, though it isn't really one) should pull content from files stored somewhere not accessible by ordinary web requests and insert it into dynamically generated pages: a content row with links to other pages, some tables for nice formatting, page headers, a counter, and so on.
The problem is that if I hand-code all the pages myself, I have a hell of a lot of work just keeping the links up to date; but if I only link to the pages via a GET parameter like ?page=this_page, robots won't index those pages, so I've got to find some other solution.
Pointing every single page at an appropriate CGI argument would be possible, but that is still more hand-work than I'd like, if it's avoidable.
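For what it's worth, the ?page=... dispatcher described above can be sketched in a few lines. This is only a minimal illustration under my own assumptions (a hypothetical CONTENT_DIR outside the web root, one plain-text file per page; all names are made up), not a finished design:

```python
#!/usr/bin/env python3
# Minimal sketch of a "?page=..." dispatcher: read a content file from a
# directory that is NOT reachable by direct web requests, wrap it in the
# shared page skeleton, and print it as a CGI response.
import os
import sys
from urllib.parse import parse_qs

CONTENT_DIR = "/var/site/content"  # hypothetical path outside the web root

def safe_page(name: str) -> str:
    # Allow only letters, digits and underscores so a request like
    # "?page=../../etc/passwd" cannot escape the content directory.
    if name and all(c.isalnum() or c == "_" for c in name):
        return name
    return "index"

def render(name: str) -> str:
    path = os.path.join(CONTENT_DIR, safe_page(name) + ".txt")
    try:
        with open(path, encoding="utf-8") as f:
            body = f.read()
    except OSError:
        body = "Page not found."
    # Wrap the raw content in the shared skeleton (header, link row, etc.).
    return "<html><body><h1>%s</h1>%s</body></html>" % (safe_page(name), body)

if __name__ == "__main__":
    # CGI passes the query string in the environment, e.g. "page=this_page".
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    page = query.get("page", ["index"])[0]
    sys.stdout.write("Content-Type: text/html\r\n\r\n")
    sys.stdout.write(render(page))
```

The sanitizing step matters: since the whole point is that the content files are invisible to normal requests, the script must not let a crafted page value reach outside CONTENT_DIR. It doesn't solve the indexing question, though; it just shows how little code the dispatcher itself is.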