The need for web-based GIS to evolve

All the hype surrounding Google/Yahoo/MSN maps has certainly raised the profile of web mapping on the net. No longer do users need to click and wait; they now have sexy-looking, responsive interfaces that are actually simple to use. So where does that leave the rest of our clunky, slow, complex web GIS systems?

I have been pondering this for quite a while, and I think it is really a direction GIS needs to move in wherever the situation suits the use of “non-live” data.

We have a tendency to assume users will always need the most up-to-date data available, all the time. That of course suits particular situations, but I think most will agree that, more often than not, static data does the job just as well. Identifying the needs and expectations of the users of the GIS is the key; assume nothing.

The best way to illustrate the architectural differences between the new and the old is to take the standard road map application. It has the usual functionality: zoom and pan, direction routing, search and geocoding.

In the existing GIS mindset, this application would be built using an application server referencing the spatial and attribute databases. Every map zoom, pan and search the user submits is sent in the usual REQUEST <-> RESPONSE cycle. It is this communication that effectively kills the responsiveness of any web application using the ‘click and wait’ method. Not only does the user need to send a sometimes detailed request, the server must then

  • parse the request
  • execute its procedures and queries
  • reference the database(s)
  • perform any spatial operations
  • retrieve the data
  • perform any reprojections
  • run cartographic formatting
  • create the image
  • send the response
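The steps above can be sketched as a single request handler. Every function here is a hypothetical stand-in for a real map server component (ArcIMS, MapServer and friends each do this differently); the point is the number of stages sitting between the user’s click and the returned image, not the details of any one product.

```javascript
// A minimal sketch of the legacy "click and wait" server cycle.
// All helper functions are illustrative stubs, not a real API.

function handleMapRequest(rawRequest) {
  const request = parseRequest(rawRequest);           // parse the request
  const query = buildQuery(request);                  // procedures and queries
  const rows = queryDatabases(query);                 // reference the database(s)
  const features = spatialOperations(rows, request);  // e.g. clip to the extent
  const projected = reproject(features, request.srs); // any reprojections
  const styled = applyCartography(projected);         // cartographic formatting
  const image = renderImage(styled, request);         // create the image
  return image;                                       // send the response
}

// Trivial stand-ins so the sketch runs end to end.
function parseRequest(raw) { return { layers: raw.layers, extent: raw.extent, srs: raw.srs }; }
function buildQuery(req) { return 'SELECT geom FROM ' + req.layers.join(','); }
function queryDatabases(sql) { return [{ geom: 'POLYGON(...)' }]; }
function spatialOperations(rows, req) { return rows; }
function reproject(features, srs) { return features; }
function applyCartography(features) { return features; }
function renderImage(features, req) { return 'image/png:' + features.length + ' features'; }
```

Eight stages of work, on every single pan.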

Sound a bit too complex for someone who just wants to pan around the map? That’s because it is.

Fast forward through the past year, and all the new players have ditched the old method in favour of visually appealing, quick, responsive interfaces. The one key difference? Architecture.

To give a bit of background, applications such as Google Maps buy their street data and imagery from a variety of suppliers. As soon as that data is cut over to Google, it is static. They then generate about 7TB of image tiles and consume these inside their AJAX (Asynchronous JavaScript and XML) frontend. The result? Asynchronous movement.
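Google doesn’t publish the internals of its tile scheme, but the general idea is the now-common “slippy map” convention: the world is pre-rendered as a pyramid of 256×256 pixel tiles, and the client just computes which tile indices cover the current view. A sketch of that addressing (the URL pattern is hypothetical):

```javascript
// Standard Web Mercator tile addressing: at zoom level z there is an
// n x n grid of tiles, where n = 2^z. Conventions vary between
// providers; this is the common "slippy map" scheme.

function lonLatToTile(lon, lat, zoom) {
  const n = Math.pow(2, zoom); // n x n tiles at this zoom level
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x: x, y: y };
}

// A tile request is then just a static file fetch -- no spatial
// processing on the server at all.
function tileUrl(lon, lat, zoom) {
  const t = lonLatToTile(lon, lat, zoom);
  return '/tiles/' + zoom + '/' + t.x + '/' + t.y + '.png';
}
```

All the expensive work (querying, reprojecting, rendering) happens once, up front, at tile-generation time.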

AJAX applications can send requests to the web server to retrieve only the data that is needed, usually using SOAP or some other XML-based web services dialect, and using JavaScript in the client to process the web server response. The result is more responsive applications, since the amount of data interchanged between the web browser and web server is vastly reduced. Web server processing time is also saved, since a lot of this is done on the computer from which the request came.

In this new architecture there is still REQUEST <-> RESPONSE communication, but the request is made asynchronously (i.e. no click and wait) and the request is a simple image retrieval. This simplifies the application in that it only needs to:

  • Parse the request
  • Retrieve the image
  • Send the result
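Those three steps amount to little more than a cache lookup. A minimal sketch of the simplified server side, with an in-memory map standing in for a directory of pre-generated tile images (the path pattern and cache layout are illustrative):

```javascript
// Minimal sketch of the tile server: parse a request, look the image
// up in a pre-rendered cache, send it. No database, no rendering.

const tileCache = {
  '3/4/2': Buffer.from('...png bytes for tile 3/4/2...'),
};

function handleTileRequest(path) {
  // parse the request: expect "/tiles/{z}/{x}/{y}.png"
  const m = /^\/tiles\/(\d+)\/(\d+)\/(\d+)\.png$/.exec(path);
  if (!m) return { status: 400, body: null };

  // retrieve the image from the pre-rendered cache
  const key = m[1] + '/' + m[2] + '/' + m[3];
  const image = tileCache[key];
  if (!image) return { status: 404, body: null };

  // send the result
  return { status: 200, body: image };
}
```

Compare this with the eight-stage pipeline above and the responsiveness difference is no mystery.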

In Australian terms, this is just another way to “skin the cat” (doing the same thing two different ways). There are obvious positives and negatives to both implementations, but in the end it is a battle to find a balance between usability/responsiveness and the demands of the data-driven applications themselves.

There is a real need out there to recognise the shortcomings of the existing implementations and move forward in making these data-driven apps more “user friendly”. Consider the following situation and the practicality of an AJAX solution.

Manually performing a cadastral parcel selection:

  • Current
    • The user sends the active layer and the click coordinates
    • The server parses the request and performs the spatial intersect on the layer
    • It then highlights the parcel and draws the remaining 300 non-selected parcels as an image
    • Server returns the image
  • AJAX implementation
    • The user sends the active layer and the click coordinates
    • The server parses the request and performs the spatial intersect on the layer
    • It then extracts either the coordinates as GML or an image containing only the selected parcel
    • Server returns the image or GML
    • Client processes the response and overlays the new image/GML over the existing image

Sound familiar? This example can be adapted to a lot of other common situations. If someone were to calculate the statistics, I’m sure they would show that GIS systems in general generate something like 70% redundant images, thanks to poorly implemented cache systems and situations where the whole map is regenerated just to add a single point (or, even worse, regenerated when nothing has changed).

So what do we do? Well, unfortunately a lot of the implementation lies with the application developers (ArcIMS/MapServer/GeoMedia/GeoServer devs, I’d love to hear your thoughts). The ability to perform a selection, for example, without retrieving an image of the entire extents is still unavailable in any application as far as I’m aware. If we all just took one step back and looked at our implementations, it would be obvious that we do the simplest things (drawing map extents) very, very poorly.

My final open question … Do we really need live cadastral layers? If yes, are we really talking about the spatial data, or just the attributes attached to it?

Ka-Map is one such implementation of an asynchronous MapServer client. It can generate MapServer tiled images on the fly or serve them from its cache, and it can easily be scripted to perform “live” attribute retrieval for interrogating the “static” tiled layers.