It doesn’t seem as though the usual suspects picked up on this, but OpenLayers is now officially at v2.0
The OpenLayers development team is proud to announce the release of OpenLayers 2.0.
OpenLayers makes it easy to put a dynamic map in any web page. It can
display map tiles and markers loaded from any source. OpenLayers is
completely free, Open Source JavaScript.
This new release of OpenLayers supports a number of new layer types,
including support for Virtual Earth, Google, and more, alongside WMS,
WFS, KaMap, and GeoRSS.
The main development crew, including Chris Schmidt, certainly brings new meaning to “release early, release often”. Check out the examples for some ideas of what this JS library is capable of.
If you want a client that you can set up in 10 seconds, requires little configuration, has no server-side component and consumes a variety of data services… this is for you! Watch the next release for some funky new canvas code too.
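To give a flavour of that 10-second setup, here is a minimal sketch of an OpenLayers 2.0 page. The WMS endpoint URL and layer name are placeholders of my own, not anything from the release notes; swap in a real service.

```html
<!-- Minimal OpenLayers 2.0 map: a div, a script include, and a few lines of JS.
     The WMS URL and layer name below are placeholders. -->
<div id="map" style="width: 512px; height: 256px;"></div>
<script src="http://openlayers.org/api/OpenLayers.js"></script>
<script>
  var map = new OpenLayers.Map('map');
  // Any WMS server can back the base layer; no server-side component of your own needed.
  map.addLayer(new OpenLayers.Layer.WMS(
      "Base Map", "http://example.com/wms", {layers: 'basic'}));
  map.zoomToMaxExtent();
</script>
```

That really is the whole client: no build step, no server configuration, just a script tag and a layer definition.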
I may be digging up some past rants here, but some of the posts written by Dimitri Rotow, PM for Manifold(?), just have to be quoted. Howard Butler wrote an earlier post (02/03/05) responding to Dimitri’s letter to GISMonitor, quite fittingly titled “off his rocker”.
Some more classics that I stumbled upon on the Manifold user forum, posted in March ’06… (please visit to get these in context):
From a strategic perspective, as much as I understand the appeal of “open” formats, I don’t recommend buying into OGC. It’s basically a bureaucracy of legacy guys unacquainted with modern technology who end up writing low-performance, inept standards that exclude modern ideas.
My experience is that most people who gush on about OGC (some GIS analysts, for example) don’t have any actual experience with it. I find that once people actually try to *do* anything with OGC, unless it is their job to tinker with never-ending things that don’t have to produce actual results, well, then they very rapidly lose enthusiasm for it. There is a cadre of professional OGC “interop demonstrators” who get paid to cobble up one-of-a-kind interop rallies, and those people are quite understandably happy no matter what practical effect OGC has. But in the real world, once someone actually gets hands-on experience with OGC the enthusiasm usually wanes very rapidly as it gets compared to effective alternatives.
Manifold really appears to be taking a “burn your bridges” approach to its competitors (ESRI & OGC in particular). Everything’s crap, expensive, slow, bloated… except ours!! I am looking into purchasing a v7 license for personal use, but comments like that are really off-putting for the wider community.
While not everything Dimitri says is quite that radical, I am quite gobsmacked at the manner in which he tries to get his point across. Taking the “us and them” approach really won’t help Manifold establish itself in the industry (nor will continual blog comments/forum replies saying “Manifold is so much better than X” based solely on the price tag).
That said, I will still buy a licence because it’s a good product (well, v6 was)… even if my use will largely be looking at its support for the “inept”, “ancient” OGC standards.
Watch this space.
James Fee pointed out something that I did miss in the 9.2 changelog: by default, ArcIMS will no longer include geometry in responses to the GET_FEATURES ArcXML request.
Take advantage of improved security for served vector data. Image and ArcMap Image Services will not include geometry by default in the response to GET_FEATURES requests if the output mode is binary. With this change, ArcMap cannot be used to download vector data served in ArcIMS without the knowledge of the service provider.
I had actually pointed out this fatal security flaw in an earlier post last year… but it’s good to see it’s now “fixed”. I’m interested in how the 9.2 support tools such as ArcExplorer, or even ArcGIS, will handle connecting to ArcIMS services, as I am pretty certain they use the feature request for rendering… maybe they are just using GET_IMAGE requests now?
One point that has always troubled me is the confusion over WMS ScaleHint calculations.
For anyone in a similar boat, the following post by Craig Bruce probably made the most sense out of all the material:
If we take a standard pixel size as being a square that is 0.28mm on
each side, the length of the hypotenuse is 0.396mm. (The scale hint is
defined in terms of a diagonal distance.)
This flies in the face of the usual calculation mentioned at various sites promoting
scale / 3846.15 = scalehint
as the method of choice. As Craig Bruce points out, 3846.15 is simply the inverse of a 0.26mm pixel’s side length expressed in metres (1 / 0.00026), which goes against using the diagonal hypotenuse of a 0.28mm pixel (0.396mm).
As an example, if I’d like to express 1:50k in terms of a ScaleHint:
Method 1: 50,000 / 3846.15 = ~13 (0.26mm length)
Method 2: 50,000 / 2525.25 = ~19.8 (hypotenuse of 0.28mm)
What’s even more confusing, my UMN MapServer produces:
Method 3: 50,000 / 2004.4 = ~24.94 (looks to be based on the hypotenuse of a 0.35mm pixel)
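To make the three methods concrete, here is a small sketch that reproduces each figure. The function names are mine; the only real inputs are the pixel sizes quoted above (0.26mm side, 0.28mm side with its 0.396mm diagonal, and what looks like a 72 dpi pixel, 25.4/72 ≈ 0.353mm, for MapServer).

```javascript
// Two competing ScaleHint conventions: pixel side length vs pixel diagonal,
// both expressed as ground distance in metres at a given scale denominator.

function scaleHintSide(scaleDenominator, pixelSizeMm) {
  // "scale / 3846.15" style: ground length of one pixel side (metres).
  return scaleDenominator * (pixelSizeMm / 1000);
}

function scaleHintDiagonal(scaleDenominator, pixelSizeMm) {
  // Diagonal style: ground length of the pixel's hypotenuse (metres).
  return scaleDenominator * (pixelSizeMm / 1000) * Math.SQRT2;
}

const scale = 50000;
console.log(scaleHintSide(scale, 0.26).toFixed(1));          // ~13.0 (Method 1)
console.log(scaleHintDiagonal(scale, 0.28).toFixed(1));      // ~19.8 (Method 2)
// MapServer's ~24.94 matches the diagonal of a 72 dpi (~0.353mm) pixel:
console.log(scaleHintDiagonal(scale, 25.4 / 72).toFixed(1)); // ~24.9 (Method 3)
```

The three answers differ only in which pixel size you assume and whether you take the side or the diagonal, which is exactly why the Capabilities docs from different servers disagree.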
I haven’t had time to look at how MapServer runs the calculation, but I’d certainly prefer the good old cartographic representation in the Capabilities doc.
If anyone has any definitive articles on this issue, please drop me a link. SLD ScaleDenominators are clearly the winner though… it would also be interesting to compare how the various apps handle the transform.