Image Server Benchmark

Background

The following tests are an extension to the initial OGC Web Map Service raster benchmarks conducted at FOSS4G 2009 in Sydney. Unfortunately ERDAS was unable to participate this year, so I kinda took it upon myself to not only run the same suite of tests but to expand them and test on a completely different platform.

Full disclosure: I am currently employed by ERDAS. In conducting these tests and publishing them on my own blog I have taken on a fair amount of risk. Not because the tests were in any way doctored, but because my reputation is on the line, and therefore I had better be damned sure of the results. ERDAS will be publishing official benchmarks on the 2010 release, but they will not mirror the format of these tests.

The mantra I followed in configuring each app (including Apollo) was to follow the product documentation to the letter in terms of performance tuning. If it wasn't in the doc, it wasn't done. For any developers who will inevitably email, flame or comment that I did not set some obscure value to 234534532 … is it in the doc? If not, put it in and I will be more than happy to re-run all the tests!

When the results are updated, I will add a new post, so just subscribe to my feed to be notified.

Undoubtedly I will need to rerun some tests, so please only link back to the image locations so you can be sure the results are current. I will do my best to maintain these results when new versions are available or when configurations change, time permitting of course.

Configuration

Client Machine

  • Windows Vista SP2
  • Intel Core 2 Duo T9600 @ 2.8 GHz
  • 4 GB RAM
  • 100 Mb network to server
  • JMeter 2.3.4
  • JDK 1.6.0_16

Server

  • Windows Server 2008 x64
  • Intel Xeon E5410 @ 2.33 GHz (2×4 core)
  • 16 GB RAM

Test plan

Download JMX plans and corresponding CSV’s

They are identical to the FOSS4G benchmarking versions, except the load has been extended from 40 threads up to 150 (where's the fun in 40?).

Otherwise, testing follows the same rules of engagement:

  1. Configure applications against sample dataset
  2. Create a sample CSV of bbox and image dimensions using wms_request.py
  3. Create new JMX with new dataset to test and CSV
  4. Stop all other apps. Start JMeter and let it crunch away
  5. Use the summarizer python script to parse the log
  6. Run the tests multiple times and take the best result

Apart from when identified as reading external pyramids, all applications are to read the test data directly. There is to be no middleware cache or intermediary format used.

There is a single assertion to check for errors: if the response header does not include a content-type of image/jpeg, the assumption is that an error occurred (no further analysis is performed as to why).
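The assertion logic is simple enough to restate in a few lines of Python (a sketch of the rule, not the actual JMeter internals):

```python
def is_error(headers):
    """Mimic the JMeter assertion: any response that is not image/jpeg
    is counted as an error, with no further analysis of why."""
    ctype = headers.get("Content-Type", "")
    # strip any parameters, e.g. "image/jpeg; foo=bar" -> "image/jpeg"
    return ctype.split(";")[0].strip().lower() != "image/jpeg"

print(is_error({"Content-Type": "image/jpeg"}))                 # False: a good map
print(is_error({"Content-Type": "application/vnd.ogc.se_xml"})) # True: a WMS exception
```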

For those unfamiliar with the FOSS4G test plan, it uses the CSV to randomise the bbox extents and the image output dimensions (width/height) of each request. Image sizes range from 256×256 px up to 1024×768 px to mimic real world WMS usage.
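In rough terms, wms_request.py does something like the following for each row. This is a simplified sketch, not the actual script (which is part of the FOSS4G benchmarking materials): pick a random output size and resolution, then place a bbox of the matching ground extent inside the region.

```python
import random

def random_request(minx, miny, maxx, maxy,
                   min_size=(256, 256), max_size=(1024, 768),
                   min_res=0.0002, max_res=0.5):
    """Return (x0, y0, x1, y1, width, height) for one randomised GetMap request."""
    w = random.randint(min_size[0], max_size[0])
    h = random.randint(min_size[1], max_size[1])
    # cap the resolution so the resulting bbox still fits inside the region
    res = random.uniform(min_res, min(max_res, (maxx - minx) / w, (maxy - miny) / h))
    x0 = random.uniform(minx, maxx - w * res)
    y0 = random.uniform(miny, maxy - h * res)
    return (x0, y0, x0 + w * res, y0 + h * res, w, h)

x0, y0, x1, y1, w, h = random_request(-180, -90, 180, 90)
```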

Test Data

ECW

For serving ECW in a third-party server application, you must have an ECW SDK license. More info

  • world-topo-bathy-200406-3x86400x43200.ecw
  • 86400 x 43200 px
  • 263,691 kb

Band 1 Block=86400×1 Type=Byte, ColorInterp=Red
Overviews: arbitrary
Band 2 Block=86400×1 Type=Byte, ColorInterp=Green
Overviews: arbitrary
Band 3 Block=86400×1 Type=Byte, ColorInterp=Blue
Overviews: arbitrary

ECW 2

Although the bluemarble image is a good comparable test across different benchmarks, the problem is that it really ain't all that big. So, in order to test the strength of ECW I've added the following second image to the mix and also increased the number of samples to 5000.

  • melbourne.ecw
  • 413333 x 346667 px
  • 30,626,900 kb
  • EPSG 28355 MGA Z55

Band 1 Block=413333×1 Type=Byte, ColorInterp=Red
Overviews: arbitrary
Band 2 Block=413333×1 Type=Byte, ColorInterp=Green
Overviews: arbitrary
Band 3 Block=413333×1 Type=Byte, ColorInterp=Blue
Overviews: arbitrary

python wms_request.py -count 5000 -region 299326 5799116 344409 5830316 -minres 1 -maxres 1000 -maxsize 1024 758 -minsize 256 256 > melbourne.csv

IMG

  • fullearth.img 18kb
  • fullearth.ige 2,737,882 kb
  • fullearth.rrd 920,207 kb
  • 43200 x 21600 px

TIFF raw

  • 138E014S.tif
  • 15653 x 15653px
  • 717,945 kb
  • Raw in terms of no tiling or internal / external pyramids of any kind

Band 1 Block=15653×1 Type=Byte, ColorInterp=Red
Band 2 Block=15653×1 Type=Byte, ColorInterp=Green
Band 3 Block=15653×1 Type=Byte, ColorInterp=Blue

python wms_request.py -count 2000 -region 136 -14 138 -12 -minres 0.0002 -maxres 0.5 -maxsize 1024 758 -minsize 256 256 > landsat-tiles.csv

TIFF tiled

  • Identical input image as TIFF Raw
  • 738,084 kb
  • gdal_translate -co TILED=YES 138E014S.tif 138E014S-tiled.tif

Band 1 Block=256×256 Type=Byte, ColorInterp=Red
Band 2 Block=256×256 Type=Byte, ColorInterp=Green
Band 3 Block=256×256 Type=Byte, ColorInterp=Blue

TIFF External

  • OVR 138,158 kb
  • RRD 200,663 kb
  • Create both ovr and rrd external pyramids. It will be up to the relevant application to determine what associated file, if any, it will use.
  • Create external *.ovr file
  • gdaladdo -ro --config COMPRESS_OVERVIEW DEFLATE 138E014S.tif 2 4 8 16
  • Create external *.rrd file
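The .ovr command is shown above; the post doesn't record how the .rrd was generated. One possible GDAL route (an assumption on my part — the file may equally have been produced with ERDAS IMAGINE itself) is GDAL's USE_RRD config option, which switches external overviews to the Erdas Imagine style:

```shell
# Erdas Imagine style external overviews instead of the default .ovr
gdaladdo -ro --config USE_RRD YES 138E014S.tif 2 4 8 16
```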

Band 1 Block=15653×1 Type=Byte, ColorInterp=Red
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979
Band 2 Block=15653×1 Type=Byte, ColorInterp=Green
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979
Band 3 Block=15653×1 Type=Byte, ColorInterp=Blue
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979

TIFF Internal

  • gdaladdo 138E014Sinternal.tif 2 4 8 16
  • 963,986 kb

Band 1 Block=15653×1 Type=Byte, ColorInterp=Red
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979
Band 2 Block=15653×1 Type=Byte, ColorInterp=Green
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979
Band 3 Block=15653×1 Type=Byte, ColorInterp=Blue
Overviews: 7827×7827, 3914×3914, 1957×1957, 979×979

JPEG2000

  • 5378 x 5242 px
  • 29,277 kb

Band 1 Block=5378×1 Type=Byte, ColorInterp=Undefined
Overviews: arbitrary
Band 2 Block=5378×1 Type=Byte, ColorInterp=Undefined
Overviews: arbitrary
Band 3 Block=5378×1 Type=Byte, ColorInterp=Undefined
Overviews: arbitrary
Band 4 Block=5378×1 Type=Byte, ColorInterp=Undefined
Overviews: arbitrary
Band 5 Block=5378×1 Type=Byte, ColorInterp=Undefined
Overviews: arbitrary

python wms_request.py -count 2000 -region 142.54 -22.74 143.29 -22.27 -minres 0.0002 -maxres 0.005 -maxsize 1024 758 -minsize 256 256 > winton-jp2.csv

MrSID

  • 3840 x 5760 px
  • 6,573 kb

Band 1 Block=1024×128 Type=Byte, ColorInterp=Red
Overviews: 1920×2880, 960×1440, 480×720, 240×360, 120×180, 60×90, 30×45
Band 2 Block=1024×128 Type=Byte, ColorInterp=Green
Overviews: 1920×2880, 960×1440, 480×720, 240×360, 120×180, 60×90, 30×45
Band 3 Block=1024×128 Type=Byte, ColorInterp=Blue
Overviews: 1920×2880, 960×1440, 480×720, 240×360, 120×180, 60×90, 30×45

python wms_request.py -count 2000 -region 1777600 5889120 1778080 5889840 -minres 500 -maxres 10000 -maxsize 1024 758 -minsize 256 256 > nztm-sid.csv

Benchmark Applications

Geoserver 2.0

  • Tomcat 6.0.20
  • APR 1.1.6, JVM 1.6 options applied as per wiki http://geoserver.org/display/GEOSDOC/2.6+GeoServer+in+Production+Environment
  • -Xms128M -Xmx1024M … -XX:SoftRefLRUPolicyMSPerMB=36000 -XX:MaxPermSize=512m
  • Reconfigured server.xml with 8 threads
  • Production logging enabled
  • Resource limit set to default 60secs
  • GDAL extension
  • windows32-imageio-ext-installer-gdal-mrsid-ecw-1.0.4
  • SUGGESTED_TILE_SIZE: 512,512
  • USE_JAI_IMAGEREAD true
  • USE_MULTITHREADING false (when set to true, java.lang.OutOfMemoryError: unable to create new native thread errors cause the service to fail entirely at a 40-user load)
  • Was unable to configure ImageIO for MrSID support. The format is available to configure, but throws “The Provided input is not supported by this reader”.
  • I am seeing severe behaviour, with Tomcat falling completely offline after a 40-user load. Is this expected? I didn't have time to investigate, but there were no logs of any real use as to the cause anyway.

ERDAS APOLLO 2010

  • Tuned as per documentation
  • apollo-ds pool 5/50 max
  • process manager, GCthreads=5, min.count=5, max.count=20

Mapserver 5.4.2

  • No Mapserver 5.6 binary available as at 5/11/09 in either OSGeo4W or MS4W (and no, I didn't want to compile..)
  • GDAL 1.6
  • Apache configured for fcgi on port 85
  • Configured with 20 worker processes

IPCCommTimeout 60
IdleTimeout 60
DefaultMinClassProcessCount 20
DefaultMaxClassProcessCount 20


OUTPUTFORMAT
NAME jpeg
DRIVER "GD/JPEG"
MIMETYPE "image/jpeg"
IMAGEMODE RGB
EXTENSION "jpeg"
END
OUTPUTFORMAT
NAME gif
DRIVER "GD/GIF"
MIMETYPE "image/gif"
IMAGEMODE RGB
EXTENSION "gif"
END
OUTPUTFORMAT
NAME png
DRIVER "GD/PNG"
MIMETYPE "image/png"
IMAGEMODE RGB
EXTENSION "png"
FORMATOPTION "QUANTIZE_FORCE=on"
FORMATOPTION "QUANTIZE_COLORS=256"
END

Example Layer snippet

LAYER
NAME         topo_ecw
DATA         "world-topo-bathy-200406-3x86400x43200.ecw"
TYPE         RASTER
METADATA
"wms_title"    "World Topography/Bathymetry (ECW)"
"wms_extent" "-180 -90 180 90"
END
PROJECTION
"+init=epsg:4326"
END
END

Mapguide 2.1

  • Installed 2.1 bundled configuration
  • Installed ECW libs
  • Installed Maestro
  • When configuring a new data connection using the GDAL FDO, as soon as I try to refresh the default coordinate system I am presented with the following crash using any GDAL connection. CPL_DEBUG shows the image being read correctly, and mgserver seems to be ok … it's just the bundled Apache that really doesn't like something ..
  • mapguide-crash
  • Ok, time to re-evaluate. Let's go with the IIS config option and reinstall …

Deegree 2.3c

  • Documentation unavailable for configuring ECW support
  • Documentation unclear on how to configure WCS without using pre-cached tile grid. Am I blind??
  • Due to limited support for other formats and time constraints, no tests were completed. Ideas again are welcome. Please point me in the right direction!

Result Summary

Maximum throughput for each application:

  • ERDAS Apollo:
  1. ECW (120.0 maps / sec) @ 300 users
  2. IMG (82.7 maps / sec) @ 80 users
  3. TIFF Internal Pyramid (82.6 maps / sec) @ 20 users
  • Mapserver:
  1. ECW (54.4 maps / sec) @ 150 users
  2. TIFF Internal Pyramid (41.3 maps / sec) @ 80 users
  3. TIFF External Pyramid (38.6 maps / sec) @ 80 users
  • Geoserver:
  1. TIFF Internal Pyramid (27.9 maps / sec) @ 20 users
  2. ECW (25.2 maps / sec) @ 20 users
  3. TIFF Tiled (10.3 maps / sec) @ 40 users

Image Server by Format

  1. An overview of each image server's capabilities at serving common image formats
  2. Only throughput is shown; for average response times, please see the next section
  3. All output is in the native input SRS (no reprojection) and of type image/jpeg
  4. Mapserver 5.6.1 highlighted as solid bold stroke, dotted style are the original 5.4 results
  5. Apollo 10.1 highlighted as solid bold stroke, dotted style are the original 10.0 results

format-by-product-erdas-apollo
format-by-product-geoserver

Format by Image Server

  1. A head to head comparison between servers for each data source
  2. Each discrete format has 2 graphs, throughput and average response

product-by-format-ecw-throughput
product-by-format-ecw-response
product-by-format-TIFF-throughput
product-by-format-TIFF-response
product-by-format-TIFF-tiled-throughput
product-by-format-TIFF-tiled-response
product-by-format-TIFF-internal-throughput
product-by-format-TIFF-internal-response
product-by-format-TIFF-external-throughput
product-by-format-TIFF-external-response
product-by-format-jp2-throughput
product-by-format-jp2-response
product-by-format-sid-throughput
product-by-format-sid-response

ECW Reprojection

  1. Take the same bluemarble ECW in WGS84 (4326) and start reprojecting on the fly to the common web mercator (3785, 900913) projection
  2. Remember that the type of reprojection transform will play a big part in the performance drop. Some transforms are a lot worse than others so please keep this in mind as usage will vary
  3. Comparing these results to the initial ECW test shows a performance drop of around 40-50%
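To give a feel for the per-point cost, the forward transform the server effectively applies across the output grid looks like this. This is the simplified spherical case used by 3785/900913; transforms involving a real datum shift cost considerably more, which is the point made above.

```python
import math

R = 6378137.0  # sphere radius used by web mercator (EPSG:3785 / 900913)

def to_mercator(lon, lat):
    """Forward spherical web-mercator transform from WGS84 degrees to metres."""
    x = math.radians(lon) * R
    y = R * math.log(math.tan(math.pi / 4.0 + math.radians(lat) / 2.0))
    return x, y

print(to_mercator(144.96, -37.81))  # Melbourne: roughly (16.1e6, -4.55e6)
```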

product-by-format-ecw-reproject-throughput
product-by-format-ecw-reproject-response

256px ECW Tiled Output

  1. By taking out the variability of the image dimensions, how do the servers improve?
  2. Let's narrow down the previous test plan to request the same bboxes, but this time with a fixed width/height of 256 px, which essentially mimics the absolute worst case scenario of serving tiles.
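Mechanically this is just a rewrite of the existing CSV, keeping the random bboxes and pinning the last two columns. A sketch, assuming a semicolon-delimited minx;miny;maxx;maxy;width;height row layout (check your generated CSV before trusting this):

```python
def pin_tile_size(row, size=256):
    """Keep the random bbox, force the output dimensions to size x size."""
    parts = row.strip().split(";")
    parts[-2:] = [str(size), str(size)]
    return ";".join(parts)

print(pin_tile_size("-180;-90;180;90;1024;758"))  # -180;-90;180;90;256;256
```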

product-by-format-ecw-tiled-throughput
product-by-format-ecw-tiled-response

Output Format by Server

  1. How is performance affected by the WMS output format?
  2. Is there a correlation between filesize and throughput?
  3. Does any application prefer a particular format over another?
  4. The ECW JMX test plan was modified to request the other output types. The test runs are therefore identical

output-format-throughput
output-format-size

Configuration Confirmation

By comparing the identical ECW test results against the FOSS4G2009 results, conducted on RHEL and on a lower-specced machine, I can make the general conclusion that the servers were configured “correctly”. The improvement in performance is most likely attributable to the increase in computing power.

Mapserver

  • FOSS4G2009 max throughput: 20 maps / sec @ 10 users
  • My test throughput: 42 maps / sec @ 150 users

Geoserver

  • FOSS4G2009 max throughput: 11.3 maps / sec @ 10 users
  • My max throughput: 25.2 maps / sec @ 20 users

Mapserver workers

As per Paul Ramsey’s comment, to make it more of a fair fight I have updated the Mapserver configuration to use 20 FastCGI workers instead of 8, to (in theory) better utilise the additional server resources. ERDAS Apollo RDS is also capped at 20 (although the technology obviously differs). As the following graph highlights, however, the only format to realise the benefit was ECW. Food for thought.

mapserver-8-vs-20-processes

