Heuristic for iBeacon detection of zones in open areas

The mobile app team at EDINA recently developed an app for the University of Edinburgh Main Library “Open Doors” event. This was our first attempt to use Apple’s iBeacon technology in anger, in a real environment. We had done some evaluation of iBeacons previously, so had some idea of what to expect, and what not to expect, from the technology. Nevertheless, the environment we deployed beacons in, a large open lobby area, was very challenging. We had to create a bespoke detection heuristic to deliver a reasonable user experience. In this post, I’ll demonstrate the problem, explain how our algorithm works, and discuss its performance and the potential for improvement or alternatives.

The user experience we were after should in theory have been a fairly simple one (you might think).

  • We divide the floorplan into non-contiguous zones, ensuring a fair amount of distance (> 5m) between zones.
  • As a user enters a zone, we pan to the area on the floorplan viewer and some content (in this case a video) is highlighted.

 


Screenshot from the Library Tour app showing zones in the open lobby space

Therefore all we needed to know was which beacon was closest. The exact distance was not that important, so we could tolerate inaccuracies in the estimated distance, so long as we could determine which of the beacons in range was the closest. In practice, though, several issues make this harder than it sounds:

  1. Where more than one beacon is deployed in the same open floor space, it is very difficult to find positions and range settings where the beacon readings do not collide with one another.
  2. Setting a beacon range to a small value (<1 meter) requires the user to position themselves very close to the beacon, and it is likely the user will walk through the zone without anything happening.
  3. If we set the range to >1 meter, responsiveness is better, but the signal strength readings become increasingly unreliable.
  4. For Android devices, the measured distance varies greatly across different devices, making it hard to set a range value that will create a good user experience for all users, as documented in this excellent ThoughtWorks blog.

 

A brief look at some tracking data should help to visualise the problem.

 

Tracking data for a Nexus 5 device showing measured distance of 3 beacons against time, highlighting where ENTER events were activated.

The figure above shows the output from a test we ran in our open plan office space (not the library – we didn’t have time to capture the data when we were deploying the app in the Library). This data, collected on a Nexus 5, is close to what we were hoping for. It shows a user following a route from zone 1 (blue line representing beacon 64404), then entering zone 2 (orange line segment), then entering zone 3 (green), before turning around and returning to zone 2 and finally back to zone 1. We were using Estimote beacons, but rather than using the proprietary Estimote SDK, we used the AltBeacon library instead. Listening to the beacons in ranging mode, we receive a batch of beacon readings every second or so, where each beacon in range reports its signal strength, from which an estimate of the distance is derived. The data above is a pretty good scenario for our use case, as for the most part only one beacon is detected in range at any one time. There is a period of 7 seconds, between 14:13:04 and 14:13:11, where the ranging data batch includes readings for both beacon 30295 and 64404.

 

Zoomed in detail of the tracking data for the Nexus 5 device, showing measured distance of 3 beacons against time, highlighting where ENTER events were activated.

As we might expect, the readings for the orange beacon gradually increase as we walk away from zone 2 and approach zone 1, while the values for the blue beacon decrease as we approach zone 1. Even though both beacons are in range during this period, we don’t want both beacons to trigger events at the same time. We want the algorithm to decide which zone the user is currently located in, even if more than one beacon is in range. Two simple solutions present themselves:

  1. Choose the nearest beacon in the batch. In the case above, with the Nexus 5 readings, this would work perfectly. The zone 1 (blue) ENTER event would actually have occurred a second before the one recorded above, and so this simple heuristic would be more responsive than our implementation in this case. You’ll see shortly why we can’t rely on this all the time though.
  2. Require a minimum distance before a beacon can trigger an application event. This would not work over the full period of the Nexus 5 track above (figure 1). If we choose a threshold of <1 meter, the first time the user enters zone 2 (the first orange line segment) no zone ENTER event would be triggered when it should be. If we raise the threshold to 1.5 meters, the entry into zone 2 is detected, but during the 7 second period shown in figure 2 the zone 1 readings shown in blue would also activate zone ENTER events, colliding with simultaneous ENTER events for zone 2.

 

So at first sight, the simple solution of choosing the closest reading in the batch looks good. But let’s take a look at tracking data for a Moto4G device instead.

Tracking data for a Moto4G device showing measured distance of 3 beacons against time, highlighting where ENTER events were activated.

The first thing to notice is that the distance range is larger than in the previous example, running from 1.3m to 3.5m for the Motorola device, compared to 0.37m to 1.95m for the Nexus 5. This difference between platforms is another reason why setting a minimum threshold for activation is tricky to get right. As you would expect, we found consistency across iOS devices is much better. The next thing to notice is how patchy the data can be in places. For some reason, this device recorded hardly any beacon readings at all during a 30 second period between 9:34:37 and 9:35:05, a period that includes the sole reading for zone 3 (green). We are not quite sure why this happens (some feature of the underlying Bluetooth implementation for these devices perhaps, or maybe a quirk in the AltBeacon library?). What is clear is that patchy scanning data can cause the “choose the nearest beacon” solution to come undone. Take a look at the highlighted data point below.

Tracking data from the Moto4G device, with the relevant data point highlighted.

For the highlighted batch, the orange beacon was the only beacon detected in range, so the “closest in batch” heuristic would trigger an ENTER zone event at this point. But the subsequent 3 batches of readings (spanning 3 seconds) have only blue beacon readings recorded, so “closest in batch” would immediately trigger a blue ENTER zone event. This is typical behaviour on zone boundaries, where readings are patchy and flip between 2 or more beacons in range. It takes 7 seconds before we see both beacons in the same batch of readings and can pick the closest (orange) without a subsequent flip back to blue. This data point is highlighted in the chart below. Note that our algorithm, which I’ll explain below, did not trigger the ENTER zone event at this point, but instead had to wait 4 seconds for the next batch of readings. So in this case, the algorithm pays a high price to avoid flipping between zones.

Tracking data from the Moto4G device, highlighting the first batch in which readings for both beacons appear together.

It might be possible to mitigate the effect of patchy data at zone boundaries by examining the values of beacons over the two or three previous batches of ranging scans, instead of relying on just one batch of readings for comparison. There is a danger, though, that averaging (even when weighting the most recent batch) could slow down the responsiveness of the application. In the case where a single data point is critical, such as the (green) zone 3 ENTER event in the chart above, it’s not clear whether we should average the green data point against zero values for the previous batches, or just take the current value as the weighted average for the beacon. It looks like the latter technique would have worked quite well in the case above, but I have not had time to explore this alternative solution properly.
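To make the idea concrete, here is a minimal sketch of a weighted moving average over the last few ranging batches, in Java since that is what the Android app is built in. This is not the code we shipped: the class name, window size and weights are assumptions, and it only illustrates the choice discussed above of skipping missing readings rather than averaging them in as zero.

import java.util.ArrayDeque;
import java.util.Deque;

// Smooths the distance readings for one beacon over the last few ranging batches.
// A hypothetical sketch: the window size and weights are arbitrary choices.
class SmoothedDistance {
    private static final int WINDOW = 3;
    private static final double[] WEIGHTS = {0.5, 0.3, 0.2}; // newest batch first

    private final Deque<Double> recent = new ArrayDeque<Double>();

    // Call once per ranging batch; pass Double.NaN if the beacon was absent from the batch.
    void addBatchReading(double distance) {
        recent.addFirst(distance);
        if (recent.size() > WINDOW) {
            recent.removeLast();
        }
    }

    // Weighted average of the readings actually present in the window. If only the
    // newest batch has a reading (the single critical data point case), that reading
    // is returned on its own rather than being diluted by the empty batches.
    double value() {
        double sum = 0.0;
        double weightUsed = 0.0;
        int i = 0;
        for (double d : recent) {           // iterates newest batch first
            if (!Double.isNaN(d)) {
                sum += WEIGHTS[i] * d;
                weightUsed += WEIGHTS[i];
            }
            i++;
        }
        return weightUsed == 0.0 ? Double.NaN : sum / weightUsed;
    }
}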

Hopefully this section of the post has helped to explain some of the nuances of deploying beacons as a way of representing zones in an open space. Generally, the “choose the closest in batch” heuristic seems to work quite well, but it is not immune to flipping behaviour in places where ranging data is patchy. Below, I’ll present the solution we used.

Our solution:

Our solution for dealing with the kind of issues described above is based on the State pattern. Each beacon is associated with a geofence zone around the beacon, with the beacon registering either an INSIDE (zone) or OUTSIDE (zone) state. The class representing each zone is called BeaconGeoFence and performs two functions. The first is to maintain a BeaconGeofenceState, which subclasses into GeoFenceInsideState and GeoFenceOutsideState; each BeaconGeoFence can only reference one of these states at a time. The second function that the BeaconGeoFence class performs is to implement a BroadcastReceiver that listens for events broadcast by the other BeaconGeoFence zones. The BeaconGeofenceState class implements a single method (evaluateGeofence), which determines whether an instance of BeaconGeoFence should change its state, and then broadcasts the result of this evaluation to all other BeaconGeoFence instances. So the general idea is to create a model where beacons (geofence zones) can broadcast messages to one another and potentially change each other’s state, based on an evaluation of their own state.
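A minimal sketch of that structure is shown below. The class and method names (BeaconGeofence, BeaconGeofenceState, GeofenceInsideState, GeofenceOutsideState, evaluateGeofence) come from the description above, with the capitalisation normalised; the event enum, the fields and the plain loop standing in for Android’s BroadcastReceiver are my own assumptions, added to keep the sketch self-contained.

import java.util.List;

// Events that one geofence zone can broadcast to the others.
enum GeofenceEvent { ENTER, STAY_OUTSIDE, EXIT, STAY_INSIDE }

// Each concrete state decides whether the geofence should change state for a
// given distance reading, and what to broadcast to the other zones.
interface BeaconGeofenceState {
    void evaluateGeofence(BeaconGeofence fence, double distance);
}

// One instance per beacon / zone. In the real app this class also implements a
// BroadcastReceiver; here a plain loop over the other instances stands in for that.
class BeaconGeofence {
    static final double DEFAULT_RADIUS_METRES = 6.0;

    final String beaconId;
    double radius = DEFAULT_RADIUS_METRES;   // current threshold for this zone
    BeaconGeofenceState state;               // set to the outside state on start-up (see the later sketch)
    final List<BeaconGeofence> allZones;     // every zone, including this one

    BeaconGeofence(String beaconId, List<BeaconGeofence> allZones) {
        this.beaconId = beaconId;
        this.allZones = allZones;
    }

    // Called only for the closest beacon in a ranging batch (see the next sketch).
    void evaluate(double distance) {
        state.evaluateGeofence(this, distance);
    }

    // Deliver an event to every other zone.
    void broadcast(GeofenceEvent event, double distance) {
        for (BeaconGeofence other : allZones) {
            if (other != this) {
                other.onBroadcast(event, distance);
            }
        }
    }

    // How a zone reacts to a peer's event; filled in further down the post.
    void onBroadcast(GeofenceEvent event, double distance) {
    }
}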

To work through how this works in a bit more detail: initially, all BeaconGeoFence instances are initialised to the “outside” state with a default radius (6m) defining the geofence zone. When the main application class FloorPlanApplication is initialised, the geofence ranging process is started with a call to
beaconManager.startRangingBeaconsInRegion, which kicks off the scanning process in which the didRangeBeaconsInRegion(Collection<Beacon> beacons...) method is called every second or so. The collection of beacons represents the current batch of beacons in range. As explained above, to avoid flipping between beacons within range, we sort the batch by estimated distance and only consider the closest in the batch for evaluation. The corresponding BeaconGeoFence is the only one that has a chance to evaluate and change its state. What happens next depends on the distance of the selected BeaconGeoFence and on its current state.
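In outline, and continuing the sketch above, the ranging callback looks something like the following. RangeNotifier, didRangeBeaconsInRegion and Beacon.getDistance() are part of the AltBeacon library; the lookup map from beacon identifier to BeaconGeofence, and the use of the third identifier (the minor) as its key, are assumptions for illustration only.

import java.util.Collection;
import java.util.Map;

import org.altbeacon.beacon.Beacon;
import org.altbeacon.beacon.RangeNotifier;
import org.altbeacon.beacon.Region;

// Receives a batch of readings every second or so and hands the closest
// beacon's distance to the corresponding geofence zone for evaluation.
class ClosestInBatchRanger implements RangeNotifier {

    private final Map<String, BeaconGeofence> geofencesByMinor; // hypothetical lookup

    ClosestInBatchRanger(Map<String, BeaconGeofence> geofencesByMinor) {
        this.geofencesByMinor = geofencesByMinor;
    }

    @Override
    public void didRangeBeaconsInRegion(Collection<Beacon> beacons, Region region) {
        if (beacons.isEmpty()) {
            return; // nothing in range in this batch
        }

        // Only the closest beacon in the batch gets the chance to evaluate its state.
        Beacon closest = null;
        for (Beacon b : beacons) {
            if (closest == null || b.getDistance() < closest.getDistance()) {
                closest = b;
            }
        }

        BeaconGeofence fence = geofencesByMinor.get(closest.getId3().toString());
        if (fence != null) {
            fence.evaluate(closest.getDistance());
        }
    }
}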

If the current state is OUTSIDE, one of two things can happen:

  1. if the distance is less than the current radius value for the beacon, the BeaconGeofence changes its state to the GeofenceInsideState and broadcasts an ENTER event to all the other beacons.
  2. if the distance is greater than the current radius value for the beacon, the BeaconGeofence does not change its state and broadcasts a STAY_OUTSIDE event to the other BeaconGeofence instances.

In either case, the other BeaconGeofence instances must work out what to do in response to the broadcast event.

  1. If an ENTER event is received, the receiving BeaconGeofence must immediately change its state to GeoFenceOutsideState. This action is meant to prevent more than one beacon being in an INSIDE state at the same time. The receiving BeaconGeofence also changes its radius threshold value to the value passed by the ENTER event. This ensures that only a beacon whose distance is closer than the one that triggered the ENTER event can produce a subsequent ENTER event.
  2. If a STAY_OUTSIDE event is received, the receiving BeaconGeofence instances do not need to change their state, as the closest beacon in the previous batch of readings was not near enough to trigger an ENTER event. But all the beacons increase their radius threshold, to make it easier next time for this or another “outside” BeaconGeoFence to push out the current “inside” beacon.

If, on the other hand, the closest beacon in the batch is in the INSIDE state, one of two things will happen:

  1. if the distance is greater than the current radius setting for the beacon, the BeaconGeofence changes its state to OUTSIDE and broadcasts an EXIT event.
  2. if the distance is less than the current radius setting, the BeaconGeoFence does not change its state and broadcasts a STAY_INSIDE event to the other BeaconGeoFence instances.

Again, the other BeaconGeofences have to decide how to respond to each broadcast event (the sketch after this list pulls the four cases together).

  1. For an EXIT event, other beacons do not need to change anything. The purpose of this event is to capture the situation where the user walks out of a zone. As the state has now changed to OUTSIDE, the device will be able to trigger a new ENTER event if the user turns back and enters the zone again.
  2. For a STAY_INSIDE event all other beacons change their radius to the latest reading from the INSIDE beacon. The INSIDE beacon still keeps its original ENTER distance radius.
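Putting those rules together, and continuing the earlier sketch, the two states and the broadcast handling look roughly like the code below. The RADIUS_INCREMENT used for STAY_OUTSIDE is an assumption (the text above only says the threshold is increased), and exactly where the application-level zone ENTER event is fired is simplified to a comment.

// Outside state: the zone is waiting for the user to come close enough.
class GeofenceOutsideState implements BeaconGeofenceState {
    public void evaluateGeofence(BeaconGeofence fence, double distance) {
        if (distance < fence.radius) {
            fence.state = new GeofenceInsideState();
            // ... fire the application's zone ENTER event (pan the floorplan, etc.) ...
            fence.broadcast(GeofenceEvent.ENTER, distance);
        } else {
            fence.broadcast(GeofenceEvent.STAY_OUTSIDE, distance);
        }
    }
}

// Inside state: this zone currently "owns" the user.
class GeofenceInsideState implements BeaconGeofenceState {
    public void evaluateGeofence(BeaconGeofence fence, double distance) {
        if (distance > fence.radius) {
            fence.state = new GeofenceOutsideState();
            fence.broadcast(GeofenceEvent.EXIT, distance);
        } else {
            fence.broadcast(GeofenceEvent.STAY_INSIDE, distance);
        }
    }
}

// The onBroadcast stub from the first sketch, now filled in (a method of BeaconGeofence;
// RADIUS_INCREMENT would be a field of that class, its value an assumption):
void onBroadcast(GeofenceEvent event, double distance) {
    switch (event) {
        case ENTER:
            state = new GeofenceOutsideState();  // only one zone may be INSIDE at a time
            radius = distance;                   // a later ENTER must beat this distance
            break;
        case STAY_OUTSIDE:
            radius += RADIUS_INCREMENT;          // relax the threshold a little each time
            break;
        case STAY_INSIDE:
            radius = distance;                   // track the "inside" beacon's latest reading (it keeps its own radius)
            break;
        case EXIT:
        default:
            break;                               // nothing to do; the zone is free for a new ENTER
    }
}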

The above algorithm solves two problems we encountered using iBeacons on Android devices. The first is the difference in range distances calculated by different devices. In this algorithm, the optimal range (radius) for each beacon can change from the initial default value, allowing the device to self-calibrate: the first ENTER event sets a benchmark range for all the beacons to beat, and subsequent STAY_INSIDE broadcast events reinforce the distance used to evaluate whether a beacon should change its state. The algorithm also handles the situation where a single “outside” beacon is the only beacon in range, and so by definition is the closest in the batch. When this BeaconGeofence is evaluated, to produce an ENTER event it has to beat (report a lower value than) the last distance produced by a previous STAY_INSIDE broadcast – that is, it must beat the last known distance of the current “inside” beacon. However, unlike a floating average, this algorithm does not preclude a single reading from generating an ENTER event.

Overall we found this solution worked reasonably well, mostly preventing annoying flipping between zones, even when data was patchy. There was an impact on responsiveness, but not too severe. Looking at a range of log traces, we found the algorithm required an extra scan’s delay (typically 1 second per batch) beyond the optimal point for triggering a state transition. So generally, we found the cost was a second or so. If your zones are reasonably large, this is probably an acceptable level of delay, as the user will take a few seconds to walk through the zone, which provides enough time to trigger an ENTER event. For smaller zones, it is still possible for the user to walk straight through the zone without generating an event. There is clearly still a lot of work to do in tweaking the algorithm, or trying out some alternative techniques, but we did feel we made some progress deploying iBeacons as a way of detecting a device as it moves through non-contiguous zones. It will also be useful to check whether Google’s new Eddystone protocol produces more consistent behaviour across Android devices.

 

Integrating Openlayers and HTML5 Canvas (Revisited)

The WordPress stats tell me there is still a lot of interest in our previous post on integrating OpenLayers and HTML5 Canvas from way back in 2010.
Time has passed, technology has moved on and I’ve started buying shoes in bulk like Mr Magorium. So below, I provide an update on how I integrate OL and HTML5 Canvas 3 years on.

Previously my approach was to replace each individual tile image with a corresponding canvas element the same size as the tile (typically 256×256), using jQuery to capture tile rendering events. The updated approach is to capture tile images as they are rendered by OpenLayers, using OL’s built-in event listeners, and then draw them onto a single HTML5 canvas element.
Apart from being more efficient and producing cleaner, more robust code, this approach has the advantage that you can use HTML5 to draw shapes and lines and manipulate pixels on a single canvas, crossing tile boundaries. This is particularly useful for drawing lines and shapes using paths (e.g. the lineTo() and moveTo() functions).

To demonstrate this I’ve set up a simple demo that shows the HTML5 canvas adjacent to a simple OpenLayers map, where the canvas version (on the right hand side) is manipulated to show a grayscale and inverted version of the original map image (grayscale is triggered by loadend and the invert function by moveend). The source code is available on EDINA’s GitHub page (https://github.com/edina/geomobile) and on a JSFiddle page.
The solution hinges on using the OpenLayers.Layer loadend event to capture the tiles when OpenLayers has finished loading all the tiles for a layer, and also the OpenLayers.Map moveend event, which OpenLayers triggers when it has dealt with the user panning the map. The former is shown in the code snippet below:
// register loadend event for the layer so that once OL has loaded all tiles we can redraw them on the canvas. Triggered by zooming and page refresh.

layer.events.register("loadend", layer, function() {

    // create a canvas if not already created
    ....

    var mapCanvas = document.getElementById("mapcvs"); // get the canvas element
    var mapContainer = document.getElementById("OpenLayers.Map_2_OpenLayers_Container"); // WARNING: brittle to changes in OL

    if (mapCanvas !== null) {
        var ctx = mapCanvas.getContext("2d");
        var layers = document.getElementsByClassName("olLayerDiv"); // WARNING: brittle to changes in OL

        // loop through layers starting with the base layer
        for (var i = 0; i < layers.length; i++) {

            var layertiles = layers[i].getElementsByClassName("olTileImage"); // WARNING: brittle to changes in OL

            // loop through the tiles loaded for this layer
            for (var j = 0; j < layertiles.length; j++) {
                var tileImg = layertiles[j];
                // get position of tile relative to map container
                var offsetLeft = tileImg.offsetLeft;
                var offsetTop = tileImg.offsetTop;
                // get position of map container
                var left = Number(mapContainer.style.left.slice(0, mapContainer.style.left.indexOf("p"))); // extract value from style e.g. left: 30px
                var top = Number(mapContainer.style.top.slice(0, mapContainer.style.top.indexOf("p")));
                // draw the tile on the canvas in the same relative position it appears in the OL map
                ctx.drawImage(tileImg, offsetLeft + left, offsetTop + top);
            }

            greyscale(mapCanvas, 0, 0, mapCanvas.width, mapCanvas.height);
            // uncomment below to toggle the OL map off – can only be done after the layer has loaded
            // mapDiv.style.display = "none";
        }
    }
});


Note that some of the code here comes with a health warning. The DOM functions used to navigate the OpenLayers hierarchy are susceptible to changes in the OpenLayers API, so you need to use a local copy of OpenLayers (as is the case in the GitHub sample) rather than point to the OpenLayers URL (as is the case in the JSFiddle version). Also note that all layers are drawn to the canvas, not just the one that OpenLayers triggered the loadend event for. This is necessary to ensure that the order of layers is maintained. Another issue to be aware of when using canvas drawing methods on maps is the likelihood of a cross-origin tainting error. This is due to map images being loaded from a different domain to that of the HTML5 code. The error is not triggered simply by drawing the tiles to the canvas using the drawImage() function, but does appear when you attempt pixel manipulation using functions that read pixels back, such as getImageData(). OpenLayers handles this using the Cross-Origin Resource Sharing (CORS) protocol, which by default is set to ‘anonymous’ as below. So long as the map server you are pointing to is configured to handle CORS requests from anonymous sources you will be fine.

layer.tileOptions = {crossOriginKeyword: 'anonymous'};

Would be interested to hear if others are doing similar or have other solutions to doing Canvasy things with OpenLayers.

OpenLayers Canvas Capture

Fieldtrip GB – Mapserver 6.2 Mask Layers

By Fiona Hemsley-Flint (GIS Engineer)

Whilst developing the background mapping for the Fieldtrip GB app, it became clear that there were going to have to be some cartographic compromises between urban and rural areas at larger scales. Since we were restricted to using OS Open products, we had a choice between Streetview and Vector Map District (VMD) – Streetview works nicely in urban environments, but not so much in rural areas, where VMD works best (with the addition of some nice EDINA-crafted relief mapping). This contrast can be seen in the images below.


Streetview (L) and Vector Map District (R) maps in an urban area.


Streetview (L) and Vector Map District (R) maps in a rural area.

In an off-the-cuff comment, Ben set me a challenge – “It would be good if we could have the Streetview maps in urban areas, and VMD maps in rural areas”.

I laughed.

Since these products are continuous over the whole of the country, I didn’t see how we could have two different maps showing at the same time.

Then, because I like a challenge, I thought about it some more and found that the newer versions of MapServer (from 6.2) support something called “Mask Layers” – where one layer is only displayed in places where it intersects another layer.

I realised that if I could define something that constitutes an ‘Urban’ area, then I could create a mask layer of these, which could then be used to display the Streetview mapping only in those areas, while all other areas could display a different map – in this case Vector Map District (we used the beta product, although we are currently updating to the latest version).

I used the Strategi ‘Large Urban Areas’ classification as my means of defining an ‘Urban’ area – with a buffer to take into account suburbia and differences in scale between Strategi and Streetview products.

The resulting set of layers (simplified!) looks a bit like this:


Using Mask layers in MapServer 6.2 to display only certain parts of a raster image.

Although this doesn’t necessarily look very pretty at the borders between the two products, I feel that the overall result meets the challenge – in urban areas it is now possible to view street names and building details, and in rural areas, contours and other topographic features are more visible. This hopefully provides enough flexibility in the background mapping for users on different types of field trips.

Here’s a snippet of the mapfile showing the implementation of the masking, in case you’re really keen…

# VMD layer(s) defined before the mask
LAYER
    .....
END

# Streetview mask layer
LAYER
    NAME "Streetview_Mask"
    METADATA
        .....
    END
    # Data comes from a shapefile (polygons of urban areas only):
    DATA "streetview_mask"
    TYPE POLYGON
    STATUS OFF
END

# Streetview
LAYER
    NAME "Streetview"
    METADATA
        .....
    END
    # Data is a series of tiff files, location stored in a tileindex
    TYPE Raster
    STATUS off
    TILEINDEX "streetview.shp"
    TILEITEM "Location"
    # *****The important bit – setting the mask for the layer*****
    MASK "Streetview_Mask"
    POSTLABELCACHE TRUE
END

What Women Intent

I noticed a recent BBC news report stating that more women than men in the UK now own a tablet. It seems that the days when an iPad was most frequently coveted by middle-aged men such as me have long gone.

One question this raised in my mind though is why tablets are particularly popular with women in a way that laptops and netbooks were not. Does this tell us anything about the mobile revolution? Does it tell us anything about men and women? Probably not! Perhaps it is just natural that something as convenient as a tablet computer is popular with both men and women.

However I’ll ignore that perfectly reasonable explanation and speculate on the gender angle.

So, certainly in my household the opportunity to sit down at a laptop for, say, thirty minutes uninterrupted is a luxury mostly enjoyed by, well, er… me. My partner has commented that my ability to filter out bickering kids, ignore a saucepan boiling over, forget I started the kids’ bath running and completely not hear the important information she is telling me about the school run tomorrow is nothing short of a supernatural gift. An ability to remain sitting down at a computer when all that is going on is certainly not something I’ve observed in her or other women I know.

So my theory is that tablets are popular with women because they are designed to cope with interruptions (the tablets I mean, not the women). Or at least, the smartphones from which the tablets inherited their OS were designed to be interrupted – by phone calls specifically.

People think of operating systems such as Android as a set of Apps, but really they are a set of interruptible views called Activities (View-Controllers in iOS). The only difference is that the initial Activity in an App has an icon on the Home screen.

Developers are required to implement life-cycle methods on each Activity (AppDelegate in iOS) to ensure that if the OS interrupts the action at any point, the user can pick up again exactly where they left off. This is so critical that in Android the transition from one Activity to another is encapsulated in a class of its own called an “Intent”. The name reminds the developer that they might be intending a change in application state, but the OS can butt in at any time – so they must make sure everything from the previous Activity is stored first.
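As a concrete (if simplified) illustration of honouring that life-cycle, here is a minimal Android Activity that saves and restores a draft the user was typing when the OS interrupts it. It is only a sketch: the layout and view ids are made up for the example, and in practice views with ids also save some of their own state.

import android.app.Activity;
import android.os.Bundle;
import android.widget.EditText;

// If the OS interrupts this Activity (a phone call, the user switching away),
// whatever was typed is saved and then restored, so the user picks up exactly
// where they left off.
public class NoteActivity extends Activity {
    private static final String KEY_DRAFT = "draft";
    private EditText draft;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_note);            // hypothetical layout
        draft = (EditText) findViewById(R.id.draft_text);  // hypothetical view id
        if (savedInstanceState != null) {
            draft.setText(savedInstanceState.getString(KEY_DRAFT));
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString(KEY_DRAFT, draft.getText().toString());
    }
}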

This explanation is helpful to me in understanding the success of tablets. When the iPad first came out I have to admit I didn’t think it would be anywhere near as popular as the ubiquitous iPhone. At the time, I thought the meteoric success of smartphones was down to their portability and geo-location capabilities. I loved the shiny beauty of the iPad design but couldn’t help thinking it was a bunch of iPhones stuck together. I wondered why I’d want one when I could get a more powerful netbook with a proper keyboard built in. But netbooks are less portable, they take more time to boot up and you have to save what you are doing to ensure you don’t lose your data. The batteries do not last as long, and using a keyboard and mouse requires you to sit down. Not good for the interruptible computer user.

This all could be seen as a reason to avoid web apps or hybrid apps that use a WebView embedded into the app. As the Web View or Web Browser uses the stateless HTTP protocol, there are no activity life-cycle methods for developers to honour and maintaining the state between activities is much trickier to get right. So web-based apps could break the interruptible App and annoy users. Especially those who are being constantly interrupted.


Hacking Mapcache with ImageMagick

To generate tiles for the map stack used by Fieldtrip GB we are using 4 MapServer instances deployed to an OpenStack private cloud. This means we can get all our tiles generated relatively quickly using inexpensive commodity hardware. A problem we have is that the resulting PNG tile images look beautiful but are way too big for users to download to their mobile device in any quantity. So we looked at using MapServer’s built-in JPEG format, but our cartographers were not happy with the results. One of my colleagues came up with the bright idea of using ImageMagick to compress the PNG to JPEG instead, and the result (using a quality setting of 75) was much better. We can use the ImageMagick command line with the following script:

convert_png_to_jpeg_delete_png.sh

#!/bin/sh
# Convert each PNG tile passed as an argument to a JPEG at quality 75.
# (Using ${var%.png} rather than tr, as tr does per-character translation,
# not string replacement, and would mangle paths containing 'p', 'n' or 'g'.)
for var in "$@"
do
  echo "converting $var to jpg"
  convert "$var" -quality 75 "${var%.png}.jpg"
  # rm "$var"   # uncomment to delete the original PNG once converted
done

and drive this script with find and xargs to traverse an existing cache of generated PNG tiles:

find . -name '*.png' -print0 |  xargs -0 -P4 ../convert_png_to_jpeg_delete_png.sh

So the cartographers finally relented and we now have much smaller files to download to devices. The only problem is that the script running the ImageMagick convert takes forever to run (well, all right – 2 days). It’s not because ImageMagick is slow at compression – it’s super fast. It’s just that the IO overhead involved is huge, as we are iterating over 16 million inodes. So our plan of scaling up commodity hardware (a 4 CPU virtual machine) is failing. A solution is to do the JPEG conversion at the same time as the tile caching – this way you are only dealing with one tile at the point you write it to the cache, so there is much less overhead.

So it’s time to hack some of the Mapcache code and get ImageMagick to apply the above compression just after Mapcache writes the PNG to the cache.

This just involves editing a single source file found in the lib directory of the Mapcache source distribution  ( mapcache-master/lib/cache_disk.c ). I’m assuming below you have already downloaded and compiled Mapcache and also have downloaded ImageMagick packages including the devel package.

First of all include the ImageMagick header file

#include  <wand/magick_wand.h>

Then locate the method  _mapcache_cache_disk_set. This is the method where Mapcache actually writes the image tile to disk.

First we add some variables and an Exception macro at the top of the method.

MagickWand *m_wand = NULL ;
MagickBooleanType status;

#define ThrowWandException(wand) \
{ \
char \
*description; \
\
ExceptionType \
severity; \
\
description=MagickGetException(wand,&severity); \
(void) fprintf(stderr,"%s %s %lu %s\n",GetMagickModule(),description); \
description=(char *) MagickRelinquishMemory(description); \
exit(-1); \
}

And then, right at the end of the method, we add the MagickWand equivalent of the convert command line shown above. The compression calls are the key lines:

if(ret != APR_SUCCESS) {
ctx->set_error(ctx, 500, "failed to close file %s:%s",filename, apr_strerror(ret,errmsg,120));
return; /* we could not create the file */
}

// *******ImageMagick code here ********

ctx->log(ctx, MAPCACHE_INFO, "filename for tile: %s", filename);
MagickWandGenesis();
m_wand = NewMagickWand();
status = MagickReadImage(m_wand, filename);
if (status == MagickFalse)
  ThrowWandException(m_wand);
// MagickSetImageFormat(m_wand, "JPG");
char newfilename[200];
strcpy(newfilename, filename);
int blen = strlen(newfilename);
if (blen > 3)
{
  // swap the .png extension for .jpg and write a compressed copy alongside the PNG
  newfilename[blen-3] = 'j';
  newfilename[blen-2] = 'p';
  newfilename[blen-1] = 'g';
  MagickSetImageCompression(m_wand, JPEGCompression);
  MagickSetCompressionQuality(m_wand, 75);
  ctx->log(ctx, MAPCACHE_INFO, "filename for new image: %s", newfilename);
  MagickWriteImage(m_wand, newfilename);
}
/* Clean up */
if(m_wand)m_wand = DestroyMagickWand(m_wand);
MagickWandTerminus();

And that’s it. Now it’s just the simple matter of working out how to compile and link it.

After a lot of hmm’ing and ah-ha’ing (and reinstalling ImageMagick to a more recent version using the excellent advice from here), it meant making the following changes to Makefile.inc in the Mapcache source root directory.

INCLUDES=-I../include $(CURL_CFLAGS) $(PNG_INC) $(JPEG_INC) $(TIFF_INC) $(GEOTIFF_INC) $(APR_INC) $(APU_INC) $(PCRE_CFLAGS) $(SQLITE_INC) $(PIXMAN_INC) $(BDB_INC) $(TC_INC) -I/usr/include/ImageMagick
LIBS=$(CURL_LIBS) $(PNG_LIB) $(JPEG_LIB) $(APR_LIBS) $(APU_LIBS) $(PCRE_LIBS) $(SQLITE_LIB) -lMagickWand -lMagickCore $(PIXMAN_LIB) $(TIFF_LIB) $(GEOTIFF_LIB) $(MAPSERVER_LIB) $(BDB_LIB) $(TC_LIB)

Then run make as usual to compile Mapcache and you’re done! The listing below shows the output and difference in compression:

ls -l MyCache/00/000/000/000/000/000/
total 176
-rw-r--r--. 1 root root  4794 Jul 23 13:56 000.jpg
-rw-r--r--. 1 root root 21740 Jul 23 13:56 000.png
-rw-r--r--. 1 root root  2396 Jul 23 13:56 001.jpg
-rw-r--r--. 1 root root  9134 Jul 23 13:56 001.png
-rw-r--r--. 1 root root  8822 Jul 23 13:56 002.jpg
-rw-r--r--. 1 root root 46637 Jul 23 13:56 002.png
-rw-r--r--. 1 root root  8284 Jul 23 13:56 003.jpg
-rw-r--r--. 1 root root 45852 Jul 23 13:56 003.png
-rw-r--r--. 1 root root   755 Jul 23 13:55 004.jpg
-rw-r--r--. 1 root root  2652 Jul 23 13:55 004.png

original PNG tile

converted to JPEG at 75% compression

Fieldtrip GB App

First of all – apologies for this blog going quiet for so long. Due to resource issues it’s been hard to keep up with documenting our activities. All the same, we have been quietly busy continuing work on geo mobile activity, and I’m pleased to announce that we have now released our Fieldtrip GB app in the Google Play Store.


We expect the iOS version to go through the Apple App Store  in a few weeks.

Over the next few weeks I’ll be posting to the blog with details of how we implemented this app and why we chose certain technologies and solutions.

Hopefully this will prove a useful resource to the community out there trying to do similar things.

A brief summary: the app uses PhoneGap and OpenLayers, so it is largely built on HTML5 web technologies, but wrapped up in a native framework. The unique mapping uses OS OpenData, including Strategi, Vector Map District and Land-Form PANORAMA, mashed together with path and cycleway data from OpenStreetMap and Natural England.


Fourth International Augmented Reality Standards Meeting

I’m just back from the Fourth International AR Standards Meeting that took place in Basel, Switzerland and trying hard to collect my thoughts after two days of intense and stimulating discussion. Apart from anything else, it was a great opportunity to finally meet some people I’ve known from email and discussion boards  on “the left hand side of the reality-virtuality continuum“.

Christine Perry, the driving spirit, inspiration and editor at large of the AR Standards Group, has done a fantastic job bringing so many stakeholders together: Standards Organisations such as the OGC, Khronos, the Web3D Consortium, W3C, OMA and WHATWG; browser and SDK vendors such as Wikitude, Layar, Opera, ARGON and Qualcomm AR; hardware manufacturers (Canon, SonyEricsson, NVIDIA); several solution providers such as MOB Labs and mCrumbs – oh, and a light sprinkling of academics (Georgia Tech, Fraunhofer IGD).

I knew I’d be impressed and slightly awestruck by these highly accomplished people, but what did surprise me was the lack of any serious turf fighting. Instead, there was a real sense of pioneering spirit in the room. Of course everyone had their own story to tell (which just happened to be a story that fitted nicely into their organizational interests), but it really was more about people trying to make some sense of a confusing landscape of technologies and thinking in good faith about what we can do to make it easier. In particular, it seemed clear that the Standards Organizations felt they could separate the problem space fairly cleanly between their specialist areas of interest (geospatial, 3d, hardware/firmware, AR content, web etc). The only area where these groups had significant overlap was on sensor APIs, and some actions were taken to link in with the various Working Groups working on sensors to reduce redundancies.

It seemed to me that there was some agreement about how things will look for AR content providers and developers (eventually). Most people appeared to favour the idea of a declarative content mark-up language working in combination with a scripting language (JavaScript), similar to the geolocation API model. Some were keen on the idea of this all being embedded into a standard web browser’s Document Object Model. Indeed, Rob Manson from MOB Labs has already achieved a prototype AR experience using various existing (pseudo) standards for web sensor and processing APIs. The two existing markup content proposals, ARML and KARML, are both based on the OGC’s KML, but even here the idea would be to eventually integrate a KML content and styling model into a generic HTML model, perhaps following the HTML/CSS paradigm.

This shared ambition to converge AR standards with generic web browser standards is a recognition that the convergence of hardware, sensors, 3d, computer vision and geolocation is a bigger phenomenon than AR browsers or augmented reality. AR is just the first manifestation of this convergence and of “anywhere, anytime” access to the virtual world, as discussed by Rob Manson on his blog.

To a certain extent, the work we have been discussing here on the geo mobile blog, using HTML5 to create web based mapping applications, is a precursor to a much broader sensor-enabled web that uses devices such as the camera, GPS, compass etc. not just to enable 2d mapping content but all kinds of applications that can exploit the sudden happenchance of millions of people carrying around dozens of sensors, cameras and powerful compute/graphics processors in their pockets.

Coming back from this meeting, I’m feeling pretty upbeat about the prospects for AR and emerging sensor augmented web. Let’s hope we are able to keep the momentum going for the next meeting in Austin.

App Ecosystem

Earlier this week I attended the Open Source Junction Context Aware Mobile Technologies event organized by OSS Watch. Due to a prior engagement I missed the second day and had to leave early to catch a train. It was a pity, as the programme was excellent and there were some terrific networking opportunities, although it sounds like I was fortunate to miss the geocaching activity, which the Twitter feed suggested was very wet and involved an encounter with some bovine aggression.

During the first two sessions I did attend there were quite a few people, including myself, talking about the mobile web approach to app development. I made the comment that the whole mobile web vs. native debate was fascinating and current and that mobile web was losing. But everyone seemed to agree that apps are a pretty bad deal for developers and that making any money from this is about as likely as winning the lottery. This got me thinking on the train to Edinburgh about the “App ecosystem” and what that actually means. A very brief Google search did not enlighten me much so I sketched my own App food chain, shown below.

It is no surprise that the user is right at the bottom, as all the energy that flows through this ecosystem comes from the guy with the electronic wallet.

But I think it’s going to be a bit of a surprise for app developers (content providers) to see themselves at the top of this food chain (along with Apple and Google), as it doesn’t feel like you are king of the jungle when the app store’s cut is so high and the prices paid by users are so low.

It will be interesting to see if Google, who are not happy with the number of paid apps in the Google Marketplace, cut developers a better deal. Or if Microsoft’s app marketplace, built on top of Nokia handsets, tries to gain market penetration by attracting more high quality content. My guess is not yet. The problem for developers is that the app retailers can grow at the moment just through the sheer number of new people buying smartphones. This is keeping prices artificially low and means app retailers are not competing all that much for content. But smartphone ownership is in fact growing so fast that pretty soon (approx. 2 years?) everyone who wants or can afford a smartphone is going to have one. How do app retailers grow then? They are going to have to get users to part with more money for apps and content, either by charging more or by attracting advertising revenue. Even though there are a lot of app developers out there, apps users will pay for are scarce, and retailers are going to have to either pay more to attract the best developers and content to their platform, or make life easier for content providers by adopting open standards. So maybe the mobile web might emerge triumphant after all.

JISC Observatory Augmented Reality Report

My report on Augmented Reality for Smartphones is now open for comments here on the JISC Observatory blog. Some of the content has previously been previewed on this blog. Aimed at developers and content publishers who want to take advantage of the latest developments in smartphone and augmented reality (AR) technology, the report has an overview and comparison of different AR “browsers”, frameworks and publishing platforms. It also discusses emerging standards, usability issues and anti-patterns that developers should be aware of before designing an AR experience. And there is a section reviewing existing applications of AR in education that should inspire educators to give AR a bash. There is still a bit of time to incorporate some changes to the report, so please do leave comments either here or on the JISC Observatory blog. You might also be interested in the interactive session I’ll be giving on this at the Institutional Web Managers Workshop in July.