Fourth International Augmented Reality Standards Meeting

I’m just back from the Fourth International AR Standards Meeting, which took place in Basel, Switzerland, and trying hard to collect my thoughts after two days of intense and stimulating discussion. Apart from anything else, it was a great opportunity to finally meet some people I’d previously known only from email and discussion boards on “the left-hand side of the reality-virtuality continuum”.

Christine Perry, the driving spirit, inspiration and editor-at-large of the AR Standards Group, has done a fantastic job bringing together so many stakeholders: standards organisations such as the OGC, Khronos, the Web3D Consortium, the W3C, OMA and the WHATWG; browser and SDK vendors such as Wikitude, Layar, Opera, Argon and Qualcomm AR; hardware manufacturers (Canon, Sony Ericsson, NVIDIA); several solution providers such as MOB Labs and mCrumbs – oh, and a light sprinkling of academics (Georgia Tech, Fraunhofer IGD).

I knew I’d be impressed and slightly awestruck by these highly accomplished people, but what did surprise me was the lack of any serious turf fighting. Instead, there was a real sense of pioneering spirit in the room. Of course everyone had their own story to tell (which just happened to be a story that fitted nicely into their organisational interests), but it really was more about people trying to make sense of a confusing landscape of technologies and thinking in good faith about what we can do to make it easier. In particular, it seemed clear that the standards organisations felt they could separate the problem space fairly cleanly between their specialist areas of interest (geospatial, 3D, hardware/firmware, AR content, web, etc.). The only area of significant overlap was sensor APIs, and some actions were taken to liaise with the various working groups working on sensors to reduce redundancy.

It seemed to me that there was some agreement about how things will (eventually) look for AR content providers and developers. Most people appeared to favour the idea of a declarative content mark-up language working in combination with a scripting language (JavaScript), similar to the Geolocation API model. Some were keen on the idea of this all being embedded into a standard web browser’s Document Object Model. Indeed, Rob Manson from MOB Labs has already built a prototype AR experience using various existing (pseudo-)standards for web sensor and processing APIs. The two existing content mark-up proposals, ARML and KARML, are both based on the OGC’s KML, but even here the idea would be to eventually integrate a KML content and styling model into a generic HTML model, perhaps following the HTML/CSS paradigm.
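As a rough sketch of the scripted half of that model: the W3C Geolocation API already gives page scripts access to the device’s position, and an AR browser could hang declarative content off it. The `placeLabel` helper below is hypothetical, standing in for whatever point-of-interest registration call a real AR browser would expose:

```javascript
// Hypothetical helper: in a real AR browser this would register a
// point of interest against the scene; here it just returns a record.
function placeLabel(name, lat, lon) {
  return { name: name, lat: lat, lon: lon };
}

// The standard W3C Geolocation API, guarded so the sketch is harmless
// outside a browser context.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function (pos) {
    var label = placeLabel("You are here",
                           pos.coords.latitude,
                           pos.coords.longitude);
    console.log(label);
  });
}
```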

This shared ambition to converge AR standards with generic web browser standards is a recognition that the convergence of hardware, sensors, 3D, computer vision and geolocation is a bigger phenomenon than AR browsers or even augmented reality itself. AR is just the first manifestation of this convergence and of “anywhere, anytime” access to the virtual world, as discussed by Rob Manson on his blog.

To a certain extent, the work we have been discussing here on the geo mobile blog, using HTML5 to create web-based mapping applications, is a precursor to a much broader sensor-enabled web: one that uses devices such as the camera, GPS and compass not just to enable 2D mapping content but all kinds of applications that can exploit the sudden happenstance of millions of people carrying around dozens of sensors, cameras and powerful computing/graphics processors in their pockets.
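As a small taste of that sensor-enabled web: compass-style heading data is already reachable from page script via the `deviceorientation` event. The conversion helper below is my own illustration, and reading the heading as roughly 360 − alpha is only an approximation that varies by device:

```javascript
// Convert a compass heading (degrees clockwise from north) into a
// rough cardinal direction.
function headingToCardinal(deg) {
  var points = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"];
  var normalized = ((deg % 360) + 360) % 360; // clamp to 0..360
  return points[Math.round(normalized / 45) % 8];
}

// Browser wiring, guarded so the sketch is harmless outside a browser.
// e.alpha is the device's rotation around the z-axis in degrees.
if (typeof window !== "undefined" && "ondeviceorientation" in window) {
  window.addEventListener("deviceorientation", function (e) {
    if (e.alpha !== null) {
      console.log("Facing roughly " + headingToCardinal(360 - e.alpha));
    }
  });
}
```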

Coming back from this meeting, I’m feeling pretty upbeat about the prospects for AR and the emerging sensor-augmented web. Let’s hope we are able to keep the momentum going for the next meeting in Austin.

Usability and Time Sliders

As we move into the final phases of STEEV, thoughts now turn to user testing and usability. OK, so we’ve built a visualisation tool to view time-series energy efficiency variables for a specific geographic area. But just how intuitive is the interface? How easy is it to use, for the practitioner or for the novice user? What functionality is missing, and what is superfluous?

The first step was to meet with the EDINA training officer (who has experience in conducting usability and user testing for EDINA projects and services). It was immediately apparent that work was required on workflow and instruction, and a detailed list of requirements has been assembled for implementation.

For the next step in this process we have approached a usability expert to take an overall look at the tool’s features and functionality and to identify and iron out possible ambiguities. At the end of this process we hope to have a usability guide detailing both process and outcome, which we will make available through the STEEV blog.

Our aim is to have conducted this exercise in time for the STEEV/GECO Green Energy Tech Workshop on 13 October. This will give practitioners the opportunity to use the tool in earnest whilst providing further feedback from an expert’s perspective.

Expect a future blog post detailing the results of the extended usability exercise.

Regarding part two of the title: OK, so there wasn’t a fit between STEEV and Memento. What does fit, however, is the deployment of the Google Earth Time Slider to view the policy-based scenarios (as provided by our project partner) for each of the four modelled outputs over time (namely: SAP rating, energy, CO2 emissions, and CO2 emissions based on 1990 levels). Our GI analyst, Lasma Sietinsone (covering for Fiona, who is currently on maternity leave), has created a dozen KML files which can be viewed in Google Earth using the Time Slider utility. The KML files can be downloaded from http://steevsrv.edina.ac.uk/data/.
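For readers who want to try something similar, a minimal KML placemark carrying a TimeSpan element is enough to make the Time Slider appear in Google Earth. The name, dates and coordinates below are purely illustrative and not taken from the STEEV data:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <!-- Illustrative example only: Google Earth shows the Time Slider
           whenever loaded features carry TimeSpan (or TimeStamp) elements. -->
      <name>Example output, 1990 scenario</name>
      <TimeSpan>
        <begin>1990-01-01</begin>
        <end>1999-12-31</end>
      </TimeSpan>
      <Point>
        <coordinates>-3.19,55.95,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
```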

Note: Guidance notes on viewing the KML files in Google Earth are available.

Alternatively, view the ‘Using the Time Slider bar in Google Earth’ YouTube clip.
