How can you use the data?

The Push Snowboarding team are always discussing ideas for what could be built, developed, created or even plugged into the data we’ve captured. To name but a few, we’ve conjured up ideas of slowing down music tracks linked to the data, sound effects when you stack your ride, or geo-tagging a certain jump on the mountain. But yesterday we asked our Twitter followers how they’d been thinking of using the data. The responses, a mixture of ideas and questions, prompted this blog post – read on for some of the more technical responses from Clovis, who developed the Push Snowboarding Qt app you can see below.

Hello All!

It’s been awesome to hear the feedback from the community today on Twitter. For example, Iain Wallace (@the_accidental) was quick to point out that we’re now using GPS for speed.

That change happened after our first R&D tests in Kaunertal, Austria, where we discovered that our earlier speed-measuring systems were too sensitive to orientation (e.g. the pitot tubes) or other external factors. R&D Episode 1 was filmed during those first tests. The GPS embedded in the phone, which we also tried out on that first snow test, produced some really nice and clean data (not perfect, but pretty good), so, in favour of robustness and to make the app cool even for people without the external sensors, we opted to use the N8’s GPS.

Beyond spotting our switch to GPS for measuring speed, Iain also came up with one of the ideas we’ve been talking about lately: Kalman filtering to better estimate the variables. For those not familiar with Kalman filtering, it’s a sensor-fusion technique that, in short, combines measurements from different sensors to produce better estimates. Different sensors generally fail under different circumstances and at different scales; using this kind of technique, you can compute more accurate estimates for a variable (e.g. speed, orientation) than the raw values you had from each sensor separately. A good example of this in use would be autopilot systems for airplanes and rockets.
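
To make that concrete, here’s a minimal sketch of a one-dimensional Kalman filter that fuses accelerometer-integrated speed with GPS speed fixes. The class name and noise values are made up purely to illustrate the general technique; this isn’t code from our app:

```cpp
#include <iostream>

// Minimal 1-D Kalman filter: predict speed by integrating acceleration,
// then correct the prediction whenever a GPS speed fix arrives.
class SpeedKalman1D {
public:
    // q: process noise (how fast we stop trusting pure integration)
    // r: measurement noise (how noisy GPS speed fixes are)
    SpeedKalman1D(double q, double r) : x(0.0), p(1.0), q(q), r(r) {}

    // Predict step: integrate board acceleration over dt seconds.
    void predict(double accel, double dt) {
        x += accel * dt;   // new speed estimate
        p += q * dt;       // uncertainty grows while we only integrate
    }

    // Update step: blend in a GPS speed measurement.
    void update(double gpsSpeed) {
        const double k = p / (p + r); // Kalman gain: 0 = ignore GPS, 1 = trust GPS fully
        x += k * (gpsSpeed - x);
        p *= (1.0 - k);               // uncertainty shrinks after a measurement
    }

    double speed() const { return x; }

private:
    double x; // estimated speed (m/s)
    double p; // estimate variance
    double q; // process noise per second
    double r; // GPS measurement noise
};

int main() {
    SpeedKalman1D filter(0.5, 2.0);
    filter.predict(1.2, 0.1);  // 1.2 m/s^2 for 100 ms from the accelerometer
    filter.update(4.1);        // GPS reports 4.1 m/s
    std::cout << "fused speed: " << filter.speed() << " m/s\n";
    return 0;
}
```

The fused estimate sits between what the accelerometer integration and the GPS each say on their own, weighted by how much we trust each source.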

Right now, we’re just logging and displaying the raw values for most variables, but we have spent a lot of time on the app architecture, building it in such a way that if somebody wants to design that kind of filtering, they don’t need to worry about how to capture each sensor’s readings, nor about the Bluetooth internals, nor even about whether the sensor is on the phone or attached to the board; the developer can just make their module “subscribe” to a certain type of sensor.

For example, Iain Wallace also suggested/requested “velocity/pose estimation from kalman filtering (or similar) GPS + accelerometer + magnetometer readings.” For this, a developer could implement a new device (subclassing a generic device class) and “tell it” to subscribe to measurements from the phone’s GPS and from the Motion box (which screws into the bindings channel and includes accelerometers and magnetometers), and “magically” these readings would be delivered to their class when available, as sketched below.
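
To give a flavour of the pattern (the real class and signal names will be in the developer wiki; everything below is made up for illustration), such a device could look roughly like this:

```cpp
#include <QObject>
#include <QVariantList>

// Illustrative base class: the framework delivers subscribed sensor
// readings to onReading(), whether the sensor lives on the phone or
// on a Bluetooth box. All names here are hypothetical.
class GenericDevice : public QObject
{
    Q_OBJECT
public:
    enum SensorType { Gps, Accelerometer, Magnetometer };
    virtual ~GenericDevice() {}

signals:
    void estimateReady(const QVariantList &pose); // publish derived values

protected:
    void subscribe(SensorType /*type*/)
    {
        // The real framework would register this device for that
        // sensor's readings here.
    }
    virtual void onReading(SensorType type, const QVariantList &values) = 0;
};

// A fused-pose "device": it consumes raw GPS/accelerometer/magnetometer
// readings and publishes pose estimates other modules can subscribe to.
class FusedPoseDevice : public GenericDevice
{
    Q_OBJECT
public:
    FusedPoseDevice()
    {
        subscribe(Gps);
        subscribe(Accelerometer);
        subscribe(Magnetometer);
    }

protected:
    virtual void onReading(SensorType /*type*/, const QVariantList &/*values*/)
    {
        // Feed the reading into a Kalman filter (as sketched earlier),
        // then emit the fused estimate for downstream subscribers:
        // emit estimateReady(currentPoseEstimate());
    }
};
```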

This goes even further! If someone wants to use that [better] pose estimation to build something like @lcuk’s suggestion of an “Augmented Reality app and goggles that let you see ghost riders from previous runs”, they could just “subscribe” to the pose estimates generated by the other developer’s abstract “device”.
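
Continuing with the made-up names from the sketch above, the augmented-reality module would simply connect to the fused device’s output rather than touching any raw sensors (GhostRiderOverlay and its slot are hypothetical):

```cpp
// Hypothetical consumer: an AR overlay that replays "ghost riders"
// from fused pose estimates, never touching raw sensor data.
FusedPoseDevice *pose = new FusedPoseDevice;
GhostRiderOverlay *overlay = new GhostRiderOverlay; // made-up class

QObject::connect(pose,    SIGNAL(estimateReady(QVariantList)),
                 overlay, SLOT(renderGhostAt(QVariantList)));
```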

Soon we’ll have a more detailed description of how to implement this kind of thing (class diagrams, examples, hello worlds) in the forum/wiki for developers.

I hope that answered some of your questions; we always love to hear from you guys!

And remember this is all open! GPL!

Now I’ll get back to my current coding (having fun with QBluetooth to make it deal better with devices that go out of range and then come back)!

Bye!

Clovis.
