June 11, 2004

Recording the Forces on Extreme Sports

[Image: skateboard]

I went to a Ubiquitous Computing Capstone course final project review today. There were lots of projects being reviewed, but I was interested in one in particular.

This group of undergrads built a custom printed circuit board with a gyroscope, a three-axis accelerometer, and a wireless mote. Their goal was to build a device that could be placed on the equipment of extreme sports competitors (skateboarding, snowboarding, etc.) and record the forces the competitor experiences. The idea is that, given appropriate software interpretation of the data, a real-time evaluation of the competition could be provided to judges to help them disambiguate particular stunts that might be hard to see. Kind of like instant video replay, but with sensors.

Anyway, I thought the idea was pretty cool and really marketable in that niche.

Unfortunately, I think the hardest part is the software interpretation of the data, and I don't think the group really even took a crack at that. I was also hoping to see some real data from skateboarders on a half-pipe, but I didn't get that either. Nonetheless, the idea seemed like a good one, and building the hardware is part of the battle.
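Just to sketch what I mean by "interpretation," here's the kind of thing I'd want the software to do (a toy Python sketch of my own, not anything the group wrote): treat stretches where the accelerometer magnitude drops toward zero g as airborne, and integrate the gyro over them to estimate spin. The threshold and sample rate are made-up numbers.

    import math

    FREE_FALL_G = 0.3   # |a| below this (in g) suggests the board is airborne
    SAMPLE_HZ = 100     # assumed sensor sampling rate

    def detect_airs(samples):
        """Find airborne stretches in (ax, ay, az, gyro_z) samples (g, deg/s).

        During free fall the accelerometer magnitude drops toward zero g;
        integrating the yaw gyro over that window gives a crude spin estimate.
        """
        airs = []
        start = None
        spin = 0.0
        for i, (ax, ay, az, gz) in enumerate(samples):
            mag = math.sqrt(ax * ax + ay * ay + az * az)
            if mag < FREE_FALL_G:
                if start is None:
                    start, spin = i, 0.0
                spin += gz / SAMPLE_HZ   # deg/s * s = degrees of rotation
            elif start is not None:
                airtime = (i - start) / SAMPLE_HZ
                airs.append((airtime, spin))   # e.g. (0.8, 357.0) ~ a 360
                start = None
        return airs

Real data would be far messier than this, but even something this crude would let a judge tell a 360 from a 540, which is where the value is.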

Posted by djp3 at 3:08 PM

June 8, 2004

Discussion of WiFi Tag/Beacons

[Image: wifi]

This is a talk by Adam Rea about some new hardware that might be interesting for new research. It's a matchbook-sized device that runs a Wi-Fi access point, a web server, and some basic I/O. It's sort of like the iMote with Wi-Fi. It costs about $80.00. It could be a high-bandwidth, localized data source.

It has the perennial problem of power consumption for ubiquitous computing devices.

In my opinion, the key interesting feature of this device is that it could be the authoritative source of data for something that you don't want to be available to the greater world. The data would have to lose value quickly as it got old, because you are making it available locally and it would be easy to transmit once you had a copy of it. The only example that springs to mind is grocery store prices. It might be nice for customers to know the prices of everything in a store when they walk in, but the grocery store doesn't want that information getting out because it would enable people to comparison shop without coming to the physical location. Someone who walked into the store could transmit that information to someone outside the store, but the prices would quickly grow stale and unreliable.
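To make the staleness argument concrete, here's a toy Python sketch of how a beacon's price data might carry a freshness stamp so that relayed copies visibly expire. The 15-minute TTL and the field names are placeholders of my own, not anything from the talk.

    import json
    import time

    PRICE_TTL = 15 * 60   # assume a price snapshot is trustworthy for 15 min

    def stamp_prices(prices):
        """Wrap a price list with the time it was read off the beacon."""
        return {"fetched_at": time.time(), "prices": prices}

    def is_stale(snapshot):
        """A relayed copy loses its value once the TTL has passed."""
        return time.time() - snapshot["fetched_at"] > PRICE_TTL

    snapshot = stamp_prices({"milk": 2.89, "eggs": 1.49})
    print(json.dumps(snapshot))
    print("stale yet?", is_stale(snapshot))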

I am not overwhelmed by the impact of this class of data, though, so I don't think there is a lot of interesting research to be done here.

Posted by djp3 at 10:18 AM

Learning Models of Human Behavior with Sequential Patterns

[Image: graph]

This paper was further reading along the lines of model building for the anesthesiology application. It was presented at an AAAI workshop in 2002 (which I actually attended). The problem the authors were addressing was that of trying to pull statistically significant, interesting patterns out of a stream of sensor data.

Their data came from a handful of households that had been instrumented with simple sensors along the lines of Intille's work. The sensors were primarily motion sensors set up in various rooms of four real households. Between 10 and 20 sensors were installed, and they collected data for between 63 and 123 days; about 60% of the data turned out to be useful.

From this data, they applied some sequential pattern discovery algorithms to try to find sequences of sensor firings that were well supported by the data. The results were swamped with noise. They basically ended up finding support for so many patterns that the problem of interpreting the raw data became the problem of interpreting the patterns they discovered. As a result, they resorted to putting a number of heuristic filters into the system. The filters made sense but were unsatisfying from a design point of view: they treated events close in time as likely being from the same activity, they "manually" removed subpatterns which were subsumed by others, and they wrote some filters to process individual sensors whose design didn't match well with their application.
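For my own understanding, here's a rough Python sketch of the kind of pipeline they describe: session grouping by time proximity, support counting over short ordered subsequences, and removal of subsumed subpatterns. The window size, pattern lengths, and support threshold are my inventions, not their parameters.

    from collections import Counter
    from itertools import combinations

    WINDOW = 30.0      # seconds; events closer than this count as one activity
    MIN_SUPPORT = 5    # sessions a pattern must appear in to be kept

    def is_subseq(p, q):
        """True if tuple p appears in order (not necessarily adjacent) in q."""
        it = iter(q)
        return all(x in it for x in p)

    def find_patterns(events):
        """events: a time-sorted list of (timestamp, sensor_id) pairs."""
        # 1. Group temporally close events into sessions (their first filter).
        sessions, current = [], []
        for t, sensor in events:
            if current and t - current[-1][0] > WINDOW:
                sessions.append(current)
                current = []
            current.append((t, sensor))
        if current:
            sessions.append(current)

        # 2. Count, per session, every short ordered subsequence of firings.
        support = Counter()
        for session in sessions:
            firings = tuple(s for _, s in session)
            for length in (2, 3):
                for pattern in set(combinations(firings, length)):
                    support[pattern] += 1
        frequent = {p: c for p, c in support.items() if c >= MIN_SUPPORT}

        # 3. Drop a pattern subsumed by a superpattern with equal support.
        def subsumed(p):
            return any(len(q) > len(p) and c >= frequent[p] and is_subseq(p, q)
                       for q, c in frequent.items())

        return {p: c for p, c in frequent.items() if not subsumed(p)}

Even in this toy form you can see the design smell: steps 1 and 3 are exactly the kind of hand-tuned cleanup that shouldn't be necessary if the scoring were principled.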

I agree with their own critique of the work, in which they say that they didn't do any sort of analysis to determine the novelty of a pattern. Comparing what they saw to a background model would have caused the longer sequences to appear more unusual in a natural way, without adding hand-tuned rules.
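A back-of-the-envelope version of that comparison might look like the following sketch, which scores each frequent pattern against an independence model of the individual sensors. The input shapes match the previous sketch, and the add-one smoothing is ad hoc.

    import math
    from collections import Counter

    def novelty_scores(sessions, patterns):
        """Score patterns against an independence (background) model.

        sessions: lists of sensor IDs; patterns: pattern tuple -> number of
        sessions it occurred in (e.g. the output of the sketch above). Under
        the background model sensors fire independently, so a long pattern's
        expected count is a product of small probabilities; long sequences
        then look surprising naturally, with no hand-tuned length rules.
        """
        n = len(sessions)
        fires = Counter()
        for session in sessions:
            for sensor in set(session):
                fires[sensor] += 1
        p_fire = {s: c / n for s, c in fires.items()}

        scores = {}
        for pattern, observed in patterns.items():
            expected = n * math.prod(p_fire.get(s, 1.0 / n) for s in pattern)
            # Log-lift: 0 means "as often as chance would predict".
            scores[pattern] = math.log((observed + 1.0) / (expected + 1.0))
        return scores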

I felt like both their sensors and their patterns lacked anything in the way of semantics. It was impossible to draw any conclusions about what was going on in the patterns, primarily because the individual sensor readings carry no meaning on their own. It's hard to interpret a series of motion sensors firing, even if it happens regularly.

They also had the real-world problems of strange user routines, differing patterns on weekdays, pets messing up the data, sensors failing, and vacations, all of which we, as researchers, tend to write off as anomalies but which are probably the norm in a household.

I appreciated this paper because it helped to clarify what our anesthesiology work is not doing. In particular, we are not trying to pull signal out of a lot of noise. We are going to be doing more of a model-merging operation than a model-discovery operation. Our models are clearly delineated in time and have semantics associated with them, provided by the person who is recording the data stream.

This paper had some good data and seemed a little bit like computational biology work in which computers are put to the task of finding conserved sequences in genomes.

Posted by djp3 at 9:43 AM