## January 14, 2005

### IJCAI update

Data collection is complete:

- 11 activities total
- 23 training runs of the activities by themselves (2 each, plus an extra)
- 10 full breakfast runs with all 11 activities completed and interleaved
- 2 incomplete breakfast runs with a subset of the 11 activities completed and interleaved
- A total of 60 tags deployed, of two types: the round dishwasher type and the flat sticker type
- Data collection was done with two hands and two hand readers

Posted by djp3 at 11:20 AM | Comments (0) | TrackBack (0)

## January 12, 2005

### IJCAI update

Math review for learning with particle filters.
Posted by djp3 at 3:25 PM | Comments (0) | TrackBack (0)

## January 11, 2005

### IJCAI update

Here is a visualization of the data run from this morning. The x-axis is time in seconds, the y-axis is objects, and a circle is an observation. Some features to note: the vertical stripe on the upper left is setting the table; the three horizontal lines on the upper right are eating; and the dark swath of hits in the middle is steaming the milk for espresso!
Posted by djp3 at 12:38 PM | Comments (0) | TrackBack (0)

## January 9, 2005

### IJCAI update

"Current model." Okay, I've got training data for the individual activities: 2 traces of each of the 11 activities, mostly in the kitchen, all in the house. I started collecting a full run of all the activities when I ran out of juice in the gloves. I'm working with Matthai to get redundant gloves. The gloves work for about 8 minutes and then trail off, and I think I need about 30 minutes of data per run. If I'm not able to get in touch with Matthai by the end of today, I'm going to swing by IRS to see if I can find the charger and other gloves myself. Data collection is currently stalled.

Meanwhile, I'm trying to figure out how to model this stuff scalably given the recently discovered limits of GMTK. These end up being per-time-slice limits, which I hadn't been paying much attention to; I was focusing on the inter-temporal limits. The limits are:

- I can model four activities total, allowing complete interleaving, with the current model. That ends up being 233,280 states per second, which works for 30 minutes with half-second samples.
- I can model five activities total, allowing complete interleaving, with the current model for only 15 seconds before I run out of memory. That uses 874,800 states per slice.
- Using the current model, I would need about 8,418,025,440 states per second to do inference on the breakfast runs, which is not feasible with GMTK.

So I put together a model in which only three activities can be interleaved at a time, even though there are 11 total activities running. In order to do exact reasoning, this new model still needs 1,138,484,160 states. Also not feasible with GMTK.

So the conclusion seems to be to switch to particle filters. The big downside is that learning isn't implemented yet, which is going to be hard to do in the time frame we've got. Nonetheless, I'm going to push ahead in this direction, working with the code that I started at Intel this summer.
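For reference, here's a minimal sketch of what a bootstrap particle filter over activity states might look like. Everything in it is invented for illustration — the activity names, the object sets, and the transition/observation probabilities are stand-ins, not the actual model or the GMTK setup. The point is just the predict/weight/resample loop, whose cost grows with the particle count rather than the full joint state space.

```python
# Hypothetical bootstrap particle filter over activity states.
# All parameters below are illustrative assumptions, not the real model.
import random

ACTIVITIES = ["set_table", "make_espresso", "eat"]  # stand-ins for the 11 real activities
TRANS_STAY = 0.9   # assumed probability of staying in the current activity each step
OBS_MATCH = 0.8    # assumed probability an observed tag belongs to the true activity

# Which tagged objects each activity tends to touch (invented for illustration).
OBJECTS = {
    "set_table": {"plate", "fork", "knife"},
    "make_espresso": {"espresso_machine", "milk", "cup"},
    "eat": {"fork", "cup", "plate"},
}

def step_particle(activity):
    """Predict: usually stay in the same activity, occasionally switch."""
    if random.random() < TRANS_STAY:
        return activity
    return random.choice([a for a in ACTIVITIES if a != activity])

def weight(activity, observed_object):
    """Likelihood of seeing this tagged object given the hypothesized activity."""
    return OBS_MATCH if observed_object in OBJECTS[activity] else 1.0 - OBS_MATCH

def particle_filter(observations, n_particles=500):
    """Return the most popular activity hypothesis after each observation."""
    particles = [random.choice(ACTIVITIES) for _ in range(n_particles)]
    estimates = []
    for obs in observations:
        particles = [step_particle(p) for p in particles]               # predict
        weights = [weight(p, obs) for p in particles]                   # weight
        particles = random.choices(particles, weights, k=n_particles)   # resample
        estimates.append(max(set(particles), key=particles.count))
    return estimates

# Usage: a toy stream of tag reads drifting from table-setting to espresso-making.
estimates = particle_filter(["plate", "fork", "espresso_machine", "milk", "milk", "cup"])
print(estimates)
```

The tracking part needs no learning at all, which is the appeal; the hard part noted above — learning the transition and observation parameters from the glove traces — is what isn't implemented yet.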
Posted by djp3 at 12:38 PM | Comments (0) | TrackBack (0)