July 1, 2004

GPPS: A Gaussian Process Positioning System for Cellular Networks

Lighthouse beaconing into the fog

This paper presented a method of performing location estimation based on observed wireless signal strength.

There are basically two ways of approaching this problem: Fingerprinting and Access Point Modeling.

Fingerprinting means taking a bunch of samples in the area over which you want to localize. Then, when you want to infer location, you match the signal strengths you see against the calibration points and conclude that you are collocated with the most similar calibration point. There are many ways to extend this technique, such as intelligently weighting the k nearest calibration points or adding a user model that provides a prior and prevents inferring a location too far from your previous position estimate.
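As a rough sketch of the weighted variant of this idea (the class names, the inverse-distance weighting, and everything else below are my own illustration, not code from the paper):

-------------------------------------------------------
KnnFingerprint.java
-------------------------------------------------------
// Rough sketch of inverse-distance-weighted k-nearest-neighbor fingerprinting.
// All names and the weighting scheme are illustrative, not from the paper.
public class KnnFingerprint {

    // One calibration sample: a known (x, y) and the signal strengths seen there.
    static class Fingerprint {
        double x, y;
        double[] rss;   // received signal strength from each access point
        Fingerprint(double x, double y, double[] rss) {
            this.x = x; this.y = y; this.rss = rss;
        }
    }

    // Euclidean distance in signal-strength space.
    static double signalDistance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Estimate (x, y) as the inverse-distance-weighted mean of the k calibration
    // points whose signal strengths look most like the observed reading.
    static double[] locate(Fingerprint[] calibration, double[] observed, int k) {
        int n = calibration.length;
        double[] dist = new double[n];
        boolean[] used = new boolean[n];
        for (int i = 0; i < n; i++) {
            dist[i] = signalDistance(calibration[i].rss, observed);
        }
        double wx = 0, wy = 0, wSum = 0;
        for (int pick = 0; pick < Math.min(k, n); pick++) {
            int best = -1;
            for (int i = 0; i < n; i++) {
                if (!used[i] && (best < 0 || dist[i] < dist[best])) {
                    best = i;
                }
            }
            used[best] = true;
            double w = 1.0 / (dist[best] + 1e-6);   // closer fingerprints count more
            wx += w * calibration[best].x;
            wy += w * calibration[best].y;
            wSum += w;
        }
        return new double[] { wx / wSum, wy / wSum };
    }
}

With k = 1 this collapses to plain nearest neighbor; larger k smooths the estimate across several calibration points.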

Access Point Modeling means that you try to develop an accurate model of the wireless beacons whose signal strength you are observing. Then, when you observe a signal strength, you can determine from the model which location is likely to generate such a reading.
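As a toy illustration of what one such per-beacon model might look like, here is the generic log-distance path-loss formula with made-up constants; this is not the model from the paper:

-------------------------------------------------------
PathLossModel.java
-------------------------------------------------------
// Sketch of a per-access-point signal model using the standard log-distance
// path-loss formula. The constants are placeholders, not values from the paper.
public class PathLossModel {
    double apX, apY;                 // assumed access point position
    double txPowerDbm = -30.0;       // signal strength at the 1 m reference distance
    double pathLossExponent = 3.0;   // ~2 in free space, larger indoors

    PathLossModel(double apX, double apY) {
        this.apX = apX;
        this.apY = apY;
    }

    // Predicted received signal strength (dBm) at location (x, y).
    double predictedRss(double x, double y) {
        double dx = x - apX, dy = y - apY;
        double d = Math.max(Math.sqrt(dx * dx + dy * dy), 1.0);
        return txPowerDbm - 10.0 * pathLossExponent * Math.log10(d);
    }
}

Given one such model per access point, localization becomes a search for the (x, y) whose predicted signal strengths best match the observed ones.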

This paper presented a fingerprinting method that generalizes k-nearest neighbors to include all the calibration points. The downside is that it still requires calibration points, but it is better than most fingerprinting techniques because it gets reasonable accuracy with far fewer calibration samples. Based on the calibration points, it uses a kernel method they call Gaussian Process Positioning.

The idea is to treat the signal strength from each access point as a Gaussian process over locations, with the calibration points as the training samples, and to learn a kernel function which provides the entries of a covariance matrix. Then, when you want to predict the signal strength at a new location, you plug that location into the kernel function to extend the covariance matrix to cover all the calibration points plus the new location, and use that matrix to estimate the signal strength there.
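To be concrete, in standard Gaussian process regression notation (my summary, not necessarily the paper's exact formulation), the predicted signal strength at a new location x_* is

\mu(x_*) = k_*^T (K + \sigma_n^2 I)^{-1} s

with predictive variance

\sigma^2(x_*) = k(x_*, x_*) - k_*^T (K + \sigma_n^2 I)^{-1} k_*

where K is the covariance matrix over the calibration locations (K_{ij} = k(x_i, x_j)), k_* is the vector of kernel values between x_* and each calibration location, s is the vector of measured signal strengths at the calibration points, and \sigma_n^2 is the noise variance.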

They tested the system in a complex indoor environment. It performed worse than RADAR with dense calibration, but gradually did better as the calibration points became sparser. Notably, the choice of kernel is critical to the success of the system when the calibration density is between one point every 25 m^2 and one every 12 m^2. Less dense than that and GPPS does better than nearest-neighbor techniques; more dense and nearest neighbor does better.

This system is basically learning which calibration points vary in a similar way and is therefore learning an infrastructure-based model; it assumes that all dynamic variance is captured by the Gaussian white-noise error model. There is no dynamic calibration point selection.

Each access point has its own kernel function.

Nearest neighbor is a special case of the GPPS technique: GPPS uses all the calibration points to estimate position, while nearest neighbor uses only a subset. The kernel method provides a principled way of weighting all the calibration points.

Posted by djp3 at 9:46 AM

June 29, 2004

Java vs. C++ benchmarking

[Performance graph]

Recently, at a joint Intel/University of Washington meeting, we had a discussion in which the performance of Java was called into question. Discussions like this rarely have much ammunition to support either side. However, in this case, there seems to be some evidence that if you run your JVM correctly then Java is at least as good as, and sometimes better than, C++. This evidence comes from a CS major at Rensselaer Polytechnic Institute named Keith Lea and was recently featured on Slashdot. His original article is here.
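For what it's worth, the usual first step in "running your JVM correctly" is to use the HotSpot server compiler rather than the default client VM; whether that is what makes the difference in these particular numbers is my assumption, not something verified here. With a hypothetical benchmark class MyBenchmark, that looks like:

javac MyBenchmark.java
java -server MyBenchmark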

Update:

Henry just ran a test that further supports this conclusion. Here are the guts of what he sent:

I just compared the following two little programs, in C and Java
(Sun 1.42, linux).

gcc CSpeedArray.c -o CSpeedArray

time CSpeedArray

real 0m11.837s
user 0m11.830s
sys 0m0.000s

gcc -O2 CSpeedArray.c -o CSpeedArray

time CSpeedArray

real 0m3.451s
user 0m3.420s
sys 0m0.030s

time java SpeedArray

real 0m3.725s
user 0m3.640s
sys 0m0.030s

I didn't believe Don when he told me that Java was now just as fast as
C... but seeing is believing. No more C / C++ for me!

-------------------------------------------------------
CSpeedArray.c
-------------------------------------------------------
#include <stdlib.h>

int main(int argc, char ** argv){
    int * a;
    int i;
    int j;

    /* one million ints */
    a = malloc( sizeof(int) * 1000000 );

    /* zero the array */
    for (i = 0; i < 1000000; i++){
        a[i] = 0;
    }
    /* increment every element 1000 times */
    for (j = 0; j < 1000; j++){
        for (i = 0; i < 1000000; i++){
            a[i] = a[i] + 1;
        }
    }
    return 0;
}

-------------------------------------------------------
SpeedArray.java
-------------------------------------------------------
public class SpeedArray {
    public static void main(String arg[]) {
        // one million ints
        int[] a = new int[1000000];
        // zero the array
        for (int i = 0; i < 1000000; i++){
            a[i] = 0;
        }
        // increment every element 1000 times
        for (int j = 0; j < 1000; j++){
            for (int i = 0; i < 1000000; i++){
                a[i] = a[i] + 1;
            }
        }
    }
}

Posted by djp3 at 10:30 AM