Planning the Rest of the Quarter

Feedback from the Microsoft Presentation

I gave a presentation at Microsoft QuArC a few weeks ago; see my previous post for details and slides.  Group member Alex Bocharov encouraged me to focus on demonstrating the value of the MITM optimization.  The whole group wanted me to back up more of my claims numerically (and they probably wanted more interesting claims, period!).

Specifically, Alex was interested in the efficiency of the one-dimensional index I used to match up left and right matrices: for a given left matrix L, how well does the index find right matrices that are actually close to L?  As it turns out, only 0.0003% of the matrices returned by the index are close.  Since the number of matrices the index returns will probably grow exponentially as sequence length increases, a speedup here could noticeably improve the algorithm’s performance!

A Quick Review…

One key part of the “meet in the middle” algorithm is the “meeting” part: given a left matrix, my program needs to find the corresponding nearby right matrices.  In essence, it searches for all points in a point cloud that are near a desired point, using the “Fowler” distance measure.

To optimize this “meeting” process in my original paper, I used a one-dimensional index: all of the right matrices are indexed by their distance from the target gate.  To query for all right matrices near some left matrix L, I first find L’s distance D from the target gate.  Since the Fowler distance measure satisfies the triangle inequality, any matrix that is close to L must also have a distance to the target gate within some range R around D.  I used a binary tree to query for all right matrices whose distances to the target gate fall within that range.
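Concretely, the query looks something like the sketch below, using a std::multimap as the binary tree.  This is a minimal illustration rather than my actual code; MatrixId and the other names are made up.

    #include <map>
    #include <vector>

    // Sketch of the 1D index: right matrices keyed by their Fowler
    // distance to the target gate.  MatrixId stands in for however the
    // real implementation identifies a matrix.
    using MatrixId = int;
    std::multimap<double, MatrixId> index_1d;

    // Return every right matrix whose distance to the target gate lies
    // in [d - r, d + r].  By the triangle inequality, this candidate set
    // contains all right matrices within Fowler distance r of the left
    // matrix, plus false positives that still need an exact check.
    std::vector<MatrixId> query_1d(double d, double r) {
        std::vector<MatrixId> candidates;
        auto lo = index_1d.lower_bound(d - r);
        auto hi = index_1d.upper_bound(d + r);
        for (auto it = lo; it != hi; ++it)
            candidates.push_back(it->second);
        return candidates;
    }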

However, one dimension is not enough: only 0.0003% of the right matrices the query returned were close to L.  Thus, I need a data structure that supports range queries over more dimensions, such as a k-d tree.  (In fact, in my original MITM implementation, I never really measured the performance improvement offered by the 1D index over a linear search.  I hope to remedy this in the next iteration of the paper.)

Introducing FLANN

FLANN stands for Fast Library for Approximate Nearest Neighbors.  If you give it a set of data points, it will build an index that lets you rapidly find:

  1. all points within some radius around a given point.
  2. the k nearest neighbors of a given point (where k is an integer greater than 0).
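For example, a radius query against FLANN’s C++ API looks roughly like the sketch below.  This is a minimal illustration using the built-in Euclidean distance on 3D points; my actual code would use the Fowler measure and four-dimensional points.

    #include <flann/flann.hpp>
    #include <vector>

    int main() {
        // Four 3D points, stored row-major as FLANN expects.
        std::vector<float> data = {0,0,0,  1,0,0,  0,1,0,  5,5,5};
        flann::Matrix<float> points(data.data(), 4, 3);

        // Build a single k-d tree over the point cloud.
        flann::Index<flann::L2<float>> index(points, flann::KDTreeSingleIndexParams());
        index.buildIndex();

        // Find all points within radius sqrt(2) of the origin.  Note that
        // flann::L2 reports squared distances, so the radius is squared too.
        std::vector<float> q = {0, 0, 0};
        flann::Matrix<float> query(q.data(), 1, 3);
        std::vector<std::vector<int>> indices;
        std::vector<std::vector<float>> dists;
        index.radiusSearch(query, indices, dists, 2.0f, flann::SearchParams());

        // index.knnSearch(query, indices, dists, k, flann::SearchParams())
        // would instead return the k nearest neighbors.
        return 0;
    }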

Compared to other similar libraries, FLANN is very flexible:

  • It allows me to specify my own distance measure.  Most k-d tree libraries are hard-wired to the Euclidean distance, which doesn’t apply to my problem.
  • It supports points with any dimensionality (3D, 4D, etc.).  Many libraries are limited to just 3D.
  • FLANN supports a number of different index types, including k-d trees.  In fact, if I use three-dimensional points, it even has a k-d tree implementation that runs on NVIDIA graphics cards!

The manual mentions a special requirement for distance measures used with k-d trees: the distance must be additive.  In other words, the final distance must be computable by summing distances between individual components.  Fortunately, with the matrix parameterization that Aram and the QuArC group use, I can represent every matrix as a vector of just four (or even just three) real numbers, and I can reduce the Fowler distance calculation to a dot product of two matrices’ vectors.  Since the dot product meets this definition of additive, I can take advantage of FLANN’s k-d tree implementation with relatively little effort.
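Assuming I’m reading FLANN’s distance interface correctly, the functor would look something like the sketch below.  FowlerKdDistance is a hypothetical name; the trick is that for normalized vectors, the squared Euclidean distance equals 2 − 2·(dot product), so summing squared per-component differences tracks the dot-product formulation while keeping the distance additive.

    #include <cstddef>

    // Hypothetical sketch of a FLANN-compatible distance functor, modeled
    // on FLANN's built-in distances.  Assuming the four-component vectors
    // are normalized, ||a - b||^2 = 2 - 2*dot(a, b), so this per-component
    // sum of squared differences is equivalent to the dot-product form
    // while satisfying the "additive" requirement for k-d trees.
    struct FowlerKdDistance {
        typedef bool is_kdtree_distance;   // promise: the distance is additive
        typedef float ElementType;
        typedef float ResultType;

        // Full distance between two vectors of length `size`.
        template <typename Iterator1, typename Iterator2>
        ResultType operator()(Iterator1 a, Iterator2 b, std::size_t size,
                              ResultType /*worst_dist*/ = -1) const {
            ResultType result = 0;
            for (std::size_t i = 0; i < size; ++i) {
                ResultType diff = static_cast<ResultType>(*a++ - *b++);
                result += diff * diff;
            }
            return result;
        }

        // Distance contribution of a single component; the k-d tree uses
        // this for pruning.
        template <typename U, typename V>
        ResultType accum_dist(const U& a, const V& b, int /*dim*/) const {
            return (a - b) * (a - b);
        }
    };

With something like this in place, the index from the earlier example would be declared as flann::Index<FowlerKdDistance> instead of flann::Index<flann::L2<float>>.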

Overcoming the Batch Limitation

FLANN’s API requires my program to provide all of the index’s data at once.  However, I would like to add new unique matrices as I find them.  Supporting incremental insertion could require some mucking around inside FLANN, which I’d prefer to avoid (at least initially).

Instead, I’ll use FLANN alongside my original index.  All unique matrices will be added to the 1D index as they are discovered.  Then, once the 1D index grows “big enough”, the program will rebuild the FLANN index from all of the matrices and empty the 1D index.  I’ll need to figure out a value of “big enough” that balances FLANN’s index build time against the performance gain of higher-quality search results; the optimal value is the one that yields the lowest run time to enumerate all sequences of some fixed length.
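Here is an illustrative sketch of that hybrid structure.  All of the names are made up, and the FLANN and Fowler-distance calls are stubbed out, but it shows where the “big enough” threshold fits in.

    #include <cstddef>
    #include <vector>

    using Point = std::vector<float>;   // one parameterized matrix

    class HybridIndex {
    public:
        explicit HybridIndex(std::size_t rebuild_threshold)
            : threshold_(rebuild_threshold) {}

        // New unique matrices go into the small incremental structure
        // (the 1D index).  Once it is "big enough", fold everything into
        // a freshly built FLANN index and empty the incremental structure.
        void add(const Point& p) {
            recent_.push_back(p);
            if (recent_.size() >= threshold_) {
                all_.insert(all_.end(), recent_.begin(), recent_.end());
                rebuildFlannIndex(all_);   // stub for the FLANN rebuild
                recent_.clear();
            }
        }

        // Queries must consult both structures, since the newest matrices
        // live only in the incremental one.
        std::vector<Point> nearby(const Point& q, float radius) const {
            std::vector<Point> out = flannRadiusSearch(q, radius);   // stub
            for (const Point& p : recent_)
                if (fowlerDistance(p, q) <= radius)                  // stub
                    out.push_back(p);
            return out;
        }

    private:
        // Stubs standing in for the real FLANN index and distance code.
        void rebuildFlannIndex(const std::vector<Point>&) {}
        std::vector<Point> flannRadiusSearch(const Point&, float) const { return {}; }
        float fowlerDistance(const Point&, const Point&) const { return 0.0f; }

        std::size_t threshold_;
        std::vector<Point> recent_;  // stands in for the 1D index
        std::vector<Point> all_;     // everything seen so far
    };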

The Rest of the Quarter

Now that half of the quarter is gone, I’ve pared down my expectations quite a bit.  This quarter, I plan to:

  1. Get a k-d tree optimization, such as FLANN, working well (and quantify the speedup it provides).
  2. Finish the FPGA implementation of the algorithm, with Austin’s optimizations.
  3. Complete my research paper, answering some open research questions, such as the performance improvement offered by the 1D index.

I don’t think I’ll be able to extend my algorithm to run on a MapReduce system by the end of the quarter.  However, if there’s time, it would be an interesting thing to try!
