I finally finished the first draft of my paper and submitted it for my honors project. Apparently I improved the speed of Austin Fowler’s algorithm by about two orders of magnitude, by making it use a lot more memory. Here’s a graph relating precision to running time: my optimized versions sit nearly two orders of magnitude below the version without my optimizations (which has only Fowler’s existing optimizations).
NOTE: Ignore the trend lines; they should be fit only to the last five or six points, because the different runs don’t show distinct, consistent behavior until then.
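The speedup is basically a memory-for-time trade. Just to give the general flavor of that trade (this is a toy sketch, not the actual optimization from the paper, and the workload here is made up), caching repeated work looks like this in Python:

```python
# Toy illustration of trading memory for speed -- NOT the optimization from the
# paper, just the general idea: spend memory remembering answers so repeated
# work is skipped.
from functools import lru_cache
import time

def slow_distance(a, b):
    # Stand-in for an expensive computation that gets called over and over.
    time.sleep(0.001)
    return abs(a - b)

@lru_cache(maxsize=None)   # remember every answer we have already computed
def cached_distance(a, b):
    return slow_distance(a, b)

# 5,000 calls, but only 50 distinct (a, b) pairs among them.
pairs = [(i % 50, (i * 7) % 50) for i in range(5_000)]

start = time.perf_counter()
_ = [slow_distance(a, b) for a, b in pairs]
print(f"uncached: {time.perf_counter() - start:.2f}s")   # ~5 s of repeated work

start = time.perf_counter()
_ = [cached_distance(a, b) for a, b in pairs]
print(f"cached:   {time.perf_counter() - start:.2f}s")   # only 50 pairs actually computed
```

In the toy, the cache turns 5,000 computations into 50; the real optimization is different in its details, but the shape of the trade is the same.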
There are a ton of tiny fixes and optimizations I plan to try when I have time, and I hope to really finish the research next quarter. That’s also when Paul, Aram, and Carl, the advisors who guided much of this work, will tell me everything I’ll need to fix before the paper is published. I’m looking forward to the k-d tree optimization in particular (sketched below): my intuition is that the part of the method it would speed up is the next-largest bottleneck. I know I probably ought to have a finished product for my honors project, but true research is never finished!
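For anyone curious what a k-d tree buys, here’s a generic sketch (random made-up points, not my actual data) of how it replaces a linear scan with a fast nearest-neighbour query:

```python
# Generic k-d tree illustration -- not the code from my paper, just the data
# structure itself on random points.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.standard_normal((100_000, 3))   # hypothetical database of candidates
queries = rng.standard_normal((1_000, 3))    # hypothetical query points

# Brute force: compare a query against every point, O(n) per query.
def nearest_brute(points, q):
    return np.argmin(np.linalg.norm(points - q, axis=1))

# k-d tree: build once, then each query costs roughly O(log n).
tree = cKDTree(points)
dists, idxs = tree.query(queries)            # nearest neighbour of each query

# Sanity check that both approaches agree on the first query.
assert idxs[0] == nearest_brute(points, queries[0])
```

Building the tree costs some memory and time up front, but each lookup drops from O(n) to roughly O(log n), which is why that step looks like the natural next target.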