Question: Size of the DPF keys.
In the paper, the authors mention that for the SIFT dataset (for example), n = 35, meaning the universal hash range is 2^35. As a result, "The processing time is dominated by the DPF…" (quoted from Section 8.2 of the paper). So, from my point of view, the choice of n greatly affects overall performance.
But looking at it from another angle: there are only 1 million (2^20) elements in SIFT, and after the Leech lattice hashing step the table sizes become even smaller. The buckets in every table are then mapped to a fixed output range by a universal hash function. So the elements of SIFT are quite sparse in the universal hash range (2^20 points in a space of size 2^35).
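To make the sparsity intuition concrete, here is a quick back-of-envelope sketch (my own, not from the paper) using the birthday bound: the expected number of colliding pairs when m items are hashed uniformly into a range of size 2^n is roughly C(m,2)/2^n. I'm using m = 2^20 as a stand-in for the whole dataset, even though the paper's per-table sizes after Leech lattice hashing are smaller:

```python
def expected_collisions(m: int, n: int) -> float:
    """Expected number of colliding pairs when m items are hashed
    uniformly at random into a range of size 2**n: C(m, 2) / 2**n."""
    return m * (m - 1) / 2 / 2**n

m = 2**20  # roughly the number of SIFT base vectors (illustrative)
for n in (25, 30, 35):
    print(f"n = {n}: ~{expected_collisions(m, n):.0f} expected colliding pairs")
```

By this rough estimate, n = 35 leaves only a handful of expected collisions across 2^20 points, while n = 25 would give thousands, so I can see that n cannot be made arbitrarily small; my question is about where exactly the cutoff lies.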
My question is: how did the authors arrive at this n, and why not something smaller, like 30 or 25? From my understanding of DPF, even n = 34 would improve performance considerably.