A young computer scientist and two colleagues show that searches within data structures called hash tables can be much faster than previously deemed possible.
This is incredible, but why does the article end by stating that this might not have any immediate applications? Shouldn’t this immediately result in more efficient hash tables in everyday programming languages?
After reading through the abstract, the article is pop-sci bunk: they developed a method to save additional space with constant-time overhead.
Which is certainly novel and nice and all kinds of things, but it’s just a tool in the toolbox. Making things more optimal in theory says little about things being faster in practice, because the theoretical cost models never match what real-world machines are actually doing. In algorithm classes we learn to analyse sorting algorithms by number of comparisons, and indeed the minimum necessary is Ω(n log n); in the real world, it’s the number of cache misses that matters: CPUs can compare numbers basically instantly, and getting the stuff you want to compare from memory to the CPU is where the time is spent. It can very well be faster to make more comparisons if it means you get fewer, or more regular (so that the CPU can predict and pre-fetch), data transfers.
Consulting my crystal ball, I see this trickling down into at least the minds of the people who develop the usual KV stores, database engineers, and so on. Maybe it’ll help, maybe it won’t; those things are already incredibly optimized. Never trust a data structure optimisation you didn’t benchmark. Never trust any optimisation you didn’t benchmark, actually. Do your benchmarks; you’re not smarter than reality. In case it does help, it will trickle down into the standard implementations of the data structures languages ship with.
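To make that concrete, here is a minimal sketch of the kind of micro-benchmark I mean (the keys, sizes, and iteration counts are made up; the point is to measure on your own machine rather than trust the asymptotics):

```python
# Hypothetical micro-benchmark: does a dict beat a plain list scan for a
# handful of keys? Big-O says yes; only measuring tells you where the
# crossover actually sits on your hardware.
import timeit

small_keys = [f"key{i}" for i in range(8)]
as_list = [(k, i) for i, k in enumerate(small_keys)]
as_dict = dict(as_list)

def list_lookup(wanted):
    # O(n) comparisons, but a contiguous scan with no hashing
    for key, value in as_list:
        if key == wanted:
            return value

def dict_lookup(wanted):
    # O(1) expected, but it pays to hash the key first
    return as_dict[wanted]

for fn in (list_lookup, dict_lookup):
    seconds = timeit.timeit(lambda: fn("key7"), number=1_000_000)
    print(f"{fn.__name__}: {seconds:.3f}s")
```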
EDIT: I was looking at this paper, not this one. It’s actually disproving a conjecture of Yao, who has a Turing Award, certainly a nice feather to have in your cap. It’s also way more into the theoretical weeds than I’m comfortable with. This may have applications, or it may go the way of the Karatsuba algorithm: faster only if your data is astronomically large; for (most) real-world applications the constant overhead outweighs the asymptotic speedup.
Also never even start optimizing until you profile and are sure the bit you are trying to optimize even matters to the overall performance of your program.
Hash tables are super efficient when they’re not nearly full. So the standard trick is just to resize them when they get too close to capacity.
The new approach is probably only going to be useful in highly memory constrained applications, where resizing isn’t an option.
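For a rough picture of what “resize when too close to capacity” means, here is a toy open-addressing table; the threshold and the linear probing are simplified stand-ins, and real implementations are far more careful about both:

```python
# Toy open-addressing hash table that grows at a fixed load factor.
# Purely illustrative; thresholds, probing and memory layout are all
# hand-waved compared to a production table.
class ToyHashTable:
    def __init__(self, capacity=8, max_load=0.7):
        self.capacity = capacity
        self.max_load = max_load
        self.size = 0
        self.slots = [None] * capacity  # each slot is (key, value) or None

    def _probe(self, key):
        # Linear probing: start at hash(key) and walk until we find
        # the key or an empty slot.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def _resize(self):
        # The expensive part: every existing entry gets rehashed into a
        # table twice the size.
        old_entries = [e for e in self.slots if e is not None]
        self.capacity *= 2
        self.slots = [None] * self.capacity
        for key, value in old_entries:
            self.slots[self._probe(key)] = (key, value)

    def put(self, key, value):
        if (self.size + 1) / self.capacity > self.max_load:
            self._resize()
        i = self._probe(key)
        if self.slots[i] is None:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None
```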
Hash tables are used in literally everything and they always need to minimize resizing because it’s a very expensive operation.
I suspect this will silently trickle into lots of things once it gets picked up by standard Python and JavaScript platforms, but that will take years.
So… databases? Especially in data centers? Still a nice boost in that case
Infrastructural APIs are much slower to change, and in a lot of cases the use of those APIs is dependent on a specific version. The change will definitely occur over time as the practical limitations are discovered.
I don’t think the speed of hash tables is a problem in most applications except benchmarks.
Hash tables are often used behind the scenes. dicts and sets in Python both utilize hash tables internally, for example.
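A quick way to see the hashing under the hood (the key class below is made up just to count the calls):

```python
# A key type that counts how often Python hashes it. Both storing it in
# a dict and looking it up trigger __hash__, because dict and set are
# hash tables internally.
class NoisyKey:
    calls = 0

    def __init__(self, name):
        self.name = name

    def __hash__(self):
        NoisyKey.calls += 1
        return hash(self.name)

    def __eq__(self, other):
        return isinstance(other, NoisyKey) and self.name == other.name

k = NoisyKey("answer")
d = {k: 42}                    # hashed once on insert
value = d[NoisyKey("answer")]  # hashed again on lookup
print(value, NoisyKey.calls)   # -> 42 2
```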
I’ve only used Java, but Java hash tables are stupid fast in my experience; like, everything else in my crap programs was 1000 times slower than the hash table access or storage.
Just reading the title, it’s talking about searching hash tables, which wasn’t something I was specifically doing.
If you use a hash table, you search every time you retrieve an object.
If you didn’t retrieve, why would you be storing the data in the first place?
Sorry to be blunt, but you don’t know what you’re talking about.
anything that deserializes arbitrary json will put it into a hash table, right? it would definitely speed up the web.
Depends on the implementation, but most will, yes. There are other forms of associative arrays, like a trie or a binary tree, but a hash table is the most common.
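In CPython, for instance, a deserialized JSON object comes back as a plain dict, i.e. a hash table:

```python
import json

# json.loads turns a JSON object into a dict, so every field access
# afterwards is a hash-table search.
doc = json.loads('{"user": "alice", "id": 7}')
print(type(doc))    # <class 'dict'>
print(doc["user"])  # alice
```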
Using bencode over json would probably speed up the web more. Not to mention good ole ASN.1 (well, at least some binary schemes for ASN.1). The web is completely cooked when it comes to efficiency.
the biggest speedup would probably come from using proper schemas that can be efficiently parsed. but we’ve made our bed out of ad-hoc protocols.
And yet all that pales in comparison to using react (or whatever framework) over vanilla js. Enter McMaster-Carr.
yupyup, just send HTML over the wire. it’s fine.
JSON libraries are stupidly well optimized. There are binary encoding schemes that are faster and more compact, but it’s hard to beat JSON for a text-based format.
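A toy size comparison, just to show the compactness point; the fixed binary layout below is made up for illustration and isn’t bencode or ASN.1:

```python
import json
import struct

# The same record as JSON text vs. a hand-rolled fixed binary layout
# (4-byte unsigned id + 8-byte double score, little-endian).
record = {"id": 123456, "score": 0.75}

as_json = json.dumps(record).encode()
as_binary = struct.pack("<Id", record["id"], record["score"])

print(len(as_json))    # 29 bytes of text
print(len(as_binary))  # 12 bytes
```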
Everyone prepare for your minds to be blown: