[bug-gperf] A small optimisation option for gperf
From: Rick van Rein
Subject: [bug-gperf] A small optimisation option for gperf
Date: Tue, 10 Jan 2017 12:44:27 +0100
User-agent: Postbox 3.0.11 (Macintosh/20140602)
Hello gperf Team,
Yesterday, I found gperf because I wanted a static hash generator that
avoided buckets, and thought it had to be out there. You made my day;
gperf works very well.
Looking into the generated code, I found it could be made much more
compact with little effort. A few test programs in Python are attached.
DESCRIPTION:
You present gperf for the purpose of static keyword lookups, but I am
actually most interested in keeping up with developing standards; in my
case, a first list of OIDs of LDAP operations (and later perhaps the
controls). When a new one arrives, I update the input to gperf and
it'll take care of the hash table. Thank you :)
FWIW, I might end up embedding LDAP into a smart card chip; don't know
yet. But that would mean that gperf compaction is really important.
ASSOC_TABLE
The hash functions are surprisingly simple! But for OIDs I found the
assoc_tables to be sparse -- most entries held the filler value
MAX_HASH_VALUE+1. There must be many such tables generated by gperf?
Sparse tables are usually reducible -- in my case, from 257 entries to 16
(using & 0x0f on the computed index) or 9 (using % 9). Modulo isn't
expensive on modern CPUs anymore, and bitwise AND never was. Cache lines,
however, are. So might this be a useful option to add?
The attached demo program shows that it works:
GPERF,GPERF_MOD16,GPERF_MOD9,KEYLEN,KEY
23, 23, 23, 22, "1.3.6.1.4.1.1466.20037"
27, 27, 27, 23, "1.3.6.1.4.1.4203.1.11.1"
24, 24, 24, 23, "1.3.6.1.4.1.4203.1.11.3"
11, 11, 11, 11, "1.3.6.1.1.8"
18, 18, 18, 14, "1.3.6.1.1.17.1"
16, 16, 16, 14, "1.3.6.1.1.17.2"
15, 15, 15, 14, "1.3.6.1.1.17.3"
20, 20, 20, 14, "1.3.6.1.1.17.4"
22, 22, 22, 14, "1.3.6.1.1.17.5"
21, 21, 21, 14, "1.3.6.1.1.17.6"
13, 13, 13, 12, "1.3.6.1.1.19"
17, 17, 17, 14, "1.3.6.1.1.21.1"
14, 14, 14, 14, "1.3.6.1.1.21.3"
19, 19, 19, 14, "1.3.6.1.1.21.4"
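
To make the reduction concrete, here is a rough Python sketch of the idea
(just an illustration, not the attached demo itself; the names assoc,
MAX_HASH_VALUE and the two reducers are made up for the example):

def try_reduce(assoc, default, reducer, size):
    # Fold a sparse assoc_table into 'size' entries, or give up on a collision.
    small = [default] * size
    for c, v in enumerate(assoc):
        if v == default:
            continue                      # filler entry, nothing worth keeping
        slot = reducer(c)
        if small[slot] != default and small[slot] != v:
            return None                   # two characters would collide
        small[slot] = v
    return small

# Made-up usage, pretending 'assoc' and MAX_HASH_VALUE came from generated code:
# try_reduce(assoc, MAX_HASH_VALUE + 1, lambda c: c & 0x0f, 16)
# try_reduce(assoc, MAX_HASH_VALUE + 1, lambda c: c % 9, 9)

The generated hash function would then index the small table with the
reduced value instead of the raw character code.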
KEY/STRUCT TABLE
The table of keys or structs ended up twice the minimal size. Running
1000 iterations didn't help, and it was still very fast -- is there
a cut-off?
With many empty entries at the beginning of the table, I knew that a
modulus would help. And indeed,
....
### REDUCTION FROM 28 TO 15
['1.3.6.1.1.17.3', '1.3.6.1.1.17.2', '1.3.6.1.1.21.1', '1.3.6.1.1.17.1',
'1.3.6.1.1.21.4', '1.3.6.1.1.17.4', '1.3.6.1.1.17.6', '1.3.6.1.1.17.5',
'1.3.6.1.4.1.1466.20037', '1.3.6.1.4.1.4203.1.11.3', '', '1.3.6.1.1.8',
'1.3.6.1.4.1.4203.1.11.1', '1.3.6.1.1.19', '1.3.6.1.1.21.3']
### Failed to reduce from 28 to 14
So I'm stuck with only one lost entry! Given the potential size of the
structs, this is a great benefit.
I suppose that having control over the assoc_table, as well as this
modulus scheme, could be really useful for getting at or near the minimum.
What I don't know is how generally useful these optimisations are; that is
up to you -- or to the person setting the flags.
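
For reference, the search behind such a reduction is trivial; roughly along
these lines in Python (a simplified illustration, not the attached
gperf-msgoptab-compaction.py):

def compact(hashes, max_size):
    # 'hashes' maps each key to the value the generated hash function returns.
    # Scan downwards and remember the smallest table size m for which the
    # reduced slots h % m stay collision-free.
    best = None
    for m in range(max_size, len(hashes) - 1, -1):
        slots = {}
        ok = True
        for key, h in hashes.items():
            slot = h % m
            if slot in slots:
                ok = False
                break
            slots[slot] = key
        if ok:
            best = (m, slots)
    return best

# Made-up example with three keys and the hash values from the table above:
# print(compact({"1.3.6.1.1.8": 11, "1.3.6.1.1.19": 13, "1.3.6.1.1.17.1": 18}, 28))

The same kind of search produced the 28-to-15 reduction shown above.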
GENERALISING
The precautions such as testing the string length are optimisations, but
they are not needed if a valid table entry is always found; the strcmp()
can then always be done. Note that some programs, such as security
software and realtime applications, may appreciate more consistent running
times (and turn to hash algorithms precisely for that reason).
The best compaction is probably possible when the values in the
assoc_table can construct much larger values than the hash table size,
which are then reduced through a modulus. And that makes it all sound
awfully much like group theory, specifically the (mod n) groups.
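
Just to sketch what I mean, a hash of that shape could look like this in
Python (all numbers and names here are invented for the example):

N = 15                                    # pretend target table size

def hash_key(key, assoc, positions, n=N):
    # Sum the length and the (possibly much larger than n) assoc values of a
    # few selected character positions, then fold the result back with one
    # modulus so it lands inside the table.
    total = len(key)
    for p in positions:
        if p < len(key):
            total += assoc[ord(key[p])]
    return total % n

The generator would then be free to pick assoc values from a much wider
range, and only the final reduction needs to fit the table.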
I hope this is a useful payback for a nice and very useful program. I
found that gperf was really easy to get going -- not at all like lex and
yacc in that respect ;-)
Thanks,
-Rick
Attachment: gperf-hash-compaction.py (Text Data)
Attachment: gperf-msgoptab-compaction.py (Text Data)
Attachment: msgops.tab (Text document)