Commit 32f8d5e

Clarify the need for higher dimension in HashGNN docs
Co-Authored-By: Jacob Sznajdman <breakanalysis@gmail.com>
Parent: a55916e


doc/modules/ROOT/pages/machine-learning/node-embeddings/hashgnn.adoc

Lines changed: 5 additions & 0 deletions
@@ -197,6 +197,8 @@ As a loose guideline, one may try to set `embeddingDensity` to 128, 256, 512, or
 The `dimension` parameter determines the number of binary features when feature generation is applied.
 A high dimension increases expressiveness but requires more data to be useful, and can lead to the curse of dimensionality for downstream machine learning tasks.
 Additionally, more computational resources will be required.
+However, there is only one bit of information per dimension with binary embeddings, whereas dense `Float` embeddings have 64 bits of information per dimension.
+Consequently, one typically needs a significantly higher dimension for HashGNN than for algorithms that produce dense embeddings, such as FastRP or GraphSAGE, to obtain embeddings of comparable quality.
 Some values to consider trying for `densityLevel` are very low ones, such as `1` or `2`, increasing as appropriate.

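To make the diff's guidance concrete, here is a hypothetical sketch of a HashGNN call with feature generation, using a deliberately high binary `dimension` and a low `densityLevel`. It assumes the GDS Python client and the beta procedure tier `gds.beta.hashgnn` (the tier may differ by GDS version); the connection details, graph projection, and parameter values are placeholders for illustration, not part of the commit.

from graphdatascience import GraphDataScience

# Connect to a running Neo4j instance with GDS installed (details are placeholders).
gds = GraphDataScience("bolt://localhost:7687", auth=("neo4j", "password"))

# Project an in-memory graph; the label and relationship type are placeholders.
G, _ = gds.graph.project("hashgnn-example", "Node", "REL")

# Binary embeddings carry one bit per dimension, so `dimension` is set much higher
# than a typical dense embedding size, while `densityLevel` stays very low.
embeddings = gds.beta.hashgnn.stream(
    G,
    iterations=3,
    embeddingDensity=128,
    generateFeatures={"dimension": 1024, "densityLevel": 2},
    randomSeed=42,
)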
@@ -208,6 +210,9 @@ Therefore, a higher dimension should also be coupled with higher `embeddingDensity`
 Higher dimension also leads to longer training times of downstream models and a higher memory footprint.
 Increasing the threshold leads to sparser feature vectors.

+There is only one bit of information per dimension with binary embeddings, whereas dense `Float` embeddings have 64 bits of information per dimension.
+Consequently, one typically needs a significantly higher dimension for HashGNN than for algorithms that produce dense embeddings, such as FastRP or GraphSAGE, to obtain embeddings of comparable quality.
+
 The default threshold of `0` leads to fairly many features being active for each node.
 Often sparse feature vectors are better, and it may therefore be useful to increase the threshold beyond the default.
 One heuristic for choosing a good threshold is to use the average plus the standard deviation of the dot products between the hyperplanes and the node feature vectors.
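The threshold heuristic in the last context line above can be sketched numerically. Below is a minimal numpy illustration, assuming Gaussian random hyperplanes and a synthetic feature matrix; the hyperplanes GDS actually uses are internal to the library, so only the shape of the calculation is shown.

import numpy as np

rng = np.random.default_rng(42)
n_nodes, n_features, dimension = 1000, 64, 1024

# Synthetic node feature matrix, standing in for the `featureProperties` values.
X = rng.random((n_nodes, n_features))

# Random hyperplanes for binarization (Gaussian here by assumption).
H = rng.normal(size=(n_features, dimension))

# Dot products of every node feature vector with every hyperplane.
dots = X @ H

# Heuristic: threshold = average plus one standard deviation of the dot products.
# A higher threshold activates fewer bits, i.e. sparser binary feature vectors.
threshold = dots.mean() + dots.std()
print(f"suggested threshold: {threshold:.3f}")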
