Sparse graphs
Sparse graphs are computationally fast. They also tend to enjoy good empirical performance; we surmise this is because spurious connections between dissimilar nodes (which tend to be in different classes) are removed. With sparse graphs, the edges can be unweighted or weighted. One disadvantage is weight learning: a change in weight hyperparameters will likely change the neighborhood, making optimization awkward.
kNN graphs
Nodes i, j are connected by an edge if i is in j's k-nearest neighborhood or vice versa. k is a hyperparameter that controls the density of the graph. kNN has the nice property of "adaptive scales," because the neighborhood radius differs between low- and high-density regions. A small k may result in a disconnected graph. For Label Propagation this is not a problem as long as each connected component has some labeled points. For other algorithms introduced later in the thesis, one can smooth the Laplacian.
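The "or" rule above (connect i and j if either is among the other's k nearest neighbors) can be sketched as follows. This is a minimal NumPy illustration with a brute-force O(n²) distance computation; the function name knn_graph is ours, not from the text.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric kNN adjacency: i, j connected if i is among j's
    k nearest neighbors OR vice versa (the "or" rule)."""
    n = len(X)
    # Brute-force pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a node is not its own neighbor
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]      # indices of i's k nearest neighbors
        A[i, nbrs] = True
    return A | A.T                       # symmetrize: edge if either direction holds
```

With two well-separated clusters and k = 1, the resulting graph splits into two connected components, illustrating how a small k can disconnect the graph.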
εNN graphs
Nodes i, j are connected by an edge if the distance d(i, j) < ε. The hyperparameter ε controls the neighborhood radius. Although ε is continuous, the search for the optimal value is discrete, with at most O(n²) candidate values (the edge lengths in the graph).
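A minimal sketch of the ε-neighborhood construction above, again in NumPy with brute-force distances; the helper candidate_radii makes the discrete-search observation concrete, since the graph only changes when ε crosses one of the O(n²) pairwise edge lengths. Both function names are ours.

```python
import numpy as np

def eps_graph(X, eps):
    """Connect i, j whenever the Euclidean distance d(i, j) < eps."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = d < eps
    np.fill_diagonal(A, False)           # no self-loops
    return A

def candidate_radii(X):
    """The only eps values that change the graph are the pairwise
    edge lengths, so the search over eps is discrete: at most
    O(n^2) candidates."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)    # upper triangle: each pair once
    return np.unique(d[iu])
```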
tanh-weighted graphs
wij = (tanh(α1(d(i, j) − α2)) + 1) / 2. The hyperparameters α1 and α2 control the slope and the soft cutoff distance, respectively.
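The weighting can be sketched directly from the formula; assuming the completion w_ij = (tanh(a1(d(i, j) − a2)) + 1)/2 of the truncated expression above, with a2 acting as a soft distance cutoff and a1 as the slope. The function name tanh_weights is ours.

```python
import numpy as np

def tanh_weights(D, a1, a2):
    """Soft cutoff around distance a2: weights transition smoothly
    between 0 and 1, with a1 setting the sharpness. Implements the
    formula literally: w = (tanh(a1 * (d - a2)) + 1) / 2. With a
    negative a1 the transition flips, giving weights near 1 for
    small distances (the usual similarity convention)."""
    D = np.asarray(D, dtype=float)
    return (np.tanh(a1 * (D - a2)) + 1.0) / 2.0
```

At d = a2 the weight is exactly 0.5 regardless of a1, which is one way to see a2 as the cutoff point.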