Commit 9d3cdfc

arjung authored and tensorflow-copybara committed
Fix broken links to the Neural Graph Learning paper in all NSL documentation.
PiperOrigin-RevId: 299199588
1 parent e848a7a · commit 9d3cdfc

File tree: 5 files changed, +13 −14 lines

README.md

Lines changed: 1 addition & 1 deletion

@@ -106,7 +106,7 @@ Please see the [release notes](RELEASE.md) for detailed version updates.
 ## References
 
 [[1] T. Bui, S. Ravi and V. Ramavajjala. "Neural Graph Learning: Training Neural
-Networks Using Graphs." WSDM 2018](https://ai.google/research/pubs/pub46568.pdf)
+Networks Using Graphs." WSDM 2018](https://research.google/pubs/pub46568.pdf)
 
 [[2] T. Kipf and M. Welling. "Semi-supervised classification with graph
 convolutional networks." ICLR 2017](https://arxiv.org/pdf/1609.02907.pdf)

g3doc/_index.yaml

Lines changed: 1 addition & 1 deletion

@@ -23,7 +23,7 @@ landing_page:
 generated by adding adversarial perturbation have been shown to be
 <b>robust against malicious attacks</b>, which are designed to mislead a model's
 prediction or classification.</p>
-<p>NSL generalizes to <a href="https://ai.google/research/pubs/pub46568.pdf">Neural Graph Learning</a> as well as <a href="https://arxiv.org/pdf/1412.6572.pdf">Adversarial Learning</a>. The NSL framework in TensorFlow provides the following easy-to-use
+<p>NSL generalizes to <a href="https://research.google/pubs/pub46568.pdf">Neural Graph Learning</a> as well as <a href="https://arxiv.org/pdf/1412.6572.pdf">Adversarial Learning</a>. The NSL framework in TensorFlow provides the following easy-to-use
 APIs and tools for developers to train models with structured signals:</p>
 <ul style="padding-left: 20px;">
 <li><b>Keras APIs</b> to enable training with graphs (explicit structure) and adversarial perturbations (implicit structure).</li>

g3doc/framework.md

Lines changed: 9 additions & 10 deletions

@@ -2,14 +2,13 @@
 
 Neural Structured Learning (NSL) focuses on training deep neural networks by
 leveraging structured signals (when available) along with feature inputs. As
-introduced by
-[Bui et al. (WSDM'18)](https://ai.google/research/pubs/pub46568.pdf), these
-structured signals are used to regularize the training of a neural network,
-forcing the model to learn accurate predictions (by minimizing supervised loss),
-while at the same time maintaining the input structural similarity (by
-minimizing the neighbor loss, see the figure below). This technique is generic
-and can be applied on arbitrary neural architectures (such as Feed-forward NNs,
-Convolutional NNs and Recurrent NNs).
+introduced by [Bui et al. (WSDM'18)](https://research.google/pubs/pub46568.pdf),
+these structured signals are used to regularize the training of a neural
+network, forcing the model to learn accurate predictions (by minimizing
+supervised loss), while at the same time maintaining the input structural
+similarity (by minimizing the neighbor loss, see the figure below). This
+technique is generic and can be applied on arbitrary neural architectures (such
+as Feed-forward NNs, Convolutional NNs and Recurrent NNs).
 
 ![NSL Concept](images/nlink_figure.png)
 
@@ -53,7 +52,7 @@ NSL brings the following advantages:
 shown to outperform many existing methods (that rely on training with
 features only) on a wide range of tasks, such as document classification and
 semantic intent classification
-([Bui et al., WSDM'18](https://ai.google/research/pubs/pub46568.pdf) &
+([Bui et al., WSDM'18](https://research.google/pubs/pub46568.pdf) &
 [Kipf et al., ICLR'17](https://arxiv.org/pdf/1609.02907.pdf)).
 * **Robustness**: models trained with adversarial examples have been shown to
 be robust against adversarial perturbations designed for misleading a
@@ -71,7 +70,7 @@ NSL brings the following advantages:
 hidden representations for the "neighboring samples" that may or may not
 have labels. This technique has shown great promise for improving model
 accuracy when the amount of labeled data is relatively small
-([Bui et al., WSDM'18](https://ai.google/research/pubs/pub46568.pdf) &
+([Bui et al., WSDM'18](https://research.google/pubs/pub46568.pdf) &
 [Miyato et al., ICLR'16](https://arxiv.org/pdf/1704.03976.pdf)).
 
 ## Step-by-step Tutorials
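The framework.md text in this diff describes NSL's training objective: a supervised loss combined with a neighbor loss that penalizes dissimilar representations for samples connected in the graph. As a minimal NumPy sketch of that idea (illustrative only; the function names `neighbor_loss` and `nsl_objective` are made up here and are not part of the NSL library API):

```python
import numpy as np

def neighbor_loss(embeddings, edges, weights):
    """Weighted sum of squared distances between connected samples' embeddings."""
    total = 0.0
    for (i, j), w in zip(edges, weights):
        diff = embeddings[i] - embeddings[j]
        total += w * np.dot(diff, diff)
    return total

def nsl_objective(supervised_loss, embeddings, edges, weights, alpha=0.1):
    """Graph-regularized objective: supervised loss + alpha * neighbor loss."""
    return supervised_loss + alpha * neighbor_loss(embeddings, edges, weights)

# Toy example: samples 0 and 1 have similar embeddings, so their edge
# contributes little penalty; the dissimilar pair (1, 2) dominates.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
edges = [(0, 1), (1, 2)]
weights = [1.0, 0.5]
loss = nsl_objective(supervised_loss=0.7, embeddings=emb, edges=edges,
                     weights=weights)
```

Minimizing such an objective pulls the embeddings of graph neighbors together while still fitting the labels, which is the regularization effect the changed paragraph refers to.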

g3doc/tutorials/graph_keras_mlp_cora.ipynb

Lines changed: 1 addition & 1 deletion

@@ -75,7 +75,7 @@
 "\n",
 "Graph regularization is a specific technique under the broader paradigm of\n",
 "Neural Graph Learning\n",
-"([Bui et al., 2018](https://ai.google/research/pubs/pub46568.pdf)). The core\n",
+"([Bui et al., 2018](https://research.google/pubs/pub46568.pdf)). The core\n",
 "idea is to train neural network models with a graph-regularized objective,\n",
 "harnessing both labeled and unlabeled data.\n",
 "\n",

research/gam/README.md

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ Tomkins, S. Ravi. "Graph Agreement Models for Semi-Supervised Learning." NeurIPS
 2019](https://nips.cc/Conferences/2019/Schedule?showEvent=13925)
 
 [[2] T. Bui, S. Ravi and V. Ramavajjala. "Neural Graph Learning: Training Neural
-Networks Using Graphs." WSDM 2018](https://ai.google/research/pubs/pub46568.pdf)
+Networks Using Graphs." WSDM 2018](https://research.google/pubs/pub46568.pdf)
 
 [[3] T. Kipf and M. Welling. "Semi-supervised classification with graph
 convolutional networks." ICLR 2017](https://arxiv.org/pdf/1609.02907.pdf)
