# Module networks for question answering on a knowledge graph

This code repository contains a TensorFlow model for question answering on a
knowledge graph with end-to-end module networks. The original paper describing
end-to-end module networks is:

R. Hu, J. Andreas, M. Rohrbach, T. Darrell, K. Saenko, *Learning to Reason:
End-to-End Module Networks for Visual Question Answering*. arXiv preprint
arXiv:1704.05526, 2017. ([PDF](https://arxiv.org/pdf/1704.05526.pdf))

```
@article{hu2017learning,
  title={Learning to Reason: End-to-End Module Networks for Visual Question Answering},
  author={Hu, Ronghang and Andreas, Jacob and Rohrbach, Marcus and Darrell, Trevor and Saenko, Kate},
  journal={arXiv preprint arXiv:1704.05526},
  year={2017}
}
```

The code in this repository is based on the original
[implementation](https://github.com/ronghanghu/n2nmn) for this paper.

## Requirements

1. Install TensorFlow 1.0.0 by following the
   [official guide](https://www.tensorflow.org/install/). Note that newer or
   older versions of TensorFlow may fail to work due to incompatibility with
   TensorFlow Fold.
2. Install TensorFlow Fold by following its
   [setup instructions](https://github.com/tensorflow/fold/blob/master/tensorflow_fold/g3doc/setup.md).
   TensorFlow Fold only supports the Linux platform; we have not tested the
   code on other platforms.
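
Because the version pin matters here, it can help to sanity-check the installed
TensorFlow before running anything. The snippet below is a minimal sketch (not
part of the repository); it only inspects the installed version string:

```python
# Sanity-check sketch for the TensorFlow version pin (not part of the
# repository). TensorFlow Fold requires exactly TensorFlow 1.0.0.
import importlib.util

def tf_version_message(installed, required="1.0.0"):
    # 'installed' is the version string, or None if TensorFlow is absent.
    if installed is None:
        return "TensorFlow is not installed"
    if installed == required:
        return "TensorFlow %s is installed" % required
    return "found TensorFlow %s, but %s is required" % (installed, required)

spec = importlib.util.find_spec("tensorflow")
if spec is None:
    version = None
else:
    import tensorflow as tf
    version = tf.__version__

print(tf_version_message(version))
```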

## Data

1. Download the [MetaQA dataset](https://goo.gl/f3AmcY) and read the documents
   there for dataset details.
2. Put the MetaQA folder in the root directory of this repository.

## How to use this code

We provide an experiment folder, `exp_1_hop`, which applies the implemented
model to the 1-hop vanilla dataset in MetaQA. More experiment folders are
coming soon.

Currently, we provide code for training with the ground-truth layout and for
testing the saved model. Configurations can be modified in `config.py` or set
via command-line parameters.

To train the model:

```
python exp_1_hop/train_gt_layout.py
```

To test the saved model (the snapshot name must be provided):

```
python exp_1_hop/test.py --snapshot_name 00010000
```

## Model introduction

1. In this model, we store the knowledge graph in a key-value memory. For
   each knowledge graph edge (subject, relation, object), we use (subject,
   relation) as the key and the object as the value.
2. All entities and relations are embedded as fixed-dimension vectors, and
   these embeddings are learned end-to-end.
3. Neural modules can operate separately on either the key side or the value
   side.
4. Attention is shared between keys and their corresponding values.
5. The answer output is based on the attention-weighted sum over keys or
   values, depending on the output module.
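
The key-value memory described above can be sketched in a few lines of NumPy.
This is a toy illustration only: the triples, entity names, and embedding
dimension are made up, and the embeddings are random rather than learned:

```python
import numpy as np

# Toy knowledge graph of (subject, relation, object) triples
# (made-up examples; not from the MetaQA dataset).
triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "located_in", "Europe"),
]

dim = 8  # embedding dimension (an arbitrary choice for illustration)
rng = np.random.default_rng(0)

# Fixed-dimension embeddings for all entities and relations; in the real
# model these are learned end-to-end rather than sampled at random.
symbols = sorted({s for triple in triples for s in triple})
emb = {s: rng.standard_normal(dim) for s in symbols}

# Key = (subject, relation) embedding pair; value = object embedding.
keys = np.stack([np.concatenate([emb[s], emb[r]]) for s, r, _ in triples])
values = np.stack([emb[o] for _, _, o in triples])

def attend(query):
    # Softmax attention over keys; the same weights index the values,
    # so attention is shared between keys and corresponding values.
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    return weights / weights.sum()

# Query the memory with the key for ("Paris", "capital_of").
weights = attend(np.concatenate([emb["Paris"], emb["capital_of"]]))

# Answer readout: attention-weighted sum over the values (an output
# module could instead read from the key side).
answer = weights @ values
```

In the actual model, the attention and readout are performed by neural modules
assembled according to the question's layout, and the resulting answer vector
is scored against candidate entities.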

## Contact

Authors: Yuyu Zhang, Xin Pan

Pull requests and issues: @yuyuz