Merge branch 'main' into main
rostro36 authored Jul 18, 2024
2 parents b38b607 + f37d0d9 commit 7d22739
Showing 53 changed files with 42,546 additions and 737 deletions.
4 changes: 3 additions & 1 deletion .github/workflows/docker-image.yml
@@ -11,5 +11,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Cleanup # https://github.com/actions/virtual-environments/issues/2840
      run: sudo rm -rf /usr/share/dotnet && sudo rm -rf /opt/ghc && sudo rm -rf "/usr/local/share/boost" && sudo rm -rf "$AGENT_TOOLSDIRECTORY"
    - name: Build the Docker image
      run: docker build . --file Dockerfile --tag openfold:$(date +%s)
14 changes: 14 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,14 @@
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "mambaforge-4.10"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/source/conf.py

conda:
  environment: docs/environment.yml
2 changes: 1 addition & 1 deletion LICENSE
@@ -187,7 +187,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]
Copyright 2024 AlQuraishi Laboratory

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
7 changes: 7 additions & 0 deletions docs/environment.yml
@@ -0,0 +1,7 @@
name: openfold-docs
channels:
- conda-forge
dependencies:
- sphinx=7
- myst-parser=3
- furo
Binary file added docs/imgs/of_banner.png
203 changes: 203 additions & 0 deletions docs/source/Aux_seq_files.md
@@ -0,0 +1,203 @@
# Auxiliary Sequence Files for OpenFold Training

The training dataset of OpenFold is very large: the `pdb` directory alone contains 185,000 mmCIFs, and each chain has multiple sequence alignment files in addition to its mmCIF data.

OpenFold introduces a few new file structures for faster access to alignments and mmcif data.

This documentation explains the benefits of the condensed file structure and describes the contents of each file.

## Default alignment file structure

One way to store mmCIFs and alignment files would be to have a directory for each mmCIF chain.

For example, consider two proteins as a case study:
```
- OpenProteinSet
    └── mmcifs
        ├── 3lrm.cif
        └── 6kwc.cif
        ...
```

In the `alignments` directory, [PDB:6KWC](https://www.rcsb.org/structure/6KWC) is a monomer with one chain and thus would have one alignment directory. [PDB:3LRM](https://www.rcsb.org/structure/3lrm), a homotetramer, would have one alignment directory for each of its four chains.
```
- OpenProteinSet
    └── alignments
        ├── 3lrm_A
        │   ├── bfd_uniclust_hits.a3m
        │   ├── mgnify_hits.a3m
        │   ├── pdb70_hits.hhr
        │   └── uniref90_hits.a3m
        ├── 3lrm_B
        │   ├── bfd_uniclust_hits.a3m
        │   ├── mgnify_hits.a3m
        │   ├── pdb70_hits.hhr
        │   └── uniref90_hits.a3m
        ├── 3lrm_C
        │   ├── bfd_uniclust_hits.a3m
        │   ├── mgnify_hits.a3m
        │   ├── pdb70_hits.hhr
        │   └── uniref90_hits.a3m
        ├── 3lrm_D
        │   ├── bfd_uniclust_hits.a3m
        │   ├── mgnify_hits.a3m
        │   ├── pdb70_hits.hhr
        │   └── uniref90_hits.a3m
        ├── 6kwc_A
        │   ├── bfd_uniclust_hits.a3m
        │   ├── mgnify_hits.a3m
        │   ├── pdb70_hits.hhr
        │   └── uniref90_hits.a3m
        ...
```

In practice, the I/O overhead of having one directory per protein chain makes accessing the alignments slow.

## OpenFold DB file structure

Here we describe an alternative file structure that OpenFold can use for more efficient access to alignment and index file contents.

Altogether, the directory structure would look like:
```
- OpenProteinSet
    ├── duplicate_pdb_chains.txt
    └── pdb
        ├── mmcif_cache.json
        ├── mmcifs
        │   ├── 3lrm.cif
        │   └── 6kwc.cif
        └── alignment_db
            ├── alignment_db_0.db
            ├── alignment_db_1.db
            ...
            ├── alignment_db_9.db
            └── alignment_db.index
```

We will describe each of the file types here.

### Alignments db files and index files

To speed up access to MSAs, OpenFold has an alternate alignment storage procedure. Instead of storing dedicated files for each individual alignment, we consolidate large sets of alignments into single files referred to as _alignments_dbs_. This reduces I/O overhead, and in practice we recommend using around 10 `alignments_db_x.db` files to store the full OpenFold training set. During training, OpenFold can access each alignment using byte index pointers that are stored in a separate index file (`alignments_db.index`). The alignments for the `3LRM` and `6KWC` examples would be recorded in the index file as follows:

```alignments_db.index
{
...
"3lrm_A": {
"db": "alignment_db_0.db",
"files": [
["bfd_uniclust_hits.a3m", 212896478938, 1680200],
["mgnify_hits.a3m", 212893696883, 2782055],
["pdb70_hits.hhr", 212898159138, 614978],
["uniref90_hits.a3m", 212898774116, 6165789]
]
},
"6kwc_A": {
"db": "alignment_db_1.db",
"files": [
["bfd_uniclust_hits.a3m", 415618723280, 380289],
["mgnify_hits.a3m", 415618556077, 167203],
["pdb70_hits.hhr", 415619103569, 148672],
["uniref90_hits.a3m", 415617547852, 1008225]
]
}
...
}
```

For each entry, the index records the corresponding `alignment_db` file, the byte offset at which each alignment starts, and the number of bytes to read. For example, the alignment information in `bfd_uniclust_hits.a3m` for chain `3lrm_A` can be found in the database file `alignment_db_0.db`, starting at byte location `212896478938` and reading the next `1680200` bytes.
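
As a concrete illustration, retrieving an alignment amounts to a dictionary lookup in the index followed by a `seek` and `read` on the database file. The sketch below is not OpenFold's actual loader; the function name and paths are placeholders, but it follows the index layout shown above.

```python
import json
import os

# Minimal sketch (not OpenFold's loader): look up the chain in the index,
# open the referenced .db file, seek to the byte offset, and read the
# recorded number of bytes.
def read_alignment(index_path, db_dir, chain_id, filename):
    with open(index_path) as f:
        index = json.load(f)
    entry = index[chain_id]
    for name, start, length in entry["files"]:
        if name == filename:
            with open(os.path.join(db_dir, entry["db"]), "rb") as db:
                db.seek(start)                    # e.g. 212896478938
                return db.read(length).decode()   # e.g. the next 1680200 bytes
    raise KeyError(f"{filename} not found for {chain_id}")

# Example (paths are illustrative):
# msa = read_alignment("alignment_db.index", "alignment_db",
#                      "3lrm_A", "bfd_uniclust_hits.a3m")
```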

### Chain cache files and mmCIF cache files

Information from the mmcif files can be parsed in advance to create a `chain_cache.json` or a `mmcif_cache.json`. For OpenFold, the `chain_cache.json` is used to sample chains for training, and the `mmcif_cache.json` is used to prefilter templates.

Here is what the `chain_cache.json` entries look like for our examples:

```chain_cache.json
{
...
"3lrm_A": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_B": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_C": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"3lrm_D": {
"release_date": "2010-06-30",
"seq": "MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"resolution": 2.7,
"cluster_size": 6
},
"6kwc_A": {
"release_date": "2021-01-27",
"seq": "GSTIQPGTGYNNGYFYSYWNDGHGGVTYTNGPGGQFSVNWSNSGEFVGGKGWQPGTKNKVINFSGSYNPNGNSYLSVYGWSRNPLIEYYIVENFGTYNPSTGATKLGEVTSDGSVYDIYRTQRVNQPSIIGTATFYQYWSVRRNHRSSGSVNTANHFNAWAQQGLTLGTMDYQIVAVQGYFSSGSASITVS",
"resolution": 1.297,
"cluster_size": 195
},
...
}
```

The `mmcif_cache.json` file contains similar information, condensed by mmCIF ID, e.g.

```mmcif_cache.json
{
"3lrm": {
"release_date": "2010-06-30",
"chain_ids": [
"A",
"B",
"C",
"D"
],
"seqs": [
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK",
"MFAFYFLTACISLKGVFGVSPSYNGLGLTPQMGWDNWNTFACDVSEQLLLDTADRISDLGLKDMGYKYIILDDCWSSGRDSDGFLVADEQKFPNGMGHVADHLHNNSFLFGMYSSAGEYTCAGYPGSLGREEEDAQFFANNRVDYLKYANCYNKGQFGTPEISYHRYKAMSDALNKTGRPVFYSLCNWGQDLTFYWGSGIANSWRMSGDVTAEFTRPDSRCPCDGDEYDCKYAGFHCSIMNILNKAAPMGQNAGVGGWNDLDNLEVGVGNLTDDEEKAHFSMWAMVKSPLIIGANVNNLKASSYSIYSQASVIAINQDSNGIPATRVWRYYVSDTDEYGQGEIQMWSGPLDNGDQVVALLNGGSVSRPMNTTLEEIFFDSNLGSKKLTSTWDIYDLWANRVDNSTASAILGRNKTATGILYNATEQSYKDGLSKNDTRLFGQKIGSLSPNAILNTTVPAHGIAFYRLRPSSDYKDDDDK"
],
"no_chains": 4,
"resolution": 2.7
},
"6kwc": {
"release_date": "2021-01-27",
"chain_ids": [
"A"
],
"seqs": [
"GSTIQPGTGYNNGYFYSYWNDGHGGVTYTNGPGGQFSVNWSNSGEFVGGKGWQPGTKNKVINFSGSYNPNGNSYLSVYGWSRNPLIEYYIVENFGTYNPSTGATKLGEVTSDGSVYDIYRTQRVNQPSIIGTATFYQYWSVRRNHRSSGSVNTANHFNAWAQQGLTLGTMDYQIVAVQGYFSSGSASITVS"
],
"no_chains": 1,
"resolution": 1.297
},
...
}
```
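
Both caches are plain JSON, so they can be loaded and filtered without touching any mmCIF files. The sketch below is illustrative only; the file paths and filtering thresholds are placeholders rather than OpenFold's actual criteria.

```python
import json

# Load the mmCIF cache and apply an illustrative template prefilter
# (thresholds are placeholders, not OpenFold's actual cutoffs).
with open("pdb/mmcif_cache.json") as f:
    mmcif_cache = json.load(f)

template_candidates = [
    pdb_id
    for pdb_id, meta in mmcif_cache.items()
    if meta["resolution"] <= 3.0 and meta["release_date"] < "2020-05-01"
]

# The chain cache carries per-chain sequences and cluster sizes, which can be
# used to weight chain sampling during training.
with open("chain_cache.json") as f:
    chain_cache = json.load(f)

print(len(template_candidates), len(chain_cache))
```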


### Duplicate pdb chain files

Duplicate chains occur across PDB entries. Some of these chains are the homomeric units of a multimer; others are subunits shared across different proteins.

To reduce the storage overhead of creating and storing identical data for duplicate entries, we use a duplicate chain file. Each line lists all chains that are identical. Our `6kwc` and `3lrm` examples would be stored as follows.

```duplicate_pdb_chains.txt
...
6kwc_A
3lrm_A 3lrm_B 3lrm_C 3lrm_D
...
```
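
A small sketch of how such a file can be parsed is shown below; treating the first chain on each line as the representative whose data is actually stored is an assumption made for illustration.

```python
# Minimal sketch, assuming the format above: map every chain ID to a single
# representative chain (assumed here to be the first entry on each line).
def load_duplicate_map(path):
    canonical = {}
    with open(path) as f:
        for line in f:
            chains = line.split()
            if not chains:
                continue
            representative = chains[0]   # e.g. "3lrm_A"
            for chain in chains:         # "3lrm_B" ... "3lrm_D" map to "3lrm_A"
                canonical[chain] = representative
    return canonical

# canonical = load_duplicate_map("duplicate_pdb_chains.txt")
# canonical.get("3lrm_C")  # -> "3lrm_A"
```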


34 changes: 34 additions & 0 deletions docs/source/FAQ.md
@@ -0,0 +1,34 @@
# FAQ

Frequently asked questions and commonly encountered issues when running OpenFold.

## Setup

- When running unit tests (e.g. [`./scripts/run_unit_tests.sh`](https://github.com/aqlaboratory/openfold/blob/main/scripts/run_unit_tests.sh)), I see an error such as
```
ImportError: version GLIBCXX_3.4.30 not found
```

> Solution: Make sure that the `$LD_LIBRARY_PATH` environment variable includes the conda library path, e.g. `export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH`.

- I see a CUDA mismatch error, e.g.
```
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.
```

> Solution: Ensure that your system's CUDA driver and toolkit match your intended OpenFold installation (CUDA 11 by default). You can check the CUDA driver version with a command such as `nvidia-smi`.

- I get an error involving `fatal error: cuda_runtime.h: No such file or directory` and/or `ninja: build stopped: subcommand failed.`

> Solution: Something went wrong when building the custom CUDA kernels. Try running `install_third_party_dependencies.sh` again, or run `python3 setup.py install` from inside the OpenFold directory. Make sure the conda library path is prepended to `$LD_LIBRARY_PATH` as described above before running this.

## Training

- My model training is hanging on the data loading step:
> Solution: While each system is different, a few general suggestions:
- Check your `$KMP_AFFINITY` environment setting and see if it is suitable for your system.
- Adjust the number of data workers used to prepare data with the `--num_workers` setting. Increasing the number can speed up dataset processing, but too many workers could cause an out-of-memory (OOM) issue.

- When I reload my pretrained model weights or checkpoints, I get `RuntimeError: Error(s) in loading state_dict for OpenFoldWrapper: Unexpected key(s) in state_dict:`
> Solution: This suggests that your checkpoint / model weights are in OpenFold v1 format with outdated model layer names. Convert your weights/checkpoints following [this guide](convert_of_v1_weights.md).
16 changes: 8 additions & 8 deletions docs/source/Inference.md
@@ -94,10 +94,10 @@ where `$PRECOMPUTED_ALIGNMENTS` is a directory that contains alignments. A sampl
```
alignments
└── 6KWC_1
    ├── bfd_uniclust_hits.a3m
    ├── hhsearch_output.hhr
    ├── mgnify_hits.sto
    └── uniref90_hits.sto
```

`bfd_uniclust_hits.a3m`, `mgnify_hits.sto`, and `uniref90_hits.sto` are all alignments of the query structure against the BFD, MGnify, and UniRef90 datasets respectively. `hhsearch_output.hhr` contains hits against the PDB70 database used for template matching. The example directory `examples/monomer/alignments` shows examples of expected directories.
@@ -145,17 +145,17 @@ Some commonly used command line flags are here. A full list of flags can be view

#### Speeding up inference

The **DeepSpeed DS4Sci_EvoformerAttention kernel** is a memory-efficient attention kernel developed as part of a collaboration between OpenFold and the DeepSpeed4Science initiative.

If your system supports DeepSpeed, using it generally leads to an inference speedup of 2-3x without significant additional memory use. You can enable this option with the `--use_deepspeed_inference` argument.

If DeepSpeed is unavailable for your system, you may also try using [FlashAttention](https://github.com/HazyResearch/flash-attention) by adding `globals.use_flash = True` to the `--experiment_config_json`. Note that FlashAttention appears to work best for sequences with < 1000 residues.

#### Large-scale batch inference
For large-scale batch inference, we offer an optional tracing mode, which massively improves runtimes at the cost of a lengthy model compilation process. To enable it, add `--trace_model` to the inference command.

#### Configuring the chunk size for sequence alignments
Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement) is enabled by default in inference mode. To disable it, set `globals.chunk_size` to `None` in the config. If a value is specified, OpenFold will attempt to dynamically tune it, considering the chunk size specified in the config as a minimum. This tuning process automatically ensures consistently fast runtimes regardless of input sequence length, but it also introduces some runtime variability, which may be undesirable for certain users. It is also recommended to disable this feature for very long chains (see below). To do so, set the `tune_chunk_size` option in the config to `False`.
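
As a rough illustration of how such overrides can be supplied, the sketch below writes an `--experiment_config_json` file; the exact key nesting expected by OpenFold is an assumption here and should be checked against the config definition in your installation.

```python
import json

# Hypothetical sketch of writing an --experiment_config_json override file.
# The key nesting below is an assumption; adjust it to match the config
# structure of your OpenFold installation.
overrides = {
    "globals": {
        "chunk_size": None,        # disable chunking entirely, or ...
        "tune_chunk_size": False,  # ... keep chunking but skip dynamic tuning
    }
}

with open("experiment_config.json", "w") as f:
    json.dump(overrides, f, indent=2)  # None/False become null/false in JSON
```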

#### Long sequence inference
To minimize memory usage during inference on long sequences, consider the following changes: