added first draft notebooks for similarity section
jamescalam committed Apr 29, 2021
1 parent 938e337 commit 70042a5
Showing 7 changed files with 872 additions and 1 deletion.
9 changes: 8 additions & 1 deletion course/attention/03_bidirectional_attention.ipynb
@@ -6,10 +6,17 @@
"source": [
"# Bi-directional Attention\n",
"\n",
"We've explored both dot-product attention, and self-attention. Where dot-product compared two sequences, and causal attention compared previous tokens from the *same sequence*, bidirectional attention compares tokens from the *same sequence* in both directions, subsequent and previous. This is as simple as performing the exact same operation that we performed for *self-attention*, but excluding the masking operation - allowing each word to be mapped to every other word in the same sequence. So, we could call this *bi-directional **self** attention*. This is particularly useful for masked language modeling - and is used in BERT (**Bidirectional Encoder** Representations from Transformers) - bidirectional self-attention refers to the *bidirectional encoder*, or the *BE* of BERT.\n",
"We've explored both dot-product attention, and self-attention. Where dot-product compared two sequences, and self attention compared previous tokens from the *same sequence*, bidirectional attention compares tokens from the *same sequence* in both directions, subsequent and previous. This is as simple as performing the exact same operation that we performed for *self-attention*, but excluding the masking operation - allowing each word to be mapped to every other word in the same sequence. So, we could call this *bi-directional **self** attention*. This is particularly useful for masked language modeling - and is used in BERT (**Bidirectional Encoder** Representations from Transformers) - bidirectional self-attention refers to the *bidirectional encoder*, or the *BE* of BERT.\n",
"\n",
"![Bidirectional Attention](../../assets/images/bidirectional_attention.png)"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
],
"metadata": {
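
To make the diff above concrete: the edited paragraph describes bidirectional self-attention as ordinary scaled dot-product self-attention with the masking step left out. Below is a minimal NumPy sketch of that idea (an illustration for this summary, not code from the committed notebooks); the sequence length, dimension, and random Q/K/V values are assumed purely for demonstration, standing in for learned projections of the same token embeddings.

```python
# Minimal sketch (illustrative, not from the committed notebooks): causal and
# bidirectional self-attention differ only in whether future positions are masked.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d = 4, 8                      # assumed toy dimensions
Q = rng.normal(size=(seq_len, d))      # in real self-attention, Q, K, and V are
K = rng.normal(size=(seq_len, d))      # learned projections of the *same*
V = rng.normal(size=(seq_len, d))      # sequence of token embeddings

scores = Q @ K.T / np.sqrt(d)          # scaled dot-product scores

# Causal (masked) self-attention: hide subsequent positions, so each token
# attends only to itself and previous tokens.
future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
causal_weights = softmax(np.where(future, -np.inf, scores))

# Bidirectional self-attention: the exact same operation minus the masking
# step, so every token is mapped to every other token in the sequence.
bidirectional_weights = softmax(scores)

causal_out = causal_weights @ V
bidirectional_out = bidirectional_weights @ V
```

In BERT's encoder, it is this unmasked variant that lets each token's representation draw on both its left and right context.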
7 changes: 7 additions & 0 deletions course/introduction/02_transformer_heads.ipynb
@@ -38,6 +38,13 @@
"\n",
"![Q&A BERT showing additional start/end token classifiers](../../assets/images/qa_linear_bert.png)"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
],
"metadata": {
32 changes: 32 additions & 0 deletions course/similarity/00_intro.ipynb
@@ -0,0 +1,32 @@
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "ML",
+ "language": "python",
+ "name": "ml"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+ }