# Algorithm Whiteboard Resources

This is where we share the notebooks and projects used on our YouTube channel.

## Video 1: DIET Architecture 1: How it Works

This video explains the parts of the DIET architecture. It does not discuss any code.

## Video 2: DIET Architecture 2: Design Decisions

This video explains the design decisions behind the DIET architecture. It does not discuss any code.

## Video 3: DIET Architecture 3: Benchmarking

In this video we make changes to a configuration file. The configuration files, the Streamlit application, as well as an instruction manual can be found in the diet folder.

## Video 4: Word Embeddings 1: Letter Embeddings

In this video we demonstrate how to train letter embeddings in order to gain intuition about what word embeddings are.

The Kaggle dataset that we use in this video can be found here.

We've added the two notebooks to this repo in the letter-embeddings folder, but you can also run them yourself in Google Colab. The notebooks are mostly identical: the v1 notebook uses one token to predict the next one, while v2 uses two tokens.

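To give a flavour of what happens in those notebooks, here is a minimal sketch (not the notebooks' exact code) of the v1 idea: learn a small embedding per letter by predicting the next letter. The short name list below is a stand-in for the Kaggle dataset.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Embedding, Flatten
from tensorflow.keras.models import Sequential

names = ["tintin", "asterix", "obelix"]  # stand-in for the Kaggle dataset
chars = sorted({c for name in names for c in name})
char_to_idx = {c: i for i, c in enumerate(chars)}

# one-token version: (current letter) -> (next letter) training pairs
X = np.array([[char_to_idx[a]] for name in names for a, b in zip(name, name[1:])])
y = np.array([char_to_idx[b] for name in names for a, b in zip(name, name[1:])])

model = Sequential([
    Embedding(input_dim=len(chars), output_dim=2),  # 2-D so the letters can be plotted
    Flatten(),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=100, verbose=0)

letter_vectors = model.layers[0].get_weights()[0]  # one 2-D vector per letter
```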

## Video 5: Word Embeddings 2: CBOW and Skip Gram

This video explains two algorithms, but it does not discuss any code.


## Video 6: Word Embeddings 3: GloVe

This video discusses GloVe and also offers code so you can train a variant of your own. The Keras model can be found in the glove folder. The glove.py file contains just the Keras algorithm, while the notebook contains the full code. You can also open the full notebook in Google Colab.

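As a taste of the idea in glove.py, here is a simplified GloVe-style model in Keras (a sketch, not the file's exact contents; GloVe proper also weights each pair by its co-occurrence count): the dot product of two word vectors, plus two bias terms, is trained to approximate the log co-occurrence of the word pair.

```python
from tensorflow.keras.layers import Add, Dot, Embedding, Flatten, Input
from tensorflow.keras.models import Model

vocab_size, dim = 1000, 20  # illustrative sizes

w1, w2 = Input(shape=(1,)), Input(shape=(1,))
vec1 = Flatten()(Embedding(vocab_size, dim)(w1))
vec2 = Flatten()(Embedding(vocab_size, dim)(w2))
bias1 = Flatten()(Embedding(vocab_size, 1)(w1))
bias2 = Flatten()(Embedding(vocab_size, 1)(w2))

# dot(vec1, vec2) + bias1 + bias2 ~ log(co-occurrence count of w1 and w2)
output = Add()([Dot(axes=1)([vec1, vec2]), bias1, bias2])
model = Model(inputs=[w1, w2], outputs=output)
model.compile(loss="mse", optimizer="adam")  # GloVe proper uses a weighted loss
```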

## Video 7: Word Embeddings 4: Whatlies

This video discusses a small visualisation package we've open-sourced. The documentation for it can be found here.

The notebook that we made in this video can be found in the whatlies folder.
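
If you just want a taste, the snippet below follows the whatlies API as documented around the time of the video (it may have changed since): wrap a language backend, index out a set of word embeddings, and plot them against two word axes.

```python
from whatlies.language import SpacyLanguage

# requires: pip install whatlies && python -m spacy download en_core_web_md
lang = SpacyLanguage("en_core_web_md")
words = lang[["cat", "dog", "fish", "king", "queen"]]
words.plot_interactive(x_axis=words["king"], y_axis=words["queen"])
```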

## Video 8: Attention 1: Self Attention

This video discusses the idea behind attention (you may notice some similarities with a convolution), but it does not discuss any code.
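
The bare mechanism fits in a few lines of numpy: every token vector is replaced by a weighted average of all token vectors, with weights coming from softmaxed dot products. A minimal sketch with random vectors:

```python
import numpy as np

tokens = np.random.randn(4, 8)  # 4 tokens, each an 8-dim embedding

scores = tokens @ tokens.T      # how much each token "matches" every other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax rows
reweighted = weights @ tokens   # context-aware vectors, same shape as the input
```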

## Video 9: Attention 2: Keys, Values, Queries

This video discusses how you can add more context to the self-attention mechanism by introducing layers. It does not discuss any code, though.

## Video 10: Attention 3: Multi-Head Attention

This video explains how you can increase the potential of attention by introducing multiple sets of keys, queries, and values. The video does not discuss any code, though.
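
Roughly, each head gets its own trainable query/key/value projections. A sketch, with random matrices standing in for trained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, heads, head_dim = 8, 2, 4
tokens = rng.normal(size=(4, dim))   # 4 tokens in, 4 context-aware tokens out

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

outputs = []
for _ in range(heads):
    # per head: trainable projections (random here) for queries, keys, values
    W_q, W_k, W_v = (rng.normal(size=(dim, head_dim)) for _ in range(3))
    q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
    attention = softmax(q @ k.T / np.sqrt(head_dim))
    outputs.append(attention @ v)

multi_head = np.concatenate(outputs, axis=-1)  # shape (4, heads * head_dim)
```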

## Video 11: Attention 4: Transformers

Given the lessons from the previous videos, this video ties everything together by combining it into a transformer block. There is no code for this video.
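
Under heavy simplifications (a single head, random weights instead of trained ones, no masking), the shape of such a block looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W1, W2 = rng.normal(size=(dim, 32)), rng.normal(size=(32, dim))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x):
    return softmax(x @ x.T / np.sqrt(dim)) @ x

def feed_forward(x):
    return np.maximum(x @ W1, 0) @ W2  # one hidden ReLU layer

def layer_norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)

def transformer_block(x):
    x = layer_norm(x + self_attention(x))   # residual connection around attention
    return layer_norm(x + feed_forward(x))  # residual connection around the FFN

tokens = rng.normal(size=(4, dim))
out = transformer_block(tokens)  # same shape as the input: (4, 8)
```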

## Video 12: StarSpace

This video discusses the StarSpace algorithm and serves as an introduction to the TED policy. It contains no code.


## Video 13: TED Policy

This video only discusses the theory behind the TED algorithm. The next video shows how TED works on a more practical level. This video contains no code.

## Video 14: TED in Practice

This video makes use of a Rasa project that can be found here. By tuning the max_history hyperparameter we see how the chatbot is able to deal with context switches over a long stretch of dialogue.

## Video 15: Response Selection

This video explains how a response selection model might make your assistant more accurate in an FAQ/chitchat scenario. There is no code for this video.

## Video 16: Response Selection: Implementation

This video explains how a response selection model is implemented internally. There is no code for this video.
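
In spirit (this is a sketch, not Rasa's actual implementation), the model embeds the user message and all candidate responses into one space and picks the most similar candidate. The `embed` function below is a hypothetical stand-in; a real model learns that mapping.

```python
import numpy as np

def embed(text, dim=16):
    # hypothetical stand-in featurizer: a real model learns this mapping
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=dim)

def select_response(message, candidates):
    m = embed(message)
    sims = [float(m @ embed(c)) / (np.linalg.norm(m) * np.linalg.norm(embed(c)))
            for c in candidates]
    return candidates[int(np.argmax(sims))]

# with a *trained* embed(), this would return the matching answer
answer = select_response(
    "how do I reset my password?",
    ["Click 'forgot password' on the login page.", "Our office opens at 9am."],
)
```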


## Video 17: CountVectors

This video explains why CountVectors are still the unsung hero of natural language processing. There is no code attachment for this video.
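
For the uninitiated, this is all it takes with scikit-learn: each sentence becomes a (sparse) vector of token counts.

```python
from sklearn.feature_extraction.text import CountVectorizer

texts = ["we are hungry", "we are very very hungry"]
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # sparse matrix of shape (2, vocab size)

print(vectorizer.get_feature_names_out())  # ['are' 'hungry' 'very' 'we']
print(X.toarray())                         # [[1 1 0 1], [1 1 2 1]]
```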


## Video 18: Subword Embeddings

This video tries to combine the ideas behind word embeddings with the idea behind CountVectors. To reproduce it, check out whatlies.

## Video 19: Implementing Subword Embeddings

This video explains how you might implement subword embeddings from a neural network design perspective. There is no code for this video.
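
One fastText-flavoured version of the design (a sketch; the hash trick and the n-gram size are just one possible choice): a word's vector is the sum of the embeddings of its character n-grams, so unseen words still get a vector from the pieces they share with seen words.

```python
import numpy as np

NUM_BUCKETS, DIM = 10_000, 16
table = np.random.default_rng(0).normal(size=(NUM_BUCKETS, DIM))

def ngrams(word, n=3):
    padded = f"<{word}>"  # boundary markers so prefixes/suffixes are distinct
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def word_vector(word):
    idxs = [hash(g) % NUM_BUCKETS for g in ngrams(word)]
    return table[idxs].sum(axis=0)  # in a real model, `table` is trained

print(word_vector("hello").shape)  # (16,)
```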

## Video 20: BytePair Embeddings

This video explains how BytePair embeddings work. If you want to use these embeddings in Rasa, please check out rasa-nlu-examples.
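
You can also play with the pretrained embeddings directly through the bpemb package (which, to our knowledge, is what the BytePair featurizer in rasa-nlu-examples builds on):

```python
from bpemb import BPEmb

# downloads a small pretrained English model on first use
bpemb_en = BPEmb(lang="en", vs=10000, dim=100)

print(bpemb_en.encode("uninteresting"))       # subword pieces, e.g. ['▁un', 'interest', 'ing']
print(bpemb_en.embed("uninteresting").shape)  # one 100-dim vector per piece
```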

## Video 21: Dense CountVectors

This video explains how count vectors might be turned from sparse into dense layers. While doing this, we also learn that these vectors encode Levenshtein distance.
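
One way to get such dense vectors (a sketch of the idea, not necessarily the video's code): squeeze character-level count vectors through a small autoencoder and keep the bottleneck. Words within a small edit distance share most of their character counts, so they tend to land close together.

```python
from sklearn.feature_extraction.text import CountVectorizer
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

words = ["hello", "hallo", "hullo", "world", "word"]
X = CountVectorizer(analyzer="char").fit_transform(words).toarray().astype("float32")

inputs = Input(shape=(X.shape[1],))
bottleneck = Dense(2)(inputs)                # the dense representation we keep
outputs = Dense(X.shape[1])(bottleneck)
autoencoder = Model(inputs, outputs)
autoencoder.compile(loss="mse", optimizer="adam")
autoencoder.fit(X, X, epochs=200, verbose=0)

encoder = Model(inputs, bottleneck)
dense_vectors = encoder.predict(X)           # similar words end up near each other
```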

## Video 22: Measuring Bias in Word Embeddings

This video explains how you might measure gender bias in word embeddings. It's part of a larger series, and the code for it can be found in the bias folder of this repository.
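
The simplest version of such a measurement, with tiny made-up vectors purely for illustration (the bias folder has the real code): build a gender direction from a pair like "he"/"she" and project other words onto it.

```python
import numpy as np

vec = {  # made-up 2-D embeddings, purely for illustration
    "he": np.array([1.0, 0.1]),
    "she": np.array([-1.0, 0.1]),
    "nurse": np.array([-0.7, 0.5]),
    "doctor": np.array([0.6, 0.5]),
}

gender = vec["he"] - vec["she"]
gender = gender / np.linalg.norm(gender)

for word in ["nurse", "doctor"]:
    lean = float(vec[word] @ gender)   # sign and size = lean along the gender axis
    print(word, round(lean, 2))        # nurse -0.7, doctor 0.6
```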

## Video 23: De-biasing Word Embeddings

There's a lot of research on how we might remove bias from word embeddings. In this video we discuss one such technique. For the code, check the bias folder of this repository.
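
One well-known technique in this family (a sketch, not necessarily the exact one in the video) is to project out the gender direction, so that the word no longer has a component along it:

```python
import numpy as np

gender = np.array([1.0, 0.0])   # assume a unit-length gender direction
nurse = np.array([-0.7, 0.5])   # made-up embedding, as in the sketch above

def remove_component(v, direction):
    return v - (v @ direction) * direction  # drop v's component along `direction`

print(remove_component(nurse, gender))  # [0.  0.5] -> no gender component left
```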

## Video 24: Limits of De-biasing (Part 1)

In this video we explain why de-biasing techniques have limits. For the code, check the bias folder of this repository.

## Video 25: Limits of De-biasing (Part 2)

In this video we continue the discussion of why de-biasing techniques have limits. For the code, check the bias folder of this repository.

## Video 26: Word Analogies

In this video we explain why "word analogies" don't really work by merely applying arithmetic on word-vectors. For the code, check the analogies folder of this repository.
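
A toy demonstration of the catch (made-up vectors, chosen so that, as in real embeddings, "man" and "woman" sit close together; the analogies folder has the real experiments): the raw arithmetic answer for king - man + woman is "king" itself, and you only get "queen" because implementations quietly exclude the query words.

```python
import numpy as np

vec = {  # made-up vectors; "man" and "woman" deliberately close together
    "king": np.array([0.2, 0.9]),
    "queen": np.array([-0.5, 0.9]),
    "man": np.array([0.9, 0.1]),
    "woman": np.array([0.85, 0.2]),
}
target = vec["king"] - vec["man"] + vec["woman"]

def nearest(t, exclude=()):
    cos = {w: float(v @ t) / (np.linalg.norm(v) * np.linalg.norm(t))
           for w, v in vec.items() if w not in exclude}
    return max(cos, key=cos.get)

print(nearest(target))                                    # 'king': the query word wins
print(nearest(target, exclude={"king", "man", "woman"}))  # 'queen'
```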

## Video 27: Toxic Language

In this video we explain why detecting toxic language is harder than it might seem. Code for the video can be found in the toxic folder in this repository.
