Commit af936de

Add files via upload

bionictoucan authored Nov 15, 2018
1 parent ccfd2c1 commit af936de
Showing 2 changed files with 313 additions and 0 deletions.

313 changes: 313 additions & 0 deletions Edinburgh2018/Exercises.ipynb
@@ -0,0 +1,313 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Exercises on Unsupervised Learning with Scikit-Learn\n",
"\n",
"## 1. Introduction\n",
"\n",
"Unsupervised machine learning is when the user steps back and trusts the computer's intuition in finding patterns and correlations within the data. We have discussed different methods for classical unsupervised learning (pre-deep learning) and will utilise these on an artificial dataset to see how they work and where they fall short. To do this we will be using various modules from the ```scikit-learn``` python package.\n",
"\n",
"The first thing we need to do is import the relevant packages we will be using. We need ```numpy``` for loading the data and manipulating arrays and we use ```matplotlib``` as our plotting tool."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%matplotlib notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we must load the dataset. The data is saved in ```.npz``` format. This is a data format native to ```numpy``` and works to save multiple ```numpy``` arrays to disk without losing any information. For more information check out the [`scipy` documentation](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.lib.format.html#module-numpy.lib.format). ```.npz``` is essentially a ```.zip``` of ```numpy``` arrays. Loading a ```.npz``` file returns a ```dict```-like object where the arrays have the keys corresponding to the names assigned when saving the file. e.g.\n",
"\n",
"```python\n",
">>> import numpy as np\n",
">>> a = np.array([[1,2],[3,4]])\n",
">>> np.savez_compressed(\"a.npz\",data=a)\n",
">>> f = np.load(\"a.npz\")\n",
">>> f[\"data\"] == a\n",
"array([[True, True],\n",
" [True, True]])\n",
"```\n",
"\n",
"In the example above, we use the function ```savez_compressed``` to save the data which creates a compressed ```.npz``` file but the function ```savez``` to create an uncompressed ```.npz``` file also exists (but when loading the data will make your code slower. ```.npz``` files preserve the structure of the arrays so there is no need to worry about doing any data manipulations when working with these files.\n",
"\n",
"Edit the template below to load the data we will use for the exercises."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"data = np.load(\"data.npz\")[\"data\"] #I have saved the data array to the key \"data\" in our .npz file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Exercise 1: Dimensionality Reduction of the Data\n",
"\n",
"Explore the data you have just loaded in:\n",
"\n",
"* What is the shape of the data?\n",
"* What does this tell you about the number of features that describe the data?\n",
"* What does the data look like projected onto 2D (try mixtures of the feature dimensions)?\n",
"* How many different types of data are there?\n",
"* How might you go about embedding this data on a lower dimensional space while still maintaining most of the information?\n",
"\n",
"We will use classical PCA (Principal Component Analysis) to reduce the dimensions of our data (as not all of our features are important to learn about the variability in the data)."
]
},
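{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving on to PCA, here is a minimal exploration sketch to get you started. It assumes the array loaded above is named ```data```; the two feature columns chosen for the scatter plot are an arbitrary choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a minimal exploration sketch -- assumes the (n_samples, n_features) array is named data\n",
"print(data.shape) #number of samples and number of features\n",
"plt.figure()\n",
"plt.scatter(data[:,0],data[:,1]) #project onto two arbitrarily chosen feature dimensions\n",
"plt.xlabel(\"Feature 1\")\n",
"plt.ylabel(\"Feature 2\");"
]
},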
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.decomposition import PCA #import the PCA class from scikit-learn"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"PCA (as discussed in the lecture) is a dimensionality reduction technique which finds the *natural* basis for the data and finds the affine transformation required to translate to this basis in which each principal component has maximum variance. Each principal component can be thought of as a linear combination of the features defining the data. In our example, our data is described by 10 features with 400 samples.\n",
"\n",
"Firstly, we want to work out the 10 principal components and which are relevant in describing our data. This can be done using the PCA class we have just imported. After performing a PCA analysis on the data, have a look at the `explained_variance_ratio_` attribute of the PCA class to see what components are important.\n",
"\n",
"The documentation for PCA can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#use this space here to write your code for doing the simple PCA analysis"
]
},
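{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you get stuck, a minimal sketch of the fit might look like the following (the variable name ```pca``` is an assumption, chosen to match the bar chart skeleton below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a minimal sketch: fit PCA with all 10 components so we can inspect their importance\n",
"pca = PCA(n_components=10)\n",
"pca.fit(data)\n",
"print(pca.explained_variance_ratio_) #fraction of the total variance captured by each component"
]
},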
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A plot can also be made of the important principal components which show the importance of each. For this we should use the `plt.bar` from `matplotlib`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#skeleton script for bar chart of the principal components\n",
"plt.figure()\n",
"plt.bar(np.arange(10),pca.explained_variance_ratio_,tick_label=np.arange(10)+1)\n",
"plt.yscale(\"log\")\n",
"plt.ylabel(\"Variance encompassed by each principal component\")\n",
"plt.xlabel(\"Principal component no.\");"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hint: you may want to plot the most important principal components on a linear to see the relative importance between them."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A good rule of thumb is that any sum of principal components that corresponds to more than 95% of the variance in the system are good enough to describe your system of data. That is, we discard the dimensions that make up very little of the variation in our data. We can then use the PCA class to reduce the size of the dataset to be described by the samples expressed in the basis of the most important principal components (the number of which will be set by you)."
]
},
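{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to apply this rule of thumb is to plot the cumulative explained variance; a sketch is given below, assuming the fitted ```pca``` object from earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: cumulative explained variance, assuming the fitted pca object from above\n",
"cum_var = np.cumsum(pca.explained_variance_ratio_)\n",
"plt.figure()\n",
"plt.plot(np.arange(10)+1,cum_var,marker=\"o\")\n",
"plt.axhline(0.95,linestyle=\"--\") #the 95% rule of thumb\n",
"plt.xlabel(\"Number of principal components\")\n",
"plt.ylabel(\"Cumulative explained variance\");"
]
},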
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#use this space here to write your code for reducing the dimensions of the code"
]
},
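{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the reduction itself follows; the choice of 2 components is an assumption, so replace it with the number you settled on."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: re-fit PCA keeping only the leading components and project the data onto them\n",
"n_keep = 2 #an assumption -- use the number suggested by your explained variance plots\n",
"pca_reduced = PCA(n_components=n_keep)\n",
"data_reduced = pca_reduced.fit_transform(data) #shape (400, n_keep)\n",
"print(data_reduced.shape)"
]
},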
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After successfully reducing the dimensions in your data, try plotting the permutations of the principal components to see which gives you the best view of the data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Questions to be answered:\n",
"\n",
"1. What is the best permuted visualisation of the data in terms of the principal components that encompass the most variance?\n",
"2. Do 2D projections of the original data make it obvious where the principal components lie?\n",
"3. What led you to the choices you made for the most important principal components of the data?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Exercise 2: Hierarchical Clustering\n",
"\n",
"Now that we can see what the data looks like when expressed in terms of its most important principal components, we can see the true separation of the classes. Our next job is to cluster this dataset using hierarchical clustering covered in the lecture. This should be very easy to see where the clusters lie."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from scipy.cluster.hierarchy import linkage, dendrogram, fcluster"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here we use agglomerative clustering which is where we start with each data point being a cluster, and with our defined distance metric calculate the pairwise distance between each points. Following this, we take the two closest clusters via our linkage method and merge them to become a new cluster. This algorithm concludes when all data points are part of one large cluster.\n",
"\n",
"Note: *divisive* clustering also exists. This is where we start with all data points being in one cluster and do the reverse. However, this has problems when splitting the data into smaller clusters as it is difficult to define the right heuristic to use to separate these clusters. As a result, agglomerative clustering is the preferred method."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Try different linkage methods, affinities and cluster numbers and see how changing these hyperparameters changes the clustering results. It should be obvious from the previous plot what the number of clusters should be but this may not always be obvious with your data so it's nice to see how the algorithm works with more or less clusters than is actually needed. Documentation for agglomerative clustering can be found [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#use this space to set up your clustering algorithm"
]
},
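{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch using ```scipy```'s ```linkage``` function is given below; the method and metric shown are just one combination to try, and ```data_reduced``` is assumed to be the PCA-reduced data from Exercise 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: build the linkage matrix on the PCA-reduced data\n",
"#\"ward\"/\"euclidean\" is just one combination -- also try e.g. \"single\", \"complete\" or \"average\"\n",
"Z = linkage(data_reduced,method=\"ward\",metric=\"euclidean\")"
]
},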
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[Dendrograms](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.cluster.hierarchy.dendrogram.html) are a good way to visualise the connections made by your clustering algorithm. Perform clustering several times using different combinations of linkages and metrics to see which gives you the best answer. The best answer can be seen in the dendrogram i.e. make plots of each dendrogram and see which gives you the desired result!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#use this space to plot the dendrograms"
]
},
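{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the dendrogram plot, assuming the linkage matrix ```Z``` from the previous cell:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: visualise the merges stored in the linkage matrix Z\n",
"plt.figure()\n",
"dendrogram(Z)\n",
"plt.xlabel(\"Sample index\")\n",
"plt.ylabel(\"Merge distance\");"
]
},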
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once you have used the dendrogram to identify the number of clusters in the data, use the [`fcluster`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.fcluster.html?highlight=s) function to define your clusters and plot them with a different colour representing each label. Try this for each of the combinations of linkages and metrics you done previously as this will show more clearly the correct combination to choose."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#use this space to create clusters and plot them two dimensionally"
]
},
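{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of extracting flat clusters and colouring the 2D projection by label; the cluster count of 3 is an assumption, so use the number your dendrogram suggests."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: cut the dendrogram into a fixed number of flat clusters\n",
"n_clusters = 3 #an assumption -- read the true number off your dendrogram\n",
"labels = fcluster(Z,t=n_clusters,criterion=\"maxclust\")\n",
"plt.figure()\n",
"plt.scatter(data_reduced[:,0],data_reduced[:,1],c=labels) #one colour per cluster label\n",
"plt.xlabel(\"Principal component 1\")\n",
"plt.ylabel(\"Principal component 2\");"
]
},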
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Questions to be answered:\n",
"\n",
"1. How do you use a dendrogram to determine the number of clusters of in a dataset?\n",
"2. How can you use a dendrogram to determine the right linkage method/metric to be used for hierarchical clustering?"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Exercise 3: Clustering with PCA\n",
"\n",
"There are many different clustering techniques with agglomerative being the most simple, easy to use and effective (in my opinion). However, there are ways to identify clusters in the data without using a clustering algorithm. This is what the following example is for.\n",
"\n",
"After performing PCA on our original data, we can take the two most important principal components and plot the data. From here we cans see the clusters in the data clearly, however what if we only wanted to express our data in terms of one of the two of these principal compoenents. Would we still be able to see the clusters?\n",
"\n",
"To do this, we plot [histograms](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) of the data projected onto 1D which shows how the data is spread out over these principal components."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#perform PCA similarly to in Exercise 1 but this time choose only 2 principal components, then plot the projections of the data onto these singular principal components, use this space to plot the histograms"
]
},
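{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of the 1D projections is given below; it re-fits a 2-component PCA as described above and histograms each projection separately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#a sketch: histogram the data projected onto each of the two leading principal components\n",
"pca2 = PCA(n_components=2)\n",
"proj = pca2.fit_transform(data)\n",
"fig, axes = plt.subplots(1,2)\n",
"for i, ax in enumerate(axes):\n",
"    ax.hist(proj[:,i],bins=30)\n",
"    ax.set_xlabel(\"Principal component %d\" % (i+1))"
]
},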
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Questions to be answered:\n",
"\n",
"1. How many clusters do you see when projecting the data onto one principal component?\n",
"2. Is PCA a suitable algorithm for clustering in the case of this data or should we use a different method?\n",
"3. Can you think of a dataset where this kind of clustering would work?"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
Binary file added Edinburgh2018/data.npz
Binary file not shown.
