# Machine-Learning-Projects

  1. Implementation of the KNN algorithm in Python (see the KNN sketch after this list).
  2. Gaussian Naive Bayes classifier in Python (sketch below).
  3. Implementation of the perceptron and the perceptron learning rule for 3-dimensional binary inputs (plus a constant bias input); a sketch follows the list.
  4. OR-perceptron.
  5. XOR-perceptron.
  6. Feed-Forward Neural Networks.
  7. Perceptron-learning-rules.
  8. Randomly assigned weights (within a given range) for a fully-connected 2-layer feed-forward neural network with sigmoid functions as activation functions; a forward-pass sketch follows the list.
  9. SVM solver (e.g. MATLAB's fitcsvm function) to learn the linear SVM parameters; a scikit-learn analogue is sketched below.
  10. Implementation of a self-training system using a logistic regression classifier (a semi-supervised classifier); sketch below.
  11. Implementation of the polynomial fit solver for 2-dimensional input data as a linear regression learner. Make sure the implementation can handle polynomial fits of different orders (at least up to 4th order); see the sketch below.
  12. Consider the problem where we want to predict the gender of a person from a set of input parameters, namely height, weight, and age. Implement logistic regression to classify this data, using the individual data elements, i.e. height, weight, and age, as features (sketch below).
  13. Implementation of Linear Discriminant Analysis (sketch below).
  14. Consider the problem where we want to predict whether we are going to win a game of Tic-Tac-Toe from the current board configuration. To make this decision we have access to the state of the board in the form of 9 attributes reflecting the locations on the board, each with 3 possible values (x, o, b) representing the two players or blank, respectively. There are a training and a test data set for this problem.

a) Show the construction of a 2-level decision tree using minimum entropy as the construction criterion on the training data set. You should include the entropy calculations and the construction decisions for each node you include in the 2-level tree.

b) Implement a decision tree learner for this particular problem that can derive decision trees of arbitrary, pre-determined depth (up to the maximum depth where all data sets at the leaves are pure) using the information gain criterion; the entropy/information-gain computation is sketched below.

c) Apply the tree from part b) to the test data set for all possible tree depths (i.e. 1-9) and compare the classification accuracy on the test set with that on the training set for each depth. For which depths does the result indicate overfitting?

  15. Using the data and decision tree algorithm from problem 14 above, choose a decision tree depth that does not overfit but achieves some baseline classification performance (at least depth 4) and apply bagging to the problem (sketch below).
  16. Using the data and decision tree algorithm from problem 14 and the depth chosen for problem 15, apply boosting to the problem: implement AdaBoost on top of your decision tree classifier (sketch below).
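
## Illustrative sketches

The snippets below are minimal sketches of the techniques listed above, not the repository's actual code; helper names, parameter defaults, and library choices (NumPy, scikit-learn) are assumptions.

For item 1, a minimal KNN classifier, assuming Euclidean distance and majority voting (the distance metric and k used in the project are not specified):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest neighbours
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label
```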
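For item 2, a Gaussian Naive Bayes sketch: per-class priors, feature means, and variances are estimated from the training data, and prediction picks the class with the highest log-posterior (the small variance floor is an assumption for numerical stability):

```python
import numpy as np

def gnb_fit(X, y):
    """Estimate per-class priors, feature means, and variances."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    return classes, priors, means, vars_

def gnb_predict(x, classes, priors, means, vars_):
    """Pick the class with the highest log-posterior under a Gaussian likelihood."""
    log_lik = -0.5 * (np.log(2 * np.pi * vars_) + (x - means) ** 2 / vars_)
    log_post = np.log(priors) + log_lik.sum(axis=1)
    return classes[np.argmax(log_post)]
```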
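For items 3-7, the perceptron learning rule on 3-dimensional binary inputs with a constant bias input appended; the learning rate and epoch cap are placeholders. Trained on the OR truth table (item 4) this converges; on XOR (item 5) it never does, since XOR is not linearly separable:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, epochs=100):
    """Perceptron learning rule; X is (n, 3) binary, y has targets in {0, 1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append the constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(Xb, y):
            out = 1 if w @ xi > 0 else 0           # threshold activation
            w += lr * (ti - out) * xi              # update only on mistakes
            errors += int(out != ti)
        if errors == 0:                            # converged (only if separable)
            break
    return w
```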
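For item 8, random weight initialization and a forward pass for the fully-connected 2-layer sigmoid network. The weight range is not stated in the README, so the uniform [-1, 1] interval here is an arbitrary placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_network(n_in, n_hidden, n_out, low=-1.0, high=1.0):
    """Random uniform weights; [-1, 1] is a placeholder range."""
    W1 = rng.uniform(low, high, size=(n_hidden, n_in + 1))   # +1 column for the bias
    W2 = rng.uniform(low, high, size=(n_out, n_hidden + 1))
    return W1, W2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, W2):
    """Forward pass through the 2-layer network with sigmoid activations."""
    h = sigmoid(W1 @ np.append(x, 1.0))
    return sigmoid(W2 @ np.append(h, 1.0))
```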
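For item 9, the README points to MATLAB's fitcsvm; a rough Python analogue, assuming scikit-learn, that recovers the linear SVM parameters w and b (the toy data is made up for illustration):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [3.0, 3.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]   # decision boundary: w @ x + b = 0
print(w, b)
```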
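For item 10, a self-training loop, assuming scikit-learn's LogisticRegression; the confidence threshold and round cap are placeholder values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Self-training: iteratively pseudo-label the most confident unlabelled points."""
    clf = LogisticRegression().fit(X_lab, y_lab)
    for _ in range(max_rounds):
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():                    # nothing confident enough left
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
        clf = LogisticRegression().fit(X_lab, y_lab)  # retrain on the enlarged set
    return clf
```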
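For item 11, a polynomial fit of arbitrary order posed as linear regression: the 2-dimensional inputs are expanded into all monomials up to the requested order and the weights are solved by least squares:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, order):
    """All monomials x1^i * x2^j with i + j <= order, including the constant term."""
    cols = [np.ones(X.shape[0])]
    for d in range(1, order + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def poly_fit(X, y, order=4):
    """Least-squares polynomial fit as linear regression in the expanded features."""
    Phi = poly_features(X, order)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w
```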
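For item 12, logistic regression trained by batch gradient ascent on the log-likelihood, with height, weight, and age as the three features (the learning rate and epoch count are placeholders). In practice the features should be standardized first, since they live on very different scales:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.01, epochs=5000):
    """X is (n, 3): height, weight, age; y has labels in {0, 1}."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        w += lr * Xb.T @ (y - p) / len(y)           # gradient of the mean log-likelihood
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (sigmoid(Xb @ w) >= 0.5).astype(int)
```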
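For item 13, a two-class Fisher LDA sketch: project onto `w = Sw^-1 (m1 - m0)` and threshold at the midpoint of the projected class means (the tiny ridge term is an assumption to keep the scatter matrix invertible):

```python
import numpy as np

def lda_direction(X, y):
    """Fisher's linear discriminant for two classes labelled 0 and 1."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix, with a small ridge for invertibility
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    Sw += 1e-6 * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    c = w @ (m0 + m1) / 2.0                 # threshold between projected class means
    return w, c

def lda_predict(X, w, c):
    return (X @ w > c).astype(int)
```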
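For problem 14 (parts a and b), the entropy and information-gain computations that drive the node choices; at each node the board attribute with the largest gain becomes the split:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """H(S) = -sum_i p_i * log2(p_i) over the class frequencies in S."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(attr_values, labels):
    """Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v) for one board attribute A."""
    gain = entropy(labels)
    n = len(labels)
    for v in set(attr_values):               # v ranges over {x, o, b}
        subset = [l for l, a in zip(labels, attr_values) if a == v]
        gain -= len(subset) / n * entropy(subset)
    return gain
```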
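For problem 15, bagging over the depth-limited tree learner from problem 14b. `train_tree` and `predict_tree` are hypothetical stand-ins for that learner's interface, not functions defined in this repository:

```python
import numpy as np

def bagging_fit(X, y, train_tree, n_trees=25, depth=4, seed=0):
    """Train n_trees depth-limited trees on bootstrap resamples of the training set."""
    rng = np.random.default_rng(seed)
    n = len(y)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)        # sample n points with replacement
        trees.append(train_tree(X[idx], y[idx], depth))
    return trees

def bagging_predict(trees, predict_tree, x):
    """Majority vote over the ensemble's predictions for a single point x."""
    votes = [predict_tree(t, x) for t in trees]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]
```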
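For problem 16, AdaBoost on top of the same tree learner, with labels recoded to {-1, +1}. The assumption that `train_tree` accepts per-sample weights is an interface guess; resampling by weight would work if it does not:

```python
import numpy as np

def adaboost_fit(X, y, train_tree, predict_tree, n_rounds=20, depth=4):
    """AdaBoost with weighted decision trees as weak learners; y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # uniform initial sample weights
    trees, alphas = [], []
    for _ in range(n_rounds):
        tree = train_tree(X, y, depth, sample_weight=w)
        pred = np.array([predict_tree(tree, x) for x in X])
        err = w[pred != y].sum()                # weighted training error
        if err <= 0 or err >= 0.5:              # degenerate weak learner; stop
            break
        alpha = 0.5 * np.log((1 - err) / err)   # this learner's vote weight
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified points
        w /= w.sum()
        trees.append(tree)
        alphas.append(alpha)
    return trees, alphas

def adaboost_predict(trees, alphas, predict_tree, x):
    score = sum(a * predict_tree(t, x) for t, a in zip(trees, alphas))
    return 1 if score >= 0 else -1
```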