Machine Learning and Neural Networks

 

Lecture 9. Understanding Experimental Data



9. Understanding Experimental Data

In this lecture, Professor Eric Grimson discusses the process of understanding experimental data, from gathering data to using models to make predictions. He uses the example of a spring to demonstrate the importance of measuring accuracy when predicting linear relationships, and explores different methods for measuring the goodness of fit. Grimson introduces the concept of linear regression and polynomial fits, emphasizing that a high r-squared value doesn't necessarily mean that a higher-order polynomial is the best choice. Grimson uses code to optimize over a 16-dimensional space, leaving the choice of whether or not to use this polynomial fit for the next lecture.

  • 00:00:00 In this section of the lecture, Professor Eric Grimson discusses the importance of understanding experimental data in today's data-intensive world. He emphasizes that whether you're a scientist, an engineer, a social scientist, or in any other profession that deals with data, you need software that can manipulate data to extract useful information. He also talks about the process of conducting an experiment, getting data, and using models to make predictions about the data. Using the example of a spring, he explains how to gather data about it, model it, and write software that can help analyze the data.

  • 00:05:00 In this section, the concept of Hooke's law of elasticity is introduced. The law states that the force required to compress or stretch a spring is linearly correlated to the distance it is compressed or stretched. The negative sign denotes that the force is exerted in the opposite direction of compression or stretching. Hooke's law holds for a wide range of springs but has a limit to how much a spring can be stretched before the law breaks down. The example is given of calculating the force required to compress a spring by one centimeter using Hooke's law and the spring constant.
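
Hooke's law is simple enough to check numerically. The sketch below uses an illustrative spring constant (not necessarily the value from the lecture) to compute the force for a one-centimeter compression.

```python
# Hooke's law: F = -k * x, where k is the spring constant and x the displacement.
def spring_force(k, x):
    """Restoring force (newtons) for spring constant k (N/m) and displacement x (m)."""
    return -k * x

# Illustrative numbers only: a stiff spring with k = 35,000 N/m compressed by 1 cm.
k = 35_000.0
x = 0.01
print(spring_force(k, x))  # -350.0 N: the spring pushes back against the compression
```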

  • 00:10:00 In this section, the speaker explains the process of determining the spring constant through measurements of different masses on a spring. Ideally, a single measurement would suffice, but because masses can be unreliable and springs can contain imperfect materials, multiple trials are necessary to produce a set of measurements with a linear relationship that can be plotted to extract the spring constant. The speaker demonstrates using an array function to scale all values evenly prior to graphing the data points. The ideal linear relationship would allow researchers to calibrate atomic force microscopes and measure force in biological structures.

  • 00:15:00 In this section, the speaker discusses how to fit a line to experimental data and measure the distance between the line and the measured points. They explain that an objective function is needed to determine how good a fit the line is, which is done by finding the line that minimizes the objective function. The speaker also considers various ways to measure the distance, such as the displacement along the x-axis, the displacement vertically, or the distance to the closest point on the line. They ultimately choose the vertical displacement as it measures the dependent value being predicted given a new independent value.

  • 00:20:00 In this section, Eric Grimson explains how to measure the accuracy of a predicted line using the least squares method. The method involves finding the difference between the predicted and observed y- values, squaring them to eliminate the sign, and then summing up these squared differences for all observed values. This sum provides a measure of how the line fits the observed values. By minimizing the sum squared difference, one can find the best-fitting line. Additionally, Grimson discusses how to find the best-fitting curve by assuming that the predicted curve's model is a polynomial and using the linear regression technique to find the degree one or degree two polynomial that best fits the data.
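
A minimal sketch of the least-squares objective just described, assuming plain Python lists of observed and predicted y-values (the names and numbers here are illustrative):

```python
def sum_squared_error(observed, predicted):
    """Least-squares objective: sum of squared vertical differences."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted))

# Measured y-values vs. the values a candidate line predicts at the same x-values
observed = [1.1, 1.9, 3.2, 4.1]
predicted = [1.0, 2.0, 3.0, 4.0]
print(sum_squared_error(observed, predicted))  # ~0.07; the best-fitting line minimizes this
```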

  • 00:25:00 In this section, the concept of linear regression is introduced as the method for finding the lowest point on a surface whose height represents the objective function for every possible line in a two-dimensional space. Linear regression finds the best-fitting line by starting at some point, walking downhill along the gradient some distance, measuring the new gradient, and repeating until the lowest point is reached. The algorithm for doing this is very similar to Newton's method. The section also covers how to use polyfit, a built-in PyLab function, to find the coefficients of a polynomial of a given degree that provides the best least-squares fit.

  • 00:30:00 In this section, the presenter demonstrates how to use Python to fit a line to data and how to change the order of the polynomial being used. They explain that the higher the order of the polynomial used, the closer the fit will be to the data. The presenter provides a visual example of a set of data where fitting a line doesn’t work and fitting a quadratic line is a better fit. They also explain how to use the polyval function to fit any order polynomial and return an array of predicted values, demonstrating the abstract nature of the code.
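
A minimal sketch of the polyfit/polyval workflow described in the last two sections, assuming PyLab (which re-exports NumPy's polyfit and polyval); the data values are made up for illustration:

```python
import pylab

# Hypothetical spring data: forces (x) and measured displacements (y)
x_vals = pylab.array([0.1, 0.2, 0.3, 0.4, 0.5])
y_vals = pylab.array([0.11, 0.21, 0.28, 0.42, 0.49])

# Degree-1 fit: polyfit returns the slope a and intercept b of the best least-squares line
a, b = pylab.polyfit(x_vals, y_vals, 1)

# polyval evaluates a fitted polynomial of any degree at every x
est_y = pylab.polyval((a, b), x_vals)
pylab.plot(x_vals, y_vals, 'bo', label='Measured points')
pylab.plot(x_vals, est_y, 'r', label='Linear fit')
pylab.legend()
pylab.show()
```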

  • 00:35:00 In this section, the speaker discusses how to measure the goodness of fit of experimental data. To compare different models, he suggests measuring the average squared error, as this approach is useful for comparing two models. However, this method has a problem because it does not provide a definitive way of knowing whether one fit is truly better than another fit. To address this issue, the speaker recommends using the coefficient of determination (r-squared), which is scale-independent and can tell how close a fit is to being perfect. He provides a formula for calculating r-squared that involves measuring the difference between the observed and predicted values and the mean error.

  • 00:40:00 In this section, the speaker explains how to calculate the variance and the r-squared value to evaluate the accuracy of a model. The variance can be obtained by dividing the sum of squared errors by the number of samples. The r-squared value indicates how much of the variability in the data is accounted for by the model, and it ranges between zero and one. An r-squared of one means that the model explains all of the variability, while an r-squared of zero means that there is no relationship between the model and the data. The speaker then introduces two functions, genFits and testFits, that generate and test models with different degrees of complexity and return the corresponding r-squared values. These functions can help determine the best fit for a set of data.
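
A minimal sketch of an R-squared computation along the lines described here; this is not the lecture's exact genFits/testFits code, and the names are illustrative:

```python
import numpy as np

def r_squared(observed, predicted):
    """1 minus (mean squared error / variance of the observed data)."""
    observed, predicted = np.array(observed), np.array(predicted)
    mean_error = ((predicted - observed) ** 2).sum() / len(observed)
    return 1 - mean_error / np.var(observed)

obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.9]
print(r_squared(obs, pred))  # ~0.986: the model accounts for most of the variability
```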

  • 00:45:00 In this section, the instructor runs code with quadratic, quartic, 8th-order, and 16th-order polynomial fits to determine the best fit for the data. They explain that this PyLab-style code lets them optimize over a 16-dimensional space while still using linear regression to find the best solution. Although the 16th-order polynomial does an excellent job and has an r-squared value of almost 97%, the instructor warns that a high r-squared value doesn't necessarily mean that using an order-16 polynomial is the best choice. They leave the decision of whether or not to use it until the next lecture.
Source: "9. Understanding Experimental Data", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: Eric Grimson. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 10. Understanding Experimental Data (continue)



10. Understanding Experimental Data (continue)

In this section of the video, the presenter emphasizes the importance of finding the right model to fit experimental data, while also avoiding overfitting. Several methods are discussed, such as using cross-validation to determine the right balance between model complexity and effectiveness in predicting new data. The speaker provides examples of fitting models of different orders to experimental data and demonstrates the effects of overfitting by adding noise to data sets. The R-squared value is also introduced as a tool for determining how well a model fits the data. Overall, the importance of balancing model complexity and effectiveness in predicting new data is highlighted.

  • 00:00:00 In this section, the instructor reminds students that they were previously discussing the concept of fitting models to experimental data in order to understand the data. The goal is to have a model that explains the phenomena underlying the data and can make predictions about the behavior in new settings. However, since data is always noisy, there is a need to account for experimental uncertainty when fitting the model. The instructor recaps the use of polynomial expressions, specifically linear regression, to find coefficients that minimize the differences between observed and predicted data.

  • 00:05:00 In this section, the concept of linear regression is explored in detail. The idea behind linear regression is to represent all possible lines as points in a space with one axis for the a values (slopes) and the other for the b values (intercepts), where the height of the surface above each point is the value of the objective function for that line. The idea is to start at some point on that surface and walk downhill until reaching the bottom; for this surface there is always exactly one bottom, and once that point is reached, the corresponding a and b values give the best line. The section concludes with a discussion of the coefficient of determination, R squared, which is a scale-independent value between 0 and 1 that measures how well a model fits the data.

  • 00:10:00 In this section, the speaker discusses the importance of the R-squared value in fitting models to experimental data. The R-squared value indicates how well the model fits the data, with a value of 1 indicating a perfect fit and a value close to 0 indicating a poor fit. While a higher order model may fit the data better, it is not necessarily the best model to use for explaining the phenomena or making predictions. The speaker also explains how he generated the data for his example using a parabolic function with added noise.

  • 00:15:00 In this section, the speaker discusses how to test the effectiveness of a model by using validation or cross-validation. They generate data from a parabolic arc with added noise and fit models for degrees 2, 4, 8, and 16 using two different datasets. The best-fitting model is still order 16, but the puzzle is why a 16th-order polynomial is the best fit when the data was generated from a degree-2 polynomial. The speaker explains that a small training error is necessary but not sufficient for a great model, and that validation or cross-validation is necessary to see how well the model performs on different data generated from the same process.

  • 00:20:00 In this section, the speaker discusses the use of experimental data and how to fit a model to it. They also explore the importance of testing models on different data sets, as well as the potential for overfitting when using too many degrees of freedom in a model. Through their example, they show that low-order models (e.g. order 2 or 4) may actually be more effective for predicting behavior than high-order models (e.g. order 16) and that it is important to test models on multiple data sets to ensure that they are not too complex.

  • 00:25:00 In this section, the speaker cautions about the dangers of overfitting to data, where a model is designed to fit the training data so perfectly that it cannot fit new datasets. He explains how to use validation to detect overfitting and why higher-order terms of the input variables are unnecessary in some cases. He demonstrates fitting a quadratic model to data generated from a line and shows that the fit drives the higher-order coefficient essentially to zero, because giving it weight would mean fitting the noise and produce a less effective fit. The model predicts new values well until an outlying point is added; once the quadratic starts fitting that noise, its ability to predict new values degrades.

  • 00:30:00 In this section, the speaker introduces the concept of overfitting and demonstrates its effects by adding a small amount of noise to a dataset and fitting both a quadratic and a first-degree model. It is shown that the quadratic model does not perform well with the added noise, whereas the first-degree model is more resilient to it. The speaker emphasizes that finding the right balance between an overly complex model and an insufficiently complex model is crucial in predicting outcomes accurately. The section concludes with a suggested method for finding the right model.

  • 00:35:00 In this section, the video discusses how to determine the best model for a given data set, particularly in cases where there is no theory to guide the choice. One approach is to increase the order of the model until it does a good job at predicting new data but does not overfit the original training data. As an example, the video looks at how Hooke's law applies to stretching a spring and shows that different linear models are needed for different segments of the data, highlighting the importance of segmenting data appropriately. Cross-validation, including leave one out validation and K-fold validation, can also help guide the choice of model complexity when dealing with larger data sets.

  • 00:40:00 In this section, the speaker explains how to use cross-validation to determine the best model for predicting the mean daily high temperature in the US over a 55-year period. They use repeated random sampling to pick random samples from the dataset, train a model on the training set, and test it on the test set. They also compute yearly means for the high temperature to plot it and fit models of degree one through four (linear, quadratic, cubic, and quartic), training on one half of the data, testing on the other half, and recording the coefficient of determination to get an average. They report the mean values for each dimensionality.

  • 00:45:00 In this section, the presenter demonstrates how to randomly split the dataset into training and test sets using the random.sample method. He then runs through a loop where he sets up different training and test sets and fits each dimension using a polynomial fit. The model can then be used to predict the test-set values and compare them to the actual values, computing the R-squared value and recording it. He concludes that running multiple trials is necessary to get statistics on those trials as well as statistics within each trial. This enables them to select the simplest model possible that accounts for the data.
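
A minimal sketch of this repeated-random-subsampling procedure, assuming NumPy for the polynomial fits; the synthetic data and trial count are illustrative, not the lecture's temperature dataset:

```python
import random
import numpy as np

def split_data(x_vals, y_vals):
    """Randomly assign half the points to a test set and the rest to a training set."""
    test_idx = set(random.sample(range(len(x_vals)), len(x_vals) // 2))
    train = [(x, y) for i, (x, y) in enumerate(zip(x_vals, y_vals)) if i not in test_idx]
    test = [(x, y) for i, (x, y) in enumerate(zip(x_vals, y_vals)) if i in test_idx]
    return train, test

# Synthetic stand-in for 55 yearly mean temperatures with a slight upward trend
x_vals = list(range(55))
y_vals = [10 + 0.05 * x + random.gauss(0, 1) for x in x_vals]

degrees = (1, 2, 3, 4)
r_sq = {d: [] for d in degrees}
for trial in range(10):
    train, test = split_data(x_vals, y_vals)
    train_x, train_y = zip(*train)
    test_x, test_y = zip(*test)
    for d in degrees:
        model = np.polyfit(train_x, train_y, d)
        estimates = np.polyval(model, test_x)
        sse = sum((e - o) ** 2 for e, o in zip(estimates, test_y))
        r_sq[d].append(1 - (sse / len(test_y)) / np.var(test_y))

for d in degrees:
    print(d, round(float(np.mean(r_sq[d])), 4))  # mean R^2 per degree across trials
```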

  • 00:50:00 In this section, the speaker discusses the complexity of models that can effectively predict new data based on experimental data. This complexity can come from theory or from cross-validation, which is used to determine the simplest model that still does a good job of predicting behavior on new data.
Source: "10. Understanding Experimental Data (cont.)", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: Eric Grimson. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 11. Introduction to Machine Learning



11. Introduction to Machine Learning

The video discusses the concept of machine learning, how it works, and two common ways of doing it: supervised and unsupervised learning. It then goes on to show an example of supervised learning: training a machine to predict the position of new football players based on their height and weight.

  • 00:00:00 This opening section gives a general overview of the lecture. It starts by introducing the idea of machine learning and its various applications, before discussing the two main methods of machine learning covered here: classification and clustering. The video then revisits the basics of linear regression before discussing machine learning in more detail, and closes by restating the key machine learning concepts for students in a more concise way.

  • 00:05:00 Machine learning is the process of a computer learning without being explicitly programmed. In this lecture, we discuss some of the different types of machine learning algorithms and how they work. We also highlight a few examples of where machine learning is being used currently.

  • 00:10:00 This video discusses the idea of machine learning, how it works, and two common ways of doing it: supervised and unsupervised learning. It then goes on to show an example of supervised learning: training a machine to predict the position of new football players based on their height and weight.

  • 00:15:00 In this video, a machine learning algorithm is demonstrated that can be used to create clusters of data based on distance. The algorithm works by picking two examples as exemplars, clustering all the other examples by putting each one in the group whose exemplar it is closest to, and then finding the median element of each group.

  • 00:20:00 Machine learning is a process of learning how to identify patterns in data. The process starts by training a machine learning model on labeled data, and then using that model to identify patterns in unlabeled data. There are two main ways to do this: using labeled data and using unlabeled data. In the first case, the machine learning model is able to identify patterns in the data that correspond to labels that were assigned to it. In the second case, the machine learning model is able to identify patterns in the data that correspond to features that were selected by the user.

  • 00:25:00 This video discusses the concept of feature engineering, which is the process of determining what features to measure and how to weight them in order to create a model that is as accurate as possible. The example used is of labeling reptiles, and while it is easy to label a single example, it becomes more difficult as the number of examples increases. The video then goes on to discuss the concept of feature selection, which is the process of choosing which features to keep and which to discard in order to create a model that is as accurate as possible. The video finishes with an example of labeling chickens, which does not fit the model for reptiles, but does fit the model for chicken.

  • 00:30:00 The video provides an introduction to machine learning and its principles. It covers the importance of designing a system that will never falsely label any data as being something that it is not, using the example of a game where two players are trying to determine the difference between each other. It introduces the Minkowski metric, which is a way to measure distance between vectors.

  • 00:35:00 This video introduces Euclidean distance, the standard straight-line distance measurement in the plane, and Manhattan distance, a metric often used when comparing objects described by different kinds of features. Euclidean distance is the square root of the sum of squared differences between components, while Manhattan distance sums the absolute differences along each axis, like walking between points on a grid. In some cases, such as when comparing the number of legs of different creatures, a single feature's scale can dominate the distance, so how features are represented can matter more than the raw distance between the objects. Feature engineering—choosing which features to measure and how to weight them—is important in machine learning.
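
A minimal sketch of the Minkowski metric mentioned above; the feature vectors and animal names are illustrative, not the lecture's exact examples:

```python
def minkowski_dist(v1, v2, p):
    """Minkowski distance: p=1 is Manhattan distance, p=2 is Euclidean distance."""
    return sum(abs(a - b) ** p for a, b in zip(v1, v2)) ** (1 / p)

# Illustrative feature vectors: (number of legs, has scales, lays eggs)
rattlesnake = (0, 1, 1)
dart_frog = (4, 0, 1)
print(minkowski_dist(rattlesnake, dart_frog, 1))  # Manhattan: 5.0
print(minkowski_dist(rattlesnake, dart_frog, 2))  # Euclidean: ~4.12 (square root of 17)
```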

  • 00:40:00 This video covers the importance of scales and how they can affect how a machine learning algorithm works. It discusses how weights can be used in different ways and how to measure distance between examples. It also discusses how to cluster data using a number of methods and how to choose the right number of clusters.

  • 00:45:00 This video introduces the concept of machine learning, and demonstrates how to fit a curve to data to separate two groups. It also provides an example of how to evaluate a machine learning model.

  • 00:50:00 This video discusses the trade-off between sensitivity (the fraction of actual positives that are correctly labeled) and specificity (the fraction of actual negatives that are correctly rejected). Professor Guttag demonstrates ROC (Receiver Operating Characteristic) curves, a technique that helps to make this trade-off easier to understand.
Source: "11. Introduction to Machine Learning", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: Eric Grimson. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 12. Clustering




12. Clustering

This video reviews the concept of clustering data points into groups. It explains how to perform clustering using the k-means algorithm, and how to optimize the algorithm for speed. It also discusses how to use clustering to diagnose problems with data.

  • 00:00:00 The objective of this video is to review the concepts of variability and clustering. The video explains that variability is the sum of the squared distances between the mean of a cluster and each example in the cluster, and that clustering is the optimization problem of partitioning a set of examples into clusters so that the examples within each cluster are as similar as possible.

  • 00:05:00 Hierarchical clustering is a method for clustering items in a data set. The algorithm starts by assigning each item to its own cluster and then repeatedly finds the two most similar clusters and merges them into a single cluster, continuing until a stopping condition is met, such as only a desired number of clusters (for example, five) remaining.

  • 00:10:00 The video discusses different linkage metrics and explains how each one affects the final clustering results. For example, single linkage defines the distance between two clusters as the shortest distance between any pair of their members, while complete linkage uses the greatest such distance; in the example of clustering cities, the choice of linkage changes which cities end up grouped together.
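
A minimal sketch of the two linkage metrics, using made-up one-dimensional "city" positions; a real use would plug in actual pairwise distances:

```python
def single_linkage(cluster_a, cluster_b, dist):
    """Cluster distance = the smallest distance between any pair of members."""
    return min(dist(a, b) for a in cluster_a for b in cluster_b)

def complete_linkage(cluster_a, cluster_b, dist):
    """Cluster distance = the largest distance between any pair of members."""
    return max(dist(a, b) for a in cluster_a for b in cluster_b)

dist = lambda a, b: abs(a - b)             # 1-D stand-in for an inter-city distance table
east, west = [0, 2, 3], [10, 12]
print(single_linkage(east, west, dist))    # 7  (closest pair: 3 and 10)
print(complete_linkage(east, west, dist))  # 12 (farthest pair: 0 and 12)
```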

  • 00:15:00 The video explains how clustering works; the most commonly used algorithm is k-means, which is fast and efficient and can be further optimized to run even faster.

  • 00:20:00 In this video, the author explains how k-means clusters objects by randomly selecting K centroids and assigning each point to its nearest centroid. The author also discusses the potential downside of choosing K incorrectly and recommends approaches for finding a good K.

  • 00:25:00 In this video, the author walks through how to perform hierarchical clustering and k-means on a subset of data. He also discusses the algorithm's weaknesses and how to fix them.

  • 00:30:00 This video explains how to cluster data using the k-means algorithm. The data is divided into clusters, and the centroids of each cluster are computed.
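
A minimal one-dimensional sketch of the k-means loop (assign each point to the nearest centroid, then recompute centroids until they stop moving); this is not the lecture's code and skips the usual trick of restarting from several random initializations:

```python
import random

def k_means(points, k, num_iters=100):
    centroids = random.sample(points, k)              # random initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(num_iters):
        clusters = [[] for _ in range(k)]
        for p in points:                              # assignment step
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        new_centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        if new_centroids == centroids:                # converged: centroids stopped moving
            break
        centroids = new_centroids
    return centroids, clusters

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
print(k_means(points, 2)[0])   # roughly [1.0, 5.07]; order depends on initialization
```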

  • 00:35:00 In this lecture, the professor explains how to cluster data using scaling and variance. He shows how to scale a feature vector and how to calculate the mean and standard deviation of the scaled data.
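
A minimal sketch of z-scaling (standardization) as described in this section, assuming NumPy; the feature values are made up:

```python
import numpy as np

def z_scale(vals):
    """Subtract the mean and divide by the standard deviation of the feature."""
    vals = np.array(vals, dtype=float)
    return (vals - vals.mean()) / vals.std()

heart_rate = [60, 72, 80, 90, 110]          # illustrative feature values
scaled = z_scale(heart_rate)
print(scaled.mean(), scaled.std())          # approximately 0 and 1 after scaling
```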

  • 00:40:00 This video explains how to cluster data using different methods, including Z scaling, interpolation, and k-means. The results show that the data is not clustered well, and that there is no statistically significant difference between the two clusters.

  • 00:45:00 The video discusses how clustering can be used to diagnose problems with data. In particular, it demonstrates how clustering can be used to find groups of patients with similar characteristics, such as those who are likely to be positive. The video then goes on to show how clustering can be used with different values of K, which increases the number of clusters found.

  • 00:50:00 In this video, data scientists discuss clustering. They explain that clustering is the process of grouping data together into similar groups. They discuss how different parameters can be used to create different clusters, and how the data scientist must think about the data in order to create the best clusters.
Source: "12. Clustering", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: John Guttag. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 13. Classification



13. Classification

This video covers several classification methods including nearest neighbor, K-nearest neighbors (KNN), and logistic regression. The presenter demonstrates KNN using animal classification and handwriting recognition examples and explains how it avoids noisy data to provide more reliable outcomes. They introduce the Titanic dataset and explain the importance of finding the right balance when using metrics such as sensitivity and specificity to evaluate a classification model's performance. Additionally, the video discusses two testing methods, leave-one-out and repeated random subsampling, and how to apply them to KNN classification. Finally, the presenter explains why logistic regression is preferred over linear regression for classification problems, highlighting its ability to assign different weights to different variables and provide insights on variables through feature weights.

  • 00:00:00 In this section, the instructor begins by introducing the concept of classification in supervised learning, which is the act of predicting a discrete value, often referred to as a "label," associated with a feature vector. This can include predicting whether someone will have an adverse reaction to a drug or their grade in a course. The instructor then provides an example using a distance matrix and binary representation of animals to classify them as reptiles or not. The simplest approach to classification, known as nearest neighbor, involves remembering the training data and selecting the label associated with the nearest example when predicting the label of a new example.

  • 00:05:00 In this section, the presenter explains the K nearest neighbors (KNN) classification method, which avoids noisy data and is more reliable than just the nearest neighbor method. He demonstrates KNN using examples like classifying animals and handwriting recognition. The KNN method takes the “vote” of multiple nearest neighbors, usually an odd number, instead of just the nearest one, and this reduces the influence of outliers. The presenter concludes that, although not infallible, KNN is a typically more reliable classification method for data with noise.

  • 00:10:00 In this section, the video discusses the K-nearest neighbors algorithm and some of its limitations. While K-nearest neighbors is efficient and easy to understand, it requires storing all training examples, which can be memory-intensive, and predicting classifications can take a long time due to the need to compare examples. Additionally, if K is too large, the algorithm can be dominated by the size of the class, leading to classification errors. The video suggests using cross-validation to choose the best value for K and explains that it's important to choose K in such a way that there is a clear winner in the voting process.
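
A minimal sketch of the KNN voting scheme just described; the training examples and distance function are illustrative, not the lecture's animal dataset:

```python
from collections import Counter

def knn_classify(training, new_example, k, dist):
    """Let the k nearest training examples vote on the label of new_example."""
    nearest = sorted(training, key=lambda ex: dist(ex[0], new_example))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

euclidean = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

training = [((0, 0), 'not reptile'), ((0, 1), 'not reptile'),
            ((4, 5), 'reptile'), ((5, 4), 'reptile'), ((5, 5), 'reptile')]
print(knn_classify(training, (4, 4), k=3, dist=euclidean))  # 'reptile'
```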

  • 00:15:00 In this section, the presenter introduces a new example for classification - predicting which passengers would survive the Titanic disaster using machine learning. The dataset includes information about the passengers’ class, age, gender, and whether they survived or not. To evaluate the machine learning model, the presenter explains why accuracy alone is not a good metric when there is a class imbalance, and introduces other metrics such as sensitivity, specificity, positive predictive value, and negative predictive value. He also explains the importance of choosing the right balance and how these measures provide different insights.
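
A minimal sketch of the evaluation metrics named above, computed from hypothetical confusion-matrix counts (true/false positives and negatives); the numbers are made up:

```python
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def sensitivity(tp, fn):            # fraction of actual positives found
    return tp / (tp + fn)

def specificity(tn, fp):            # fraction of actual negatives rejected
    return tn / (tn + fp)

def pos_pred_val(tp, fp):           # how often a "positive" label is right
    return tp / (tp + fp)

def neg_pred_val(tn, fn):           # how often a "negative" label is right
    return tn / (tn + fn)

tp, fp, tn, fn = 50, 10, 100, 20    # hypothetical Titanic-survival classifier counts
print(accuracy(tp, fp, tn, fn), sensitivity(tp, fn), specificity(tn, fp))
```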

  • 00:20:00 In this section, the speaker discusses the importance of sensitivity and specificity in classifiers and how to test a classifier. Sensitivity and specificity need to be balanced depending on the application of the classifier. For example, a cancer test would require more sensitivity, while a test for open-heart surgery would require more specificity. The speaker then explains two methods for testing a classifier: leave-one-out (used for smaller datasets) and repeated random subsampling (used for larger datasets). The latter involves randomly splitting the data into sets for training and testing, and a parameter specifying the machine learning method is introduced so that different methods, such as KNN and logistic regression, can be compared. The code for these tests is shown, and the speaker emphasizes the importance of testing a classifier to validate its performance.

  • 00:25:00 In this section, the instructor discusses two methods of testing, leave-one-out and repeated random subsampling, and shows how to apply them to KNN classification. The instructor also explains how to use lambda abstraction, a programming trick borrowed from mathematics, to turn a function of four arguments into a function of two arguments. The results of the KNN classification using both testing methods are shown and are not significantly different, indicating that the evaluation criteria are consistent. The KNN classification also performed better than random prediction.

  • 00:30:00 In this section, the speaker discusses logistic regression, a common method used in machine learning. Unlike linear regression, which is designed to predict a real number, logistic regression predicts the probability of a certain event. The method computes a weight for each feature, using an optimization process over the training data, and those weights are then used in making predictions. Logistic regression uses the log function, hence its name, and sklearn.linear_model is the Python module used to implement it.

  • 00:35:00 In this section, the speaker explains how to build a logistic regression model using training data and test it using a set of feature vectors. The logistic regression model is created using the sklearn library, and once the weights of the variables have been computed, the model can be used to predict the probabilities of different labels for a given feature vector. The speaker also introduces list comprehension, a versatile and efficient way of creating new lists from existing ones, which can be especially useful when building sets of test feature vectors.
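
A minimal sketch of the sklearn workflow described above; the feature vectors are a made-up stand-in for the Titanic data (class indicators, age, gender), not the lecture's actual dataset:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors: [1st class, 2nd class, 3rd class, age, is_male]
train_X = [[1, 0, 0, 29.0, 0], [0, 0, 1, 22.0, 1], [0, 1, 0, 40.0, 1], [0, 0, 1, 4.0, 0]]
train_y = [1, 0, 0, 1]                       # 1 = survived, 0 = did not survive

model = LogisticRegression().fit(train_X, train_y)

test_X = [[1, 0, 0, 35.0, 1]]
print(model.coef_)                           # one learned weight per feature
print(model.predict_proba(test_X))           # [P(not survived), P(survived)] per test vector
```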

  • 00:40:00 In this section, the speaker discusses list comprehension in Python and its convenience for certain tasks, but warns against its misuse. Moving forward, the speaker explains their process in applying logistic regression as a model and how they build and test it using the training and test data. They then define LR, or logistic regression, and show how the model can be applied with the labels "survived" and "not survived". The speaker notes that logistic regression is faster than KNN, as once the weights are obtained, evaluating the model is a quick process.

  • 00:45:00 In this section, the instructor explains why logistic regression is preferred over linear regression for classification problems. Firstly, logistic regression is considered to be more subtle and can assign different weights to different variables for better performance. Secondly, it provides insights on variables through feature weights that can be printed as output. By looking at the weights, one can make sense of the variables used for classification. For example, in the presented model, first-class cabin passengers had a positive effect on survival, whereas age and being a male had negative effects. The instructor also advises being cautious when interpreting feature weights since variables may be correlated.
Source: "13. Classification", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: John Guttag. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 14. Classification and Statistical Sins



14. Classification and Statistical Sins

This YouTube video discusses various classification and statistical sins that can lead to incorrect conclusions. One key takeaway is the importance of understanding the insights that can be gained from studying machine learning models, as interpreting the weights of variables in logistic regression can be misleading, especially when features are correlated. The video also emphasizes the importance of evaluating the performance of classifiers using the area under the receiver operating characteristic (AUROC) curve and avoiding the temptation to misuse numbers. Additionally, the importance of scrutinizing data and avoiding non-representative sampling is highlighted, as these can lead to statistical sins such as Garbage In, Garbage Out (GIGO) and survivor bias.

  • 00:00:00 In this section of the video, the instructor discusses the importance of studying machine learning models to gain insights into the systems and processes that generated the data. He demonstrates this by examining the weights of different variables in a logistic regression model, which was used to predict survival rates for the Titanic dataset. By looking at the relative weights of different variables, the instructor concludes that being a male passenger in third-class was associated with a much higher likelihood of not surviving the shipwreck. He cautions against relying solely on machine learning models for making predictions without understanding the insights that can be gained from studying them.

  • 00:05:00 In this section, the speaker explains the issues with interpreting weights in logistic regression, particularly when features are correlated. Logistic regression can be used with two regularization penalties, L1 and L2, with L2 being the default in Python's sklearn. L1 is designed to drive weights to zero, making it useful for avoiding overfitting in high-dimensional problems. However, L1 will drive one variable's weight to zero even if the variable is important, when it is correlated with another variable that carries more weight. L2, on the other hand, spreads the weight across correlated variables, making it look like none of them is very important. To illustrate this, the speaker gives the example of the cabin classes on the Titanic and discusses how eliminating one variable could change the interpretation of the results.
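
A minimal sketch of the L1 vs. L2 distinction in sklearn; the two correlated features and labels are invented to show the typical behavior, not to reproduce the Titanic example:

```python
from sklearn.linear_model import LogisticRegression

# Feature 2 is a noisy copy of feature 1, so the two are highly correlated
X = [[0.0, 0.1], [0.2, 0.2], [0.1, 0.0], [0.9, 1.0], [1.1, 1.0], [1.0, 0.9]]
y = [0, 0, 0, 1, 1, 1]

l2_model = LogisticRegression(penalty='l2').fit(X, y)                      # the default
l1_model = LogisticRegression(penalty='l1', solver='liblinear').fit(X, y)

print(l2_model.coef_)  # weight tends to be spread across both correlated features
print(l1_model.coef_)  # weight tends to concentrate on one feature, pushing the other toward 0
```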

  • 00:10:00 In this section, the video explores the issue of overinterpreting the weights when dealing with correlated features. While analyzing some examples, the video emphasizes that interpreting the sign of the weights can be helpful while interpreting the weights themselves can be misleading. The video then addresses the parameter P of logistic regression and explains how different values of P can impact the accuracy and sensitivity of the predictions. The video concludes by highlighting that even if the accuracy seems to be good, there could be issues with sensitivity, indicating the need to analyze the results comprehensively before drawing any significant conclusions.

  • 00:15:00 In this section, the speaker talks about the receiver operating characteristic (ROC) and how it's a curve that allows us to look at all possible cutoffs of a model to see the shape of the results, where the y-axis presents sensitivity and the x-axis shows 1 minus specificity. They mention the importance of the area under the curve (AUC) and how it helps to better understand the performance of a model. The speaker warns to avoid the corners of the curve that are highly sensitive/unspecific or very specific/insensitive when choosing a cutoff for a model, to prevent the model from making bad decisions and unnecessary mistakes.

  • 00:20:00 In this section, the speaker discusses the concept of evaluating the performance of classifiers using the area under the receiver operating curve (AUROC). They explain how the curve shows the effectiveness of the classifier relative to a random classifier and that the closer the curve is to one, the better the classifier performs. The speaker also notes that determining the statistical significance of the AUROC score can be a challenge and depends on multiple factors, including the number of data points and the application at hand. Ultimately, the usefulness of the AUROC score is what matters, and it should help in making practical decisions.
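
A minimal sketch of computing an ROC curve and its AUROC with sklearn; the labels and scores are invented for illustration:

```python
from sklearn.metrics import roc_curve, auc

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                     # hypothetical true labels
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]   # classifier's predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)     # x: 1 - specificity, y: sensitivity
print(auc(fpr, tpr))   # area under the ROC curve: 0.5 ~ random guessing, 1.0 ~ perfect
```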

  • 00:25:00 In this section, the speaker discusses the concept of the Area Under the Receiver Operating Characteristic (AUROC) curve and explains how it is commonly used compared to specificity. They explain that the trick of computing the area under the curve is aided by using the concave curve that they get from the specificity measurement, and this helps to make statistics easy to visualize and compare. However, they caution that this tool can be used for misleading purposes, and statisticians should understand how to avoid the temptation to misuse numbers. They emphasize that numbers themselves do not lie, but liars use numbers to create false impressions. The speaker offers a set of XY pairs where they explain that although statistically the pairs appear to be the same, they can be vastly different when graphed.

  • 00:30:00 In this section, the speaker discusses the importance of not confusing statistics with the actual data, and highlights the value of visualizing data through plots and graphs. However, he also cautions that misleading pictures can be created intentionally or unintentionally, and emphasizes the need to scrutinize labels and understand the context of a chart before drawing conclusions. The speaker presents two examples of visually misleading charts, one involving a gender comparison of grades and the other a comparison of the number of people on welfare and with full-time jobs.

  • 00:35:00 In this section, the speaker discusses the common statistical sin of Garbage In, Garbage Out (GIGO). They provide an example from the 1840s in which the census data was used to claim that slavery was good for the slaves, stating that freed slaves were more likely to be insane than enslaved slaves. John Quincy Adams exposed the errors in this claim and argued that atrocious misrepresentations had been made. The speaker emphasizes that the accuracy of the data is crucial, and even if there are errors, they must be unbiased, independent, and identically distributed to avoid garbage in, garbage out.

  • 00:40:00 In this section, the speaker warns against analyzing bad data which can be worse than no analysis at all. Often people make incorrect statistical analysis with incorrect data, leading to risky conclusions. The speaker gives the example of the flawed analysis of 19th century census data by abolitionists. Analyzing non-random errors in the data led to conclusions that weren't accurate. The speaker then cites how survivor bias caused allies to conclude the wrong thing about their planes during World War II. They analyzed the planes that returned from bombing runs and reinforced spots that sustained bullet holes from flak, instead of planes that got shot down. The speaker explains that statistical techniques are based on the assumption that by random sampling a subset of the population, mathematical statements about the entire population can be made. When random sampling is used, meaningful conclusions can be made.

  • 00:45:00 In this section, the speaker discusses non-representative sampling, also known as convenience sampling, and its impact on statistical analysis. He explains how convenience samples are not usually random, and thus suffer from survivor bias, which can skew the results of opinion polls and course evaluations, among other things. Moreover, he notes how the standard error computation, which assumes random and independent samples, cannot draw reliable conclusions from convenience samples, citing political polls as an example of the unreliability of statistical analysis. The key takeaway is the importance of understanding how data was collected and analyzed, and whether the assumptions underlying the analysis hold true, so as to avoid falling prey to statistical sins.
Source: "14. Classification and Statistical Sins", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: John Guttag. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Lecture 15. Statistical Sins and Wrap Up



15. Statistical Sins and Wrap Up

In this video, John Guttag discusses the three main types of statistical sins and provides an example of how each can lead to false conclusions. He urges students to be aware of the type of data they are looking at and to use an appropriate interval to make sure that their conclusions are accurate.

  • 00:00:00 John Guttag discusses the main types of statistical sins involving graphs: y-axis sins, where the axis baseline or scale is chosen to make the data look the way you want; truncating the data; and confusing fluctuations with trends. He also provides an example on a less controversial topic, fever and flu, where the choice of y-axis scale changes how dramatic the temperature change appears. Guttag urges students to be aware of the type of data they are looking at and to use an appropriate interval to make sure that their conclusions are accurate.

  • 00:05:00 In this video, the professor discusses the dangers of cherry-picking data, which can lead to false conclusions. He suggests that, in order to draw sound conclusions, scientists should look at data over an appropriate time period.

  • 00:10:00 The speaker points out that numbers by themselves don't always mean much, and that context is important when considering statistics. He discusses two examples of statistics where context is important: the swine flu and the seasonal flu. He also notes that when talking about percentage change, it's important to know the denominator.

  • 00:15:00 This video discusses the pitfalls of using percentages to calculate things, such as cancer clusters. It shows how mathematical simulations can give a more accurate picture of how likely something is, and how attorneys might use this information in their legal cases.

  • 00:20:00 This video explains how statistical analysis can help to answer questions about whether or not a certain region has a high number of cases of cancer. The video also shows how the attorney in question performed an improper statistical analysis, which led to incorrect conclusions.

  • 00:25:00 In this video, the instructor covers various statistical fallacies, including the Texas sharpshooter fallacy and multiple hypothesis checking. He warns that skepticism and denial are different, and that when drawing inferences from data, one should be careful not to make mistakes.

  • 00:30:00 The main takeaway from this video is that programming is about solving problems using libraries and algorithms. The video also stresses the importance of thinking in terms of probabilities and the use of randomness in solving problems.

  • 00:35:00 In this video, the professor discusses the different statistical models that students can use to analyze data. He emphasizes the importance of understanding the reliability of results and provides tips on how to present data effectively.

  • 00:40:00 This video provides a short history of computing and introduces the concept of a UROP (a research internship). It explains that, although computer science may not be the most popular major on campus, it is a very worthwhile field to pursue. Finally, the video provides some final words of advice on how to succeed in life.
Source: "15. Statistical Sins and Wrap Up", MIT 6.0002 Introduction to Computational Thinking and Data Science, Fall 2016. Instructor: John Guttag. View the complete course: http://ocw.mit.edu/6-0002F16 (www.youtube.com, 2017.05.19)
 

Deep Learning Crash Course for Beginners


Deep Learning Crash Course for Beginners

This video provides a crash course on deep learning, covering supervised, unsupervised, and reinforcement learning. It explains the key reinforcement-learning concepts of model, state, reward, policy, and value. A main drawback of deep learning models is that they can overfit the training data, resulting in poor generalization; techniques for combating overfitting, including dropout and dataset augmentation, are discussed. The course closes with a general overview of neural network architectures and of how overfitting can be reduced.

  • 00:00:00 In this video, Jason takes viewers through a crash course in deep learning, explaining what deep learning is and its importance. He goes on to explain how deep learning works, focusing on its main advantages over traditional machine learning: that it can learn features and tasks directly from data, without needing domain expertise or human intervention. Finally, Jason covers some of the recent successes of deep learning, including its ability to outperform humans in a variety of tasks.

  • 00:05:00 Deep learning models require a lot of computational power and data, which were not available a few decades ago; they have also been made far easier to build by the rising popularity of open-source frameworks like TensorFlow and PyTorch. Neural networks form the basis of deep learning, a sub-field of machine learning in which algorithms are inspired by the structure of the human brain. Just as neurons make up the brain, the fundamental building blocks of a neural network are also neurons. Neural networks take in data, train themselves to recognize patterns in this data, and predict outputs for a new set of similar data. In the forward-propagation step, the network produces a prediction; a loss function quantifies how far the prediction deviates from the expected output, and backpropagation is then used to adjust the weights and biases so that the loss shrinks.

  • 00:10:00 This video explains how deep learning works, starting with the initialization of the network. In the first iteration, the network is given a set of input data. The network is then trained to make predictions using a loss function. Back propagation is then used to adjust the weights and biases in the network. The new network is then trained using gradient descent until it is able to make predictions for the entire data set. There are some drawbacks to this approach, including the fact that the adjustments made to the weights and biases are not dependent on the input data.

  • 00:15:00 The three most common activation functions used in deep learning are the sigmoid, tanh, and ReLU. These functions have different advantages and disadvantages, but in the end they all make the neural network nonlinear. ReLU handles sparsity of activation well but can suffer from the "dying ReLU" problem.
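
A minimal sketch of the three activation functions named above, assuming NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes values into (-1, 1)

def relu(x):
    return np.maximum(0, x)          # keeps positives, zeroes out negatives

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x), tanh(x), relu(x), sep='\n')
```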

  • 00:20:00 Deep learning is a field of machine learning that deals with the training of artificial neural networks. The crash course starts by discussing what an activation function is, and goes on to cover why non-linear activation functions are used in deep learning. Next, the crash course discusses loss functions and how they are used to train the network. Finally, the crash course talks about optimizers and how they are used to make the network as accurate as possible.

  • 00:25:00 Gradient descent is an algorithm used to optimize a given loss function. It starts at a random point and decreases the loss function slope until it reaches a minimum or maximum. It is a popular optimizer and is fast, robust, and flexible. Gradient descent is iterative and uses past gradients to calculate the next step.
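
A minimal sketch of plain gradient descent on a toy loss function; the quadratic loss and learning rate are chosen only for illustration:

```python
def loss(w):
    return (w - 3) ** 2              # toy loss with its minimum at w = 3

def gradient(w):
    return 2 * (w - 3)               # derivative dL/dw

w = 10.0                             # arbitrary starting point
learning_rate = 0.1
for step in range(100):
    w -= learning_rate * gradient(w) # step downhill along the negative gradient
print(round(w, 4), loss(w))          # w converges to ~3.0, where the loss is minimal
```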

  • 00:30:00 In this video, the author outlines the difference between model parameters (internal variables within a machine learning model) and hyperparameters (external variables that are not within the model and whose values cannot be estimated from the data). Hyperparameters are sometimes loosely referred to as parameters, which can make things confusing; they are usually set manually by the practitioner. Gradient descent and backpropagation are two common iterative processes used in deep learning. The author notes that there is no "right answer" when it comes to the number of epochs needed to train a deep learning model, as different data sets require different numbers of iterations. Finally, the author offers a few tips on how to use deep learning effectively.

  • 00:35:00 This video provides a crash course on deep learning, focusing on supervised learning. The main concepts covered include supervised learning algorithms and their purposes, as well as linear and nonlinear regression.

  • 00:40:00 The main goal of unsupervised learning is to find patterns and relationships in data that a human observer might not pick up on. Unsupervised learning can be divided into two types: clustering and association. Clustering is the simplest and most common application of unsupervised learning and is the process of grouping data into different clusters. Clusters contain data points that are as similar as possible to each other and as dissimilar as possible to data points in other clusters. Clustering helps find underlying patterns within the data that may not be noticeable to a human observer. Hierarchical clustering finds clusters by a system of hierarchies, where every data point can belong to multiple clusters, and the result can be organized as a tree diagram. Some of the more commonly used clustering algorithms are k-means, expectation maximization, and hierarchical cluster analysis (HCA). Association, on the other hand, attempts to find relationships between different entities; the classic example of association rules is market basket analysis. Unsupervised learning finds applications almost everywhere, including Airbnb, which helps people find stays and experiences all over the world: a potential client enters their requirements, and unsupervised learning algorithms learn these patterns and recommend suitable listings.

  • 00:45:00 The deep learning crash course for beginners covers the key concepts of reinforcement learning, including the model, state, reward, policy, and value. The main drawback of deep learning models is that they can be overfitted to the training data, resulting in poor generalization. Techniques for combating overfitting are discussed, including dropout and dataset augmentation.

  • 00:50:00 A neural network is a machine learning algorithm that is composed of a number of interconnected processing nodes, or neurons. Each neuron receives input from its neighboring neurons, and can produce an output. Neural networks are used to model complex functions, and can be trained using a number of different architectures.

  • 00:55:00 In this video, the Crash Course introduces the concept of sequential memory, which traditional neural networks struggle to model. Recurrent neural networks (RNNs) are a type of network architecture that uses a feedback loop in the hidden layer, which allows them to model sequences of data with variable input length.
  • 01:00:00 The video discusses how recurrent neural networks work and how the short-term memory problem can be addressed by using two variants of the architecture: gated recurrent units (GRUs) and long short-term memory (LSTM) networks.

  • 01:05:00 The five steps of deep learning are data collection, data pre-processing, modeling, validation, and error detection. The quality of data is important, and bad data implies a bad model. There is no one-size-fits-all when it comes to data, but the general rule of thumb is that the amount of data you need for a well-performing model should be 10 times the number of parameters in that model.

  • 01:10:00 The video discusses the importance of training on a reliable data set and the importance of validation sets. It goes on to explain the train-test-validation split ratio and provides examples of how to do cross validation.

  • 01:15:00 Deep learning is a complex process that requires careful preparation of data before training a model. One step in this preparation process is dealing with missing data. There are a couple of ways to do this, and both have advantages and disadvantages. The first option is to eliminate the samples with missing values, but this can be risky because it may delete relevant information. The second option is to impute the missing values, but this can be time-consuming and may not be adequate in all cases. Feature scaling is another important step in preparing data for deep learning, and it helps to normalize the data, standardize it, and reduce the effects of outliers. After data has been prepared, it is fed into a network to train the model. The model is then evaluated using a validation set. If the model is good, it may be optimized further. Remember that data preparation is a complex and time-consuming process, so be sure to watch the video first if you are unsure about anything.

  • 01:20:00 Deep learning can be very effective, but it can also be prone to overfitting. There are several ways to avoid overfitting, including getting more data, reducing the model size, and implementing weight regularization.

  • 01:25:00 This concluding section recaps the course, highlighting the importance of neural networks and of techniques such as dropout, and summarizes how overfitting can be reduced.
Source: "Deep Learning Crash Course for Beginners". Learn the fundamental concepts and terminology of Deep Learning, a sub-branch of Machine Learning; designed for absolute beginners. (www.youtube.com, 2020.07.30)
 

How Deep Neural Networks Work - Full Course for Beginners



How Deep Neural Networks Work - Full Course for Beginners

00:00:00 - 01:00:00 The "How Deep Neural Networks Work - Full Course for Beginners" video offers a comprehensive explanation of how neural networks operate, from basic linear regression equations to complex convolutional neural networks used in image recognition. The instructor uses examples and visual aids to explain the workings of neural networks, including how layers of nodes perform weighted sums and squashing functions to produce outputs, the process of backpropagation that adjusts weights to minimize errors, and the concept of convolutional neural networks for recognizing patterns in images. The video also covers topics such as logistic functions, multi-layer perceptrons, and the use of multiple output functions to create classifiers.

01:00:00 - 02:00:00 The course on how deep neural networks work for beginners covers several topics related to neural network functioning. The course instructor discusses convolution, pooling, and normalization and how they are stacked together to form a deep neural network. Backpropagation is also explained as the process used to adjust the weights of the network to reduce error. The course also covers the use of vectors, gating, squashing functions, and recurrent neural networks in sequence-to-sequence translation. The instructor provides examples of how LSTM networks predict the next word in a sentence and how they are useful in robotic systems by identifying patterns over time. Finally, the video explains how neural networks are trained using gradient descent with backpropagation to adjust the weights and reduce error.

02:00:00 - 03:00:00 The video "How Deep Neural Networks Work - Full Course for Beginners" discusses the performance of neural networks in various scenarios, comparing it to human-level intelligence. The lecturer introduces a scientific definition of intelligence as the ability to do many things well, and compares the performance and generality of machines and humans on a logarithmic scale. The video covers topics such as the limitations of convolutional neural networks in image classification, the success of deep learning in playing board games and language translation, the generality limitations of recommenders and self-driving cars, and the increasing complexity of humanoid robots. The video highlights AlphaZero's impressive increase in intelligence, generality, and performance and argues for focusing on physical interaction to create algorithms that can accommodate a more general set of tasks, bringing us closer to human-level intelligence. Finally, the instructor explains the process of convolution, pooling, and normalization in convolutional neural networks to recognize patterns and make accurate predictions.

03:00:00 - 03:50:00 This video on how deep neural networks work takes a beginner through the process of image categorization by building neurons and layers that recognize patterns in the brightness values of images. The video covers the optimization process using gradient descent and different optimization methods like genetic algorithms and simulated annealing. The instructor explains how to minimize error and adjust weights through backpropagation and how to optimize hyperparameters in convolutional neural networks. While there are many tools available for creating neural networks, a thorough understanding of data preparation, interpretation, and choosing hyperparameters remains important.

Part 1

  • 00:00:00 In this section, the instructor provides an example of how a neural network would work if given a task to determine automatically whether a four-pixel black and white image is of a solid all-white or all-dark image, a vertical line, a diagonal line or a horizontal line. He explains that it is tricky to do this with simple rules about the brightness of the pixels, and instead a neural network would start by taking all of the inputs and assigning a number to each pixel depending on their brightness, with plus one being all the way white and minus one being all the way black. The weighted connections of the input neurons are then added up and the values are squashed to ensure that the neuron's value never gets outside the range of plus one to minus one which is helpful for keeping the computations in the neural network bounded and stable.
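
A minimal numeric sketch of that first step, assuming tanh as the squashing function; the pixel values and weights below are made-up placeholders, not taken from the video:

```python
import numpy as np

# Four pixel brightnesses: +1 is all the way white, -1 is all the way black (illustrative).
pixels = np.array([1.0, -1.0, 1.0, -1.0])

# One neuron's weighted connections to those pixels: arbitrary example weights.
weights = np.array([0.5, -0.5, 0.5, -0.5])

# Weighted sum of the inputs, then squash so the value never leaves (-1, +1).
weighted_sum = np.dot(weights, pixels)
output = np.tanh(weighted_sum)   # tanh is one common choice of squashing function

print(weighted_sum, output)
```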

  • 00:05:00 In this section, the video explains how deep neural networks work and how each layer operates. The neurons in a layer perform a weighted sum and squash the result, which then becomes the input for the next layer. As the layers get deeper, the receptive fields become more complex and cover all the pixels. The video also introduces the concept of rectified linear units, which replace the squash function, and have very nice stability properties for neural networks. Finally, after creating as many layers as needed, the output layer is created, which provides the results of the neural network.

  • 00:10:00 In this section, the instructor explains how neural networks are trained to adjust their weights to minimize error between their output predictions and the actual truth. This is accomplished by calculating the slope, or the change in error with respect to a change in weight, and adjusting the weights in the direction that decreases the error. This is a computationally expensive process because it requires multiplication of all weights and neuron values at each layer for each weight adjustment. However, there is an insight that allows for the slope to be calculated directly without going back through the neural network, making the training process more efficient.

  • 00:15:00 In this section, the instructor explains how deep neural networks work and how they use calculus to calculate the slope to adjust weights and bring down errors. Through a simple example of a neural network with one weight, he demonstrates the concept of chaining, where the slope of each tiny step is multiplied together to get the slope of the full chain. He mentions that there are many types of back propagation that require different operations to be performed on each neuron, but ultimately, the goal is to calculate the slope to adjust the weights and reduce errors efficiently.
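
As a toy illustration of the chaining idea described here, the sketch below differentiates a one-input, one-weight network with a tanh squash and a squared error; every value is made up for illustration:

```python
import numpy as np

x, target = 0.8, 0.5      # one training example (illustrative)
w = 0.3                   # the single weight

# Forward pass: weighted input, tanh squash, squared error.
z = w * x
y = np.tanh(z)
error = (y - target) ** 2

# Backward pass: multiply the slope of each tiny step to get the slope of the full chain.
d_error_d_y = 2 * (y - target)      # slope of the error w.r.t. the output
d_y_d_z = 1 - np.tanh(z) ** 2       # slope of tanh w.r.t. its input
d_z_d_w = x                         # slope of the weighted sum w.r.t. the weight
d_error_d_w = d_error_d_y * d_y_d_z * d_z_d_w   # the chained slope

# Step the weight a little in the downhill direction of the error.
learning_rate = 0.1
w -= learning_rate * d_error_d_w
print(d_error_d_w, w)
```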

  • 00:20:00 In this section, the instructor discusses how to backpropagate through the elements of a neural network, such as the sigmoid function and the rectified linear unit, to calculate the effect of adjusting any given weight on the error. To train a network, one starts with a fully connected network, assigns random values to all its weights, and uses backpropagation on the resulting error to adjust those weights slightly. Inputs with known answers reveal how far off the weights are, and the process repeats over many inputs until the weights gravitate toward a low spot where the network's outputs are close to the truth on most images. In this way, neural networks estimate relationships between input and output variables, learn continuously, and can capture nonlinear relationships in the data.

  • 00:25:00 In this section, the video explains how a linear regression equation can be represented as a network, which helps us better understand how neural networks work. The network is made up of nodes and edges, with the input nodes being x_0 and x_1 and the output node being v_0. The weights, written w_00 and w_10, are the edges connecting the input nodes to the output node. This is called a directed acyclic graph, meaning that the edges only go in one direction and there is no way to form a loop. Adding more input nodes makes the equation higher-dimensional, but it still remains a linear equation, with the weights determining the relationship between the inputs and the output.

  • 00:30:00 In this section, the video discusses the concept of a two-layer linear network and how adding more layers to it can increase its complexity. The two-layer network is composed of identical layers that work in the same way. To make the model more flexible, non-linearity must be added. A common non-linear function to add is the logistic function, also known as the sigmoid function, which is shaped like an S. Adding more layers and non-linearity to the network creates a more complex model that can provide more sophisticated results.
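
A small sketch of why the added non-linearity matters, assuming a two-layer network with a logistic (sigmoid) squash between the layers; the weight matrices here are random placeholders:

```python
import numpy as np

def sigmoid(z):
    # Logistic (S-shaped) function: squashes any value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.2, -0.7])          # two inputs (illustrative)
W1 = rng.normal(size=(3, 2))       # layer 1: 2 inputs -> 3 hidden nodes
W2 = rng.normal(size=(1, 3))       # layer 2: 3 hidden nodes -> 1 output

# Without the sigmoid, W2 @ (W1 @ x) collapses to a single linear map;
# the squash between layers is what makes the model more flexible.
hidden = sigmoid(W1 @ x)
output = W2 @ hidden
print(output)
```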

  • 00:35:00 In this section, we learn about logistic functions and their role in logistic regression, which is used as a classifier. Logistic regression finds the relationship between continuous inputs and a categorical output: observations of one category are treated as zeros, observations of the other category are treated as ones, and the logistic function that best fits all those observations is found. By adding more inputs, logistic regression can work with many input variables, and these models remain linear classifiers regardless of the number of dimensions. We also learn about the hyperbolic tangent, a non-linear function related to the logistic function. These non-linear functions help us move out of the realm of linear networks and give us a wider variety of behavior than we ever saw in single-layer networks. By stacking layers with multiple hidden nodes, we can create more complex curves with wiggles, peaks, and valleys.

  • 00:40:00 In this section, the video describes how curves created by a two-layer network can be mathematically identical to those created using a many layered network. Even though the many layered network can create more complex curves using fewer nodes, the two-layer network can still create rich curves using enough hidden nodes. The video then explains how these curves can be used to create a classifier and shows that non-linear classifiers, unlike linear ones, can create interleaved regions of classification. The video concludes by showing the full network diagram of a multi-layer perceptron and a generic diagram for a three-layer single input single output network, which can be fully defined by specifying the number of inputs, outputs, layers and hidden nodes.

  • 00:45:00 In this section of the video, the presenter discusses using a two-output neural network to create a classifier that chops up the input space into regions based on where the two output functions cross. This approach can be extended with three or more output functions, allowing for more categories to be learned and input spaces to be chopped up in more complex ways than a linear classifier can accomplish. However, the winning category may not be significantly better than the runner-up category. Despite its limitations, this method demonstrates the power of neural networks to create diverse category boundaries while also favoring smoothness due to the activation functions used.

  • 00:50:00 In this section, the video discusses convolutional neural networks (CNNs) and their ability to learn and recognize patterns in images, such as faces, cars, and even video games. The video explains that CNNs are not magic but based on fundamental principles applied in a clever way. The video uses a simple toy example of a CNN that decides whether an input image is an X or an O to illustrate how CNNs work. The difficult part of CNNs is that a lot of variation is possible while identifying what the image is. The video explains how a CNN can handle the variance and identify the image by breaking down the image into smaller patterns and using filters to identify those patterns.

  • 00:55:00 In this section, the course explains how convolutional neural networks can match parts of an image to determine whether two pieces are the same. By breaking the images down into smaller parts or features, the convolutional neural network can identify if the features match with one another. The math behind matching these features is called filtering, and it involves lining up a feature with a patch of the image, multiplying each pixel in the feature by the corresponding pixel in the patch, adding up the products, and dividing by the total number of pixels. The repeated application of this feature across the image produces a map of where the feature occurs, allowing the neural network to identify which parts of the image match.
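
A rough sketch of that filtering arithmetic (not code from the video): a small feature is lined up with each patch of the image, corresponding pixels are multiplied, and the products are averaged, producing a map of match strength; the feature and image below are toy values:

```python
import numpy as np

def match_score(feature, patch):
    # Multiply corresponding pixels, add them up, divide by the pixel count.
    return np.sum(feature * patch) / feature.size

def filter_image(image, feature):
    # Slide the feature over every possible patch and record the match score.
    fh, fw = feature.shape
    ih, iw = image.shape
    out = np.zeros((ih - fh + 1, iw - fw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = match_score(feature, image[r:r + fh, c:c + fw])
    return out

# Toy 3x3 diagonal feature and a 5x5 image of +1 / -1 pixels.
feature = np.array([[1, -1, -1],
                    [-1, 1, -1],
                    [-1, -1, 1]])
image = -np.ones((5, 5))
np.fill_diagonal(image, 1)

print(filter_image(image, feature))   # values near 1 mark strong matches
```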


Part 2

  • 01:00:00 In this section, the course instructor explains the three main tricks used in deep neural networks. The first trick is the convolution layer, where an image is convolved with a series of filters to produce a stack of filtered images. The second trick is pooling, which is used to shrink the image stack by taking a window size and stride value to obtain a smaller image representing the maximum value in the window. Finally, the third trick is normalization, which is used to keep the math from blowing up and involves changing all negative values in the image to zero. These tricks are stacked together to form a deep neural network, and their output forms an array of pixels that can be further manipulated.
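
A hedged sketch of the pooling and normalization tricks described here, assuming a 2x2 window with a stride of 2 and the "negatives become zero" normalization; the filtered image is a toy array:

```python
import numpy as np

def max_pool(image, window=2, stride=2):
    # Slide a window across the image and keep only the largest value in each window.
    rows = (image.shape[0] - window) // stride + 1
    cols = (image.shape[1] - window) // stride + 1
    pooled = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * stride:r * stride + window,
                          c * stride:c * stride + window]
            pooled[r, c] = patch.max()
    return pooled

def normalize(image):
    # "Keep the math from blowing up": change every negative value to zero.
    return np.maximum(image, 0)

filtered = np.array([[0.8, -0.3, 0.1, 0.5],
                     [-0.2, 0.6, -0.9, 0.4],
                     [0.3, -0.1, 0.7, -0.6],
                     [-0.5, 0.2, -0.4, 0.9]])

print(max_pool(normalize(filtered)))
```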

  • 01:05:00 In this section, the video explains how neural networks can use a deep stacking technique of convolutional and pooling layers that filter and reduce the image with each iteration. The final fully connected layer connects each list of filtered and reduced images to a series of votes, which become the final answer. To obtain these weights, neural networks rely on backpropagation, adjusting them based on the final error signal from the output layer. This process is known as gradient descent.

  • 01:10:00 In this section of the course for beginners on how deep neural networks work, the instructor explains the process of gradient descent, which allows the adjustment of the weights of the neural network to minimize error. By adjusting the weights up and down, the network finds the downhill direction and settles into a minimum where the error is at its least. Hyperparameters are knobs that the designer gets to turn, and they include decisions such as the number of features used, the window size and stride in pooling layers, and the number of hidden neurons in fully connected layers. Additionally, the instructor explains that the neural network can be applied to two-dimensional or even three or four-dimensional data, as long as the data follows a pattern where things closer together are more closely related. This allows the network to be used in fields such as sound and text analysis.

  • 01:15:00 In this section, the limitations of convolutional neural networks (CNNs) are discussed, as they are designed to capture local spatial patterns, and thus, may not be suitable for data that cannot be represented as images. CNNs are highly efficient in finding patterns and classifying images, but if the data is as useful after swapping any of the columns for each other, then CNNs may not be a good fit. On the other hand, recurrent neural networks (RNNs), specifically long short-term memory (LSTM), are useful for sequence to sequence translation, with applications such as speech to text or one language to another. An example of how LSTMs work is given for predicting what's for dinner, where the voting process is simplified through the observation of dinner cycles.

  • 01:20:00 In this section, the instructor explains the concept of a vector, which is just a list of numbers, and how it can be useful in machine learning. A vector is expressed as a list of all the possible values, with a number assigned to each, and one-hot encoding is often used to encode statements this way. The neural network is designed by connecting each element in the input vector to each element in the output vector. The example used is predicting what's for dinner from past data, taking into account both what actually happened yesterday and what yesterday's prediction was. The instructor later explains that recurrent neural networks can be useful for predicting what comes next in a series, such as in language learning.
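
A minimal sketch of that one-hot encoding using a made-up dinner menu; the vocabulary and item names are illustrative:

```python
import numpy as np

# List of all possible values; each gets its own position in the vector.
menu = ["pizza", "sushi", "waffles"]

def one_hot(item, vocabulary):
    # A vector of zeros with a single 1 in the position of the given item.
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(item)] = 1.0
    return vec

yesterday = one_hot("sushi", menu)    # e.g. [0., 1., 0.]
print(yesterday)
```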

  • 01:25:00 In this section, the use of a squashing function to prevent feedback loops is explained. The recurrent neural network involves votes for a name, period, or "saw" based on what words have been used before. However, this system is subject to mistakes and limitations because it can only remember one time step. To overcome these, a memory function is added to the network through additional elements: a squashing function with a flat bottom, a plus in a circle for element-wise addition, and an "x" in a circle for element-wise multiplication. This allows the network to remember what happened many time steps ago, enabling new and improved functionality.

  • 01:30:00 In this section, the video introduces gating, which allows control over what gets passed through and what gets blocked in a neural network. The concept is demonstrated using pipes with varying levels of water flow and faucets, which can either be closed to zero or open to one. The introduction of the logistic function, which squashes values between zero and one, provides a way to always have a value that is within this range. The video then demonstrates how gating can be used to hold and selectively release memories and predictions using a set of gates, each controlled by its own neural network and squashing function. Finally, an attention mechanism is introduced to set aside irrelevant input to prevent it from clouding the predictions and memory going forward.
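
The following is a heavily simplified, hedged sketch of the gating idea only: a logistic "faucet" between zero and one is multiplied element-wise with a candidate value, and the gated result is added into a running memory. It is not a faithful LSTM implementation, and all values are illustrative:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: a gate value between 0 (closed faucet) and 1 (fully open).
    return 1.0 / (1.0 + np.exp(-z))

memory = np.array([0.0, 0.4, -0.2])        # what the network remembers so far
candidate = np.array([0.9, -0.5, 0.3])     # new information it could store
gate_signal = np.array([2.0, -2.0, 0.0])   # in a real LSTM, produced by its own small network

gate = sigmoid(gate_signal)                # how far each faucet is open
memory = memory + gate * candidate         # element-wise multiply, then element-wise add

print(gate)    # roughly [0.88, 0.12, 0.5]: mostly open, mostly closed, half open
print(memory)
```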

  • 01:35:00 In this section, the instructor gives an example of how a trained LSTM network can generate predictions for the next word in a sentence. Assuming that the LSTM has been trained on children's book examples, the example sentence is "Jane saw Spot." The word "Doug" is the most recent word, and the LSTM predicts "Doug," "Jane," and "Spot" as viable options. The LSTM then passes these predictions through four different neural networks that learn to make predictions, and the LSTM predicts that "saw" is the most likely next word. The example shows how an LSTM can generate predictions based on the previous words and predictions and avoid making certain errors by using memory and selection gates.

  • 01:40:00 In this section, the instructor explains how LSTM neural networks are able to look back over many time steps to identify patterns in data, making them successful in practical applications such as language translation and speech-to-text software. He also discusses how LSTM networks are particularly useful in robotics, where actions taken by an agent can influence what is sensed and what should be done many time steps later. Although LSTM networks may seem complex when expressed mathematically, the instructor encourages viewers to focus on the basic principles, likening deep learning to a highly specialized fighter jet compared to a simpler airplane.

  • 01:45:00 In this section of the video, the instructor explains the basic structure and function of neurons in a neural network. The dendrites of neurons act like feelers and pick up electrical activity, which is then accumulated in the soma and sent through the axon as a signal. The strength of the signal passing through a synapse, where the axon of one neuron touches the dendrite of another, is represented by the size of a circle, with a larger circle indicating a stronger connection. By assigning numbers and weights to these connections, a complex neural network can be simplified into a circle-stick diagram where each stick represents a weight. This diagram is used to represent combinations of inputs and outputs, with each connection having its own weight.

  • 01:50:00 In this section, the video explains that neural networks work by combining input neurons and their connections to output neurons. Through a simple example of input pixels being combined to create an output image, the video shows how input neurons represent individual elements, such as pixels or words, and can be combined to represent more complex concepts. The video also discusses the process of learning in neural networks, where initial connection weights are randomly assigned and then updated based on observed input-output patterns, allowing the network to improve over time.

  • 01:55:00 In this section, the video explains how neural networks are trained using gradient descent with backpropagation. The goal is to adjust the weights of the neurons in order to reduce the error between the actual output and the expected output. By taking small incremental steps, the weights are adjusted until the error is minimized. This process is repeated for each data point, and if there are multiple layers, the output from one layer is used as the input for the next layer. This is called a deep neural network. The more layers there are, the more complex the features that can be learned, making it possible to identify images or even natural language phrases.


Part 3

  • 02:00:00 In this section of the video, the instructor explains how deep neural networks work in different scenarios. When training on images of faces or automobiles, the neural network learns to recognize the basic components of these objects such as eyes, noses, and wheels. The deeper the network becomes, the more complex the recognition becomes, eventually leading to identifiable images such as faces, spiders, and teddy bears. Additionally, deep neural networks can be used to learn and group similar music artists. The instructor also covers how deep neural networks can be paired with reinforcement learning to learn how to play Atari games better than humans, and how robots can be taught to cook by using video representations. Finally, the instructor clarifies that deep learning is good at learning patterns, but it's not magic.

  • 02:05:00 In this section, a functional definition of intelligence is introduced as being able to do many things and do them well. This definition allows for a scientific discussion about machine intelligence and lets us compare the relative intelligence of different agents. Using the equation "intelligence equals performance times generality," we can plot this on a logarithmic scale to represent the human level of performance and generality. Machines may exceed human performance in some areas due to human limitations, such as limited attention and cognitive biases.

  • 02:10:00 In this section, the video discusses how intelligence is compared on a graph with generality as one axis and performance as the other. Chess-playing computers were the first agents to perform at a superhuman level, with IBM's Deep Blue beating Garry Kasparov in 1997. The current state-of-the-art chess program, Stockfish, has an Elo rating of 3447, making it much better than any human player. However, it is worth noting that Stockfish is specifically programmed for chess and lacks generality, unlike humans. The video compares Stockfish to the board game Go, which is regarded as even more complex, and demonstrates the importance of generality in intelligence.

  • 02:15:00 In this section, the transcript discusses how the game of Go, despite having exponentially more possible board configurations than chess, was beaten by the program AlphaGo, which used convolutional neural networks to learn common configurations and reinforcement learning on a library of human games to learn which moves were good. Similarly, in the field of image classification, a database called ImageNet was created where machines were able to classify images better than humans, with an error rate of less than five percent. With machines routinely beating humans at this task, the progress made in machine learning is impressive.

  • 02:20:00 In this section, the lecturer discusses the limitations of convolutional neural networks (CNNs) in classifying images. While CNNs are designed to find patterns in two-dimensional arrays of data, such as pixels on a chessboard, they have been shown to break easily outside the set of images they are trained on. The fragility of CNNs is demonstrated when images are distorted, a single pixel is changed, or stickers are added to deceive the CNN into misclassifying the image. The lecturer explains that image classification generality is not where we would like it to be, even though it performs better than humans on the ImageNet dataset. The lecture also mentions that DeepMind's deep Q-learning algorithm impressed the world by achieving human expert level in 29 out of 49 classic Atari games.

  • 02:25:00 In this section, the instructor discusses how deep neural networks perform at playing video games and translating languages. After using convolutional neural networks with reinforcement learning to learn pixel patterns for playing video games, the algorithm was not able to match human performance on 20 games that required longer-term planning, suggesting that it failed to think ahead several steps to make the required connections. On the other hand, language translation uses long short-term memory (LSTM) networks to translate over 100 languages through a single intermediate representation. However, the translation has accuracy limitations and efficiency issues due to the extensive computation involved. Therefore, while machine translation has broad scope, it falls short of human performance.

  • 02:30:00 In this section, the speaker discusses the performance of recommenders and notes that they are relatively good when compared to humans. However, their performance is not perfect, since the algorithms do not adapt to the fact that a person's preferences may change, and they do not consider how various products are related. In terms of generality, the world knowledge required to make recommenders work well is quite deep, which limits their performance. Moving on to robots, the speaker notes that self-driving cars have impressive performance, since their accident rates are lower than those of human drivers despite the task being more complicated. However, self-driving cars are less general than they may appear, with the biggest trick being reducing the difficulty of the task, which reduces the necessary generality of the solution.

  • 02:35:00 In this section, the speaker explains that self-driving cars are not as general as they appear, since they are custom-engineered around a specific set of sensors, a selection of algorithms, and particular environmental conditions. The challenge for self-driving cars is to handle all of the conditions in which they will operate. As of now, self-driving cars perform worse than human drivers, mainly because of their physical interaction with the world and with other cars and people. Next, the speaker discusses humanoid robots, in which most activities are hard-coded and fairly fragile. Though their range of application is increasing with the complexity of the systems, their performance remains very low compared to a human agent. The generality-versus-performance trend is discussed in detail, leading to the speaker's point about the capabilities of DeepMind's AlphaZero program.

  • 02:40:00 In this section, the video explains how AlphaZero, an AI program, was able to beat the strongest existing programs at several board games without being fed any human games or strategies. AlphaZero learned the visual patterns of each game through trial and error: two AlphaZero "babies" were created that played each other, with only one allowed to learn; the learner evolved into an intermediate player, was cloned, and the cycle repeated with one copy learning and the other not. This approach let AlphaZero beat humans in just four hours and beat the previous best computer after eight. The program went on to beat the best chess-playing program and the best shogi-playing program, showing AlphaZero's significant increase in intelligence, generality, and performance. The video also highlights how assumptions limit generality and enable performance in AI systems.

  • 02:45:00 In this section, the speaker explains some common assumptions made by algorithms used in artificial intelligence, including convolutional neural networks, and why these assumptions are insufficient for achieving human-level intelligence. The assumptions include stationarity, independence, ergodicity, and the effects of actions becoming apparent quickly. While these assumptions work well for analyzing two-dimensional arrays of information or data that does not change much, they do not hold in physical interactions with the world, making them unsuitable for humanoid robotics or any physically interactive robots. The speaker proposes focusing on physical interaction to create algorithms that can accommodate a more general set of tasks and bring us one step closer to human-level intelligence. The section also introduces convolutional neural networks and their ability to learn the building blocks of images.

  • 02:50:00 In this section, the instructor provides an example of a convolutional neural network that can classify whether an image is of an X or an O, which takes into account different sizes, rotations, and weights of the images. To identify specific features of the image, the network matches pieces of the image with certain features and shifts them until the overall image is considered a good match. The process involves filtering, where the feature is aligned with the image, multiplied pixel by pixel, and then divided by the total number of pixels. This method enables the network to recognize patterns of images and make accurate predictions.

  • 02:55:00 In this section, the instructor explains how convolution operates in convolutional neural networks. Convolution is taking a feature and checking every possible patch on an image to see how well it matches. Comparisons can be done at every location in the image, resulting in a filtered image: a map of where the feature matches the image. The instructor describes how pooling shrinks the filtered images down to smaller versions of the original image. In this step, a window of pixels is selected and the maximum value in each window is kept, resulting in a smaller image that still maintains the original signal. Lastly, normalization is required to avoid negative numbers and maintain manageable values in the network.


Part 4

  • 03:00:00 In this section of the video, the instructor explains how the convolutional neural network progresses through subsequent layers, starting with the rectified linear unit function that converts everything negative to zero. Because the output of one layer looks like the input of the next, the final output is a stack of images that has been transformed by convolution, rectification, and pooling layers: a stack of filtered images with no negative values that has been reduced in size. Additionally, the instructor states that the final pixel values that tend to be strong when the right answer is an X or an O give a strong vote for the X or O category, respectively. A fully connected layer takes this list of feature values, turns it into a list of votes for each output category, and uses the total weighted votes to categorize the input as either an X or an O.

  • 03:05:00 In this section, the speaker explains how neural networks are used to categorize images. The images are broken down into their component pixels, which are then turned into a list of brightness values. Each value corresponds to a different level of brightness, ranging from -1 for black to +1 for white. This list of brightness values is used to build a neuron, which takes input from four of the pixels and performs a weighted sum. The neuron then applies a "squashing" function to ensure the result is between -1 and +1. This process of using neurons to categorize images can be repeated multiple times to create a layer, which is loosely inspired by the biological layers found in the human cortex.

  • 03:10:00 In this section, the instructor explains how the receptive fields in neural networks become more complex in higher layers. By connecting the input layer to multiple hidden layers of neurons, each neuron combines inputs from the previous layer with specific weights. When a rectified linear unit is used in place of the squashing function after the weighted sum, the neuron outputs the original value if it is positive and 0 if it is negative. Through this process, the network learns to recognize patterns that resemble the desired outputs, resulting in a final output layer that classifies the input. The instructor uses an example of an image with a horizontal bar to demonstrate how the network processes the image through each layer.

  • 03:15:00 In this section, the video explains the optimization process and how deep neural network models learn by adapting through optimization of weights and filters. The optimization process is illustrated with an example of optimizing the temperature of tea to maximize enjoyment. The process involves finding the minimum point of the mathematical function, which can be done by gradient descent, a process of making iterations and adjusting the input slightly until reaching the minimum value. The video also notes that the weights and filters are learned through a bunch of examples over time, and this is what machine learning is about.
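
A small sketch of that gradient-descent loop on a made-up "tea temperature" cost with an assumed ideal of 60 degrees; the function, starting point, and learning rate are purely illustrative:

```python
# Made-up cost: how far the tea temperature is from an ideal 60 degrees, squared.
def cost(temperature):
    return (temperature - 60.0) ** 2

def cost_slope(temperature):
    # Derivative of the cost with respect to temperature.
    return 2.0 * (temperature - 60.0)

temperature = 90.0       # initial guess
learning_rate = 0.1
for _ in range(100):
    # Step a little bit in the downhill direction of the cost.
    temperature -= learning_rate * cost_slope(temperature)

print(temperature)       # converges toward 60
```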

  • 03:20:00 In this section, the speaker discusses other methods to optimize models besides gradient descent. One popular method is to use curvature to find the optimal parameters by making tea of varying temperatures and observing the steepness of the curve. However, this method can break down if the curve is not well-behaved, and gradients can get stuck in local minima. To avoid getting stuck in local minima, other methods like genetic algorithms and simulated annealing can be used, which are more sample-efficient than exhaustive exploration, but not as fast as gradient descent. The speaker compares these methods to different types of vehicles, with gradient descent being a formula one race car, genetic algorithms and simulated annealing being a four-wheel-drive pickup truck, and exhaustive exploration being like traveling on foot.

  • 03:25:00 In this section, the speaker gives an example of how to use numerical optimization to answer a question in a way that is less wrong. The example involves guessing the number of M&Ms in a bag, and the speaker explains how to convert the guess into a cost function by using a deviation measurement, which can be squared so that guesses further away are penalized more severely. The loss function quantifies how wrong a guess is by squaring its deviation, and one can exhaustively explore guesses within a range and visually find the lowest value. Alternatively, the slope with respect to the guess can be found by taking the derivative of the loss function, setting it equal to zero, and solving the equation.
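
A hedged sketch of that M&M example with invented observations, showing both routes to the least-wrong guess: exhaustive search over a range, and setting the derivative of the squared-deviation loss to zero (which lands on the mean):

```python
import numpy as np

observations = np.array([24, 27, 31, 25, 29])   # made-up M&M counts

def loss(guess):
    # Sum of squared deviations: penalizes far-away guesses more severely.
    return np.sum((observations - guess) ** 2)

# Option 1: exhaustively explore guesses in a range and keep the lowest loss.
guesses = np.arange(20, 40, 0.1)
best_by_search = guesses[np.argmin([loss(g) for g in guesses])]

# Option 2: set the derivative to zero and solve; for squared loss this is the mean.
best_by_calculus = observations.mean()

print(best_by_search, best_by_calculus)
```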

  • 03:30:00 In this section, the speaker discusses optimization and how it is used in neural networks to find the best weights and features. Gradient descent is used to adjust all the weights in every layer to bring the error down. However, calculating the gradient requires many passes through the network to determine which direction is downhill. Backpropagation is then introduced as a way to find an analytical solution to the slope problem, allowing for a more efficient optimization process. The speaker also explains the use of a cost function, specifically the squared deviation, whose sum over all observations is minimized to find the best guess.

  • 03:35:00 In this section, the instructor explains how calculating the slope or derivative of the error function can help adjust the weights for a neural network. He gives the example of a simple neural network with one input, one output, and one hidden layer with one neuron, showing how the slope of the error function can be found with a simple calculation. The process of breaking down the changes in weights and errors to find the slope is called chaining, which makes it possible to adjust weights found deeper in the neural network. This process is called backpropagation, where the values at the end of the network need to be used to calculate the derivatives of weights for error propagation through the depths of the network.

  • 03:40:00 In this section of the video, the instructor explains the backpropagation step in training neural networks. He emphasizes the importance of each element in a neural network remaining differentiable so that the chain rule can be used to calculate each link in the chain when finding derivatives. The instructor demonstrates how the chain rule can be used for a fully connected layer and also explains how it can be applied to convolutional and pooling layers. The process of adjusting the weights of a neural network over thousands of iterations to gradually arrive at good answers is also discussed.

  • 03:45:00 In this section, the instructor explains how to optimize the hyperparameters of a convolutional neural network (CNN). These parameters, such as the number, size and stride of the features, the pooling windows and the number of hidden neurons, are the next level up and control how everything happens below. The instructor points out that there are some recipes that researchers have stumbled upon that seem to work well, but there are a lot of combinations of these hyperparameters that haven't been tried yet, meaning there is always the possibility that some combinations work much better than what has been seen so far. Additionally, it is noted that CNNs are not only useful for images, but any two or three-dimensional data where things that are closer together are more closely related than things far away. However, the pattern recognition capabilities of CNNs are limited to spatial patterns only, and thus they are less useful in situations where the spatial organization of data is not important.

  • 03:50:00 In this section, the speaker explains that although creating your own convolutional neural networks from scratch is a great exercise, there are already many mature tools available for use. The takeaway from this section is that when working with neural networks, it's important to make many subtle decisions about how to prepare the data, interpret the results, and choose the hyperparameters. Understanding what is being done with the data and the meaning behind it will help get the most out of the tools available.
How Deep Neural Networks Work - Full Course for Beginners
  • 2019.04.16
  • www.youtube.com
Even if you are completely new to neural networks, this course will get you comfortable with the concepts and math behind them.Neural networks are at the cor...
 

Machine Learning Course for Beginners  (parts 1-5)


Machine Learning Course for Beginners

00:00:00 - 01:00:00 In this YouTube video on a beginner's course on machine learning, the instructor explains the basics of machine learning algorithms and their real-world applications, covering both theoretical and practical aspects. The course takes learners from the basics of machine learning to algorithms like linear regression, logistic regression, principal component analysis, and unsupervised learning. The video also discusses overfitting, underfitting, and training/testing data sets. The instructor emphasizes the importance of understanding how to develop functions that enable machine learning algorithms to analyze data to create predictions. At the end, he introduces the Gradient Descent Algorithm for optimizing cost functions used to evaluate performance.

01:00:00 - 02:00:00 This Machine Learning Course for Beginners covers a range of essential topics in machine learning for new learners. The instructor explains the vectorization of partial derivative of theta in linear regression, normal equation, assumptions of linear regression, and the difference between independent and dependent features. The course also includes logistic regression and classification tasks, teaching the hypothesis for logistic regression, cost function, and gradient descent as well as the vectorization code for the cost function and gradient descent. Furthermore, the course introduces Python libraries, data analysis techniques, model building, and accuracy checking using linear regression. The instructor also covers regularization techniques and their importance in machine learning for avoiding overfitting. The course covers ridge and lasso regression, which penalizes the feature weights of less important features, making them closer to zero or eliminating them altogether.

02:00:00 - 03:00:00 The "Machine Learning Course for Beginners" covers various topics such as regularization techniques, support vector machines (SVM), non-linear classification, and data exploration. The course provides an introduction to SVMs and explains how they construct hyperplanes with maximum margins to make predictions while classifying data points. The concept of hard margin and soft margin classification in SVM along with their differences is also covered. The course also includes a stock price prediction project using Python libraries and explores evaluation metrics such as Mean Squared Error, Root Mean Squared Error, and R2 square for the linear regression model. Regularized linear models such as Ridge and Lasso are also explained in detail, along with the demonstration of creating a simple app using Flask.

03:00:00 - 04:00:00 The video "Machine Learning Course for Beginners" covers various topics related to machine learning, such as setting up a server and website using Flask, principal component analysis (PCA), bias and variance trade-offs, regression models, and nested if-else statements. The instructors emphasize the importance of understanding the concepts of machine learning and data pre-processing for text and image data in real-world scenarios, and they provide practical examples of how to work on Iris data and create simple decision trees. The video also covers topics such as linear transformations, eigenvectors, and eigenvalues, and explains how PCA can reduce data dimensions while preserving information. Overall, the video provides a comprehensive introduction for beginners to learn about machine learning and its applications.

04:00:00 - 05:00:00 This video gives a beginner-level introduction to decision trees, including basic terminology, how to construct decision trees using attribute selection measures like entropy, information gain, and Gini impurity, and how decision trees can be used for both classification and regression problems. The video also emphasizes the importance of hyperparameters and understanding decision trees as a crucial concept in machine learning. The next section discusses ensemble learning and its three techniques: bagging, boosting, and stacking, which are commonly used in Kaggle competitions.

05:00:00 - 06:00:00 This YouTube video explains various ensemble learning techniques for improving machine learning model accuracy. One of the popular techniques is bagging or bootstrap aggregation, where multiple models are trained on subsets of training data and combined for better performance with row sampling used for training. The video also covers random forests which use decision trees, bagging, and column sampling to create powerful models. In addition, the video covers boosting, which is used to reduce bias and improve model accuracy, done by additively combining weak learners into a strong model. The instructor provides an overview of various types of boosting such as Gradient Boosting and Adaptive Boosting, to name a few. The video concludes by providing a problem set on GitHub for viewers to try and encourages viewers to subscribe to their channel to receive more free content.

06:00:00 - 07:00:00 The "Machine Learning Course for Beginners" video covers several topics related to boosting, such as the core idea behind boosting, different boosting techniques (e.g., gradient boosting, adaptive boost, and extreme boosting), the algorithm for training a model using boosting, and how boosting can be used to reduce high bias in machine learning models. Additionally, the video discusses the implementation of boosting algorithms in Python using libraries such as scikit-learn and mlxtend. The video also touches on the concept of stacking, a method of combining multiple models to create a new model with better performance. The instructor demonstrates how to create a stacked classification model using logistic regression, k-nearest neighbors, Gaussian naive Bayes, and random forest models in Python using the sklearn library.

07:00:00 - 08:00:00 The instructor covers various topics in this video, starting with ensemble learning and stacking classifiers. Then, the focus shifts to unsupervised learning and its applications in clustering data points. The speaker explains different types of clustering algorithms, including center-based and density-based, and gives an overview of evaluation techniques such as the Dunn index and Davies-Bouldin index to assess clustering model quality. Finally, the speaker goes in-depth on k-means clustering, including initialization, centroids, hyperparameters, and limitations, while providing a visualization of the algorithm with two centroids. Overall, the video covers a range of machine learning concepts and techniques, providing a comprehensive introduction to the subject matter.

08:00:00 - 09:00:00 This YouTube video titled "Machine Learning Course for Beginners" covers various topics related to machine learning. One section focuses on k-means clustering and explains the algorithm in detail, covering initialization of centroids, cluster assignment, and updating of clusters until convergence. The video also introduces K-means++ and the elbow method as solutions to problems faced in random initialization. Additionally, another section delves into hierarchical clustering, explaining the creation of a hierarchy of clusters using agglomerative and divisive clustering methods. The video concludes by discussing the heart failure prediction model project, which aims to build a healthcare AI system that will help with the early detection of health concerns to save lives.

09:00:00 - 09:50:00 The "Machine Learning Course for Beginners" video covers various topics related to machine learning, such as imbalanced data, correlation, feature engineering, model building and evaluation, and text classification using NLP techniques. The instructor emphasizes the importance of balanced data and visualizing the data to understand it better. The presenter walks through a step-by-step process to build a spam and ham detector system, analyzing and understanding the data, and implementing NLP techniques to classify messages as spam or ham. The course gives an overview of the essential concepts that beginner machine learning enthusiasts can build upon.


Part 1

  • 00:00:00 In this section, Ayush, a data scientist and machine learning engineer, introduces his machine learning course that covers both theoretical and practical aspects of machine learning algorithms and real-world AI projects. Ayush describes his background, including his experience working on various AI applications such as computer vision and natural language processing, and his YouTube channel where he provides end-to-end courses on machine learning and deep learning. He explains the syllabus of the course, which starts with the basics of machine learning and progresses to understanding algorithms such as linear regression, logistic regression, principal component analysis, and unsupervised learning. Ayush emphasizes the importance of understanding overfitting and underfitting before wrapping up the section.

  • 00:05:00 In this section, the instructor provides a simple explanation of what machine learning is. Essentially, it involves using algorithms to analyze data and make intelligent predictions based on that data without explicit programming. The goal is to create a function that maps input variables to output variables, such as predicting the price of a house based on its size, number of bedrooms, etc. The instructor also provides a more formal definition of machine learning, which involves a computer program improving its performance on a task with experience. Overall, the instructor emphasizes the importance of understanding how to create these functions in order to successfully utilize machine learning.

  • 00:10:00 In this section of the video, the instructor discusses various applications of machine learning, such as self-driving cars, stock price prediction, and medical diagnosis. He also explains the basic workflow of how machine learning works, starting with studying a problem and analyzing data, then training the algorithm and evaluating its performance. The instructor also provides an overview of the main types of machine learning systems, including supervised, unsupervised, and reinforcement learning. He gives an example of supervised learning using house price prediction, where the size of the house is used as a feature to predict the price of the house.

  • 00:15:00 In this section, the speaker discusses supervised learning and its two types of problems: regression and classification. The speaker also provides examples, such as house price prediction for regression and image classification for classification. The speaker explains that supervised learning involves labeled data, where the output variable is known, and there is a relationship between the input and output variables. The speaker also briefly mentions unsupervised learning, where the data is unlabeled, and the model has to recognize patterns based on the data available.

  • 00:20:00 In this section, the speaker discusses the difference between classification and regression problems in machine learning. If the output of a problem is a continuous value, it is considered a regression problem. If the output is a discrete value, it is a classification problem. The importance of dividing data into training and testing sets is highlighted, with 80% used for training and 20% for testing the model. The speaker also explains the issues of overfitting and underfitting, where models either perform poorly on both training and testing data or perform well in training but fail in testing. The section concludes with some notations that will be used throughout the course.
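
As a small illustration of that 80/20 split, here is a hedged sketch using scikit-learn's train_test_split on made-up arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)   # 50 made-up examples with 2 features each
y = np.arange(50)                   # made-up labels

# Hold out 20% of the data for testing; train on the remaining 80%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_test.shape)  # (40, 2) (10, 2)
```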

  • 00:25:00 In this section, the instructor discusses supervised and unsupervised learning in machine learning. In supervised learning, a function f(x) is created that maps input variables to output variables, using both input and output data to make predictions. The instructor gives an example of a dataset that uses features like outlook, temperature, humidity, and windy to predict whether or not a player will play tennis, with the target variable being whether they will play or not. The input features are independent, while the target variable is dependent on those features. Overall, supervised learning involves a labeled dataset where there is a relationship between the input and output data.

  • 00:30:00 In this section, the instructor explains the difference between independent and dependent features in supervised learning, as well as the difference between regression and classification. For example, predicting stock prices or house prices would be regression, while identifying if a person has diabetes would be classification. The instructor then introduces unsupervised learning and explains that it involves only a set of independent features without any dependent features. An example of this is market segmentation without knowing the labels of the data. The instructor notes that the next section will cover unsupervised learning in more depth.

  • 00:35:00 In this section, we learn about linear regression, a type of supervised learning algorithm that is used when the output data is continuous. The goal is to create a function that maps the input variable to the output variable. Linear regression involves fitting a straight line to scattered data to make predictions, for example, predicting the price of a house based on its size. The line represents a hypothesis, and the closer it is to the data, the better the predictions. This section provides an overview of linear regression and prepares learners for the next section on learning algorithms and a project on Boston house price prediction.

  • 00:40:00 In this section, the instructor explains how the hypothesis function works in linear regression. The hypothesis is constructed using the weight of every feature, represented by theta. The bias term, attached to the constant input x0, determines the y-intercept, or where the line crosses the y-axis. The feature weights are learned by the machine, and the best weights produce the best predictions. The instructor emphasizes that machine learning is based on learning parameters, specifically the weights of the features. The hypothesis function maps the input variables to an output variable, which can be used for prediction.

  • 00:45:00 In this section, the instructor explains the hypothesis function for linear regression using feature weights and the bias term, and how it can be expressed in a vectorized form. He shows that in Python it can be written in just one line of code. He then introduces the cost function used to evaluate how well the model is performing and how it calculates the distance between actual and predicted data points. A scatter plot is used as an example to illustrate the concept of the cost function.
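
The "one line of Python" is presumably a dot product; a minimal sketch with placeholder numbers, where x includes the constant x0 = 1 that carries the bias term:

```python
import numpy as np

theta = np.array([50.0, 0.1, 5.0])   # bias weight theta_0 plus two feature weights (placeholders)
x = np.array([1.0, 1200.0, 3.0])     # x_0 = 1, then e.g. house size and bedroom count

hypothesis = theta @ x               # vectorized theta_0*x_0 + theta_1*x_1 + theta_2*x_2
print(hypothesis)
```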

  • 00:50:00 In this section of the video, the speaker explains the concept of a cost function in machine learning. The cost function involves calculating the difference between predicted and actual values for all data points, which are called residuals. By taking the distance between predicted and actual values and squaring them, a cost function is produced. The goal is to minimize the cost function, which determines the effectiveness of the model. The speaker also introduces the Gradient Descent Algorithm as an optimization method to find the best theta, which will provide the optimal model.

  • 00:55:00 In this section, the instructor explains the concept behind gradient descent, a method used to minimize the cost function in machine learning. The instructor uses a simple analogy to demonstrate how gradient descent tweaks the value of theta to minimize the cost function. Next, the instructor explains the mathematical derivation of this process, taking partial derivatives to update the theta value. Finally, the instructor introduces the gradient descent algorithm, outlining the steps to update theta using the learning rate and the partial derivatives of the cost function. The instructor also discusses how to tune the learning rate and the benefits of vectorizing the process.
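
A hedged sketch of that gradient-descent loop for linear regression on synthetic data; the learning rate and iteration count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
X = np.c_[np.ones(m), rng.uniform(0, 2, size=m)]   # column of 1s carries the bias term
y = 4 + 3 * X[:, 1] + rng.normal(scale=0.1, size=m)

theta = np.zeros(2)
learning_rate = 0.1
for _ in range(1000):
    predictions = X @ theta
    gradient = (X.T @ (predictions - y)) / m   # partial derivatives of the cost
    theta -= learning_rate * gradient          # tweak theta in the downhill direction

print(theta)   # roughly [4, 3]
```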

Part 2

  • 01:00:00 In this section, the instructor explains how to vectorize the partial derivatives of theta in linear regression. Instead of taking the partial derivatives of theta zero and theta one separately, you can collect them into a joint vector theta and take the partial derivative with respect to whatever you want to compute. With this vectorization, you can write the derived equation in a vectorized form, which is far more computationally efficient. The instructor also highlights the normal equation, which gives you the optimal theta in a single equation. They explain the assumptions of linear regression and the difference between independent and dependent features. Finally, they mention other optimization algorithms used at more advanced levels, such as stochastic gradient descent, the Adam optimization algorithm, and RMSProp.
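
A small sketch of the normal equation mentioned here, solving for the optimal theta in one step instead of iterating (reusing the same kind of synthetic data as the gradient-descent sketch above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100
X = np.c_[np.ones(m), rng.uniform(0, 2, size=m)]
y = 4 + 3 * X[:, 1] + rng.normal(scale=0.1, size=m)

# Normal equation: theta = (X^T X)^(-1) X^T y, a closed-form solution.
theta = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta)   # roughly [4, 3]
```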

  • 01:05:00 In this section, the instructor briefly explains the concept of polynomial regression and how it can be used to transform non-linear data into a linear form that can be fit for linear regression. The instructor also mentions the upcoming topics in the course, including logistic regression and classification tasks. The difference between linear regression and logistic regression is explained, and the hypothesis for logistic regression is presented as similar to that of linear regression, but used to classify data.

  • 01:10:00 In this section of the video, the instructor explains the hypothesis for logistic regression, which includes a sigmoid function to get the output between 0 and 1. The output is then compared to a threshold (0.5 or 0.7) to predict whether the image is a cat or not. The cost or loss function is also explained as a way to evaluate the accuracy of the model. The instructor provides the formula for the cost function for one training example.
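
A minimal sketch of that hypothesis with placeholder weights and a 0.5 threshold:

```python
import numpy as np

def sigmoid(z):
    # Squashes the linear score into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([-1.0, 2.0, 0.5])   # placeholder weights; theta_0 is the bias
x = np.array([1.0, 0.8, -0.3])       # x_0 = 1 plus two made-up features

probability = sigmoid(theta @ x)               # hypothesis: estimated P(cat | x)
prediction = 1 if probability >= 0.5 else 0    # compare to the chosen threshold
print(probability, prediction)
```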

  • 01:15:00 In this section, the speaker discusses the log loss cost function and its formula in machine learning. The formula multiplies the ground-truth value (y_i) by the log of the model's predicted value. The speaker explains that if the prediction matches the label, the cost is approximately zero, and if they differ, the cost is very high. The speaker then goes on to discuss the gradient descent algorithm, which uses the cost function to adjust the parameters, updating the theta values to move closer to the global optimum. The partial derivative of the cost function is taken to update the theta values.
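
A hedged sketch of the log-loss cost over a small batch, plus one matching gradient-descent update; the feature matrix and labels are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])  # first column is x_0 = 1
y = np.array([1, 0, 1])                              # ground-truth 0/1 labels
theta = np.zeros(2)

def log_loss(theta):
    p = sigmoid(X @ theta)
    # Near zero when predictions match the labels, very large when they do not.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# One gradient-descent update; the gradient takes the same shape as in linear regression.
learning_rate = 0.5
gradient = X.T @ (sigmoid(X @ theta) - y) / len(y)
theta -= learning_rate * gradient
print(log_loss(theta))
```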

  • 01:20:00 In this section, the speaker discusses vectorization in machine learning and provides a vectorized code for the cost function and gradient descent in logistic regression. They explain that vectorization means doing all calculations at once to save time. The speaker highlights the importance of understanding logistic regression as a classification algorithm, including the hypothesis, cost function, and gradient descent for finding the best optimal theta. They also mention that the next section will cover breast cancer detection and support vector machines. The speaker encourages the audience to follow along on their Jupyter Notebook, which includes loading data, feature engineering, data visualization, and feature selection.

  • 01:25:00 In this section, the instructor introduces the Python libraries that will be used in the project which includes numpy, pandas, plotly, seaborn, and matplotlib. The project involves predicting the prices of houses in Boston using machine learning, and data is loaded from scikit-learn library. The target variable is y, which is the sales price, while x is the independent features used to predict the model. The data is then converted into a data frame and the instructor shows how to access information about the data, including the number of rows and columns, non-null values, and data types using various pandas functions. The describe function is also used to show the mean, standard deviation, minimum, maximum, and percentiles of each column.

  • 01:30:00 In this section of the video, the presenter discusses data analysis techniques to gain insights into a supervised learning problem involving a target variable, sales price, and feature variables. They demonstrate how to use visualization tools such as distribution plots, pair plots, and qq plots. They also explain how to identify and deal with outliers, calculate skewness and apply transformations to the data. Moreover, they introduce the concept of correlation between features and how it can be used to select highly correlated features for the problem.

  • 01:35:00 In this section of the video, the instructor covers model building and accuracy checking using linear regression. The focus is on avoiding overfitting, which is when a model learns too much from the training data and performs poorly on new data. The instructor explains how regularization can reduce overfitting by either reducing features or applying a penalty to the model's complexity. The next section will cover regularized linear models, including Lasso and Ridge regression. The instructor encourages viewers to search online for solutions when encountering problems and offers a Github repository for further projects.

  • 01:40:00 In this section, the speaker discusses regularization techniques in machine learning, specifically ridge and lasso regression. Ridge regression penalizes the weights of less important features, shrinking them closer and closer to zero, while lasso regression goes one step further and eliminates less important features by driving their weights exactly to zero. The regularization term is added to the cost function, with ridge regression using the L2 norm and lasso regression the L1 norm. The speaker also emphasizes that theta zero, the bias term, should not be penalized. The two cost functions are written out below.
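
In symbols (scaling conventions for the penalty vary between texts), the two cost functions look roughly like this, with the bias term theta_0 excluded from the penalty:

```latex
J_{\text{ridge}}(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
  + \alpha \sum_{j=1}^{n} \theta_j^{2}
\qquad \text{(L2 norm)}

J_{\text{lasso}}(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)^2
  + \alpha \sum_{j=1}^{n} \lvert\theta_j\rvert
\qquad \text{(L1 norm)}
```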

  • 01:45:00 In this section, the instructor introduces regularization, which is considered one of the most important topics in machine learning. Regularization helps to reduce overfitting, which occurs when a model performs very well on the training set but fails to generalize to the testing set. The main idea is to suppress features that are less important or carry less information by driving their corresponding parameters (thetas) toward zero. The instructor uses the example of house price prediction to show how regularization can eliminate less helpful features.

  • 01:50:00 In this section of the video, the instructor explains the concept of feature weighting and learning in machine learning. Using the example of house price prediction, the instructor shows how features such as house size, number of fans, number of bedrooms, and air conditioners are given different weights that are learned and optimized over time. The weights are adjusted using the partial derivatives of the cost function, and this process can be improved with regularization techniques such as ridge regression.

  • 01:55:00 In this section, the video discusses regularization in machine learning, specifically ridge and lasso regression. Both penalize the theta values, but ridge regression uses the L2 norm while lasso regression uses the L1 norm. The alpha term controls how strongly the model penalizes the features, with higher values making it stricter. Neither penalizes the bias term theta 0, and the L1 norm in lasso regression pushes the weights of less important features all the way to zero, making the model less prone to overfitting.

Part 3

  • 02:00:00 In this section, the video explains the L1 and L2 norms in regularization and their use cases. The L1 norm is stricter, driving the theta of any unimportant feature directly to zero, while the L2 norm is more flexible and only shrinks the weights. The video then briefly mentions elastic net, a combination of both norms. Moving on, the video introduces the support vector machine (SVM), a supervised learning algorithm used for both classification and regression tasks. SVM constructs a separating hyperplane with parallel margins and maximizes the margin between them, for example when classifying new points as cat or not-cat in a cat-recognition system. The video also outlines the SVM topics that will be covered in the upcoming sections.

  • 02:05:00 In this section, the instructor explains support vector machine (SVM) and how it constructs two parallel hyperplanes to separate data points with a maximum margin. SVM aims to maximize this margin while keeping the nearest data point far away from the hyperplane. The nearest data points are called support vectors, which support the parallel hyperplanes to separate the data. The instructor provides an example of using SVM for cancer detection and explains the difference between hard margin and soft margin classification. Hard margin classification doesn't allow any data point to violate the margin, which can lead to overfitting, whereas soft margin classification allows for some violations to prevent overfitting.

  • 02:10:00 In this section, the concepts of hard margin and soft margin in SVMs are introduced. A hard margin does not allow any data points inside the margin, while a soft margin allows some data points to violate the margin to avoid overfitting. The width of the margin is adjusted by the parameter c: a very large c makes the margin very small, while a smaller c (such as 1) makes the margin much wider. The construction of the hyperplane in SVM is also discussed; it is defined by the equation w transpose x minus b equals zero, where w is the weight vector and b is the bias term. Constraints for hard margin prediction are defined, where anything on or above the margin is classified as one and anything below it as zero.

  • 02:15:00 In this section, the video discusses how support vector machines (SVMs) work and provides a mathematical explanation. The margin of the hyperplane is written as 2 over the norm of w, so maximizing the margin means minimizing the norm of w. The objective function is therefore the distance between the two parallel hyperplanes, 2 over the norm of w, subject to each example lying on the correct side of its margin (the constraint value being greater than or equal to 1). The video also introduces the hinge loss used in the soft-margin formulation and explains how it leads to good predictions in a straightforward way. The linear SVM is covered before the discussion moves on to non-linear classification. The formulas are summarized below.
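
In summary (one common way to write it; the video's notation may differ slightly):

```latex
\text{Hard margin:}\quad
\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^{2}
\quad\text{s.t.}\quad y_i\,(w^{\top}x_i - b) \ge 1 \ \ \forall i

\text{Soft margin (hinge loss):}\quad
\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^{2}
  + C \sum_{i=1}^{n} \max\bigl(0,\ 1 - y_i\,(w^{\top}x_i - b)\bigr)
```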

  • 02:20:00 In this section, the video discusses non-linear classification and the kernel trick that makes it possible. The kernel trick maps the input data from a lower-dimensional space into a higher-dimensional space through a feature map phi(x), and the RBF kernel is one of the most widely used kernels for this transformation. The video also covers the primal problem, which formalizes the objective function by introducing a slack variable, zeta, for every training example i from 1 to n.
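
In scikit-learn the kernel trick amounts to choosing a kernel; a minimal sketch with the RBF kernel on a synthetic non-linear dataset (not the data used in the course):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# Toy data that no straight line can separate
X, y = make_moons(n_samples=200, noise=0.2, random_state=42)

# The RBF kernel implicitly maps the points into a higher-dimensional space
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy
```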

  • 02:25:00 In this section, the speaker introduces the concept of hinge loss and how it is used to formulate a cost function for machine learning problems. They explain how to use subgradient descent to update the parameters of the cost function and how the primal problem works, emphasizing that it is a beginner-friendly approach. The speaker also discusses empirical risk minimization and support vector regression, providing the equations for these concepts. They encourage viewers to comment with any questions and end by mentioning that the next section will build a stock price predictor as an end-to-end machine learning project.

  • 02:30:00 In this section, the speaker demonstrates how to build a stock price predictor using Python libraries such as NumPy, Pandas, Matplotlib, Seaborn, and yfinance. The code prompts the user for the ticker symbol of the stock and downloads its data from Yahoo Finance. Prices are adjusted with the auto-adjust option, and the shape of the data is shown, revealing 1256 training examples and five columns. The speaker explains that this is the starting point for building the stock price predictor.
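
A hedged sketch of this download step (the period and ticker prompt are assumptions, not necessarily the exact code from the video):

```python
import yfinance as yf

ticker = input("Enter the stock ticker to download: ")        # e.g. "AAPL"
data = yf.download(ticker, period="5y", auto_adjust=True)     # adjusted OHLCV data

print(data.shape)    # e.g. (1256, 5): training examples x columns
print(data.head())
```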

  • 02:35:00 In this section of the video, the speaker explores data exploration and analysis for stock price prediction. The speaker begins by looking at statistics such as mean, standard deviation, maximum, and minimum of the data. They caution that stock price prediction is highly non-linear and should be used for educational purposes only. The speaker proceeds to analyze their target variable, "close," to demonstrate the non-linearity and how it can't be relied on to predict output accurately. The speaker then goes on to plot the distribution of "open" and "close" to get a better feel for how to proceed with the data and apply feature engineering. Finally, the speaker concludes by summarizing their results and outlining what they have understood about the data.

  • 02:40:00 In this section, the speaker discusses various machine learning algorithms that have been covered in the course so far, including linear regression, logistic regression, regularized linear models, support vector machines, and principal component analysis. The speaker explains that while linear regression is often considered a bad algorithm, it can be powerful in certain cases, such as for predicting non-linear stock prices. The importance of splitting the data into training and testing sets is emphasized, and the speaker demonstrates how to use the train_test_split function to accomplish this. The linear regression algorithm is then instantiated and trained on the training set, and used to predict the test set. The predicted outputs are shown and compared to the actual outputs from the test set.
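
The split-train-predict pattern described here, sketched on synthetic stand-in data (the real notebook uses the downloaded stock features):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the stock features and the "close" target
rng = np.random.default_rng(42)
X = rng.normal(size=(1256, 4))
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(scale=0.05, size=1256)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)   # keep the time order for time-series-like data

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)
print(y_pred[:5])   # compare against y_test[:5]
```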

  • 02:45:00 In this section, the speaker talks about calculating metrics to evaluate the performance of linear regression models, using the mean squared error (MSE), root mean squared error (RMSE), and R-squared score to assess model effectiveness. The speaker provides a helper function to calculate these metrics and uses it to evaluate the linear regression model. They find that the MSE and RMSE are almost zero, meaning the model is predicting values accurately, and the R-squared score is close to 1, which indicates a good fit.
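
One possible version of such a helper (the notebook's own function may differ in detail):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_true, y_pred):
    # MSE / RMSE near zero and R^2 near one indicate a good fit
    mse = mean_squared_error(y_true, y_pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_true, y_pred)
    print(f"MSE:  {mse:.6f}\nRMSE: {rmse:.6f}\nR2:   {r2:.6f}")
    return mse, rmse, r2
```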

  • 02:50:00 In this section, the speaker discusses regularized linear models such as ridge and lasso and demonstrates how to implement them with scikit-learn in Python. The speaker explains that lasso eliminates the less important features, while ridge only penalizes them. According to the speaker, the ridge model is less prone to overfitting, making it a better choice for saving a model and building a website. The speaker also discusses support vector regression and shows how it can be fine-tuned with GridSearchCV to try different hyperparameter values and select the best model.
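
A sketch of tuning support vector regression with GridSearchCV (the parameter grid is illustrative; X_train and y_train are the split from the earlier sketch):

```python
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

param_grid = {
    "kernel": ["linear", "rbf"],
    "C": [0.1, 1, 10],
    "gamma": ["scale", "auto"],
}

# Tries every combination with 5-fold cross-validation and keeps the best
grid = GridSearchCV(SVR(), param_grid, cv=5, scoring="r2")
grid.fit(X_train, y_train)

print(grid.best_params_)
best_model = grid.best_estimator_
```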

  • 02:55:00 In this section, the video discusses the use of regularized linear models for machine learning and how they are more powerful than the previous method used. The presenter goes through examples of how to import and save a model using joblib and how to load the saved model for future use. The video also delves into creating a simple app using Flask and rendering a template from an HTML file. A demonstration is provided on how to create an index.html file and render it in an app route.
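
A minimal sketch of persisting a model with joblib and rendering a template with Flask (file and template names are assumptions; any fitted estimator can stand in for the course's model):

```python
import joblib
from flask import Flask, render_template
from sklearn.linear_model import Ridge

# Persist a fitted model to disk and load it back later
model = Ridge(alpha=1.0).fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])
joblib.dump(model, "model.joblib")
model = joblib.load("model.joblib")

app = Flask(__name__)

@app.route("/")
def index():
    # Renders templates/index.html
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
```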

Part 4

  • 03:00:00 In this section of the video, the speaker shows how to set up a server and website using Flask for an end-to-end machine learning project. They explain how to use a form to input data, preprocess it, make predictions using a loaded model, and return the prediction to a prediction.html template that is derived from a layout.html template. The speaker also encourages users to modify the data features to make the model more powerful but warns against making it too complex. They conclude by emphasizing the importance of understanding the limitations of linear regression when dealing with multi-collinear data.
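
A hedged sketch of the form-handling route described here (the template names, the preprocessing, and the feature order are assumptions):

```python
import joblib
import numpy as np
from flask import Flask, render_template, request

app = Flask(__name__)
model = joblib.load("model.joblib")       # the model saved earlier

@app.route("/predict", methods=["POST"])
def predict():
    # Read the numeric feature values submitted by the form, in a fixed order
    features = [float(value) for value in request.form.values()]
    prediction = model.predict(np.array(features).reshape(1, -1))[0]
    # prediction.html is assumed to extend layout.html
    return render_template("prediction.html", prediction=round(float(prediction), 2))
```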

  • 03:05:00 In this section of the YouTube video "Machine Learning Course for Beginners", the instructor explains how to remove multicollinearity in data through the use of principal component analysis (PCA). Correlated variables can be problematic in machine learning models, but PCA is a dimensionality reduction algorithm that can effectively address this issue. To prepare for learning about PCA, the instructor briefly reviews linear transformations and eigenvectors/eigenvalues, which provide a basis for understanding the concept of dimensionality reduction. The instructor recommends a YouTube channel for those interested in a deeper dive into linear algebra but emphasizes that this material is not necessary to understand PCA.

  • 03:10:00 In this section, the instructor explains linear transformations, eigenvectors, eigenvalues, and their significance in principal component analysis (PCA). A linear transformation is a function that maps one vector space to another while preserving the linear structure. An eigenvector is a vector whose direction is unchanged by the transformation, and its eigenvalue is the factor by which it is scaled. The instructor also discusses the need for dimensionality reduction, especially in large datasets, and how PCA transforms a large set of features into a smaller one. Understanding these concepts is crucial for working with text and image data in real-world scenarios.

  • 03:15:00 In this section, the speaker explains the basic intuition behind Principal Component Analysis (PCA), which is to reduce the dimensions of a dataset while preserving as much information as possible. They describe how PCA constructs new variables, or principal components, that are a linear combination of the initial variables. These components are uncorrelated and most of the information is compressed into the first components. The speaker also goes over the visualization of the data projection onto the principal components and stresses that the algorithm must ensure that the components are uncorrelated.

  • 03:20:00 In this section, the importance of data pre-processing for principal component analysis (PCA) is discussed. The first step is standardizing the data so that all features fall within the same range, which is critical because PCA is sensitive to the scale of the features. Once the data is standardized, the covariance matrix is computed to understand how the input variables vary with respect to each other; highly correlated variables can contain redundant information and can be removed. Finally, the eigenvectors and eigenvalues of the covariance matrix are computed, which are the key ingredients for transforming the data into a lower-dimensional space.

  • 03:25:00 In this section, the instructor shows how to compute eigenvectors and eigenvalues in Python: an eigenvector is a direction that the transformation only stretches, and the eigenvalue is the factor by which it is stretched. The eigenvalues, together with their corresponding eigenvector columns, are sorted in decreasing order, and the cumulative energy content of the eigenvectors is calculated. A subset of eigenvectors with the highest energy content is then selected as the new basis, and the data is projected onto this basis. The instructor concludes by outlining the topics covered in the following sections: learning theory, the bias-variance tradeoff, and approximation and estimation error. A sketch of these steps follows below.
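
A sketch of these steps with NumPy (the names are illustrative; scikit-learn's PCA class wraps the same computation):

```python
import numpy as np

def pca(X, n_components):
    # 1. Standardize: PCA is sensitive to the scale of the features
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)

    # 2. Covariance matrix of the standardized features
    cov = np.cov(X_std, rowvar=False)

    # 3. Eigenvectors and eigenvalues (eigh: the covariance matrix is symmetric)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)

    # 4. Sort both in decreasing order of eigenvalue and track the cumulative "energy"
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    explained = np.cumsum(eigenvalues) / eigenvalues.sum()

    # 5. Project the data onto the top components
    basis = eigenvectors[:, :n_components]
    return X_std @ basis, explained
```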

  • 03:30:00 In this section, the instructor discusses the concept of empirical risk minimization and the importance of understanding the bias-variance trade-off in machine learning. The instructor emphasizes that although bias and variance seem like easy concepts, they are difficult to master in practice when developing real-world products. The instructor also explains underfitting, which occurs when a model does not perform well even on the training set because it has too few features or too little data, and suggests feature engineering to generate more features and improve the model's performance.

  • 03:35:00 In this section of the video, the different fitting regimes of a model (underfitting, a good fit, and overfitting) are explained. Overfitting happens when a model performs well on the training set but poorly on the testing set because the model is too complex or has too many features; it can be prevented by selecting only the important features or by using regularization. The bias-variance trade-off is also discussed: a model with a low error on the training set but a high error on the validation set has high variance, and bagging is used in ensemble learning methods to reduce it.

  • 03:40:00 In this section, the instructor explains how to identify whether a model has high bias or high variance. If the training and evaluation errors are both high and close together (for example, 15% on the training set and 16% on the evaluation set), the model has high bias; if the evaluation error is much larger than the training error (for example, 30% against 15%), the model also suffers from high variance. The instructor then explains the concepts of approximation and estimation error, the gap between the exact value and the approximation produced by the model. Finally, they mention the assumption that the base error, or human-level performance, is approximately zero percent when building a classification model.

  • 03:45:00 In this section, the video introduces empirical risk minimization: an algorithm receives a training sample drawn from a large distribution, with labels assigned by a target function, and the goal is to minimize the error with respect to that unknown distribution so the model can predict examples it has never seen with minimal error. The video emphasizes that the output predictor depends on the weights learned from the training set, and that minimizing the error, or risk, measured on the training sample is what is called empirical risk minimization. The video invites viewers to ask questions in the comments and to check out the course website.
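
In symbols (one standard formulation; notation varies), the empirical risk over a training sample of size n with loss L and hypothesis h is:

```latex
\hat{R}(h) \;=\; \frac{1}{n}\sum_{i=1}^{n} L\bigl(h(x_i),\, y_i\bigr),
\qquad
h_{\mathrm{ERM}} \;=\; \arg\min_{h \in \mathcal{H}} \hat{R}(h)
```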

  • 03:50:00 In this section, the instructor discusses using nested if-else statements in machine learning, a common way to split data by asking questions. They use the example of the Iris dataset, which contains four features and a label indicating the species of the flower; the task is to detect the species from the four features, making it a classification problem. The instructor explains how to create a simple classifier that uses if-else statements to split the data on the feature values and determine the label.

  • 03:55:00 In this section of the video, the instructor explains how to create a simple decision tree using two features: petal length and sepal length. The decision tree is written as nested if-else statements that define the various conditions and classify the target variable, as sketched below. The instructor also explains decision tree terminology such as root nodes, parent nodes, and child nodes.
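
A sketch of such a hand-written classifier (the thresholds below are rough illustrative values, not the ones used in the video):

```python
def classify_iris(petal_length, sepal_length):
    # Nested if-else acting as a tiny decision tree; thresholds are illustrative
    if petal_length < 2.5:
        return "setosa"
    else:
        if sepal_length < 6.0:
            return "versicolor"
        else:
            return "virginica"

print(classify_iris(1.4, 5.1))   # -> "setosa"
print(classify_iris(5.5, 6.7))   # -> "virginica"
```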

Part 5

  • 04:00:00 In this section of the video, the instructor explains the basic terminology of decision trees, such as terminal or leaf nodes and branches. They also discuss splitting the data and pruning nodes, which is the elimination of certain nodes in the tree. The instructor emphasizes the importance of taking notes and understanding the terminology for a better understanding of decision trees. They then move on to explain what decision boundaries, or hyperplanes, look like in decision trees and how they are constructed for each outcome. The instructor plots the hyperplanes based on the two features chosen for the example and shows how they are constructed depending on the outcome.

  • 04:05:00 In this section, the instructor explains how decision trees are constructed using attribute selection measures such as entropy, information gain, and gini impurity. These measures help to determine which feature should be the root node or how to split the data set. The instructor emphasizes the importance of choosing the right feature to avoid ending up with a bad model. Entropy is defined as a measure of randomness where the higher the entropy, the harder it is to draw information from it. The instructor provides examples and properties of entropy to help understand its significance in constructing decision trees.

  • 04:10:00 In this section of the video, the instructor explains how to calculate entropy, which is a measure of randomness, using a formula that involves the probability of each class and the logarithm of the probability. The instructor uses the example of a dataset for playing golf and calculates the entropy of the classes "yes" and "no" to demonstrate how the formula works. The instructor also discusses different properties of entropy, including binary classifiers with two classes, and the importance of minimizing entropy to improve the accuracy of a machine learning model.
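
The entropy calculation as a small Python helper (the class counts below come from the classic play-golf example with 9 "yes" and 5 "no" days):

```python
import math

def entropy(class_counts):
    # H = -sum(p_i * log2(p_i)) over classes with non-zero probability
    total = sum(class_counts)
    return -sum((c / total) * math.log2(c / total) for c in class_counts if c > 0)

print(entropy([9, 5]))    # play-golf data: ~0.94
print(entropy([7, 7]))    # perfectly mixed: 1.0 (maximum for two classes)
print(entropy([14, 0]))   # pure node: 0.0
```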

  • 04:15:00 In this section, the instructor explains the concept of entropy, a measure of the randomness of an attribute that determines whether it needs to be split further in a decision tree. The entropy is calculated from the proportions of the label values in the data, and the goal is to minimize the entropy to get better decision tree results. The instructor walks through several scenarios and shows how to calculate the entropy for each, illustrating when an attribute becomes a leaf node and when it needs further splitting. The maximum entropy (for a binary problem) is 1 and the minimum is 0, and different decision tree algorithms follow specific rules for when to treat a node as a leaf or split further based on the entropy value.

  • 04:20:00 In this section, the presenter recaps entropy as a measure of randomness and shows its diagram; the highest entropy value for a two-class problem is one. Moving on to information gain, the presenter introduces it as another attribute selection measure and walks through an example dataset: the data is divided into smaller subsets, and the entropy is calculated for each subset. This is a preliminary explanation of information gain, which is explored further in the next section.

  • 04:25:00 In this section, the video explains how to calculate information gain in decision trees using entropy. The process involves taking the entropy of the whole data distribution and the entropies before and after splitting the data. The formula for information gain subtracts the weighted entropy after the split from the entropy before the split. The example in the video shows how the entropy and weighted entropy are calculated for each branch and combined to obtain the weighted entropy of the split; subtracting this from the entropy before the split gives the information gain.
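
A small helper for the same calculation, reusing the entropy() function from the sketch above (each branch of the split is given as a list of class counts; the outlook split of the play-golf data serves as the example):

```python
def information_gain(parent_counts, branch_counts_list):
    # IG = entropy(parent) - weighted average entropy of the branches after the split
    total = sum(parent_counts)
    weighted = sum(
        (sum(counts) / total) * entropy(counts) for counts in branch_counts_list
    )
    return entropy(parent_counts) - weighted

# Outlook split: sunny (2 yes / 3 no), overcast (4 / 0), rainy (3 / 2)
print(information_gain([9, 5], [[2, 3], [4, 0], [3, 2]]))   # ~0.247
```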

  • 04:30:00 In this section, the video explains that the Gini impurity calculation is very similar to the entropy calculation, but instead of logarithms it uses squared probabilities. After dividing the dataset into multiple categories, you calculate the weighted Gini impurity and subtract it from the parent Gini impurity to get the gain. This is a popular and commonly used method in machine learning, and it is important to understand Gini impurity alongside entropy and information gain when building a decision tree model.

  • 04:35:00 In this section, the instructor explains Gini impurity as another measure of impurity in a decision tree, defined as Gini(y) = 1 minus the sum over the k classes of the squared class probabilities. The instructor walks through an example with two category values, yes and no, each with probability 0.5, which gives a Gini impurity of 0.5, the maximum possible for a binary split. Gini impurity is often used as an alternative to entropy because it is faster to compute, avoiding the log function. Finally, the instructor shows the diagrams of entropy and Gini impurity and promises to demonstrate how a decision tree can be used for a regression task in the next section.
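
The same helper style for Gini impurity; note the absence of a logarithm, which is what makes it cheaper to compute:

```python
def gini_impurity(class_counts):
    # Gini(y) = 1 - sum over the k classes of p_i^2
    total = sum(class_counts)
    return 1.0 - sum((c / total) ** 2 for c in class_counts)

print(gini_impurity([7, 7]))    # 0.5 -> the maximum for a 50/50 binary split
print(gini_impurity([14, 0]))   # 0.0 -> a pure node
```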

  • 04:40:00 In this section of the video, the instructor explains how to calculate entropy before splitting the data to determine information gain. They take the entropy of the data distribution and calculate the information gain with respect to the outlook variable. They then split the data based on this variable and continue to calculate the weighted entropy and information gain for temperature, humidity, and windy variables. Ultimately, it is found that the information gain in outlook is the highest, so it is chosen as the root node for the decision tree.

  • 04:45:00 In this section, the video reviews the decision tree and how it can be used for both classification and regression problems. In classification, decision trees are built by computing impurity measures such as entropy and Gini impurity to make the decision at each node, with the objective of splitting until the nodes are as pure as possible. In regression, decision trees are built by taking the average of the target values at each splitting point until a leaf node is reached. Overfitting can be an issue with decision trees, so it is important to stop the tree's growth at a certain depth or prune some branches to make it more robust.

  • 04:50:00 In this section, the instructor explains that it is important to understand attribute selection measures when working with decision trees. They provide examples and walk through the implementation of a decision tree regressor and classifier, emphasizing the value of learning from implementation. They explain hyperparameters such as the maximum depth of the tree, the minimum samples per split, the minimum samples per leaf, and the random state, and show an example of plotting a decision tree with a graphing tool.

  • 04:55:00 In this section, the video discusses decision tree regression and its splitting criteria, including mean squared error, mean absolute error, and Poisson. The video emphasizes the importance of hyperparameters, particularly max depth, in controlling overfitting, and showcases examples of decision trees and methods such as plotting them; a sketch follows below. The video stresses that decision trees are a crucial concept in machine learning. The next section discusses ensemble learning and its three techniques: bagging, boosting, and stacking, which are commonly used in Kaggle competitions.
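
A sketch of the scikit-learn estimator and hyperparameters mentioned here, on synthetic stand-in data (the hyperparameter values are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor, plot_tree

# Synthetic 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

reg = DecisionTreeRegressor(
    criterion="squared_error",   # also available: "absolute_error", "poisson"
    max_depth=3,                 # limiting depth helps control overfitting
    min_samples_split=10,
    min_samples_leaf=5,
    random_state=42,
)
reg.fit(X, y)

plot_tree(reg, filled=True)      # draw the fitted tree
plt.show()
```
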
Source: "Machine Learning Course for Beginners" (www.youtube.com, published 2021-08-30). Learning resources: https://github.com/...