A Research-Based and Integration-Oriented Program
ARTH learners NIL
500 hours
In this age of automation, traditional concepts and approaches are giving way to evolving technologies such as Machine Learning and Artificial Intelligence. The era of digitalization we live in is changing the way we work: for organizations striving for comprehensive digital transformation, becoming data-driven is a key goal.
Become part of the only program in the world where learners understand the need for, and the power of, integrating Cloud Computing, DevOps, and the Big Data ecosystem with Machine Learning.
The program is designed to meet the current needs of industry, covering all aspects of Data Science.
What is feature selection?
Filter methods
Wrapper methods
Embedded methods
Constant, quasi-constant, and duplicated features
Constant features
Quasi-constant features
Duplicated features
Correlation
Basic methods plus correlation pipeline
Statistical methods – Intro
Mutual information
Chi-square for categorical variables
Fisher score
Univariate approaches
Univariate ROC-AUC
Wrapper methods – Intro
Step forward feature selection
Step backward feature selection
Exhaustive search
Regularisation – Intro
Lasso
Regression Coefficients – Intro
Selection by Logistic Regression Coefficients
Coefficients change with penalty
Selection by Linear Regression Coefficients
Selecting Features by Tree Importance – Intro
Select by model importance: random forests (embedded)
Select by model importance: random forests (recursive)
Select by model importance: gradient boosted machines
Feature selection with decision trees: review
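The basic filter methods above can be sketched in a few lines of scikit-learn. This is an illustrative toy example (the data and threshold are made up, not from the course): `VarianceThreshold` drops constant and quasi-constant features, and a transpose-deduplication pass drops exact duplicates.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import VarianceThreshold

# Toy dataset (hypothetical): one informative feature, one constant,
# one quasi-constant, and one exact duplicate of the informative one.
df = pd.DataFrame({
    "informative": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "constant":    [7.0, 7.0, 7.0, 7.0, 7.0, 7.0],
    "quasi":       [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
    "duplicate":   [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# Filter method 1: drop constant and quasi-constant features
# (variance below a small threshold).
selector = VarianceThreshold(threshold=0.2)
selector.fit(df)
kept = df.columns[selector.get_support()].tolist()

# Filter method 2: among the kept features, drop exact duplicates
# (transpose so duplicate columns become duplicate rows).
deduped = df[kept].T.drop_duplicates().T
final_features = deduped.columns.tolist()
```

Wrapper and embedded methods would then search over this reduced feature set, which is why the cheap filters are applied first.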
Jupyter Overview:
Updates to Notebook Zip
Jupyter Notebooks
Optional: Virtual Environments
Python:
Python Core to Advanced
Python for Data Analysis - NumPy:
Introduction to NumPy
NumPy Arrays
Array Indexing
NumPy Array Indexing
NumPy Operations
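The NumPy topics above (arrays, indexing, vectorized operations) fit in a short sketch; the array contents here are arbitrary illustration:

```python
import numpy as np

# A 2-D array: 3 rows, 4 columns, values 0..11.
a = np.arange(12).reshape(3, 4)

# Indexing: row 1, column 2 (zero-based).
element = a[1, 2]

# Slicing: the first column of every row.
first_col = a[:, 0]

# Boolean (mask) indexing selects elements conditionally.
evens = a[a % 2 == 0]

# Vectorized operations apply element-wise, with no Python loop.
doubled = a * 2
total = a.sum()
```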
Python for Data Analysis - Pandas:
Introduction to Pandas
Series
DataFrames
Missing Data
Groupby
Merging, Joining, and Concatenating
Operations
Data Input and Output
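The pandas topics above (missing data, groupby, merging) can be sketched together on hypothetical sales data:

```python
import numpy as np
import pandas as pd

# Hypothetical sales data with one missing value.
df = pd.DataFrame({
    "region": ["east", "west", "east", "west"],
    "sales":  [100.0, 200.0, np.nan, 400.0],
})

# Missing data: fill NaN with the column mean.
df["sales"] = df["sales"].fillna(df["sales"].mean())

# GroupBy: aggregate sales per region.
per_region = df.groupby("region")["sales"].sum()

# Merging: left-join a lookup table of region managers.
managers = pd.DataFrame({"region": ["east", "west"],
                         "manager": ["Ann", "Bob"]})
merged = df.merge(managers, on="region", how="left")
```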
Python for Data Visualization - Matplotlib:
Matplotlib
Python for Data Visualization - Seaborn:
Introduction to Seaborn
Categorical Plots
Matrix Plots
Grids
Regression Plots
Style and Color
Python for Data Visualization - Pandas Built-in Data Visualization:
Pandas Built-in Data Visualization
Pandas Data Visualization Exercise
Pandas Data Visualization Exercise - Solutions
Python for Data Visualization - Plotly and Cufflinks:
Plotly and Cufflinks
Python for Data Visualization - Geographical Plotting:
Choropleth Maps
Supervised Learning Overview
Bias/Variance Tradeoff
K-Fold Cross-Validation to avoid overfitting
Data Cleaning and Normalization
Normalizing numerical data
Detecting outliers
Feature Engineering and the Curse of Dimensionality
Imputation Techniques for Missing Data
Handling Unbalanced Data: Oversampling, Undersampling, and SMOTE
Binning, Transforming, Encoding, Scaling, and Shuffling
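Binning, encoding, and scaling can be sketched with pandas and scikit-learn. The data and bin edges below are illustrative assumptions, not course material:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data.
df = pd.DataFrame({
    "age":  [22, 35, 47, 58],
    "city": ["NY", "SF", "NY", "LA"],
})

# Binning: turn a numeric column into coarse categories.
df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 50, 100],
                       labels=["young", "mid", "senior"])

# Encoding: one-hot encode the categorical column.
encoded = pd.get_dummies(df, columns=["city"])

# Scaling: standardize numeric data to zero mean, unit variance.
scaler = StandardScaler()
df["age_scaled"] = scaler.fit_transform(df[["age"]]).ravel()
```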
Simple Linear Regression
Simple Linear Regression Intuition
Cross Validation and Bias-Variance Trade-Off
Bias-Variance Trade-Off
Multiple Linear Regression
Multiple Linear Regression Intuition
Multiple Linear Regression - Backward Elimination
Multiple Linear Regression - Automatic Backward Elimination
Polynomial Regression
Polynomial Regression Intuition
Support Vector Regression (SVR)
SVR in Python and R
Decision Tree Regression
Decision Tree Regression Intuition
Random Forest Regression
Random Forest Regression Intuition
Evaluating Regression Model Performance
R-Squared Intuition
Adjusted R-Squared Intuition
Interpreting Linear Regression Coefficients
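The regression-evaluation ideas above (R², adjusted R², coefficient interpretation) can be sketched on synthetic data. The formula adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1) penalizes adding predictors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic data: y depends linearly on two features, plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 1, 50)

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))

# Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - p - 1).
n, p = X.shape
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

# The fitted coefficients recover the true slopes (about 3 and -2),
# which is what "interpreting coefficients" means here.
coefs = model.coef_
```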
Classification
Logistic Regression
Logistic Regression Intuition
K-Nearest Neighbors (K-NN)
K-Nearest Neighbor Intuition
SVM Intuition
Kernel SVM
Kernel SVM Intuition
Types of Kernel Functions
Non-Linear Kernel SVR
Naive Bayes
Naive Bayes Intuition
Decision Tree Classification
Decision Tree Classification Intuition
Random Forest Classification
Random Forest Classification Intuition
Classification Model Selection in Python
Evaluating Classification Model Performance
False Positives & False Negatives
Confusion Matrix
Accuracy Paradox
CAP Curve
CAP Curve Analysis
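The accuracy paradox above is easy to demonstrate: on imbalanced data, a classifier that always predicts the majority class scores high accuracy while being useless, and the confusion matrix exposes it (illustrative numbers):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Imbalanced ground truth: 95 negatives, 5 positives.
y_true = np.array([0] * 95 + [1] * 5)

# A useless classifier that always predicts the majority class...
y_all_zero = np.zeros(100, dtype=int)

# ...still scores 95% accuracy: the accuracy paradox.
acc = accuracy_score(y_true, y_all_zero)

# The confusion matrix exposes the problem:
# 5 false negatives and zero true positives.
tn, fp, fn, tp = confusion_matrix(y_true, y_all_zero).ravel()
```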
Clustering
K-Means Clustering
K-Means Random Initialization Trap
K-Means: Selecting the Number of Clusters
K-Means Clustering
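Selecting the number of clusters is usually done with the "elbow" method: inertia (within-cluster sum of squares) drops sharply until k reaches the true number of clusters. A sketch on three synthetic blobs (the data is made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated 2-D blobs (hypothetical data).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [5, 5], [10, 0])])

# Elbow method: fit k-means for several k and record the inertia.
# n_init > 1 re-runs with different initializations, avoiding the
# random-initialization trap.
inertias = {}
for k in range(1, 6):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_
```

The elbow appears at k = 3: inertia falls steeply up to 3 clusters, then flattens.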
Hierarchical Clustering
Hierarchical Clustering: How Dendrograms Work
Association Rule Learning
Apriori
Apriori Intuition
Eclat
Eclat Intuition
Reinforcement Learning
Upper Confidence Bound (UCB)
Upper Confidence Bound
Thompson Sampling
Thompson Sampling Intuition
Algorithm Comparison: UCB vs Thompson Sampling
Natural Language Processing
NLP Intuition
Types of Natural Language Processing
Classical vs Deep Learning Models
Bag-of-Words Model
Introduction to Neural Networks
Introduction to Neural Networks
Introduction to the Perceptron
Neural Network Activation Functions
Cost Functions
Gradient Descent and Backpropagation
TensorFlow Playground
Manual Creation of a Neural Network
Operations / Placeholders and Variables / Session
TensorFlow Basics
Introduction to TensorFlow
TensorFlow Graphs
Variables and Placeholders
TensorFlow - A Neural Network
TensorFlow Regression
TensorFlow Classification
Saving and Restoring Models
Introduction to Artificial Neural Networks (ANN)
Installing TensorFlow
Perceptron Model
Neural Networks
Activation Functions
Multi-Class Classification Considerations
Cost Functions and Gradient Descent
Backpropagation
TensorFlow vs Keras
TF Syntax Basics
Deep Learning
Artificial Neural Networks
The Neuron
The Activation Function
How do Neural Networks work?
How do Neural Networks learn?
Gradient Descent
Stochastic Gradient Descent
Backpropagation
Business Problem Description
ANN
Convolutional Neural Networks
What are convolutional neural networks?
Convolution Operation
ReLU Layer
Pooling
Flattening
Full Connection
Softmax & Cross-Entropy
CNN
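The softmax and cross-entropy step that closes the CNN pipeline can be written out directly in NumPy (a minimal sketch with arbitrary logits, not the course's own code):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, true_class):
    # Negative log-probability of the correct class.
    return -np.log(p[true_class])

# Arbitrary logits for a 3-class problem; class 0 gets the highest score.
logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)

loss_correct = cross_entropy(probs, 0)  # confident and right: low loss
loss_wrong = cross_entropy(probs, 2)    # confident and wrong: high loss
```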
Dimensionality Reduction
Principal Component Analysis (PCA)
Principal Component Analysis (PCA) Intuition
PCA
Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA) Intuition
LDA
Kernel PCA
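PCA's core idea, projecting onto the direction of maximum variance, can be sketched on correlated synthetic data (illustrative, not course code):

```python
import numpy as np
from sklearn.decomposition import PCA

# Correlated 2-D data: almost all variance lies along one direction.
rng = np.random.default_rng(0)
t = rng.normal(size=200)
X = np.column_stack([t, 2.0 * t + rng.normal(0, 0.1, 200)])

# Project onto the top principal component: 2 dimensions -> 1.
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)

# Fraction of total variance captured by that single component.
explained = pca.explained_variance_ratio_[0]
```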
Model Selection & Boosting
k-Fold Cross Validation
Grid Search in Python
XGBoost
Model Selection and Boosting
Recommender Systems
Recommender Systems
Natural Language Processing
Natural Language Processing Theory
Statistics and Probability Refresher, and Python Practice
Mean, Median, Mode
Variation and Standard Deviation
Probability Density Function; Probability Mass Function
Common Data Distributions
Percentiles and Moments
Covariance and Correlation
Conditional Probability
Bayes' Theorem
Recommender Systems
User-Based Collaborative Filtering
Item-Based Collaborative Filtering
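Item-based collaborative filtering can be sketched on a tiny made-up rating matrix: compute cosine similarity between item columns, then predict a missing rating as a similarity-weighted average of the items the user has rated.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows = users, cols = items;
# 0 means "not rated").
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

# Predict user 0's rating for item 2 as a similarity-weighted
# average of the items user 0 has already rated.
rated = np.nonzero(R[0])[0]
weights = sim[2, rated]
pred = (weights @ R[0, rated]) / weights.sum()
```

The prediction comes out low, which matches intuition: item 2 is most similar to item 3, which user 0 rated poorly.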
Data Warehousing Overview: ETL and ELT
Reinforcement Learning
Reinforcement Learning & Q-Learning with Gym
Understanding a Confusion Matrix
Measuring Classifiers (Precision, Recall, F1, ROC, AUC)
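The classifier metrics listed above follow directly from the confusion matrix: precision = TP/(TP+FP), recall = TP/(TP+FN), F1 is their harmonic mean, and ROC-AUC is computed from ranking scores. A sketch with illustrative labels and scores:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

# Hypothetical labels and predictions: 3 TP, 1 FN, 1 FP.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
scores = np.array([.9, .8, .7, .4, .6, .3, .2, .2, .1, .05])

precision = precision_score(y_true, y_pred)  # TP / (TP + FP) = 3/4
recall = recall_score(y_true, y_pred)        # TP / (TP + FN) = 3/4
f1 = f1_score(y_true, y_pred)                # harmonic mean of the two
auc = roc_auc_score(y_true, scores)          # ranking quality of scores
```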
Recurrent Neural Networks (RNNs)
Using an RNN for sentiment analysis
Transfer Learning
Tuning Neural Networks: Learning Rate and Batch Size Hyperparameters
RNN Intuition
The Vanishing Gradient Problem
LSTMs
LSTM and GRU Theory
Evaluating and Improving the RNN
Evaluating the RNN
Improving the RNN
Self-Organizing Maps
SOMs Intuition
How do Self-Organizing Maps Work?
Reading an Advanced SOM
Building a SOM
Boltzmann Machines
Boltzmann Machine Intuition
Boltzmann Machine
Energy-Based Models (EBM)
Contrastive Divergence
Deep Belief Networks
Deep Boltzmann Machines
Intractability
Free Energy
RBM Greedy Layer-Wise Pretraining
The Vanishing Gradient Problem
The Vanishing Gradient Problem Description
AutoEncoders
Training an Autoencoder
Overcomplete hidden layers
Sparse Autoencoders
Denoising Autoencoders
Contractive Autoencoders
Stacked Autoencoders
Deep Autoencoders
Word2Vec Theory
Deep Nets with TensorFlow Abstractions API
Deep Nets with TensorFlow Abstractions API - Estimator API
Deep Nets with TensorFlow Abstractions API - Keras
Deep Nets with TensorFlow Abstractions API - Layers
TensorBoard
AutoEncoders
Autoencoder Basics
Dimensionality Reduction with a Linear Autoencoder
Linear Autoencoder PCA
Stacked Autoencoder
Denoising Autoencoders
Stacked Autoencoders
Testing greedy layer-wise autoencoder training vs. pure backpropagation
Cross Entropy vs. KL Divergence
Deep Autoencoder Visualization Description
Reinforcement Learning with OpenAI Gym
Introduction to Reinforcement Learning with OpenAI Gym
Introduction to OpenAI Gym
OpenAI Gym Setup
OpenAI Gym Env Basics
OpenAI Gym Observations
OpenAI Gym Actions
Simple Neural Network Game
Policy Gradient Theory
Policy Gradient Code
GAN - Generative Adversarial Networks
Introduction to GANs
Principal Components Analysis
How does PCA work?
PCA for NLP
PCA objective function
PCA Application: Naive Bayes
SVD (Singular Value Decomposition)
t-SNE (t-distributed Stochastic Neighbor Embedding)
t-SNE Theory
t-SNE Visualization
t-SNE on the Donut
t-SNE on XOR
Applications to NLP (Natural Language Processing)
Application of PCA and SVD to NLP (Natural Language Processing)
Latent Semantic Analysis in Code
Application of t-SNE + K-Means: Finding Clusters of Related Words
Applications to Recommender Systems
Recommender Systems Section Introduction
Why Autoencoders and RBMs work
Data Preparation and Logistics
Data Preprocessing Code
AutoRec
AutoRec in Code
Categorical RBM for Recommender System Ratings
Generative Modeling Review
What does it mean to Sample?
Sampling Demo: Bayes Classifier
Gaussian Mixture Model Review
Bayes Classifier with GMM
Variational Autoencoders
Variational Autoencoder Architecture
Parameterizing a Gaussian with a Neural Network
The Latent Space, Predictive Distributions and Samples
Cost Function
TensorFlow Implementation
The Reparameterization Trick
Visualizing the Latent Space
Bayesian Perspective
Generative Adversarial Networks (GANs)
GAN - Basic Principles
GAN Cost Function
DCGAN
Batch Normalization Review
Fractionally-Strided Convolution
TensorFlow Implementation
Face Detection Intuition
Haar-like Features
Integral Image
Training Classifiers
Adaptive Boosting (AdaBoost)
Cascading
Face Detection Intuition
Face Detection with OpenCV
Object Detection Intuition
How SSD is different
The Multi-Box Concept
Predicting Object Positions
Image Creation with GANs
Recurrent Neural Networks, Time Series, and Sequence Data
Sequence Data
Forecasting
Autoregressive Linear Model for Time Series Prediction
Proof that the Linear Model Works
Recurrent Neural Networks
RNN Code Preparation
RNN for Time Series Prediction
GRU and LSTM
Natural Language Processing (NLP)
Embeddings
Code Preparation (NLP)
Text Preprocessing
Text Classification with LSTMs
CNNs for Text
Text Classification with CNNs
Recommender Systems
Recommender Systems with Deep Learning
Transfer Learning for Computer Vision
Transfer Learning Theory
Some Pre-trained Models (VGG, ResNet, Inception, MobileNet)
Large Datasets and Data Generators
Deep Reinforcement Learning
Elements of a Reinforcement Learning Problem
States, Actions, Rewards, Policies
Markov Decision Processes (MDPs)
The Return
Value Functions and the Bellman Equation
What does it mean to "learn"?
Epsilon-Greedy
Q-Learning
Deep Q-Learning / DQN
Epsilon-Greedy Theory
Calculating a Sample Mean
Epsilon-Greedy Beginner's Exercise Prompt
Designing Your Bandit Program
Epsilon-Greedy in Code
Comparing Different Epsilons
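Epsilon-greedy on a multi-armed bandit fits in a short NumPy sketch: explore a random arm with probability epsilon, otherwise exploit the arm with the highest sample-mean estimate, updating that estimate incrementally. The reward means and epsilon below are illustrative assumptions:

```python
import numpy as np

# Epsilon-greedy on a 3-armed bandit (hypothetical reward means).
rng = np.random.default_rng(0)
true_means = [0.1, 0.3, 0.9]
eps = 0.1
n_steps = 5000

counts = np.zeros(3)
estimates = np.zeros(3)

for _ in range(n_steps):
    if rng.random() < eps:
        arm = int(rng.integers(3))         # explore: random arm
    else:
        arm = int(np.argmax(estimates))    # exploit: best estimate so far
    reward = rng.normal(true_means[arm], 0.5)
    counts[arm] += 1
    # Incremental sample-mean update (no need to store all rewards).
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

best_arm = int(np.argmax(estimates))
```

With enough steps the estimates converge and the agent pulls the best arm almost all of the time; comparing different epsilons amounts to re-running this loop with different `eps` values.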
Optimistic Initial Values Theory
Optimistic Initial Values Code
UCB1 Theory
UCB1 Beginner's Exercise Prompt
UCB1 Code
Bayesian Bandits / Thompson Sampling Theory
Thompson Sampling Code
Thompson Sampling with Gaussian Reward Theory
Thompson Sampling with Gaussian Reward Code
Nonstationary Bandits
Bandit Summary, Real Data, and Online Learning
On Unusual or Unexpected Strategies of RL
From Bandits to Full Reinforcement Learning
Advanced TensorFlow Usage
What is a Web Service?
TensorFlow Serving, Part 2
TensorFlow Lite (TFLite)
Training with Distributed Strategies
Using the TPU
In-Depth: Loss Functions
Mean Squared Error
Binary Cross Entropy
Categorical Cross Entropy
In-Depth: Gradient Descent
Gradient Descent
Stochastic Gradient Descent
Momentum
Variable and Adaptive Learning Rates
Adam
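The gradient-descent variants above can be compared on a toy 1-D objective. This sketch contrasts plain gradient descent with momentum, where a velocity term accumulates past gradients (f, the learning rate, and beta are illustrative choices; Adam adds adaptive per-parameter learning rates on top of this idea):

```python
import numpy as np

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3).
def grad(w):
    return 2.0 * (w - 3.0)

lr = 0.1

# Plain gradient descent: step directly against the gradient.
w_gd = 0.0
for _ in range(300):
    w_gd -= lr * grad(w_gd)

# Momentum: velocity accumulates a decaying sum of past gradients,
# smoothing the trajectory and speeding up consistent directions.
w_mom, v, beta = 0.0, 0.0, 0.9
for _ in range(300):
    v = beta * v - lr * grad(w_mom)
    w_mom += v
```

Both converge to the minimum at w = 3; on ill-conditioned, higher-dimensional losses the momentum version typically gets there in far fewer steps.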
Links to TF2.0 Notebooks
VGG and Transfer Learning
Transfer Learning
Relationship to Greedy Layer-Wise Pretraining
ResNet (and Inception)
ResNet Architecture
Building ResNet - Strategy
Building ResNet - Conv Block Details
Building ResNet - Conv Block Code
Building ResNet - Identity Block Details
Building ResNet - First Few Layers
1x1 Convolutions
Different sized images using the same network
Object Detection (SSD / RetinaNet)
What is Object Detection?
The Problem of Scale
The Problem of Shape
Using Pretrained RetinaNet
Neural Style Transfer
Style Transfer Section Intro
Style Transfer Theory
Optimizing the Loss
Class Activation Maps
Object Localization Project
Localization Introduction
Deep NLP Intuition
Seq2Seq Architecture
Seq2Seq Training
Beam Search Decoding
Attention Mechanisms
Building a ChatBot with Deep NLP
ChatBot
Building the Seq2Seq Model
Improving & Tuning the Seq2Seq Model
Improving & Tuning the ChatBot
Markov Decision Processes
MDP Section Introduction
Gridworld
Choosing Rewards
The Markov Property
Markov Decision Processes (MDPs)
Future Rewards
Value Functions
The Bellman Equation
Bellman Examples
Optimal Policy and Optimal Value Function
Dynamic Programming
Intro to Dynamic Programming and Iterative Policy Evaluation
Designing Your RL Program
Gridworld in Code
Iterative Policy Evaluation in Code
Windy Gridworld in Code
Iterative Policy Evaluation for Windy Gridworld in Code
Policy Improvement
Policy Iteration
Policy Iteration in Windy Gridworld
Value Iteration
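Value iteration, the Bellman-equation topic this block builds toward, can be sketched on a minimal gridworld. The environment below (a 1x4 chain with a terminal reward and discount 0.9) is a made-up illustration, not the course's gridworld:

```python
import numpy as np

# A 1x4 gridworld chain: states 0..3, state 3 is terminal.
# Actions: left (-1) or right (+1); walls clamp to the grid.
# Entering state 3 yields reward +1; every other step yields 0.
gamma = 0.9
n_states = 4
V = np.zeros(n_states)

def step(s, a):
    s2 = min(max(s + a, 0), n_states - 1)
    r = 1.0 if s2 == 3 else 0.0
    return s2, r

# Value iteration: repeatedly apply the Bellman optimality backup
# V(s) <- max_a [ r + gamma * V(s') ] until the values stop changing.
for _ in range(100):
    new_V = V.copy()
    for s in range(n_states - 1):        # the terminal state stays 0
        new_V[s] = max(r + gamma * V[s2]
                       for s2, r in (step(s, -1), step(s, +1)))
    V = new_V
```

The values propagate backward from the goal: V(2) = 1, V(1) = 0.9, V(0) = 0.81, exactly the discounted returns of the optimal (always-right) policy.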
Monte Carlo
Monte Carlo Intro
Monte Carlo Policy Evaluation
Monte Carlo Policy Evaluation in Code
Policy Evaluation in Windy Gridworld
Monte Carlo Control
Monte Carlo Control in Code
Monte Carlo Control without Exploring Starts
Monte Carlo Control without Exploring Starts in Code
Temporal Difference Learning
Temporal Difference Intro
TD(0) Prediction
TD(0) Prediction in Code
SARSA
SARSA in Code
Q-Learning
Q-Learning in Code
TD Summary
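Tabular Q-learning, the off-policy TD method above, can be sketched on the same 4-state chain idea (a hypothetical environment for illustration): act epsilon-greedily, but update each Q(s, a) toward the greedy target r + gamma * max_a' Q(s', a').

```python
import numpy as np

# Tabular Q-learning on a 4-state chain; reaching state 3 ends
# the episode with reward +1. Actions: 0 = left, 1 = right.
rng = np.random.default_rng(1)
gamma, alpha, eps = 0.9, 0.1, 0.2
Q = np.zeros((4, 2))

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), 3)
    r = 1.0 if s2 == 3 else 0.0
    return s2, r, s2 == 3            # (next state, reward, done)

for _ in range(500):                 # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy behaviour policy.
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Off-policy TD update toward the greedy target.
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)            # greedy policy after learning
```

The learned greedy policy moves right in every non-terminal state, and the Q-values approach the discounted optimal returns (about 1, 0.9, and 0.81). SARSA differs only in the target: it uses the action actually taken next instead of the max.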
Approximation Methods
Approximation Intro
Linear Models for Reinforcement Learning
Features
Monte Carlo Prediction with Approximation
TD(0) Semi-Gradient Prediction
Semi-Gradient SARSA
OpenAI Gym and Basic Reinforcement Learning Techniques
OpenAI Gym Tutorial
Random Search
Saving a Video
CartPole with Bins (Code)
RBF Neural Networks
TD Lambda
N-Step Methods
N-Step in Code
TD Lambda
TD Lambda in Code
TD Lambda Summary
Policy Gradients
Policy Gradient Methods
Policy Gradient in TensorFlow for CartPole
Policy Gradient in Theano for CartPole
Continuous Action Spaces
Deep Q-Learning
Deep Q-Learning Intro
Deep Q-Learning Techniques
Deep Q-Learning in TensorFlow
Pseudocode and Replay Memory
Partially Observable MDPs
Deep Q-Learning Section Summary
A3C
A3C - Theory and Outline
Policy Gradient
Actor-Critic
Twin Delayed DDPG Theory
Introduction and Initialization
The Q-Learning Part
The Policy Learning Part
The Whole Training Process
Twin Delayed DDPG Implementation
Taking Care of Missing Data
Splitting the Dataset into the Training Set and Test Set
Feature Scaling
Encoding Categorical Data