CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING ANNA UNIVERSITY SYLLABUS R2021
CS3491 ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (L T P C: 3 0 2 4)
COURSE OBJECTIVES:
The main objectives of this course are to:
• Study uninformed and heuristic search techniques
• Learn techniques for reasoning under uncertainty
• Introduce Machine Learning and supervised learning algorithms
• Study ensemble techniques and unsupervised learning algorithms
• Learn the basics of deep learning using neural networks
UNIT I PROBLEM SOLVING 9
Introduction to AI – AI Applications – Problem-solving agents – Search algorithms – Uninformed search strategies – Heuristic search strategies – Local search and optimization problems – Adversarial search – Constraint satisfaction problems (CSP)
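As a pointer toward the Unit I lab work, here is a minimal Python sketch of breadth-first search, one of the uninformed search strategies listed above. The adjacency-list graph and node names are illustrative only and are not prescribed by the syllabus.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns a start-to-goal path, or None if the goal is unreachable."""
    frontier = deque([start])
    parent = {start: None}          # doubles as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:            # reconstruct the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parent:
                parent[neighbour] = node
                frontier.append(neighbour)
    return None

# Illustrative graph (made up for the example).
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E'], 'E': []}
print(bfs(graph, 'A', 'E'))         # ['A', 'B', 'D', 'E']
```

Depth-first search follows the same structure with a stack (or recursion) in place of the queue.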
UNIT II PROBABILISTIC REASONING 9
Acting under uncertainty – Bayesian inference – Naïve Bayes models. Probabilistic reasoning – Bayesian networks – Exact inference in BN – Approximate inference in BN – Causal networks.
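A short sketch of Bayesian inference via Bayes' rule, the building block behind the Naïve Bayes models and Bayesian networks named above. The prior, sensitivity and false-positive numbers are made up purely for illustration.

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """P(H | evidence) by Bayes' rule:
    posterior = P(e|H) P(H) / [P(e|H) P(H) + P(e|~H) P(~H)]."""
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / evidence

# Illustrative numbers only: 1% prior, 95% sensitivity, 5% false-positive rate.
print(bayes_posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05))
# ≈ 0.161 – a positive result raises the probability from 1% to about 16%.
```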
UNIT III SUPERVISED LEARNING 9
Introduction to machine learning – Linear Regression Models: Least squares, single & multiple variables, Bayesian linear regression, gradient descent. Linear Classification Models: Discriminant function – Probabilistic discriminative model – Logistic regression; Probabilistic generative model – Naïve Bayes; Maximum margin classifier – Support Vector Machine; Decision Trees, Random Forests
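A minimal sketch of fitting a single-variable linear regression model by gradient descent on the mean squared error, matching the least-squares and gradient-descent topics above. The synthetic data (y ≈ 3x + 1 plus noise) is illustrative only.

```python
import numpy as np

def linear_regression_gd(X, y, lr=0.1, epochs=1000):
    """Fit y ≈ X @ w + b by minimising mean squared error with gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        error = X @ w + b - y                 # residuals, shape (n,)
        w -= lr * (2 / n) * (X.T @ error)     # ∂MSE/∂w
        b -= lr * (2 / n) * error.sum()       # ∂MSE/∂b
    return w, b

# Synthetic data for illustration: y = 3x + 1 with a little Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = 3 * X[:, 0] + 1 + rng.normal(0, 0.05, size=100)
w, b = linear_regression_gd(X, y)
print(w, b)   # ≈ [3.], ≈ 1.0 (up to noise)
```

The same loop with the closed-form normal equations replaced in would give the exact least-squares solution; gradient descent is shown because the unit names it explicitly.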
UNIT IV ENSEMBLE TECHNIQUES AND UNSUPERVISED LEARNING 9
Combining multiple learners: Model combination schemes, Voting; Ensemble Learning: Bagging, Boosting, Stacking; Unsupervised learning: K-means; Instance-based learning: KNN; Gaussian mixture models and Expectation Maximization
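A compact sketch of K-means clustering from Unit IV, assuming NumPy and two synthetic Gaussian blobs for illustration; the random initialisation and stopping rule shown are the simplest choices, not the only ones.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate between assigning points to the nearest
    centroid and recomputing each centroid as the mean of its cluster."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance of every point to every centroid, shape (n, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two illustrative Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)   # roughly (0, 0) and (3, 3), in some order
```

Replacing the hard assignment with per-cluster responsibilities turns the same loop into EM for a Gaussian mixture model, which is the soft-clustering counterpart named in this unit.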
UNIT V NEURAL NETWORKS 9
Perceptron – Multilayer perceptron, activation functions, network training – gradient descent optimization – stochastic gradient descent, error backpropagation, from shallow networks to deep networks – Unit saturation (aka the vanishing gradient problem) – ReLU, hyperparameter tuning, batch normalization, regularization, dropout.
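A minimal sketch of the perceptron learning rule on the linearly separable AND function; XOR, by contrast, needs the multilayer perceptron and backpropagation covered above. The data and hyperparameters are illustrative only.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt perceptron learning rule for labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0    # step activation
            update = lr * (yi - pred)            # non-zero only on mistakes
            w += update * xi
            b += update
    return w, b

# Logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if x @ w + b > 0 else 0) for x in X])   # [0, 0, 0, 1]
```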
45 PERIODS
PRACTICAL EXERCISES: 30 PERIODS
- Implementation of Uninformed search algorithms (BFS, DFS)
- Implementation of Informed search algorithms (A*, memory-bounded A*); a sample A* sketch follows this list
- Implement naïve Bayes models
- Implement Bayesian Networks
- Build Regression models
- Build decision trees and random forests
- Build SVM models
- Implement ensembling techniques
- Implement clustering algorithms
- Implement EM for Bayesian networks
- Build simple NN models
- Build deep learning NN models
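For the informed-search exercise above, here is a minimal Python sketch of A* on a small grid, using the Manhattan distance as an admissible heuristic; the grid, start and goal cells are illustrative only, not part of the prescribed exercises.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2-D grid of 0 (free) / 1 (wall) cells with unit step cost.
    Returns the optimal path cost, or None if the goal is unreachable."""
    def h(cell):                                  # admissible Manhattan heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]            # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float('inf')):
            continue                              # stale heap entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# Illustrative 3x4 grid: 1s are walls.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))   # 6
```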
COURSE OUTCOMES:
At the end of this course, the students will be able to:
CO1: Use appropriate search algorithms for problem solving
CO2: Apply reasoning under uncertainty
CO3: Build supervised learning models
CO4: Build ensembling and unsupervised models
CO5: Build deep learning neural network models
TOTAL: 75 PERIODS
TEXT BOOKS:
- Stuart Russell and Peter Norvig, “Artificial Intelligence: A Modern Approach”, Fourth Edition, Pearson Education, 2021.
- Ethem Alpaydin, “Introduction to Machine Learning”, MIT Press, Fourth Edition, 2020.
REFERENCES:
- Dan W. Patterson, “Introduction to Artificial Intelligence and Expert Systems”, Pearson Education, 2007
- Kevin Knight, Elaine Rich, and Shivashankar B. Nair, “Artificial Intelligence”, McGraw Hill, 2008
- Patrick H. Winston, “Artificial Intelligence”, Third Edition, Pearson Education, 2006
- Deepak Khemani, “Artificial Intelligence”, Tata McGraw Hill Education, 2013 (http://nptel.ac.in/)
- Christopher M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006.
- Tom Mitchell, “Machine Learning”, McGraw Hill, 1997.
- Charu C. Aggarwal, “Data Classification: Algorithms and Applications”, CRC Press, 2014
- Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar, “Foundations of Machine Learning”, MIT Press, 2012.
- Ian Goodfellow, Yoshua Bengio, Aaron Courville, “Deep Learning”, MIT Press, 2016