Welcome to the "Applied Machine Learning" repository! This repository serves as a comprehensive resource for understanding and implementing various machine learning concepts in real-world scenarios.
Jupyter Notebooks are provided for each topic, containing detailed explanations, code implementations, and practical examples. These notebooks are designed to be beginner-friendly yet offer insights for advanced users.
- Navigate through the folders to find specific topics of interest. Each topic includes a dedicated notebook with step-by-step implementations and theoretical explanations.
- Clone or download the repository to your local machine and open the Jupyter Notebooks in your favorite environment. Follow along with the code and experiment with the provided examples.
- Feel free to contribute by adding your own implementations, enhancements, or additional topics. Your contributions can help create a valuable learning resource for the community.
Clone the repository:

```shell
git clone https://github.com/Syeda-Farhat/Applied-Machine-Learning-.git
```

Start exploring the notebooks and enhance your machine learning skills!
Your feedback is highly valuable. If you encounter any issues, have suggestions, or want to contribute, please open an issue or submit a pull request.
The repository will be regularly updated with new notebooks covering additional applied machine learning topics. Stay tuned for more content!
- Data Preprocessing and EDA on numeric data: dataset.
- Data Preprocessing and EDA on text data: dataset.
[Multi-class Classification]
- Multinomial Logistic Regression
- Support Vector Machines (SVM) with multi-class support
- Decision Trees for multi-class problems
- Random Forest for multi-class problems
- Neural Networks for multi-class classification
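A minimal sketch of multinomial logistic regression, the first item above, using scikit-learn (an assumption; the repository's notebooks may use a different library). The `lbfgs` solver fits a single softmax model over all classes by default:

```python
# Multinomial logistic regression on the bundled iris dataset (scikit-learn assumed installed).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# With the default lbfgs solver, LogisticRegression optimizes the
# multinomial (softmax) loss across all three iris classes at once.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The same train/predict/score pattern carries over to the SVM, tree, and forest classifiers listed above.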
[Handling Imbalanced Data]
- Techniques for handling imbalanced datasets
- Resampling methods (oversampling, undersampling)
- Cost-sensitive learning
- Ensemble methods for imbalanced data
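As a sketch of the resampling idea above, random oversampling can be done with `sklearn.utils.resample` (the labels here are synthetic, purely for illustration):

```python
# Random oversampling of a minority class with scikit-learn's resample utility.
from collections import Counter

import numpy as np
from sklearn.utils import resample

# Toy imbalanced problem: 90 majority samples (class 0), 10 minority (class 1).
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

minority = y == 1
X_min, y_min = X[minority], y[minority]
X_maj, y_maj = X[~minority], y[~minority]

# Sample the minority class with replacement until it matches the majority size.
X_min_up, y_min_up = resample(
    X_min, y_min, replace=True, n_samples=len(y_maj), random_state=0
)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([y_maj, y_min_up])
```

For cost-sensitive learning, most scikit-learn classifiers accept `class_weight="balanced"` instead, which reweights the loss rather than duplicating rows.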
[Text Classification]
- Natural Language Processing for text classification
- Bag-of-Words and TF-IDF representations
- Word embeddings (e.g., Word2Vec, GloVe)
- Recurrent Neural Networks (RNN) for text classification
- Transformer models (e.g., BERT) for text classification
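A minimal TF-IDF pipeline sketch for the classical end of the list above (the four example sentences are made up for illustration; real notebooks would use a proper corpus):

```python
# Bag-of-words / TF-IDF text classification with a linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical sentiment corpus: 1 = positive, 0 = negative.
texts = [
    "great movie, loved it",
    "terrible plot, boring",
    "wonderful acting",
    "awful and dull",
]
labels = [1, 0, 1, 0]

# TfidfVectorizer turns raw strings into sparse TF-IDF features;
# the pipeline chains it with the classifier so predict() accepts raw text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
pred = model.predict(["loved the wonderful acting"])
```

RNN and transformer approaches replace the vectorizer with learned embeddings, but the fit/predict interface stays conceptually the same.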
[Ensemble Methods]
- Bagging (e.g., Bootstrap Aggregating)
- Boosting (e.g., AdaBoost, Gradient Boosting)
- Stacking multiple models
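A quick sketch comparing the first two ensemble families above on a bundled dataset, assuming scikit-learn:

```python
# Bagging vs. boosting on the iris dataset, compared by 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Bagging: 50 trees trained on bootstrap resamples, predictions averaged.
bagging = BaggingClassifier(n_estimators=50, random_state=0)
# Boosting: 50 weak learners fit sequentially, each reweighting prior errors.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

bag_acc = cross_val_score(bagging, X, y, cv=5).mean()
boost_acc = cross_val_score(boosting, X, y, cv=5).mean()
```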
[Linear Regression]
- Simple Linear Regression
- Multiple Linear Regression
- Polynomial Regression
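Polynomial regression, the last item above, is just linear regression on expanded features. A sketch on synthetic quadratic data (the data-generating function here is invented for illustration):

```python
# Degree-2 polynomial regression via a PolynomialFeatures + LinearRegression pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100).reshape(-1, 1)
# Hypothetical target: a quadratic signal plus a little Gaussian noise.
y = 0.5 * X.ravel() ** 2 - X.ravel() + rng.normal(scale=0.1, size=100)

# PolynomialFeatures appends x^2 to the design matrix; the model stays linear in its weights.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
r2 = model.score(X, y)
```

Dropping `PolynomialFeatures` from the pipeline gives plain simple/multiple linear regression.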
[Regularization in Regression]
- Ridge Regression
- Lasso Regression
- Elastic Net Regression
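A sketch contrasting ridge and lasso on synthetic data where only two of ten features matter (the data setup is hypothetical). Lasso's L1 penalty drives irrelevant coefficients exactly to zero, while ridge's L2 penalty only shrinks them:

```python
# Ridge (L2) vs. Lasso (L1) regularization on a sparse linear problem.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only features 0 and 1 carry signal; the other eight are noise.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# Lasso typically zeroes out the irrelevant coefficients entirely.
n_zero_lasso = int(np.sum(np.isclose(lasso.coef_, 0.0)))
```

Elastic Net (`sklearn.linear_model.ElasticNet`) blends both penalties via its `l1_ratio` parameter.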
[Decision Trees for Regression]
- Regression Trees
- Random Forest for regression
- Gradient Boosted Trees for regression
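A sketch of the forest and boosted-tree regressors above on synthetic data (scikit-learn assumed; `make_regression` generates a hypothetical linear problem):

```python
# Random forest and gradient-boosted trees for regression on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Random forest: averages many deep trees grown on bootstrap samples.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Gradient boosting: shallow trees fit sequentially to the residuals.
gbt = GradientBoostingRegressor(random_state=0).fit(X, y)

rf_r2 = rf.score(X, y)
gbt_r2 = gbt.score(X, y)
```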
[Support Vector Machines (SVM) for Regression]
- Support Vector Regression (SVR)
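A minimal SVR sketch fitting a sine curve with an RBF kernel (the data and hyperparameters are illustrative choices, not tuned values):

```python
# Support Vector Regression with an RBF kernel on a noiseless sine curve.
import numpy as np
from sklearn.svm import SVR

X = np.linspace(0, 2 * np.pi, 100).reshape(-1, 1)
y = np.sin(X).ravel()

# C controls regularization strength; epsilon sets the no-penalty tube
# around predictions inside which errors are ignored.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
r2 = svr.score(X, y)
```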
[Ensemble Methods for Regression]
- Bagging and boosting techniques for regression tasks
- Stacking models for improved regression performance
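A sketch of stacking for regression with scikit-learn's `StackingRegressor` (the base learners and synthetic data are illustrative choices):

```python
# Stacking: a meta-model learns to combine base regressors' predictions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)

# Base estimators are fit with internal cross-validation; their out-of-fold
# predictions train the final Ridge meta-estimator.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
        ("ridge", Ridge()),
    ],
    final_estimator=Ridge(),
)
stack.fit(X, y)
score = stack.score(X, y)
```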