Trying to Teach Freshmen Robotics 2
Weeks 5 to 8
Stuff that I sent over
- PyTorch
- Robotic Vision (with OpenCV)
Evidently, progress in the stuff I could teach was slow owing to academics (ironic, isn’t it?). Regardless, they are still interested in RRC and a couple of them got part-time internships in a robotics startup (Yay!).
PyTorch
Go beyond the basics and dive into the aspects of PyTorch that will be essential for applying neural networks to robotics problems.
- Getting Started with PyTorch:
- Familiarize yourself with the core components of PyTorch.
- Resource: Deep Learning with PyTorch: A 60 Minute Blitz
- Data Loaders and Datasets:
- PyTorch’s `torch.utils.data.Dataset` and `DataLoader` classes are designed to simplify data handling (a short sketch follows this list).
- Key Aspects:
- Custom Datasets: How to structure and preprocess your robotics data for training.
- Batching & Shuffling: Efficiently loading data during training to improve performance.
- Parallel Data Loading: Leveraging multiple worker processes to accelerate your training pipeline.
- Resource: The Data Loading Tutorial
- Optimization Theory:
- Understanding the mathematics and intuition behind optimization is crucial.
- Focus Areas:
- Gradient Descent and Variants: Learn why and how different optimizers work (SGD, Adam, etc.).
- Tuning Hyperparameters: Experiment with learning rates, momentum, and other parameters through `torch.optim` (a short sketch follows this list).
- Resource: The PyTorch Optimizers Documentation is a solid starting point.
- Other stuff:
- You have done quite a bit of this in the mathy bits earlier and will redo it in the mobile robotics course. For the time being, take another look at this: Intro to Optimization
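
To make the `Dataset`/`DataLoader` ideas above concrete, here is a minimal sketch. The `DrivingDataset` class and its (image, steering-angle) samples are made-up placeholders for whatever robotics data you end up using.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class DrivingDataset(Dataset):
    """Wraps a list of (image_tensor, label) pairs so DataLoader can batch them."""
    def __init__(self, samples):
        self.samples = samples                        # e.g. [(3x64x64 tensor, float), ...]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        image, label = self.samples[idx]
        return image, torch.tensor(label, dtype=torch.float32)

if __name__ == "__main__":
    # Dummy data so the sketch runs on its own.
    samples = [(torch.randn(3, 64, 64), float(i)) for i in range(100)]
    dataset = DrivingDataset(samples)

    # Batching, shuffling, and parallel loading are all DataLoader arguments;
    # num_workers > 0 uses worker processes to prepare batches in the background.
    loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

    for images, labels in loader:
        print(images.shape, labels.shape)             # [16, 3, 64, 64] and [16]
        break
```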
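
And here is a minimal sketch of a training loop where only the optimizer line changes; the tiny linear model and random data are purely illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
x, y = torch.randn(256, 10), torch.randn(256, 1)

# Same loop, different update rules: swap the optimizer line and compare behaviour.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # adaptive alternative

loss_fn = nn.MSELoss()
for step in range(100):
    optimizer.zero_grad()           # clear gradients from the previous step
    loss = loss_fn(model(x), y)     # forward pass
    loss.backward()                 # backpropagate to fill .grad on each parameter
    optimizer.step()                # apply the update rule (SGD with momentum here)

print(f"final loss: {loss.item():.4f}")
```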
Robotic Vision with OpenCV
- Introduction to OpenCV:
- Resource: The OpenCV-Python Tutorials for an overview.
- Multi-view Geometry:
- A super important thing in robotic vision is the ability to perceive depth and spatial relationships using multiple images.
- Focus Areas:
- Camera Calibration: Learn to determine intrinsic and extrinsic camera parameters to correct lens distortions (a short sketch follows this list).
- Resource: The Camera Calibration Tutorial guides you through this process.
- Epipolar Geometry & Stereo Vision: Understand how to estimate depth and recover 3D information from two or more camera views.
- Resource: Multi-view Geometry is perhaps the best book available. There are a few copies in the library as well.
- Robust Vision Pipelines:
- Key Concepts:
- Feature Detection: Experiment with detectors such as SIFT, SURF, or ORB to recognize and match key points in images.
- Real-time Processing: Build pipelines that can handle continuous video streams and process images on the fly (a short sketch follows this list).
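
For the camera calibration item above, here is a minimal sketch of the standard OpenCV chessboard workflow. The `calib_images/` folder and the 9×6 inner-corner board size are assumptions; the tutorial linked above covers the details.

```python
import glob
import cv2
import numpy as np

board_size = (9, 6)                                   # inner corners of the chessboard
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []                       # 3D board points and 2D image corners
gray = None
for path in glob.glob("calib_images/*.png"):          # hypothetical folder of calibration shots
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics (camera matrix, distortion coefficients) plus per-view extrinsics.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("Camera matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```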
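
And for the feature detection and real-time processing items, a minimal sketch of an ORB matching loop over a webcam stream; the camera index 0 and the top-30 match count are arbitrary choices.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)                    # fast, free alternative to SIFT/SURF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)                              # default webcam
prev_kp, prev_des, prev_frame = None, None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, des = orb.detectAndCompute(gray, None)         # keypoints + binary descriptors

    if prev_des is not None and des is not None:
        # Match descriptors between consecutive frames and draw the best 30 matches.
        matches = sorted(matcher.match(prev_des, des), key=lambda m: m.distance)
        vis = cv2.drawMatches(prev_frame, prev_kp, frame, kp, matches[:30], None)
        cv2.imshow("ORB matches", vis)

    prev_kp, prev_des, prev_frame = kp, des, frame
    if cv2.waitKey(1) & 0xFF == ord("q"):              # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```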
Final Things
Once you’re done, start reading about SLAM pipelines.