Robotic Vision 2018

Enabling robots to process visual data from the world


About the Course


Course Description:

This course trains students to participate in a world-class competition: we will target the 2018 RobotX Competition (https://www.robotx.org/). The course covers fundamental and advanced topics in vision for mobility tasks, including challenges that combine object recognition, pose estimation, motion planning, and SLAM. We will systematically study each component used by the winning teams in 2014 and 2016, as well as cutting-edge methods that may further improve performance. Students will form teams to develop term projects, and we encourage in-class discussion and whiteboard presentations as part of class participation. This is a "learning-by-doing" course that includes an in-class lab/tutorial in each teaching module. We will use "Duckietown" (the open course MIT 2.166 "Autonomous Vehicles") as the platform and focus on advanced topics using Gazebo, 3D perception, and deep learning.

Class Organization:

  • Task-oriented: We target the RobotX Competition in 2018.
  • Learning by Doing: The course has lecture and lab sections, with hands-on tutorial/lab materials each week.
  • In-class Discussion: We encourage discussion using a whiteboard for each team as part of class participation, and we also aim to train students' presentation skills.
  • Project-based Learning: Each team will work on a term project related to the RobotX Competition.

Textbooks and Resources:

  • 1. Computer Vision: Algorithms and Applications, Richard Szeliski, Springer, 2010.
  • 2. Robotics, Vision and Control, Peter Corke, Springer, 2011.
  • 3. Introduction to Autonomous Robots, Nikolaus Correll, 2015.
  • 4. RobotX Competition (Website Link)

Assignments:

We have an in-class tutorial/lab each week. The lab materials typically include three specific tasks (programming / algorithm / system work), and students are expected to finish them during class.

Exams:

There will be no midterm/final exams.

Evaluations:

  • In-Class Lab/Tutorials (40%)
  • Midterm Project Proposal (20%)
  • Final Presentation and Term Paper/Report (40%)

Notices


Prerequisite:

  • Due to the tight competition schedule, we will focus on advanced topics directly relevant to the competition. We assume that you are familiar with the Duckietown steps A to H, which are open-source, self-learning materials. 28 Duckiebots will be available after 1/20. If you lack prior background for this course, please feel free to get started early during winter break!
  • Due to the limited robots and computing resources for Gazebo and deep learning, we will ask all attending students to fill in a self-evaluation form and motivation statement. We apologize that we can only enroll a limited number of students, based on 1) motivation; 2) knowledge of Unix/Ubuntu, Vim, and Git; 3) Duckietown and ROS; 4) computer vision and image processing; and 5) your own computing resources (such as a native Ubuntu machine with a GTX 1060 or above).
  • We apologize that we cannot host audit students, due to limited resources and the in-class discussions in small groups/teams.
  • The order of registration may be used in a potential lottery.
  • This course involves a fair amount of probability, linear algebra, and programming. Students are required to have taken image processing or computer vision. Familiarity with Unix-like systems, Git, C++, the Robot Operating System (ROS), and Python is needed.

Important Notes:

  • Given the limited robots, computing resources, and teaching assistants in our teaching lab, we expect a class size of 25 students.
  • The prerequisite courses and skills are ROS, image processing, and so on; a certain familiarity with Ubuntu, Git, Python, and similar tools is also required.
  • Students who lack this background are advised to first borrow a Duckiebot from the TAs and self-study using the OCW and online hands-on courses.
  • In the first week of class, all students will be asked to fill in a self-evaluation form; priority goes to students with strong motivation and familiarity with the prerequisite courses and skills, and if the class is oversubscribed, enrollment will be decided by lottery.

Teaching Staff



Prof. Nick Wang

Instructor


Daniel Huang

Teaching Assistant


Tony Hsiao

Teaching Assistant


David Chen

Teaching Assistant


Monica Lin

Teaching Assistant

Schedule


Week  | Date | Topic              | Lecture/Discussion                                    | Lab
I     | 2/22 | Gazebo             | Gazebo and Virtual Challenges                         | Self-evaluation
II    | 3/1  |                    | Introduction to Robotic Vision and RobotX Competition | Subscribe/Publish ROS Messages in Gazebo
III   | 3/8  |                    | Gazebo for Marine Robotics                            | Gazebo USV I
IV    | 3/15 | Duckietown         | Lane Following Revisit                                | Duckietown Virtual Lane Following
V     | 3/22 | From DT to RX      | AprilTag; How to Read a Research Paper                | AprilTag Navi
VI    | 3/29 |                    | HSV Filter & FSM                                      | Light Buoy Sequence Detection
VII   | 4/5  |                    | Image Features: Harris Corner, FAST, SIFT, & MSER     | Placard Feature Detector
VIII  | 4/12 |                    | Bayes Filter                                          | GPS Navi
IX    | 4/19 | Midterm            | Proposal Pitch                                        |
X     | 4/26 | 3D Perception      | 3D Sensors and Data Logging                           | Velodyne & RGBD Camera Calibration
XI    | 5/3  |                    | Point Cloud and ICP                                   | PCL Normal RANSAC ICP
XII   | 5/10 |                    | Laser-Based Feature Detection                         | 3D Feature Tracking
XIII  | 5/17 | Deep Learning      | Intro to Deep Learning                                | Layer Computation
XIV   | 5/24 |                    | Network Structure                                     | Network Structure and Surgery
XV    | 5/31 |                    | DL Prediction                                         | Prediction from Pre-trained Models
XVI   | 6/7  |                    | DL Training                                           | Training with Freiburg Groceries
XVII  | 6/14 |                    | Feature Engineering vs. Deep Learning                 | Placard Deep Learning
XVIII | 6/21 | Final Presentation |                                                       |
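To give a taste of the Week VIII topic, here is a minimal sketch of a discrete Bayes filter for 1-D robot localization. The corridor map, motion noise, and sensor probabilities below are invented for this illustration and are not part of the course materials:

```python
# Discrete Bayes filter sketch: a robot on a circular 1-D corridor of
# cells, where some cells contain a "door" that the sensor can detect.
# All probabilities here are illustrative assumptions.

def normalize(belief):
    """Scale a list of non-negative weights so they sum to 1."""
    s = sum(belief)
    return [b / s for b in belief]

def predict(belief, move=1, p_correct=0.8):
    """Motion update: shift belief by `move` cells (circular corridor);
    with probability 1 - p_correct the robot fails to move."""
    n = len(belief)
    new = [0.0] * n
    for i in range(n):
        new[(i + move) % n] += p_correct * belief[i]
        new[i] += (1 - p_correct) * belief[i]
    return new

def update(belief, z, world, p_hit=0.9, p_miss=0.2):
    """Measurement update: weight each cell by the likelihood of
    observing z there, then renormalize."""
    post = []
    for b, cell in zip(belief, world):
        likelihood = p_hit if cell == z else p_miss
        post.append(likelihood * b)
    return normalize(post)

# Example: 5-cell corridor with doors at cells 0 and 2.
world = [1, 0, 1, 0, 0]
belief = [0.2] * 5                   # uniform prior
belief = update(belief, 1, world)    # robot sees a door
belief = predict(belief)             # robot moves one cell right
belief = update(belief, 0, world)    # robot now sees no door
```

After this sense-move-sense cycle the belief concentrates on the cells one step to the right of a door, which is exactly the alternating predict/update structure used for tasks like the GPS navigation lab.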

Contact us


If you are interested, please visit our ARG-NCTU website or contact us using the following information.

(EE627)1001 University Road, Hsinchu,
Taiwan 30010, ROC

David Chen (TA)

ccpwearth@gmail.com