Robot Learning by Visual Observation

Overview

This book presents programming by demonstration for robot learning from observations, with a focus on the trajectory level of task abstraction.

  • Discusses methods for optimization of task reproduction, such as reformulation of task planning as a constrained optimization problem
  • Focuses on regression approaches, such as Gaussian mixture regression, spline regression, and locally weighted regression (a brief sketch of the first of these follows this list)
  • Concentrates on the use of vision sensors for capturing motions and actions during task demonstration by a human task expert
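
As a purely illustrative aside, the following is a minimal sketch of the first of those regression approaches, Gaussian mixture regression, applied to toy trajectory data. The toy demonstrations, variable names, and use of scikit-learn's GaussianMixture are assumptions made for the example and are not code from the book.

  # Minimal GMR sketch: fit a Gaussian mixture to (time, position) samples from
  # several noisy toy "demonstrations", then condition on time to reproduce a
  # single generalized trajectory. Illustrative only; not taken from the book.
  import numpy as np
  from sklearn.mixture import GaussianMixture

  np.random.seed(0)
  t = np.tile(np.linspace(0.0, 1.0, 100), 5)                  # 5 demonstrations
  x = np.sin(2.0 * np.pi * t) + 0.05 * np.random.randn(t.size)
  data = np.column_stack([t, x])                               # rows: [time, position]

  # Model the joint density p(t, x) with a Gaussian mixture.
  gmm = GaussianMixture(n_components=5, covariance_type="full").fit(data)

  def gmr(t_query):
      """Return E[x | t = t_query], the regression estimate at a query time."""
      means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
      # Responsibility of each component for the query time (marginal over t).
      h = np.array([w * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                    / np.sqrt(2.0 * np.pi * c[0, 0])
                    for w, m, c in zip(weights, means, covs)])
      h /= h.sum()
      # Blend the component-wise conditional means by responsibility.
      cond = [m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
              for m, c in zip(means, covs)]
      return float(np.dot(h, cond))

  reproduced = [gmr(tq) for tq in np.linspace(0.0, 1.0, 100)]

Conditioning the joint mixture on time in this way is what turns a model of several demonstrations into a single smooth trajectory suitable for reproduction.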

Product Details

ISBN-13: 9781119091998
Publisher: Wiley
Publication date: 01/13/2017
Sold by: JOHN WILEY & SONS
Format: eBook
Pages: 208
File size: 14 MB
Note: This product may take a few minutes to download.

About the Author

ALEKSANDAR VAKANSKI is a Clinical Assistant Professor in Industrial Technology at the University of Idaho, Idaho Falls, USA. He received his Ph.D. from the Department of Mechanical and Industrial Engineering at Ryerson University, Toronto, Canada, in 2013. His research interests encompass robotics and mechatronics, artificial intelligence, computer vision, and control systems.

FARROKH JANABI-SHARIFI is a Professor of Mechanical and Industrial Engineering and the Director of the Robotics, Mechatronics and Automation Laboratory (RMAL) at Ryerson University, Toronto, Canada. He is currently a Technical Editor of IEEE/ASME Transactions on Mechatronics, an Associate Editor of the International Journal of Optomechatronics, and an Editorial Member of the Journal of Robotics and The Open Cybernetics and Systematics Journal. His research interests include optomechatronic systems, with a focus on image-guided control and planning.

Table of Contents

Preface x 

List of Abbreviations xiii 

1 Introduction 1 

1.1 Robot Programming Methods 2 

1.2 Programming by Demonstration 3 

1.3 Historical Overview of Robot PbD 4 

1.4 PbD System Architecture 6 

1.4.1 Learning Interfaces 8 

1.4.1.1 Sensor-Based Techniques 10 

1.4.2 Task Representation and Modeling 13 

1.4.2.1 Symbolic Level 14 

1.4.2.2 Trajectory Level 16 

1.4.3 Task Analysis and Planning 18 

1.4.3.1 Symbolic Level 18 

1.4.3.2 Trajectory Level 19 

1.4.4 Program Generation and Task Execution 20 

1.5 Applications 21 

1.6 Research Challenges 25 

1.6.1 Extracting the Teacher’s Intention from Observations 26 

1.6.2 Robust Learning from Observations 27 

1.6.2.1 Robust Encoding of Demonstrated Motions 27 

1.6.2.2 Robust Reproduction of PbD Plans 29 

1.6.3 Metrics for Evaluation of Learned Skills 29 

1.6.4 Correspondence Problem 30 

1.6.5 Role of the Teacher in PbD 31 

1.7 Summary 32 

References 33 

2 Task Perception 43 

2.1 Optical Tracking Systems 43 

2.2 Vision Cameras 44 

2.3 Summary 46 

References 46 

3 Task Representation 49 

3.1 Level of Abstraction 50 

3.2 Probabilistic Learning 51 

3.3 Data Scaling and Aligning 51 

3.3.1 Linear Scaling 52 

3.3.2 Dynamic Time Warping (DTW) 52 

3.4 Summary 55 

References 55 

4 Task Modeling 57 

4.1 Gaussian Mixture Model (GMM) 57 

4.2 Hidden Markov Model (HMM) 59 

4.2.1 Evaluation Problem 61 

4.2.2 Decoding Problem 62 

4.2.3 Training Problem 62 

4.2.4 Continuous Observation Data 63 

4.3 Conditional Random Fields (CRFs) 64 

4.3.1 Linear Chain CRF 65 

4.3.2 Training and Inference 66 

4.4 Dynamic Motion Primitives (DMPs) 68 

4.5 Summary 70 

References 70 

5 Task Planning 73 

5.1 Gaussian Mixture Regression 73 

5.2 Spline Regression 74 

5.2.1 Extraction of Key Points as Trajectories Features 75 

5.2.2 HMM-Based Modeling and Generalization 80 

5.2.2.1 Related Work 80 

5.2.2.2 Modeling 81 

5.2.2.3 Generalization 83 

5.2.2.4 Experiments 87 

5.2.2.5 Comparison with Related Work 100 

5.2.3 CRF Modeling and Generalization 107 

5.2.3.1 Related Work 107 

5.2.3.2 Feature Functions Formation 107 

5.2.3.3 Trajectories Encoding and Generalization 109 

5.2.3.4 Experiments 111 

5.2.3.5 Comparisons with Related Work 115 

5.3 Locally Weighted Regression 117 

5.4 Gaussian Process Regression 121 

5.5 Summary 122 

References 123 

6 Task Execution 129 

6.1 Background and Related Work 129 

6.2 Kinematic Robot Control 132 

6.3 Vision-Based Trajectory Tracking Control 134 

6.3.1 Image-Based Visual Servoing (IBVS) 134 

6.3.2 Position-Based Visual Servoing (PBVS) 135 

6.3.3 Advanced Visual Servoing Methods 141 

6.4 Image-Based Task Planning 141 

6.4.1 Image-Based Learning Environment 141 

6.4.2 Task Planning 142 

6.4.3 Second-Order Conic Optimization 143 

6.4.4 Objective Function 144 

6.4.5 Constraints 146 

6.4.5.1 Image-Space Constraints 146 

6.4.5.2 Cartesian Space Constraints 149 

6.4.5.3 Robot Manipulator Constraints 150 

6.4.6 Optimization Model 152 

6.5 Robust Image-Based Tracking Control 156 

6.5.1 Simulations 157 

6.5.1.1 Simulation 1 158 

6.5.1.2 Simulation 2 161 

6.5.2 Experiments 162 

6.5.2.1 Experiment 1 166 

6.5.2.2 Experiment 2 173 

6.5.2.3 Experiment 3 177 

6.5.3 Robustness Analysis and Comparisons with Other Methods 177 

6.6 Discussion 183 

6.7 Summary 185 

References 185 

Index 189
