Computer Vision Toolbox™
User's Guide

R2023a
How to Contact MathWorks

Latest news: www.mathworks.com

Sales and services: www.mathworks.com/sales_and_services

User community: www.mathworks.com/matlabcentral

Technical support: www.mathworks.com/support/contact_us

Phone: 508-647-7000

The MathWorks, Inc.
1 Apple Hill Drive
Natick, MA 01760-2098
Computer Vision Toolbox™ User's Guide
© COPYRIGHT 2004–2023 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied
only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form
without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through
the federal government of the United States. By accepting delivery of the Program or Documentation, the government
hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer
software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014.
Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain
to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and
Documentation by the federal government (or other entity acquiring for or through the federal government) and shall
supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is
inconsistent in any respect with federal procurement law, the government agrees to return the Program and
Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See
www.mathworks.com/trademarks for a list of additional trademarks. Other product or brand names may be
trademarks or registered trademarks of their respective holders.
Patents
MathWorks products are protected by one or more U.S. patents. Please see www.mathworks.com/patents for
more information.
Revision History
July 2004 First printing New for Version 1.0 (Release 14)
October 2004 Second printing Revised for Version 1.0.1 (Release 14SP1)
March 2005 Online only Revised for Version 1.1 (Release 14SP2)
September 2005 Online only Revised for Version 1.2 (Release 14SP3)
November 2005 Online only Revised for Version 2.0 (Release 14SP3+)
March 2006 Online only Revised for Version 2.1 (Release 2006a)
September 2006 Online only Revised for Version 2.2 (Release 2006b)
March 2007 Online only Revised for Version 2.3 (Release 2007a)
September 2007 Online only Revised for Version 2.4 (Release 2007b)
March 2008 Online only Revised for Version 2.5 (Release 2008a)
October 2008 Online only Revised for Version 2.6 (Release 2008b)
March 2009 Online only Revised for Version 2.7 (Release 2009a)
September 2009 Online only Revised for Version 2.8 (Release 2009b)
March 2010 Online only Revised for Version 3.0 (Release 2010a)
September 2010 Online only Revised for Version 3.1 (Release 2010b)
April 2011 Online only Revised for Version 4.0 (Release 2011a)
September 2011 Online only Revised for Version 4.1 (Release 2011b)
March 2012 Online only Revised for Version 5.0 (Release 2012a)
September 2012 Online only Revised for Version 5.1 (Release R2012b)
March 2013 Online only Revised for Version 5.2 (Release R2013a)
September 2013 Online only Revised for Version 5.3 (Release R2013b)
March 2014 Online only Revised for Version 6.0 (Release R2014a)
October 2014 Online only Revised for Version 6.1 (Release R2014b)
March 2015 Online only Revised for Version 6.2 (Release R2015a)
September 2015 Online only Revised for Version 7.0 (Release R2015b)
March 2016 Online only Revised for Version 7.1 (Release R2016a)
September 2016 Online only Revised for Version 7.2 (Release R2016b)
March 2017 Online only Revised for Version 7.3 (Release R2017a)
September 2017 Online only Revised for Version 8.0 (Release R2017b)
March 2018 Online only Revised for Version 8.1 (Release R2018a)
September 2018 Online only Revised for Version 8.2 (Release R2018b)
March 2019 Online only Revised for Version 9.0 (Release R2019a)
September 2019 Online only Revised for Version 9.1 (Release R2019b)
March 2020 Online only Revised for Version 9.2 (Release R2020a)
September 2020 Online only Revised for Version 9.3 (Release R2020b)
March 2021 Online only Revised for Version 10.0 (Release R2021a)
September 2021 Online only Revised for Version 10.1 (Release R2021b)
March 2022 Online only Revised for Version 10.2 (Release R2022a)
September 2022 Online only Revised for Version 10.3 (Release R2022b)
March 2023 Online only Revised for Version 10.4 (Release R2023a)
Contents

1  Camera Calibration and SfM Examples
Monocular Visual-Inertial Odometry Using Factor Graph . . . . . . . . . . . . . 1-2

Visual SLAM with an RGB-D Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-26

Import Stereo Camera Parameters from ROS . . . . . . . . . . . . . . . . . . . . . . 1-40

Import Camera Intrinsic Parameters from ROS . . . . . . . . . . . . . . . . . . . . 1-44

Develop Visual SLAM Algorithm Using Unreal Engine Simulation . . . . . 1-48

Visual Localization in a Parking Lot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-61

Stereo Visual SLAM for UAV Navigation in 3D Simulation . . . . . . . . . . . 1-67

Camera Calibration Using AprilTag Markers . . . . . . . . . . . . . . . . . . . . . . 1-73

Configure Monocular Fisheye Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-90

Monocular Visual Simultaneous Localization and Mapping . . . . . . . . . . 1-95

Structure From Motion From Two Views . . . . . . . . . . . . . . . . . . . . . . . . . 1-113

Stereo Visual Simultaneous Localization and Mapping . . . . . . . . . . . . . 1-122

Evaluating the Accuracy of Single Camera Calibration . . . . . . . . . . . . . 1-136

Measuring Planar Objects with a Calibrated Camera . . . . . . . . . . . . . . 1-141

Depth Estimation From Stereo Video . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-150

Structure From Motion From Multiple Views . . . . . . . . . . . . . . . . . . . . . 1-158

Uncalibrated Stereo Image Rectification . . . . . . . . . . . . . . . . . . . . . . . . 1-165

2  Code Generation and Third-Party Examples
Code Generation for Object Detection by Using Single Shot Multibox Detector . . . . . . . . . . 2-2

Code Generation for Object Detection by Using YOLO v2 . . . . . . . . . . . . . 2-5

Introduction to Code Generation with Feature Matching and Registration . . . . . . . . . . 2-9

Code Generation for Face Tracking with PackNGo . . . . . . . . . . . . . . . . . . 2-16

Code Generation for Depth Estimation From Stereo Video . . . . . . . . . . . 2-24

Detect Face (Raspberry Pi2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-29

Track Face (Raspberry Pi2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35

Video Display in a Custom User Interface . . . . . . . . . . . . . . . . . . . . . . . . . 2-41

Generate Code for Detecting Objects in Images by Using ACF Object Detector . . . . . . . . . . 2-46

3  Deep Learning, Semantic Segmentation, and Detection Examples
Recognize Seven-Segment Digits Using OCR . . . . . . . . . . . . . . . . . . . . . . . 3-3

Train an OCR Model to Recognize Seven-Segment Digits . . . . . . . . . . . . . 3-8

Automate Ground Truth Labeling for OCR . . . . . . . . . . . . . . . . . . . . . . . . 3-19

Object Detection In Large Satellite Imagery Using Deep Learning . . . . 3-33

Augmented Reality Using AprilTag Markers . . . . . . . . . . . . . . . . . . . . . . . 3-52

Multiclass Object Detection Using YOLO v2 Deep Learning . . . . . . . . . . 3-62

Generate Adversarial Examples for Semantic Segmentation . . . . . . . . . 3-72

Classify Defects on Wafer Maps Using Deep Learning . . . . . . . . . . . . . . . 3-83

Detect Image Anomalies Using Explainable FCDD Network . . . . . . . . . . 3-99

Detect Image Anomalies Using Pretrained ResNet-18 Feature Embeddings . . . . . . . . . . 3-112

Detect Defects on Printed Circuit Boards Using YOLO v4 Network . . . 3-132

Train Object Detectors in Experiment Manager . . . . . . . . . . . . . . . . . . . 3-138

Activity Recognition Using R(2+1)D Video Classification . . . . . . . . . . . 3-145

Activity Recognition from Video and Optical Flow Data Using Deep Learning . . . . . . . . . . 3-168

Evaluate a Video Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-196

Extract Training Data for Video Classification . . . . . . . . . . . . . . . . . . . . 3-200

Classify Streaming Webcam Video Using SlowFast Video Classifier . . . 3-204

Gesture Recognition using Videos and Deep Learning . . . . . . . . . . . . . 3-207

Explore Semantic Segmentation Network Using Grad-CAM . . . . . . . . . 3-228

Point Cloud Classification Using PointNet Deep Learning . . . . . . . . . . 3-235

Object Detection Using SSD Deep Learning . . . . . . . . . . . . . . . . . . . . . . 3-258

Object Detection in a Cluttered Scene Using Point Feature Matching . . . . . . . . . . 3-270

Semantic Segmentation Using Deep Learning . . . . . . . . . . . . . . . . . . . . 3-281

Calculate Segmentation Metrics in Block-Based Workflow . . . . . . . . . . 3-300

Semantic Segmentation of Multispectral Images Using Deep Learning . . . . . . . . . . 3-305

3-D Brain Tumor Segmentation Using Deep Learning . . . . . . . . . . . . . . 3-323

Image Category Classification Using Bag of Features . . . . . . . . . . . . . . 3-333

Image Category Classification Using Deep Learning . . . . . . . . . . . . . . . 3-340

Image Retrieval Using Customized Bag of Features . . . . . . . . . . . . . . . 3-349

Create SSD Object Detection Network . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-356

Train YOLO v2 Network for Vehicle Detection . . . . . . . . . . . . . . . . . . . . 3-359

Import Pretrained ONNX YOLO v2 Object Detector . . . . . . . . . . . . . . . . 3-364

Export YOLO v2 Object Detector to ONNX . . . . . . . . . . . . . . . . . . . . . . . 3-371

Estimate Anchor Boxes From Training Data . . . . . . . . . . . . . . . . . . . . . . 3-377

Object Detection Using YOLO v3 Deep Learning . . . . . . . . . . . . . . . . . . 3-381

Object Detection Using YOLO v2 Deep Learning . . . . . . . . . . . . . . . . . . 3-396

Create YOLO v2 Object Detection Network . . . . . . . . . . . . . . . . . . . . . . . 3-406

Train Object Detector Using R-CNN Deep Learning . . . . . . . . . . . . . . . . 3-411

Object Detection Using Faster R-CNN Deep Learning . . . . . . . . . . . . . . 3-424

Train Classification Network to Classify Object in 3-D Point Cloud . . . 3-434

Estimate Body Pose Using Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . 3-444

Generate Image from Segmentation Map Using Deep Learning . . . . . . 3-452

Train Simple Semantic Segmentation Network in Deep Network Designer . . . . . . . . . . 3-466

Train ACF-Based Stop Sign Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-471

Train Fast R-CNN Stop Sign Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-474

Perform Instance Segmentation Using Mask R-CNN . . . . . . . . . . . . . . . 3-477

Object Detection Using YOLO v4 Deep Learning . . . . . . . . . . . . . . . . . . 3-482

4  Feature Detection and Extraction Examples
Automatically Detect and Recognize Text Using MSER and OCR . . . . . . . 4-2

Automatically Detect and Recognize Text Using Pretrained CRAFT Network and OCR . . . . . . . . . . 4-14

Digit Classification Using HOG Features . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17

Find Image Rotation and Scale Using Automated Feature Matching . . . 4-25

Feature Based Panoramic Image Stitching . . . . . . . . . . . . . . . . . . . . . . . . 4-30

Cell Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-36

Object Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-39

Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-41

Recognize Text Using Optical Character Recognition (OCR) . . . . . . . . . . 4-46

Cell Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-59

5  Lidar and Point Cloud Processing Examples
Design Lidar SLAM Algorithm Using Unreal Engine Simulation Environment . . . . . . . . . . 5-2

Ground Plane and Obstacle Detection Using Lidar . . . . . . . . . . . . . . . . . 5-12

Augment Point Cloud Data For Deep Learning . . . . . . . . . . . . . . . . . . . . . 5-21

Import Point Cloud Data For Deep Learning . . . . . . . . . . . . . . . . . . . . . . . 5-26

Encode Point Cloud Data For Deep Learning . . . . . . . . . . . . . . . . . . . . . . 5-30

Build a Map from Lidar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36

Build a Map from Lidar Data Using SLAM . . . . . . . . . . . . . . . . . . . . . . . . 5-55

3-D Point Cloud Registration and Stitching . . . . . . . . . . . . . . . . . . . . . . . 5-71

6  Computer Vision with Simulink Examples
Multicore Simulation of Video Processing System . . . . . . . . . . . . . . . . . . . 6-2

Concentricity Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6

Object Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-9

Video Focus Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11

Video Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-13

Motion Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-15

Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-17

Scene Change Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-20

Surveillance Recording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22

Traffic Warning Sign Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-24

Abandoned Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-27

Color-based Road Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-30

Detect and Track Face . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-34

Lane Departure Warning System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-41

Tracking Cars Using Foreground Detection . . . . . . . . . . . . . . . . . . . . . . . 6-45

Tracking Cars Using Optical Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-48

Tracking Based on Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-50

Video Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-52

Video Stabilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-57

Periodic Noise Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-59

Rotation Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-61

Barcode Recognition Using Live Video Acquisition . . . . . . . . . . . . . . . . . 6-65

Edge Detection Using Live Video Acquisition . . . . . . . . . . . . . . . . . . . . . . 6-67

Noise Removal and Image Sharpening . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-72

Track Marker Using Simulink Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-78

7  Video and Image Ground Truth Labeling
Export Ground Truth Object to Custom and COCO JSON Files . . . . . . . . . 7-2

Automate Ground Truth Labeling for Semantic Segmentation . . . . . . . . . 7-7

Convert Image Labeler Polygons to Labeled Blocked Image for Semantic Segmentation . . . . . . . . . . 7-16

Automate Ground Truth Labeling for Object Detection . . . . . . . . . . . . . . 7-21

8  Tracking and Motion Estimation Examples
Visual Tracking of Occluded and Unresolved Objects . . . . . . . . . . . . . . . . 8-2

Implement Simple Online and Realtime Tracking . . . . . . . . . . . . . . . . . . 8-23

Import Camera-Based Datasets in MOT Challenge Format for Object Tracking . . . . . . . . . . 8-32

Video Stabilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-39

Video Stabilization Using Point Feature Matching . . . . . . . . . . . . . . . . . . 8-42

Face Detection and Tracking Using CAMShift . . . . . . . . . . . . . . . . . . . . . 8-52

Face Detection and Tracking Using the KLT Algorithm . . . . . . . . . . . . . . 8-57

Face Detection and Tracking Using Live Video Acquisition . . . . . . . . . . . 8-63

Motion-Based Multiple Object Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . 8-68

Tracking Pedestrians from a Moving Car . . . . . . . . . . . . . . . . . . . . . . . . . 8-77

Use Kalman Filter for Object Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-87

Detect Cars Using Gaussian Mixture Models . . . . . . . . . . . . . . . . . . . . . . 8-98

9  Labelers
View Summary of ROI and Scene Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2

Create Automation Algorithm Function for Labeling . . . . . . . . . . . . . . . . . 9-4


How to Specify an Automation Function in an App . . . . . . . . . . . . . . . . . . 9-4
Use a Function to Automate Labeling with Your Custom Detector . . . . . . . 9-4
Create an Automation Algorithm Function . . . . . . . . . . . . . . . . . . . . . . . . 9-5

Create Automation Algorithm for Labeling . . . . . . . . . . . . . . . . . . . . . . . . . 9-8


Create New Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-8
Import Existing Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9
Custom Algorithm Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-9

Label Large Images in the Image Labeler . . . . . . . . . . . . . . . . . . . . . . . . . 9-12


Import Blocked Image into Image Labeler . . . . . . . . . . . . . . . . . . . . . . . 9-12
Work with Blocked Images in the Image Labeler . . . . . . . . . . . . . . . . . . . 9-14
Use Blocked Image Automation with Images . . . . . . . . . . . . . . . . . . . . . . 9-15
Postprocess Exported Labels to Create a Labeled Blocked Image . . . . . . 9-17

Label Pixels for Semantic Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19


Start Pixel Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Label Pixels Using Flood Fill Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-19
Label Pixels Using Superpixel Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-20
Label Pixels Using Smart Polygon Tool . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Label Pixels Using Polygon Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-22
Label Pixels Using Assisted Freehand Tool . . . . . . . . . . . . . . . . . . . . . . . 9-23
Replace Pixel Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Refine Labels Using Brush Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Visualize Pixel Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-24
Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-25

Label Objects Using Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27


About Polygon Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Load Unlabeled Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-27
Create Polygon Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
Draw Polygon ROI Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-28
Modify Polygon Preferences and Stacking Order . . . . . . . . . . . . . . . . . . . 9-28
Postprocess Exported Labels for Instance or Semantic Segmentation Networks . . . . . . . . . . 9-31

Get Started with the Image Labeler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-34


Load Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-34
Layout of the Image Labeler App . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-35
Create Label Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-36
Label Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-40
Export Labeled Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-41

Choose an App to Label Ground Truth Data . . . . . . . . . . . . . . . . . . . . . . . 9-44

Get Started with the Video Labeler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-48
Load Unlabeled Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-48
Create Label Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-48
Label Ground Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-54
Export Labeled Ground Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-56
Label Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-58
Save App Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-59

Use Custom Image Source Reader for Labeling . . . . . . . . . . . . . . . . . . . . 9-61


Create Custom Reader Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-61
Import Data Source into Video Labeler App . . . . . . . . . . . . . . . . . . . . . . 9-61
Import Data Source into Ground Truth Labeler App . . . . . . . . . . . . . . . . 9-62

Keyboard Shortcuts and Mouse Actions for Video Labeler . . . . . . . . . . . 9-63


Label Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-63
Frame Navigation and Time Interval Settings . . . . . . . . . . . . . . . . . . . . . 9-63
Labeling Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-63
Polyline Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-64
Polygon Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-65
Zooming and Panning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-65
App Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-65

Keyboard Shortcuts and Mouse Actions for Image Labeler . . . . . . . . . . . 9-67


Label Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-67
Image Browsing and Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-67
Labeling Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-67
Polyline Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-68
Polygon Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-69
Zooming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-69
Zooming and Panning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-70
App Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-70
Label and Sublabel Attribute Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-70
View Labels, Sublabels, and Attributes Right-Panel . . . . . . . . . . . . . . . . . 9-70
Attribute Column: Drop-down Menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-70
Attribute Column: Edit Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-71

Share and Store Labeled Ground Truth Data . . . . . . . . . . . . . . . . . . . . . . 9-72


Share Ground Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-72
Move Ground Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-75
Store Ground Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-76
Extract Labeled Video Scenes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-76

View Summary of Ground Truth Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-78


View Label Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-78
Compare Selected Labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-80

Temporal Automation Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-82


Create Temporal Automation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 9-82
Run Temporal Automation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-82

Blocked Image Automation Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-84


Create Blocked Image Automation Algorithm . . . . . . . . . . . . . . . . . . . . . 9-84
Run Blocked Image Automation Algorithm . . . . . . . . . . . . . . . . . . . . . . . 9-84

Use Sublabels and Attributes to Label Ground Truth Data . . . . . . . . . . . 9-85
When to Use Sublabels vs. Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-85
Draw Sublabels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-86
Copy and Paste Sublabels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-86
Delete Sublabels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-87
Sublabel Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-88

Training Data for Object Detection and Semantic Segmentation . . . . . . 9-89

Create Automation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-93


Create New Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-93
Import Existing Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-94
Custom Algorithm Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-94

10  Featured Examples
Localize and Read Multiple Barcodes in Image . . . . . . . . . . . . . . . . . . . . 10-2

Monocular Visual Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-18

Detect and Track Vehicles Using Lidar Data . . . . . . . . . . . . . . . . . . . . . . 10-30

Semantic Segmentation Using Dilated Convolutions . . . . . . . . . . . . . . . 10-49

Define Custom Pixel Classification Layer with Tversky Loss . . . . . . . . . 10-54

Track a Face in Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-61

Create 3-D Stereo Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-66

Measure Distance from Stereo Camera to a Face . . . . . . . . . . . . . . . . . . 10-67

Reconstruct 3-D Scene from Disparity Map . . . . . . . . . . . . . . . . . . . . . . 10-68

Visualize Stereo Pair of Camera Extrinsic Parameters . . . . . . . . . . . . . . 10-71

Remove Distortion from an Image Using Camera Parameters Object . 10-74

11  Structure from Motion and Visual SLAM
Choose SLAM Workflow Based on Sensor Data . . . . . . . . . . . . . . . . . . . . . 11-2
Choose SLAM Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2

Implement Visual SLAM in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-8


Terms Used in Visual SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-8
Typical Feature-based Visual SLAM Workflow . . . . . . . . . . . . . . . . . . . . . 11-8

Key Frame and Map Data Management . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Map Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-10
Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-11
Local Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-12
Loop Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Drift Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15

12  Point Cloud Processing
Choose a Point Cloud Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-2

Getting Started with Point Clouds Using Deep Learning . . . . . . . . . . . . . 12-3


Import Point Cloud Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Augment Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Encode Point Cloud Data to Image-like Format . . . . . . . . . . . . . . . . . . . . 12-4
Train a Deep Learning Classification Network with Encoded Point Cloud Data . . . . . . . . . . 12-4

Implement Point Cloud SLAM in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . 12-5


Mapping and Localization Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-5
Manage Data for Mapping and Localization . . . . . . . . . . . . . . . . . . . . . . 12-7
Preprocess Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
Register Point Clouds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-7
Detect Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Correct Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Assemble Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Localize Vehicle in Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10
Alternate Workflows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-10

The PLY Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-13


File Header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-13
Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-14
Common Elements and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-15

13  Using the Installer for Computer Vision System Toolbox Product
Install Computer Vision Toolbox Add-on Support Files . . . . . . . . . . . . . . 13-2

Install OCR Language Data Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3


Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-3
Pretrained Language Data and the ocr function . . . . . . . . . . . . . . . . . . . 13-3

Install and Use Computer Vision Toolbox Interface for OpenCV in MATLAB . . . . . . . . . . 13-6
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-6

Support Package Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-6

Build MEX-Files for OpenCV Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-8


Create MEX-File from OpenCV C++ file . . . . . . . . . . . . . . . . . . . . . . . . . 13-8
Create Your Own OpenCV MEX-files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-8
Run OpenCV Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-8

Use Prebuilt MATLAB Interface to OpenCV . . . . . . . . . . . . . . . . . . . . . . 13-10


Call MATLAB Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
Call Functions in OpenCV Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-11
Display Help for MATLAB Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-12
Display Help for MATLAB Interface to OpenCV Library . . . . . . . . . . . . . 13-12
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-13

Perform Edge-Preserving Image Smoothing Using OpenCV in MATLAB . . . . . . . . . . 13-15

Subtract Image Background by Using OpenCV in MATLAB . . . . . . . . . 13-19

Perform Face Detection by Using OpenCV in MATLAB . . . . . . . . . . . . . 13-22

Install and Use Computer Vision Toolbox Interface for OpenCV in Simulink . . . . . . . . . . 13-24
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-24
Import OpenCV Code into Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-24
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-29

Draw Different Shapes by Using OpenCV Code in Simulink . . . . . . . . . 13-31

Convert RGB Image to Grayscale Image by Using OpenCV Importer . . 13-38

Smile Detection by Using OpenCV Code in Simulink . . . . . . . . . . . . . . . 13-45

Shadow Detection by Using OpenCV Code in Simulink . . . . . . . . . . . . . 13-55

Vehicle and Pedestrian Detector by Using OpenCV Importer . . . . . . . . 13-60

Video Cartoonizer by Using OpenCV Code in Simulink . . . . . . . . . . . . . 13-64

Convert Between Simulink Image Type and Matrices . . . . . . . . . . . . . . 13-69


Copy Example Model to a Writable Location . . . . . . . . . . . . . . . . . . . . . 13-69
Example Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-69
Simulate Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-69
Generate C++ Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13-70

14  Input, Output, and Conversions
Export to Video Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
Setting Block Parameters for this Example . . . . . . . . . . . . . . . . . . . . . . . 14-2
Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-3

Import from Video Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-4
Setting Block Parameters for this Example . . . . . . . . . . . . . . . . . . . . . . . 14-4
Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-5

Batch Process Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-6


Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-6

Convert R'G'B' to Intensity Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-7

Process Multidimensional Color Video Signals . . . . . . . . . . . . . . . . . . . 14-10

Video Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-12


Defining Intensity and Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-12
Video Data Stored in Column-Major Format . . . . . . . . . . . . . . . . . . . . . 14-12

Image Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-13


Binary Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-13
Intensity Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-13
RGB Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-13

15  Display and Graphics
Choose Function to Visualize Detected Objects . . . . . . . . . . . . . . . . . . . . 15-2

Display, Stream, and Preview Videos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-5


View Streaming Video in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-5
Preview Video in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-5
View Video in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-5

Draw Shapes and Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-7


Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-7
Line and Polyline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-7
Polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-9
Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-9

16  Registration and Stereo Vision
Select Calibration Pattern and Set Properties . . . . . . . . . . . . . . . . . . . . . 16-2

Prepare Camera and Capture Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-4


Camera Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-4
Capture Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-4

Calibration Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-6


What Are Calibration Patterns? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-6
Supported Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-8
Checkerboard Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-8

Circle Grid Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-9
Custom Pattern Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-9

Fisheye Calibration Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-11


Fisheye Camera Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-13
Fisheye Camera Calibration in MATLAB . . . . . . . . . . . . . . . . . . . . . . . . 16-20

Using the Single Camera Calibrator App . . . . . . . . . . . . . . . . . . . . . . . . 16-24


Camera Calibrator Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-24
Choose a Calibration Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-24
Capture Calibration Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-24
Using the Camera Calibrator App . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-25

Using the Stereo Camera Calibrator App . . . . . . . . . . . . . . . . . . . . . . . . 16-38


Stereo Camera Calibrator Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-38
Choose a Calibration Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-39
Capture Calibration Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-39
Using the Stereo Camera Calibrator App . . . . . . . . . . . . . . . . . . . . . . . 16-39

What Is Camera Calibration? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-51


Camera Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-51
Pinhole Camera Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-53
Camera Calibration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-59
Distortion in Camera Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-63

Structure from Motion Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-67


Structure from Motion from Two Views . . . . . . . . . . . . . . . . . . . . . . . . 16-67
Structure from Motion from Multiple Views . . . . . . . . . . . . . . . . . . . . . 16-68

17  Object Detection
Train Custom OCR Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-2
Prepare Training Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-2
Train an OCR model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-4
Evaluate OCR training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-5

Getting Started with OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-6


Text Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-6
Text Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-7
Troubleshoot OCR Function Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-8
Train Custom OCR Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-9
Create Ground Truth Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-9
Evaluate and Quantize OCR Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-9

Getting Started with Anomaly Detection Using Deep Learning . . . . . . 17-11


Prepare Training and Calibration Data . . . . . . . . . . . . . . . . . . . . . . . . . 17-11
Train the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-12
Calibrate and Evaluate the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-12
Perform Classification Using the Model . . . . . . . . . . . . . . . . . . . . . . . . 17-13
Deploy the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-13

Getting Started with Video Classification Using Deep Learning . . . . . . 17-14
Create Training Data for Video Classification . . . . . . . . . . . . . . . . . . . . 17-15
Create Video Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-15
Train Video Classifier and Evaluate Results . . . . . . . . . . . . . . . . . . . . . 17-22
Classify Using Deep Learning Video Classifiers . . . . . . . . . . . . . . . . . . . 17-23

Choose an Object Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-24

Getting Started with SSD Multibox Detection . . . . . . . . . . . . . . . . . . . . 17-31


Predict Objects in the Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-31
Design an SSD Detection Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-32
Train an Object Detector and Detect Objects with an SSD Model . . . . . 17-32
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-33
Code Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-33
Label Training Data for Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . 17-33

Getting Started with Object Detection Using Deep Learning . . . . . . . . 17-34


Create Training Data for Object Detection . . . . . . . . . . . . . . . . . . . . . . 17-34
Create Object Detection Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-35
Train Detector and Evaluate Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-35
Detect Objects Using Deep Learning Detectors . . . . . . . . . . . . . . . . . . . 17-35
Detect Objects Using Pretrained Object Detection Models . . . . . . . . . . 17-36
MathWorks GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-37

How Labeler Apps Store Exported Pixel Labels . . . . . . . . . . . . . . . . . . . 17-39


Location of Pixel Label Data Folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-39
View Exported Pixel Label Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-39
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-40

Anchor Boxes for Object Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-44


What Is an Anchor Box? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-44
Advantage of Using Anchor Boxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-44
How Do Anchor Boxes Work? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-45
Anchor Box Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-48

Getting Started with YOLO v2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-49


Predicting Objects in the Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-49
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-50
Design a YOLO v2 Detection Network . . . . . . . . . . . . . . . . . . . . . . . . . . 17-50
Train an Object Detector and Detect Objects with a YOLO v2 Model . . . 17-51
Code Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-51
Label Training Data for Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . 17-51

Getting Started with YOLO v3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-53


Predicting Objects in the Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-53
Design a YOLO v3 Detection Network . . . . . . . . . . . . . . . . . . . . . . . . . . 17-54
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-54
Train an Object Detector and Detect Objects with a YOLO v3 Model . . . 17-54
Label Training Data for Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . 17-54

Getting Started with YOLO v4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-56


Predict Objects Using YOLO v4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-57
Create YOLO v4 Object Detection Network . . . . . . . . . . . . . . . . . . . . . . 17-57
Train and Detect Objects Using YOLOv4 Network . . . . . . . . . . . . . . . . . 17-58
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-59

Label Training Data for Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . 17-59

Getting Started with R-CNN, Fast R-CNN, and Faster R-CNN . . . . . . . . 17-61
Object Detection Using R-CNN Algorithms . . . . . . . . . . . . . . . . . . . . . . 17-61
Comparison of R-CNN Object Detectors . . . . . . . . . . . . . . . . . . . . . . . . 17-63
Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-63
Design an R-CNN, Fast R-CNN, and a Faster R-CNN Model . . . . . . . . . . 17-64
Label Training Data for Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . 17-65

Getting Started with Mask R-CNN for Instance Segmentation . . . . . . . 17-67


Design Mask R-CNN Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-67
Prepare Mask R-CNN Training Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-68
Train Mask R-CNN Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-73
Perform Instance Segmentation and Evaluate Results . . . . . . . . . . . . . . 17-73

Getting Started with Semantic Segmentation Using Deep Learning . . 17-75


Label Training Data for Semantic Segmentation . . . . . . . . . . . . . . . . . . 17-75
Train and Test a Semantic Segmentation Network . . . . . . . . . . . . . . . . 17-76
Segment Objects Using Pretrained DeepLabv3+ Network . . . . . . . . . . 17-76

Point Feature Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-78


Functions That Return Points Objects . . . . . . . . . . . . . . . . . . . . . . . . . . 17-78
Functions That Accept Points Objects . . . . . . . . . . . . . . . . . . . . . . . . . . 17-80

Local Feature Detection and Extraction . . . . . . . . . . . . . . . . . . . . . . . . . 17-84


What Are Local Features? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-84
Benefits and Applications of Local Features . . . . . . . . . . . . . . . . . . . . . 17-84
What Makes a Good Local Feature? . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-85
Feature Detection and Feature Extraction . . . . . . . . . . . . . . . . . . . . . . 17-85
Choose a Feature Detector and Descriptor . . . . . . . . . . . . . . . . . . . . . . 17-86
Use Local Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-88
Image Registration Using Multiple Features . . . . . . . . . . . . . . . . . . . . . 17-94

Get Started with Cascade Object Detector . . . . . . . . . . . . . . . . . . . . . . 17-102


Why Train a Detector? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-102
What Kinds of Objects Can You Detect? . . . . . . . . . . . . . . . . . . . . . . . 17-102
How Does the Cascade Classifier Work? . . . . . . . . . . . . . . . . . . . . . . . 17-102
Create a Cascade Classifier Using the trainCascadeObjectDetector . . 17-103
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-106
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-108
Train Stop Sign Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-112

Using OCR Trainer App . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-116


Open the OCR Trainer App . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-116
Train OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-116
App Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-118

Create a Custom Feature Extractor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-120


Example of a Custom Feature Extractor . . . . . . . . . . . . . . . . . . . . . . . 17-120

Image Retrieval with Bag of Visual Words . . . . . . . . . . . . . . . . . . . . . . 17-123


Retrieval System Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-124
Evaluate Image Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-125

Image Classification with Bag of Visual Words . . . . . . . . . . . . . . . . . . . 17-126
Step 1: Set Up Image Category Sets . . . . . . . . . . . . . . . . . . . . . . . . . . 17-126
Step 2: Create Bag of Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-126
Step 3: Train an Image Classifier With Bag of Visual Words . . . . . . . . 17-127
Step 4: Classify an Image or Image Set . . . . . . . . . . . . . . . . . . . . . . . . 17-128

Motion Estimation and Tracking


18
Multiple Object Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18-2
Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18-2
Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18-3
Data Association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18-3
Track Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18-4

Filters, Transforms, and Enhancements


19
Adjust the Contrast of Intensity Images . . . . . . . . . . . . . . . . . . . . . . . . . . 19-2

Adjust the Contrast of Color Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19-6

Remove Salt and Pepper Noise from Images . . . . . . . . . . . . . . . . . . . . . 19-10

Sharpen an Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19-14

Statistics and Morphological Operations


20
Correct Nonuniform Illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-2

Count Objects in an Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20-8

Fixed-Point Design
21
Fixed-Point Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-2
Fixed-Point Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-2
Benefits of Fixed-Point Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-2
Benefits of Fixed-Point Design with System Toolboxes Software . . . . . . . 21-2

Fixed-Point Concepts and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-4


Fixed-Point Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-4

Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-5
Precision and Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-6

Arithmetic Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-8


Modulo Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-8
Two's Complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-8
Addition and Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-9
Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-10
Casts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-12

Fixed-Point Support for MATLAB System Objects . . . . . . . . . . . . . . . . . 21-15


Getting Information About Fixed-Point System Objects . . . . . . . . . . . . . 21-15
Setting System Object Fixed-Point Properties . . . . . . . . . . . . . . . . . . . . 21-15

Specify Fixed-Point Attributes for Blocks . . . . . . . . . . . . . . . . . . . . . . . . 21-16


Fixed-Point Block Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-16
Specify System-Level Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-18
Inherit via Internal Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21-18
Specify Data Types for Fixed-Point Blocks . . . . . . . . . . . . . . . . . . . . . . . 21-20

Code Generation and Shared Library


22
Simulink Shared Library Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . 22-2

Accelerating Simulink Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22-3

Portable C Code Generation for Functions That Use OpenCV Library . . 22-4
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22-4

Vision Blocks Examples


23
Rotate ROI in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-4

Apply Horizontal Shear Transformation to Image . . . . . . . . . . . . . . . . . . 23-7

Find Location of Object in Image Using Template Matching . . . . . . . . 23-10

Compute Optical Flow Velocities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-13

Rotate an Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-15

Generate Image Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-18

Export Image to MATLAB Workspace . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-20

Import Video from MATLAB Workspace . . . . . . . . . . . . . . . . . . . . . . . . . 23-23

Find Minimum Value in ROI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-25

Write Image to Binary File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-29

Compute Standard Deviation of ROIs . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-30

Read Video Stored as Binary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-33

Compare Image Quality Using PSNR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-37

Compute Autocorrelation of Input Matrix . . . . . . . . . . . . . . . . . . . . . . . . 23-39

Compute Correlation between Two Matrices . . . . . . . . . . . . . . . . . . . . . 23-40

Find Statistics of Circular Blobs in Image . . . . . . . . . . . . . . . . . . . . . . . 23-41

Replace Intensity Values in ROI with its Maximum Value . . . . . . . . . . . 23-45

Median based Image Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-49

Import Image From MATLAB Workspace . . . . . . . . . . . . . . . . . . . . . . . . 23-52

Import Image from Specified Location . . . . . . . . . . . . . . . . . . . . . . . . . . 23-54

Remove Interlacing Effect From Image . . . . . . . . . . . . . . . . . . . . . . . . . . 23-58

Estimate Motion between Two Images . . . . . . . . . . . . . . . . . . . . . . . . . . 23-61

Enhance Contrast of Grayscale Image Using Histogram Equalization . . . . . 23-63

Enhance Contrast of Color Image Using Histogram Equalization . . . . 23-66

Compute Mean of ROIs in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-69

Detect Corners in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-72

Edge Detection of Intensity Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-76

Read, Process, and Write Video Frames to File . . . . . . . . . . . . . . . . . . . 23-79

Find Local Maxima in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-81

Read, Convert, and View Video from File . . . . . . . . . . . . . . . . . . . . . . . . 23-84

Read and Display YCbCr Video from File . . . . . . . . . . . . . . . . . . . . . . . . . 23-86

Display Frame Rate of Input Video . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-88

Draw Rectangles on Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-89

Draw Circles on Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-91

Overlay Images Using Binary Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-93

Linearly Combine Two Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-98

Pad Zeros to Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-102

Insert Text into Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-105

Compress Image Using 2-D DCT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-108

Draw Markers on Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-112

Read and Display RGB Video from File . . . . . . . . . . . . . . . . . . . . . . . . . 23-115

Label Objects in Binary Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-117

Boundary Extraction of Binary Image . . . . . . . . . . . . . . . . . . . . . . . . . . 23-121

Select String to Insert into Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-125

Insert Two Strings into Image at Different Locations . . . . . . . . . . . . . 23-128

Dilation of Binary Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-130

Find Complement of Intensity Image . . . . . . . . . . . . . . . . . . . . . . . . . . 23-132

Perform Top-Hat Filtering of Binary Image . . . . . . . . . . . . . . . . . . . . . 23-135

Perform Bottom-hat Filtering of Binary Image . . . . . . . . . . . . . . . . . . 23-138

Perform Opening of Binary Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-141

Perform Closing of Binary Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-144

Blur Image Using Gaussian Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-147

Convert Image Color Space from RGB to YCbCr . . . . . . . . . . . . . . . . . 23-150

Convert Data Type and Color Space of Image from RGB to HSV . . . . . 23-153

Perform Gamma Correction of Image . . . . . . . . . . . . . . . . . . . . . . . . . . 23-156

Adjust Contrast of Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-159

Remove Impulse Noise from Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-162

Draw Hough Lines on Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-165

Construct Laplacian Pyramid Image . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-167

Apply Affine Transformation to Image . . . . . . . . . . . . . . . . . . . . . . . . . . 23-170

Trace Boundary of Object in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-173

Convert Grayscale Image to Binary Image . . . . . . . . . . . . . . . . . . . . . . 23-177

Perform Chroma Resampling of Image . . . . . . . . . . . . . . . . . . . . . . . . . 23-180

Compute Variance of ROIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-183

Smooth Image Using Gaussian Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 23-187

Plot Hough Transform of Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-190

Apply Vertical Shear Transformation to Image . . . . . . . . . . . . . . . . . . . 23-194

Resize ROI in Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-197

Demosaic an Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-200

Rotate an Image in Simulink . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-202

Filter Image Using FIR Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-205

Visualize Point Cloud Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23-209

1

Camera Calibration and SfM Examples

• “Monocular Visual-Inertial Odometry Using Factor Graph” on page 1-2


• “Visual SLAM with an RGB-D Camera” on page 1-26
• “Import Stereo Camera Parameters from ROS” on page 1-40
• “Import Camera Intrinsic Parameters from ROS” on page 1-44
• “Develop Visual SLAM Algorithm Using Unreal Engine Simulation” on page 1-48
• “Visual Localization in a Parking Lot” on page 1-61
• “Stereo Visual SLAM for UAV Navigation in 3D Simulation” on page 1-67
• “Camera Calibration Using AprilTag Markers” on page 1-73
• “Configure Monocular Fisheye Camera” on page 1-90
• “Monocular Visual Simultaneous Localization and Mapping” on page 1-95
• “Structure From Motion From Two Views” on page 1-113
• “Stereo Visual Simultaneous Localization and Mapping” on page 1-122
• “Evaluating the Accuracy of Single Camera Calibration” on page 1-136
• “Measuring Planar Objects with a Calibrated Camera” on page 1-141
• “Depth Estimation From Stereo Video” on page 1-150
• “Structure From Motion From Multiple Views” on page 1-158
• “Uncalibrated Stereo Image Rectification” on page 1-165

Monocular Visual-Inertial Odometry Using Factor Graph

Monocular visual-inertial odometry estimates the position and orientation of the robot using camera
and inertial measurement unit (IMU) sensor data. Camera-based state estimation is accurate during
low-speed navigation. However, camera-based estimation faces challenges such as motion blur and
track loss at higher speeds. Also, monocular camera-based estimation can recover poses only up to an
arbitrary scale. On the other hand, inertial navigation handles high-speed navigation easily and can
estimate poses at world scale. You can combine the advantages of both types of sensor data to
achieve better accuracy using tightly coupled factor graph optimization.

Overview

The visual-inertial system implemented in this example consists of a simplified version of the
monocular visual odometry front-end of the VINS [1 on page 1-25] algorithm and a factor graph
back-end.

The visual odometry front-end has responsibilities similar to standard structure from motion (SfM)
algorithms, such as oriented FAST and rotated BRIEF (ORB) and simultaneous localization and
mapping (SLAM). The visual odometry front-end detects and tracks key points across multiple
frames, estimates camera poses, and triangulates 3-D points using multi-view geometry. The factor
graph back-end jointly optimizes the estimated camera poses, 3-D points, IMU velocity, and bias.
Before fusing the camera and IMU measurements, you must align the camera and IMU to compute
the camera pose scale, gravity rotation, and initial IMU velocity and bias.


Set Up

This example uses the Blackbird data set (NYC Subway Winter) to demonstrate the visual-inertial
odometry workflow. Download the data set.

data = helperDownloadData();

Fix the random seed for repeatability.

rng(0)

Initialize Algorithm Parameters

Use the helperVIOParameters function to initialize and tune these parameters:

Visual Front-End Parameters

• Random sample consensus (RANSAC) threshold (F_Threshold), confidence (F_Confidence),
and iterations (F_Iterations)
• Kanade-Lucas-Tomasi (KLT) tracker bidirectional error (KLT_BiErr), number of levels
(KLT_Levels), and block size (KLT_Block)
• Minimum parallax for key frame selection and triangulating new 3-D points
• Minimum number of key points to track in each frame (numTrackedThresh)
• Maximum number of key points to track in each frame (maxPointsToTrack)

Factor Graph Optimization Back-End Parameters

• Factor graph solver options (SolverOpts)


• Sliding window size (windowSize) - Maximum number of frames in the sliding window.
• Frame rate at which to run the sliding window optimization (optimizationFrequency)
params = helperVIOParameters();
% Set to true if IMU data is available.
useIMU = true;

Initialize variables.
status = struct("firstFrame",true,"isMapInitialized",false,"isIMUAligned",false,"Mediandepth",false);
% Set to true to attempt camera-IMU alignment.
readyToAlignCameraAndIMU = false;
% Set initial scene median depth.
initialSceneMedianDepth = 4;
viewId = 0;
removedFrameIds = [];
allCameraTracks = cell(1,5000);

% Enable visualization.
vis = true;
showMatches = false;
if vis
% Figure variables.
axMatches = [];
axTraj = [];
axMap = [];
end

Set up factor graph for back-end tightly coupled factor graph optimization.
% Set up factor graph for fusion.
slidingWindowFactorGraph = factorGraph;
maxFrames = 10000;
maxLandmarks = 100000;
ids = helperGenerateNodeID(slidingWindowFactorGraph,maxFrames,maxLandmarks);
% Information matrix (measure of accuracy) associated with the camera
% projection factors that relate 3-D points and estimated camera poses.
cameraInformation = ((data.intrinsics.K(1,1)/1.5)^2)*eye(2);
% Initialize IMU parameters. The fusion accuracy depends on these parameters.
imuParams = factorIMUParameters(SampleRate=100,GyroscopeNoise=0.1, ...
GyroscopeBiasNoise=3e-3,AccelerometerNoise=0.3, ...
AccelerometerBiasNoise=1e-3,ReferenceFrame="ENU");

Create the point tracker to track key points across multiple frames.
tracker = vision.PointTracker(MaxBidirectionalError=params.KLT_BiErr, ...
NumPyramidLevels=params.KLT_Levels,BlockSize=params.KLT_Block);

Set up the feature manager to maintain key point tracks.


fManager = helperFeaturePointManager(data.intrinsics,params,maxFrames,maxLandmarks);
% Set up the key point detector.
fManager.DetectorFunc = @(I)helperDetectKeyPoints(I);

Create an image view set to maintain frame poses.


vSet = imageviewset;

Specify the first and last frames to process from the data set. Then, process the first frame.


% IMU data is available from frame number 40 in the data set.


startFrameIdx = 40;
% Index of the last frame to process in this example. For reasonable
% example execution time, process up to only frame 1000 of the data set.
endFrameIdx = 1000;
allFrameIds = startFrameIdx:endFrameIdx;

% In the first frame, detect new key points and initialize the tracker for
% future tracking.
status.firstFrame = false;
I = data.images{startFrameIdx};
if params.Equalize
% Enhance contrast if images are dark.
I = adapthisteq(I,NumTiles=params.NumTiles,ClipLimit=params.ClipLimit);
end
if params.Undistort
% Undistort if images contain perspective distortion.
I = undistortImage(I,data.intrinsics);
end
% Assign a unique view ID for each processed camera frame or image.
viewId = viewId + 1;
currPoints = createNewFeaturePoints(fManager,I);
updateSlidingWindow(fManager,I,currPoints,true(size(currPoints,1),1),viewId);
initialize(tracker, currPoints, I);
prevI = I;
firstI = I;
vSet = addView(vSet,viewId,rigidtform3d);

Begin a loop through the entire dataset.

for curIdx = allFrameIds(2:end)

Image Preprocessing

Image preprocessing involves these steps:

• Equalize — Enhance the contrast of an image to correct for dim lighting, which can affect feature
extraction and tracking.
• Undistort — Correct for radial and tangential distortions that can impact state estimation.

% Read image data.


I = data.images{curIdx};
if params.Equalize
% Enhance contrast if images are dark.
I = adapthisteq(I,NumTiles=params.NumTiles,ClipLimit=params.ClipLimit);
end
if params.Undistort
% Undistort if images contain perspective distortion.
I = undistortImage(I,data.intrinsics);
end
% Assign a unique view ID for each processed camera frame or image.
viewId = viewId + 1;

Feature Tracking

To compute a camera frame pose, you must calculate 2D-2D correspondences (2-D image point tracks
across multiple frames). There are several ways to estimate 2-D feature points that see the same


landmark (key point tracks), but this example uses the Kanade-Lucas-Tomasi (KLT) point tracker to track
feature points across multiple images.

Tracks are not all accurate and can contain outliers. Tracking performance also depends on the
Kalman tracker parameters, such as bidirectional error. Even in an ideal case, you can expect some
invalid tracks, such as those due to repetitive structures. As such, outlier rejection is a critical task in
feature tracking. To reject outliers from tracks, use fundamental matrix decomposition in the feature
point manager while updating the sliding window with the latest feature point tracks.

% Track previous frame points in the current frame.


[currPoints,validIdx] = tracker(I);
if status.isMapInitialized
[prevPoints,pointIds,isTriangulated] = getKeyPointsInView(fManager,viewId-1);
end
% Update the sliding window after tracking features in the current
% frame. If the sliding window already contains maximum number of
% frames specified using windowSize, one frame with id
% removeFrameId will be removed from the window to accommodate
% space for the current frame.
[removedFrameId,windowState] = updateSlidingWindow(fManager,I,currPoints,validIdx,viewId);
if (removedFrameId > fManager.slidingWindowViewIds(1))
% Store non-key frames or removed frame IDs.
removedFrameIds(end + 1) = removedFrameId; %#ok
end

Visualize the feature point tracks between the last key frame and current frame.

if status.isMapInitialized
svIds = getSlidingWindowIds(fManager);
if length(svIds) > 2
[matches1,matches2] = get2DCorrespondensesBetweenViews(fManager,svIds(end-2),viewId);

if vis && showMatches


if isempty(axMatches)
axMatches = axes(figure); %#ok
end
% Visualize matches between the last key frame and the
% current view.
showMatchedFeatures(data.images{allFrameIds(svIds(end-2))},I,matches1,matches2, ...
Parent=axMatches);
end
end
end

Initial Structure from Motion (SfM)

The accelerometer and gyroscope readings of the IMU data contain some bias and noise. To estimate
bias values, you must obtain accurate pose estimates between the first few frames. You can achieve
this by using SfM. SfM involves these major steps:

• When there is enough parallax between the first key frame and the current frame, estimate the
relative pose between the two, using 2D-2D correspondences (key point tracks across multiple
frames).
• Triangulate 3-D points using the world poses of key frames and 2D-2D correspondences.


• Track the 3-D points in the current frame, and compute the current frame pose using 3D-2D
correspondences.

if ~status.isMapInitialized
if windowState.FirstFewViews
% Accept the first few camera views.
vSet = addView(vSet,viewId,rigidtform3d);
elseif windowState.EnoughParallax
% Estimate relative pose between the first key frame in the
% window and the current frame.
svIds = getSlidingWindowIds(fManager);
[matches1,matches2] = get2DCorrespondensesBetweenViews(fManager,svIds(end-1),svIds(end));

valRel = false(size(matches1,1),1);
for k = 1:10
[F1,valRel1] = estimateFundamentalMatrix( ...
matches1,matches2,Method="RANSAC", ...
NumTrials=params.F_Iterations,DistanceThreshold=params.F_Threshold, ...
Confidence=params.F_Confidence);
if length(find(valRel)) < length(find(valRel1))
valRel = valRel1;
F = F1;
end
end

inlierPrePoints = matches1(valRel,:);
inlierCurrPoints = matches2(valRel,:);
relPose = estrelpose(F,data.intrinsics, ...
inlierPrePoints,inlierCurrPoints);

% Get the table containing the previous camera pose.


prevPose = rigidtform3d;

% Compute the current camera pose in the global coordinate


% system relative to the first view.
currPose = relPose;

%vSet = addView(vSet,svIds(end-1),currPose);
vSet = addView(vSet,viewId,currPose);

status.isMapInitialized = true;

axisSFM = axes(figure); %#ok


showMatchedFeatures(firstI,I,matches1,matches2, ...
Parent=axisSFM);
title(axisSFM,"Enough Parallax Between Key Frames");
end
else


Camera-IMU Alignment

To optimize camera and IMU measurements, you must align them by bringing them to the same base
coordinate frame and scale. Alignment primarily consists of these major tasks:

• Compute the camera pose scale to make it similar to the IMU or world scale.
• Calculate the gravity rotation required to rotate the gravity vector from the local navigation
reference frame of the IMU to the initial camera reference frame. The inverse of this rotation aligns
the z-axis of the camera with the local navigation reference frame.
• Estimate the initial IMU bias.
if ~status.isIMUAligned && readyToAlignCameraAndIMU
svIds = getSlidingWindowIds(fManager);
% Because you have not yet computed the latest frame pose,
% use only the past few frames for alignment.
svIds = svIds(1:end-1);
[gyro,accel] = helperExtractIMUDataBetweenViews( ...
data.gyroReadings,data.accelReadings,data.timeStamps,allFrameIds(svIds));
[xyz] = getXYZPoints(fManager,xyzIds);
% Align camera with IMU.
camPoses = poses(vSet,svIds);
[gRot,scale,info] = estimateGravityRotationAndPoseScale(camPoses,gyro,accel, ...
SensorTransform=data.camToIMUTransform,IMUParameters=imuParams);
disp("Estimated scale: " + scale);

If the alignment is successful, update the camera poses, 3-D points, and add IMU factors between the
initial frames in the current sliding window.

if info.IsSolutionUsable && scale > 1e-3


status.isIMUAligned = true;
posesUpdated = poses(vSet);
% Transform camera poses to navigation frame using
% computed gravity rotation and pose scale.
[posesUpdated,xyz] = helperTransformToNavigationFrame(posesUpdated,xyz,gRot,scale);
vSet = updateView(vSet,posesUpdated);
% Plot the scaled and unscaled estimated trajectory against
% ground truth.
if vis
p1 = data.camToIMUTransform.transform(vertcat(camPoses.AbsolutePose.Translation));
axAlign = axes(figure); %#ok
g1 = data.gTruth(allFrameIds(camPoses.ViewId),1:3);
plot3(g1(:,1),g1(:,2),g1(:,3),"g",Parent=axAlign);
hold(axAlign,"on")
plot3(scale*p1(:,1),scale*p1(:,2),scale*p1(:,3),"r",Parent=axAlign);
plot3(p1(:,1),p1(:,2),p1(:,3),"b",Parent=axAlign);
hold(axAlign,"off")
legend(axAlign,"Ground Truth","Estimated scaled trajectory","Estimated trajectory");
title("Camera-IMU Alignment")
drawnow
end

if status.isIMUAligned
% After alignment, add IMU factors to factor graph.
for k = 1:length(gyro)
nId = [ids.pose(svIds(k)),ids.vel(svIds(k)),ids.bias(svIds(k)), ...
ids.pose(svIds(k+1)),ids.vel(svIds(k+1)),ids.bias(svIds(k+1))];
fIMU = factorIMU(nId,gyro{k},accel{k},imuParams, ...
SensorTransform=data.camToIMUTransform);
slidingWindowFactorGraph.addFactor(fIMU);
end
end

% Set camera pose node guesses and 3-D point guesses


% after alignment.
slidingWindowFactorGraph.nodeState( ...
ids.pose(svIds), ...
helperCameraPoseTableToSE3Vector( ...
poses(vSet,svIds)));
slidingWindowFactorGraph.nodeState( ...
ids.point3(xyzIds),xyz);

Estimate an initial guess for IMU bias by using factor graph optimization with the camera projection
and IMU factors.

% Add prior to first camera pose to fix it softly during


% optimization.
fixNode(slidingWindowFactorGraph,ids.pose(svIds));
fixNode(slidingWindowFactorGraph,ids.point3(xyzIds));
% Add velocity prior to first IMU velocity node.
fVelPrior = factorVelocity3Prior(ids.vel(svIds(1)));


addFactor(slidingWindowFactorGraph,fVelPrior);

% Add bias prior to first bias node.


fBiasPrior = factorIMUBiasPrior(ids.bias(svIds(1)));
addFactor(slidingWindowFactorGraph,fBiasPrior);

% Perform visual-inertial optimization after alignment to estimate


% initial IMU bias values.
soll1 = optimize(slidingWindowFactorGraph, ...
params.SolverOpts);
fixNode(slidingWindowFactorGraph,ids.pose(svIds),false);
fixNode(slidingWindowFactorGraph,ids.point3(xyzIds),false);

fixNode(slidingWindowFactorGraph,ids.pose(svIds(1)));
soll = optimize(slidingWindowFactorGraph, ...
params.SolverOpts);
fixNode(slidingWindowFactorGraph,ids.pose(svIds(1)),false);

% Update feature manager and view set after optimization.


vSet = updateView(vSet,helperUpdateCameraPoseTable(poses(vSet,svIds), ...
slidingWindowFactorGraph.nodeState( ...
ids.pose(svIds))));
xyz = slidingWindowFactorGraph.nodeState( ...
ids.point3(xyzIds));
setXYZPoints(fManager,xyz,xyzIds);
end
end

Estimated scale: 1.9661


IMU Pose Prediction

When IMU data is available, you can predict the world pose of the camera by integrating
accelerometer and gyroscope readings. Use factor graph optimization to further refine this
prediction.
imuGuess = false;
if status.isIMUAligned
% Extract gyro and accel reading between current image frame
% and last acquired image frame to create IMU factor.
svIds = getSlidingWindowIds(fManager);
svs = svIds((end-1):end);
[gyro,accel] = helperExtractIMUDataBetweenViews(data.gyroReadings, ...
data.accelReadings,data.timeStamps,allFrameIds(svs));
nodeID = [ids.pose(svs(1)) ...
ids.vel(svs(1)) ...
ids.bias(svs(1)) ...
ids.pose(svs(2)) ...
ids.vel(svs(2)) ...
ids.bias(svs(2))];
% Create the transformation required to transform a camera pose
% to the IMU base frame for the IMU residual computation.
fIMU = factorIMU(nodeID,gyro{1},accel{1},imuParams, ...
SensorTransform=data.camToIMUTransform);


% Add camera pose and IMU factor to graph.


slidingWindowFactorGraph.addFactor(fIMU);
% Set velocity and bias guess.
prevP = nodeState(slidingWindowFactorGraph,ids.pose(svs(1)));
prevVel = nodeState(slidingWindowFactorGraph,ids.vel(svs(1)));
prevBias = nodeState(slidingWindowFactorGraph,ids.bias(svs(1)));
[pp,pv] = fIMU.predict(prevP,prevVel,prevBias);
imuGuess = true;
end

[currPoints,pointIds,isTriangulated] = getKeyPointsInView(fManager,viewId);
cVal = true(size(currPoints,1),1);
cTrf = find(isTriangulated);

If no IMU prediction is available, then use 3D-2D correspondences to estimate the current view pose.

if ~imuGuess
x3D = getXYZPoints(fManager,pointIds(isTriangulated));
c2D = currPoints(isTriangulated,:);
ii = false(size(x3D,1),1);
currPose = rigidtform3d;
for k = 1:params.F_loop
[currPosel,iil] = estworldpose( ...
currPoints(isTriangulated,:),x3D, ...
data.intrinsics,MaxReprojectionError=params.F_Threshold,Confidence=params.F_Confidence, ...
MaxNumTrials=params.F_Iterations);
if length(find(ii)) < length(find(iil))
ii = iil;
currPose = currPosel;
end
end
cVal(cTrf(~ii)) = false;
else

Use the IMU predicted pose as an initial guess for motion-only bundle adjustment.

x3D = getXYZPoints(fManager,pointIds(isTriangulated));
c2D = currPoints(isTriangulated,:);
[currPose,velRefined,biasRefined,ii] = helperBundleAdjustmentMotion( ...
x3D,c2D,data.intrinsics,size(I),pp,pv,prevP,prevVel,prevBias,fIMU);
slidingWindowFactorGraph.nodeState( ...
ids.vel(viewId),velRefined);
slidingWindowFactorGraph.nodeState( ...
ids.bias(viewId),biasRefined);
cVal(cTrf(~ii)) = false;
end
setKeyPointValidityInView(fManager,viewId,cVal);
vSet = addView(vSet,viewId,currPose);

Add camera projection factors related to the 3-D point tracks of the current view.

obs2 = pointIds(isTriangulated);
obs2 = obs2(ii);
fCam = factorCameraSE3AndPointXYZ( ...
[ids.pose(viewId*ones(size(obs2))) ids.point3(obs2)], ...
data.intrinsics.K,Measurement=c2D(ii,:), ...
Information=cameraInformation);
allCameraTracks{viewId} = [viewId*ones(size(obs2)) obs2 fCam.Measurement];


slidingWindowFactorGraph.addFactor(fCam);
end

3-D Point Triangulation

Because camera-world pose estimation consumes the latest 2D-2D correspondences, you must frequently
triangulate new 3-D points to maintain enough tracked landmarks.

if status.isMapInitialized
[newXYZ,newXYZID,newPointViews,newPointObs] = triangulateNew3DPoints(fManager,vSet);

if isempty(axMap) && windowState.WindowFull


axMap = axes(figure); %#ok
% Plot the map created by Initial SfM
helperPlotCameraPosesAndLandmarks(axMap,fManager,vSet,removedFrameIds,true);
end

Estimated Pose Refinement Using Factor Graph Optimization

Factor graph optimization reduces the error in trajectory or camera pose estimation. Various factors,
like inaccurate tracking and outliers, can contribute to estimation errors.

Graph optimization adjusts camera poses to satisfy various sensor measurement constraints, like
camera observations (projection of a 3-D point onto the image frame, which generates a 2-D image
point observation), IMU relative poses, and relative velocity change. You can categorize optimization
based on the type of factors used.

• Bundle adjustment — Uses only camera measurements. factorCameraSE3AndPointXYZ
(Navigation Toolbox) is useful for adding camera measurement constraints to the graph.
• Visual-inertial optimization — Along with camera measurements, add IMU measurements, like
gyroscope and accelerometer readings, to the graph by using factorIMU (Navigation Toolbox).

The visual-inertial factor graph system consists of these processes:

• Estimate the camera pose nodes at different timestamps, and connect them to both the camera
projection and the IMU factors.
• Connect the 3-D point landmark nodes to the camera projection factor.
• Connect the IMU velocity and bias nodes to only IMU factors.
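
The following minimal sketch, which is not part of the example code, illustrates this node and factor
wiring for two frames and a single landmark. It assumes that gyro and accel (IMU readings between the
two frames), imuParams, the camera intrinsic matrix K, and a 2-D observation uv are already in scope.

fg = factorGraph;
poseIds = generateNodeID(fg,[2 1]);   % camera pose nodes
velIds = generateNodeID(fg,[2 1]);    % IMU velocity nodes
biasIds = generateNodeID(fg,[2 1]);   % IMU bias nodes
ptId = generateNodeID(fg,1);          % 3-D point landmark node
% The IMU factor connects consecutive pose, velocity, and bias nodes.
addFactor(fg,factorIMU([poseIds(1) velIds(1) biasIds(1) ...
    poseIds(2) velIds(2) biasIds(2)],gyro,accel,imuParams));
% The camera projection factor connects a pose node to a landmark node.
addFactor(fg,factorCameraSE3AndPointXYZ([poseIds(2) ptId],K,Measurement=uv));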

Update the sliding window with the latest 3-D points and camera view pose.

newPointsTriangulated = false;
if ~isempty(newXYZ)
newPointsTriangulated = true;
% Store all new 3D-2D correspondences.
for pIte = 1:length(newPointViews)
allCameraTracks{newPointViews(pIte)} = [allCameraTracks{newPointViews(pIte)}; newPointObs{pIte}];
end
obs = vertcat(allCameraTracks{:});
% Create camera projection factors with the latest
% 3-D point landmark observations in the current image.


fCam = factorCameraSE3AndPointXYZ([ids.pose(obs(:,1)) ...
ids.point3(obs(:,2))],data.intrinsics.K, ...
Measurement=obs(:,3:4), ...
Information=cameraInformation);
addFactor(slidingWindowFactorGraph,fCam);
end

% Set current camera pose node state guess.


svIdds = getSlidingWindowIds(fManager);
slidingWindowFactorGraph.nodeState(ids.pose(svIdds), ...
helperCameraPoseTableToSE3Vector(poses(vSet,svIdds)));

Refine the estimated camera frame poses and 3-D points using factor graph optimization. Because the
optimization is time consuming, it does not run after every frame pose estimate. A parameter controls
the frame frequency at which the optimization runs.

if helperDecideToRunGraphOptimization(curIdx,newPointsTriangulated,params)
% Recreate sliding window factor graph with only the latest key
% frames, for performance.
[slidingWindowFactorGraph,xyzIds] = helperRecreateSlidingWindowFactorGraph( ...
slidingWindowFactorGraph,fManager,allCameraTracks,data.intrinsics,cameraInformation, ...
imuParams,data.gyroReadings,data.accelReadings,data.timeStamps,allFrameIds,ids,data.camToIMUTransform);

% Add guess for newly triangulated 3-D point node states.


xyz = getXYZPoints(fManager,xyzIds);
slidingWindowFactorGraph.nodeState( ...
ids.point3(xyzIds),xyz);

% Fix a few nodes during graph optimization


% to fix the camera pose scale. Unfix them after optimization.
if windowState.WindowFull
fixNode(slidingWindowFactorGraph, ...
ids.pose(fManager.slidingWindowViewIds(1:11)));
else
fixNode(slidingWindowFactorGraph, ...
ids.pose(fManager.slidingWindowViewIds(1)));
end
if status.isIMUAligned
% Fix the first velocity and bias nodes in the sliding
% window.
fixNode(slidingWindowFactorGraph, ...
ids.vel(fManager.slidingWindowViewIds(1)));
fixNode(slidingWindowFactorGraph, ...
ids.bias(fManager.slidingWindowViewIds(1)));
end
% Optimize the sliding window.
optiInfo = optimize(slidingWindowFactorGraph,params.SolverOpts);

Update the feature manager and view set with your optimization results.

slidingWindowViewIds = getSlidingWindowIds(fManager);
if ~status.Mediandepth
status.Mediandepth = true;
xyz = slidingWindowFactorGraph.nodeState( ...
ids.point3(xyzIds));
medianDepth = median(vecnorm(xyz.'));
[posesUpdated,xyz] = helperTransformToNavigationFrame(helperUpdateCameraPoseTable( ...
poses(vSet,slidingWindowViewIds), ...
slidingWindowFactorGraph.nodeState(ids.pose(slidingWindowViewIds))), ...
xyz,rigidtform3d,initialSceneMedianDepth/medianDepth);
% Set current camera pose node state guess.
slidingWindowFactorGraph.nodeState(ids.pose(slidingWindowViewIds), ...
helperCameraPoseTableToSE3Vector(posesUpdated));
% Add guess for newly triangulated 3-D points node states.
slidingWindowFactorGraph.nodeState( ...
ids.point3(xyzIds),xyz);
else
posesUpdated = helperUpdateCameraPoseTable(poses(vSet,slidingWindowViewIds), ...
slidingWindowFactorGraph.nodeState( ...
ids.pose(slidingWindowViewIds)));

xyz = slidingWindowFactorGraph.nodeState( ...


ids.point3(xyzIds));
end
% Update the view set after visual-inertial optimization.
vSet = updateView(vSet,posesUpdated);
setXYZPoints(fManager,xyz,xyzIds);
end

end

Add new feature points to the KLT point tracker when the number of tracked points falls below the
feature tracking threshold.

createNewFeaturePoints(fManager,I);
currPoints = getKeyPointsInView(fManager,viewId);
setPoints(tracker,currPoints);

if ~status.isIMUAligned && useIMU && status.isMapInitialized && windowState.WindowFull


% The sliding window is full and the camera and IMU are not yet aligned.
readyToAlignCameraAndIMU = true;
end

prevPrevI = prevI;
prevI = I;

Visualize the estimated trajectory.

if status.isMapInitialized && (mod(curIdx,10)==0)


if vis
if isempty(axTraj)
axTraj = helperCreateTrajectoryVisualization([-4 7 -8 3 -3 1]);
end
% Visualize the estimated trajectory.
helperVisualizeTrajectory(axTraj,fManager,vSet,removedFrameIds);
end
end
end


Sample image of the scene.


Plot all key frame camera poses and 3-D points. Observe the landmarks on features such as the
ceiling, floor, and pillars.

helperPlotCameraPosesAndLandmarks(axMap,fManager,vSet,removedFrameIds);


Compare Estimated Trajectory with Ground Truth

As a measure of accuracy, compute these metrics:

• Absolute trajectory error (ATE) - Root Mean Squared Error (RMSE) between computed camera
locations and ground truth camera locations.
• Scale error - Percentage by which the computed median scale deviates from the true scale.

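In code, these metrics reduce to a few lines. The following sketch uses hypothetical variables estLoc
and gtLoc, N-by-3 matrices of estimated and ground truth camera locations expressed in a common
reference frame; the helperComputeErrorAgainstGroundTruth function at the end of this example
computes the same quantities from the view set.

ateRMSE = sqrt(mean(sum((estLoc - gtLoc).^2,2)));       % absolute trajectory error, in meters
medianScale = median(vecnorm(gtLoc,2,2))/median(vecnorm(estLoc,2,2));
scaleError = abs(medianScale - 1)*100;                  % scale error, in percent
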
Plot the estimated trajectory against the ground truth.

addedFrameIds = allFrameIds(vSet.Views.ViewId);
axf = axes(figure);
helperPlotAgainstGroundTruth(vSet,data.gTruth,data.camToIMUTransform, ...
addedFrameIds,axf,removedFrameIds);


Evaluate the tracking accuracy, based on root mean square error (RMSE) and median scale error.
helperComputeErrorAgainstGroundTruth(data.gTruth,vSet,allFrameIds,removedFrameIds,data.camToIMUTransform);

"Absolute RMSE for key frame trajectory (m): " "0.20406"

"Percentage of median scale error: " "2.3038"

Supporting Functions

This section details the short helper functions included in this example. Larger helper functions have
been included in separate files.

helperFeaturePointManager manages key point tracks.

helperVIOParameters initializes visual-inertial odometry algorithm tunable parameters.

helperBundleAdjustmentMotion refines pose of current frame using motion-only bundle
adjustment.

helperSelectNewKeyPointsUniformly selects a specified number of newly created key points at a
specified distance from tracked points in the current key frame.

helperRecreateSlidingWindowFactorGraph recreates factor graph with key frame data within
current sliding window.


helperCreateTrajectoryVisualization creates trajectory plot with highlighted sliding window.

helperVisualizeTrajectory updates trajectory plot with latest data stored in view set and
feature manager.

helperPlotAgainstGroundTruth plots estimated trajectory and ground truth trajectory for visual
comparison.

helperGenerateNodeID generates unique factor graph node IDs for fixed number of camera view
poses, IMU velocities, IMU biases, and 3-D point nodes.

function ids = helperGenerateNodeID(fg,maxFrames,maxLandmarks)


% helperGenerateNodeID

ids.pose = generateNodeID(fg,[maxFrames 1]);


ids.vel = generateNodeID(fg,[maxFrames 1]);
ids.bias = generateNodeID(fg,[maxFrames 1]);
ids.point3 = generateNodeID(fg,[maxLandmarks 1]);
end

helperCameraPoseTableToSE3Vector converts pose table to N-by-7 SE(3) pose matrix.

function cameraPoses = helperCameraPoseTableToSE3Vector(cameraPoseTable)


% helperCameraPoseTableToSE3Vector converts camera pose table returned by
% poses method of imageviewset to N-by-7 SE3 pose vector format.

cameraPoses = [cat(1,cameraPoseTable.AbsolutePose.Translation) rotm2quat(cat(3,cameraPoseTable.AbsolutePose.R))];


end

helperUpdateCameraPoseTable updates pose table with latest estimated N-by-7 SE(3) poses.

function cameraPoseTableUpdated = helperUpdateCameraPoseTable(cameraPoseTable,cameraPoses)


% helperUpdateCameraPoseTable updates camera pose table with specified
% N-by-7 SE(3) camera poses.

cameraPoseTableUpdated = cameraPoseTable;
R = quat2rotm(cameraPoses(:,4:7));
for k = 1:size(cameraPoses,1)
cameraPoseTableUpdated.AbsolutePose(k).Translation = cameraPoses(k,1:3);
cameraPoseTableUpdated.AbsolutePose(k).R = R(:,:,k);
end
end

helperDetectKeyPoints detects key points.

function keyPoints = helperDetectKeyPoints(grayImage)


%helperDetectKeyPoints

% Detect multi-scale FAST corners.


keyPoints = detectORBFeatures(grayImage,ScaleFactor=1.2,NumLevels=4);
% Uncomment any of the following or try different detectors to tune
% keyPoints = detectFASTFeatures(grayImage,MinQuality=0.0786);
% keyPoints = detectMinEigenFeatures(grayImage,MinQuality=0.01,FilterSize=3);
end

helperDecideToRunGraphOptimization decides whether to run or skip graph optimization at
current frame.


function shouldOptimize = helperDecideToRunGraphOptimization(curIdx,newPointsTriangulated,params)


% helperDecideToRunGraphOptimization

% If the current frame belongs to the initial set of frames, then run graph
% optimization every frame, because the initial SfM is still running.
% Otherwise, after a number of frames specified by optimization frequency,
% run graph optimization. Lower frequency can result in a more accurate
% estimation, but can increase execution time.
numberOfInitialFrames = 250;
shouldOptimize = (curIdx < numberOfInitialFrames) || (mod(curIdx,params.optimizationFrequency) == 0);
end

helperTransformToNavigationFrame transforms and scales input poses and XYZ 3-D points to
local navigation reference frame of IMU using gravity rotation and pose scale.

function [posesUpdated,xyzUpdated] = helperTransformToNavigationFrame(poses,xyz,gRot,poseScale)


% helperTransformToNavigationFrame transforms and scales the input poses and XYZ points
% using specified gravity rotation and pose scale.

posesUpdated = poses;
% Input gravity rotation transforms the gravity vector from local
% navigation reference frame to initial camera pose reference frame.
% The inverse of this transforms the poses from camera reference frame
% to local navigation reference frame.
Ai = gRot.A';
for k = 1:length(poses.AbsolutePose)
T = Ai*poses.AbsolutePose(k).A;
T(1:3,4) = poseScale*T(1:3,4);
posesUpdated.AbsolutePose(k) = rigidtform3d(T);
end
% Transform points from initial camera pose reference frame to
% local navigation reference frame of IMU.
xyzUpdated = poseScale*gRot.transformPointsInverse(xyz);
end

helperExtractIMUDataBetweenViews extracts IMU data between specified views.

function [gyro,accel] = helperExtractIMUDataBetweenViews(gyroReadings,accelReadings,timeStamps,frameIds)


% helperExtractIMUDataBetweenViews extracts IMU Data (accelerometer and
% gyroscope readings) between specified consecutive frames.

len = length(frameIds);
gyro = cell(1,len-1);
accel = cell(1,len-1);
for k = 2:len
% Assumes the IMU data is time-synchronized with the camera data. Compute
% indices of accelerometer readings between consecutive view IDs.
[~,ind1] = min(abs(timeStamps.imuTimeStamps - timeStamps.imageTimeStamps(frameIds(k-1))));
[~,ind2] = min(abs(timeStamps.imuTimeStamps - timeStamps.imageTimeStamps(frameIds(k))));
imuIndBetweenFrames = ind1:(ind2-1);
% Extract the data at the computed indices and store in a cell.
gyro{k-1} = gyroReadings(imuIndBetweenFrames,:);
accel{k-1} = accelReadings(imuIndBetweenFrames,:);
end
end

helperPlotCameraPosesAndLandmarks plots estimated trajectory and 3-D landmarks.


function helperPlotCameraPosesAndLandmarks(axisHandle,fManager,vSet,removedFrameIds,plotCams)
% helperPlotCameraPosesAndLandmarks plots the key frame camera poses and
% triangulated 3-D point landmarks.

if nargin < 5
% By default, plot the trajectory as a line plot. If plotCams is true, the
% function uses the plotCamera utility to draw the trajectory.
plotCams = false;
end

% Extract key frame camera poses from view set


vId = vSet.Views.ViewId;
kfInd = true(length(vId),1);
[~,ind] = intersect(vId,removedFrameIds);
kfInd(ind) = false;
camPoses = poses(vSet,vId(kfInd));
% Extract triangulated 3-D point landmarks
xyzPoints = getXYZPoints(fManager);
% Compute indices of nearby points
indToPlot = vecnorm(xyzPoints,2,2) < 10;

pcshow(xyzPoints(indToPlot,:),Parent=axisHandle,Projection="orthographic");
hold(axisHandle,"on")
if plotCams
c = table(camPoses.AbsolutePose,VariableNames={'AbsolutePose'});
plotCamera(c,Parent=axisHandle,Size=0.25);
title(axisHandle,"Initial Structure from Motion")
else
traj = vertcat(camPoses.AbsolutePose.Translation);
plot3(traj(:,1),traj(:,2),traj(:,3),"r-",Parent=axisHandle);
view(axisHandle,27.28,-2.81)
title(axisHandle,"Estimated Trajectory and Landmarks")
end
hold off
drawnow
end

helperComputeErrorAgainstGroundTruth computes absolute trajectory error and scale error
compared to known ground truth.

function [rmse,scaleError] = helperComputeErrorAgainstGroundTruth(gTruth,vSet,allFrameIds,removedFrameIds,camToIMUTransform)


% helperComputeErrorAgainstGroundTruth computes the absolute trajectory
% error and scale error.

% Extract key frame camera poses from view set


vId = vSet.Views.ViewId;
kfInd = true(length(vId),1);
[~,ind] = intersect(vId,removedFrameIds);
kfInd(ind) = false;
camPoses = poses(vSet,vId(kfInd));
locations = vertcat(camPoses.AbsolutePose.Translation);
% Convert camera positions to first IMU reference frame
T = se3(camPoses.AbsolutePose(1).R*(camToIMUTransform.rotm')).inv;
locations = T.transform(locations);
% Convert ground truth to first IMU reference frame
g1 = se3(quat2rotm(gTruth(:,4:7)),gTruth(:,1:3));
g11 = (g1(1).inv)*g1;
gl = vertcat(g11.trvec);


gLocations = gl(allFrameIds(vId(kfInd)),1:3);
scale = median(vecnorm(gLocations,2,2))/median(vecnorm(locations,2,2));

rmse = sqrt(mean(sum((locations - gLocations).^2,2)));


scaleError = abs(scale-1)*100;

disp(["Absolute RMSE for key frame trajectory (m): ",num2str(rmse)])


disp(["Percentage of median scale error: ",num2str(scaleError)])
end

helperDownloadData downloads data set from specified URL to specified output folder.

function vioData = helperDownloadData()


% helperDownloadData downloads the data set from the specified URL to the
% specified output folder.

vioDataTarFile = matlab.internal.examples.downloadSupportFile(...
'shared_nav_vision/data','BlackbirdVIOData.tar');

% Extract the file.


outputFolder = fileparts(vioDataTarFile);
if (~exist(fullfile(outputFolder,"BlackbirdVIOData"),"dir"))
untar(vioDataTarFile,outputFolder);
end

vioData = load(fullfile(outputFolder,"BlackbirdVIOData","data.mat"));
end

References

[1] Qin, Tong, Peiliang Li, and Shaojie Shen. “VINS-Mono: A Robust and Versatile Monocular
Visual-Inertial State Estimator.” IEEE Transactions on Robotics 34, no. 4 (August 2018): 1004–20.
https://doi.org/10.1109/TRO.2018.2853729

[2] Antonini, Amado, Winter Guerra, Varun Murali, Thomas Sayre-McCord, and Sertac Karaman. “The
Blackbird Dataset: A Large-Scale Dataset for UAV Perception in Aggressive Flight.” In Proceedings of
the 2018 International Symposium on Experimental Robotics, edited by Jing Xiao, Torsten Kröger, and
Oussama Khatib, 11:130–39. Cham: Springer International Publishing, 2020.
https://doi.org/10.1007/978-3-030-33950-0_12


Visual SLAM with an RGB-D Camera

Visual simultaneous localization and mapping (vSLAM) refers to the process of calculating the
position and orientation of a camera with respect to its surroundings while simultaneously mapping
the environment.

You can perform vSLAM using a monocular camera. However, a monocular camera cannot recover depth
accurately, so the estimated trajectory has an unknown scale and drifts over time. In addition, because
an initial map cannot be triangulated from the first frame alone, a monocular system requires multiple
views to produce it. A more reliable solution is an RGB-D camera, which provides a pair of images: one
RGB color image and one depth image.

This example shows how to process RGB-D image data to build a map of an indoor environment and
estimate the trajectory of the camera. The example uses a version of the ORB-SLAM2 [1] algorithm,
which is feature-based and supports RGB-D cameras.

Overview of Processing Pipeline

The pipeline for RGB-D vSLAM is very similar to the monocular vSLAM pipeline in the “Monocular
Visual Simultaneous Localization and Mapping” on page 1-95 example. The major difference is that
in the Map Initialization stage, the 3-D map points are created from a pair of images consisting of
one color image and one depth image instead of two frames of color images.

• Map Initialization: The initial 3-D world points can be constructed by extracting ORB feature
points from the color image and then computing their 3-D world locations from the depth image, as
illustrated in the sketch after this list. The color image is stored as the first key frame.
• Tracking: Once a map is initialized, the pose of the camera is estimated for each new RGB-D
image by matching features in the color image to features in the last key frame.
• Local Mapping: If the current color image is identified as a key frame, new 3-D map points are
computed from the depth image. At this stage, bundle adjustment is used to minimize reprojection
errors by adjusting the camera pose and 3-D points.
• Loop Closure: Loops are detected for each key frame by comparing it against all previous key
frames using the bag-of-features approach. Once a loop closure is detected, the pose graph is
optimized to refine the camera poses of all the key frames.
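
As a rough illustration of the map initialization step referenced in the list above, the following
sketch back-projects ORB feature locations to 3-D world points using the depth image and a pinhole
camera model. This is a simplified outline rather than the implementation used by the example;
colorImage, depthImage (with depth in meters), and a cameraIntrinsics object named intrinsics are
assumed inputs.

% Simplified sketch of RGB-D map initialization (assumed inputs: colorImage,
% depthImage in meters, and a cameraIntrinsics object named intrinsics).
grayImage = im2gray(colorImage);
points = detectORBFeatures(grayImage);
[features,validPoints] = extractFeatures(grayImage,points);
% Back-project each feature to 3-D using its depth value and the pinhole model.
pixels = round(double(validPoints.Location));   % [u v] pixel coordinates
depths = double(depthImage(sub2ind(size(depthImage),pixels(:,2),pixels(:,1))));
isValid = depths > 0;                           % discard pixels with missing depth
u = pixels(isValid,1); v = pixels(isValid,2); z = depths(isValid);
x = (u - intrinsics.PrincipalPoint(1)).*z/intrinsics.FocalLength(1);
y = (v - intrinsics.PrincipalPoint(2)).*z/intrinsics.FocalLength(2);
xyzPoints = [x y z];   % 3-D map points in the first camera's coordinate frame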

1-26
Random documents with unrelated
content Scribd suggests to you:
made of the exhaustive notes on Yahwe (pp. 243 ff.) and the Ashera (pp.
196 ff.), and the valuable section> on early Hebrew poetry.
[49] Ovid, Fasti, IV., 679 ff.; and cf. Frazer, "Spirits of the Corn," I., p. 297
f.
[50] See Burney, op. cit., additional note on "The mythical element in the
story of Samson."
[51] See "Das Gilgamesch-Epos in der Weltliteratur" (Strassburg, 1906).
[52] "Studien zur Odyssée" in the "Mitteilungen der Vorderasiatischen
Gesellschaft," 1910, Hefte 2-4; 1911, Heft 4.
[53] The fifty-two noble youths, for example, whom Alcinous entrusts with
the task of preparing the ship and escorting Odysseus homewards may
correspond to the fifty-two weeks of the year, sun-heroes who accompany
the sun on his voyage through the year. In the challenge of Euryalus to
Odysseus and the latter's triumph in the discus-throwing, we are to see a
glimmer of the old light-myth. The dance of Halius and Laodamas, with the
purple ball which Polybus made for them, again symbolizes the battle of
light, the colour of the ball being specially significant. Indeed, there are
few limits to be placed to this system of astrological interpretation, since,
according to Dr. Fries, even lawn-tennis goes back to the same idea: he
remarks that "ailes Ballspiel ja bis herab zum Lawn-Tennis auf denselben
Gedanken [der Lichtkampf] zurückgeht" ("Studien zur Odyssée," i., p.
324).
[54] One point, at which the colouring is said to be peculiarly Babylonian,
is the prophecy that death shall come to Odysseus from the sea; for this is
traced to the Babylonian legend of Oannes, the benefactor of mankind,
who ever returns to the sea from which he rose, but here, too, Odysseus is
the god of heaven who sinks at the approach of night.
[55] With regard to its application to the Hebrew narratives, the "Church
Quarterly" reviewer of Dr. Jeremias' work (see above, p. 304, n. 1**)
points out the resemblance between this procedure and Philo's method of
interpretation.
[56] In 1870 the same plan was adopted to discredit Professor Max
Müller's theory of the Solar Myth. The demonstration, though humorous
(since its subject was the professor himself), constituted a legitimate form
of criticism, and it has been borrowed by Dr. Kugler, the Dutch astronomer,
and applied to the astral theory. For the astral theory is in essence the old
Solar Myth revived and grafted on to a Babylonian stem. In his book "Im
Bannkreis Babels" (1910), Dr. Kugler selects at random the historical figure
of Louis IX. of France, and has no difficulty in demonstrating by astral
methods that the extant records of his life and reign are full of solar and
astral motifs.
[57] Cf. Kugler, op. cit.
[58] His interpretation of Euripides' story of the Golden Lamb must share
the fate of the main structure of his theory, but the legend itself may well
have been of Babylonian origin (see above, p. 293).
[59] See above, pp. 106 ff.
[60] For an exhaustive discussion or the astrological material contained in
the omen-literature, see Jastrow, "Religion Babyloniens und Assyriens," II.,
pp. 138 ff. (1909-12). A Neo-Babylonian astronomical treatise, recently
acquired by the British Museum (see Plate XXXII., opposite p. 310),
containing classified and descriptive lists of the principal stars and
constellations, with their heliacal risings and settings, culminations in the
south, etc., does not surest a profound knowledge of astronomy on the
part of its compiler (cf. King, "Cun. Texts," XXXIII., 1912, pp. 30 ff., and
"Proc. Soc. Bibl. Arch.," XXXV., 1913, pp. 41 ff.).
[61] See "Sternkunde und Sterndienst," II., pp. 30 ff.; cf. also Cumont,
"Babylon und der griechische Astrologie," in the "Neue Jahrbücher für das
klassische Altertum," Bd. 27 (1911), pp. Off., and the earlier of his
"American Lectures on the History of Religions," published under the title
"Astrology and Religion among the Greeks and Romans" (1912).
[62] See above, p. 208 f.
[63] They are emphasized by Schiarparelli, in his "Astronomy in the Old
Testament" (Engl. transl.), pp. 39 ff., 99 ff., 104 f.
[64] During their pastoral and agricultural life in Palestine the Hebrews
found it quite sufficient to refer to time by describing the period of the day:
see further, Schiarparelli, op. cit., p. 96.
[65] Amos, v., 20.
[66] Cf. "Greece and Babylon" (published as the Wilde Lectures, 1911).
[67] See his "Cults of the Creek States," Oxford, 1896-1909.
[68] Cf. Hogarth, "Ionia and the East," pp. 27 ff., 64 ff.

APPENDICES

I.—A COMPARATIVE LIST OF THE DYNASTIES OF NÎSIN, LARSA AND BABYLON.


II.—A DYNASTIC LIST OF THE KINGS OF BABYLON.

EXPLANATORY NOTE.—A comma after a king's name implies that


he was succeeded by his son. The figures within parentheses,
which follow a king's name, indicate the number of years he
ruled. Contemporaneous reigns are set opposite each other in the
parallel columns, but their respective lengths are indicated only
approximately by the spacing of the names.
I.—A COMPARATIVE LIST OF THE DYNASTIES OF NÎSIN, LARSA AND BABYLON.

I. A COMPARATIVE LIST OF THE DYNASTIES OF NÎSIN, LARSA, AND BABYLON


(continued).

II. A DYNASTIC LIST OF THE KINGS OF BABYLON.


A comma after a king's name implies that he was succeeded by
his son.

II. A DYNASTIC LIST OF THE KINOS OF BABYLON (continued).


INDEX

A-DARA-KALAMA, king of Second


Dynasty, 202; in List, 320
Abba-dugga, priest of Lagash, 147
Abbasid Caliphate, 11
Abi-eshu', king of First Dynasty,
105, 205 f.; letters of, 171, 192;
in List, 320
Abi-eshu' Canal, 205
Abi-rattash, king of Third Dynasty,
217 f.; in List, 320
Abi-sarê, king of Larsa, 89 f., 147;
in List, 318
Abraham, 305
Abû Habba, 134, 143
Abû Hatab, 85
Abydenus, 115, 280
Abydos, 235
Achæmenian kings, 2, 8, 285 ff., 321
Actæon, 290
Adab, 159, 213
Adad, 148, 256; reading of name of,
150; representations of, 266, 271;
see also E-namkhe, E-ugalgal
Adad-aplu-iddina, king of Fourth
Dynasty, 256 f.; in List, 321
Adad-nirari I., king of Assyria, 243;
in List, 320
Adad-nirari III., king of Assyria, 259
Adad-nirari IV., king of Assyria, 265
Adad-rabi, of Nippur, 102
Adad-shar-ilâni, Kassite ambassador, 238
Adad-shum-iddin, king of Third
Dynasty, 244; in List, 320
Adad-shum-usur, king of Third
Dynasty, 244; in List, 320
Addis, Rev. W. E., 62
Aden, 121
Adhem, 212
Adini, Chaldean ruler, 263
Adonis, 290, 304 f.
Adoption, laws of, 185
Ae-aplu-usur, king of Seventh
Dynasty, 258; in List, 321
Africa, 120
Agade, see Akkad
Agriculture, Babylonian, 167 ff.
Agum I., king of Third Dynasty,
217; in List, 320
Agum, son of Kashtiliash I., 217 f.
Agum-kakrime, king of Third
Dynasty, 210, 218, 241, 290;
genealogy of, 217; in List, 320
Ahîmer, mounds of, 143
Aia, goddess, 261
Aia-khegallum Canal, 153
Aibur-shabû, 30
Ak-su, 212
'Akarḳûf, 15
Akarsallu, 243
Akhenaten (Amen-hetep IV.), king
of Egypt, 219; letters to, 220, 222;
policy of, 222; and the Hittites, 234
Akhetaten, in Egypt, 219
Akhetaten, in Canaan, 225
Akhlamî, 260
Akia, Kassite ambassador, 225
Akkad, 3, 10 f., 118 f., 148 f.; in
astrology, 140; as geographical
term, 244
Akkad (Agade), 159; see also Ishtar
Akshimakshu, rebel leader, 286
Akur-ul-ana, king of Second Dynasty,
202; in List, 320
Al-Bît-shar-Bâbili, 41
Al-'Irâkân, 10
Al-'Irâkayn, 10
Al-Madâin, 9
Alabaster jars, manufacture of, 41
Alcinous, 309
Aleppo, 14 f., 152
Alexander, the Great, 7 f., 73, 115 f.,
287; in List, 321
Alman, 218
Alorus, 115
Altars, Babylonian, 61 f., 64, 66, 69;
Hebrew, 62
Alyattes, king of Lydia, 279, 282
Amal, 204
Amanus, Mt., 262
Amarna, in Syria, 127
Amarna, in Egypt; see Tell el-Amarna
Amasis, king of Egypt, 278
Amêl-Marduk, king of Babylon, 280; in List, 321
Amen-hetep III., king of Egypt,
219 ff., 230, 233; letters of, 220 f.
Amen-hetep IV., king of Egypt, 111,
219; see also Akhenaten
Ammi-baïl, king of Khana, 129 f.
Ammi-ditana, king of First Dynasty,
84, 206 ff.; estimate of, 205; letter
of, 191; in List, 320
Ammi-ditana Canal, 207
Ammi-zaduga, king of First Dynasty,
107 ff., 116 f., 206, 209; letters of,
168; in List, 320
Ammi-zaduga-nukhush-nishi Canal, 210
Ammon, 293
Amorite migration, 119 f.
Amorites, raids of, 136, 182, 204 f.
Amos, 313
Amraphel, king of Shinar, 159
Amurru, West-Semitic god, 150
Amurru, the Western Semites, 119 f.,
125 f., 136, 152, 157, 203, 210, 237,
255; in astrology, 140; their
quarter in Sippar, 207
Amursha-Dagan, of Khana, 131
An-am, king of Erech, 211 f.
Anana, 23
Anatolia, 128
Andrae, W., 23, 25, 28, 33, 53, 64 f.,
68, 71, 137
Animal forms, in Babylonian mythology, 297 f.
Animism, 299
Anna-Bêl, 287
Anshan, in Elam, 282
Ante-chambers, to shrines, 64
Antinous, 309
Antiochus Soter, 287
Anu, 95, 155, 144, 146; see also E-anna
Aphrodite, 290, 304 f.
Apil-Sin, king of First Dynasty, 136,
149 f.; in List, 319 f.
Appeal, right of, 185
Apries, king of Egypt, 277 f.
Apsû, god of the abyss, 206
Arab conquest, of Mesopotamia, 10
Arabia, as cradle of the Semites,
119 f.; physiographical features
of, 120 ff.; Southern, 121
Arabian coast, 6
Arabians, as nomads, 121 ff.
Arad-Nannar, father-in-law of Rîm-Sin, 156
Arad-shasha, king of Erech, 212
Arad-sibitti, 258 f.
Arakha, rebel leader, 286
Arakhab, Sumerian leader, 208
Arakhtu Canal, 205, 207; later employment
of name, 34, 36
Arakhtu-wall, 74
Aramean migration, 120
Arameans, raids by, 4, 258 f., 264 f.;
and Assyria, 260; and the Sutû, 256
Arched doorway, in Babylon, 39
Arches, in vaulted building, 47
Architecture, Babylonian, 19; religious,
63, 66; military, 63, 66 f.
Ardashir I., founder of Sassanian Empire, 9
Ardys, king of Lydia, 279
Argives, 293
Argos, 290
Ari-Teshub, Mitannian name, 139
Aries, 293
Arioch, king of Ellasar, 159
Aristotle, 116
Arkum, 204
Armenia, 1, 262, 265; see also Urartu
Arnuanta, Hittite king, 240 f.
Arsacidæ, 9
Arses, king of Persia, 321
Artaxerxes I., Longimanus, king of Persia, 321
Artaxerxes II., Mnemon, king of Persia, 21, 321
Artaxerxes III., Ochus, king of Persia, 321
Artemis, 290
Aruna, Hittite town, 227
Aryans, as horse-keepers, 216
Arzawa, Hittite kingdom of, 230
Ashdod, 132
Ashduni-erim, king of Kish, 143 f.
Ashera, 307
Ashir, god of Ashur, 139; see also Ashur
Ashir-rîm-nishêshu, king of Assyria, 139;
see also Ashur-rîm-nishêshu
Ashnunnak, 157, 218; see also Tupliash
Ashratum, 150
Ashukhi Canal, 143
Ashur, city, 21, 157 ff.;
discoveries at, 20, 137 ff.;
early inhabitants of, 128, 140
Ashur-bani-pal, king of Assyria, 8, 31, 73, 113, 271 ff.
Ashur-bêl-kala, king of Assyria, 254, 256
Ashur-dân I., king of Assyria, 244; in List, 320
Ashur-etil-ilâni, king of Assyria, 273; in List, 321
Ashur-nadin-shum, king of Babylon, 270; in List, 321
Ashur-naṣir-pal, king of Assyria, 257 ff.; policy of, 262
Ashur-rêsh-ishi, king of Assyria, 112, 255
Ashur-rîm-nishêshu, king of Assyria,
242; in List, 320
Ashur-uballit, king of Assyria, 222 f.,
243; in List, 320
Asiru, father of Pukhia, 212
Ass, as beast of burden, 122, 183, 215
Assault, penalties for, 105
Assyria, expansion of, 12, 205; and
Babylon, 3 f., 157, 241 ff., 273;
and Egypt, 210, 269, 272 f.; and
Mitanni, 220 f., 241; and the
Hittites, 239, 241, 243
Assyrian settlements, in Cappadocia, 227
Assyrians, racial character of, 141
Astarte worship, centres of, 290
Astrologers, Babylonian, 189; Greek, 292
Astrological texts, 140
Astrology, 291 f., 299 f.
Astronomical omens, 100 f.
Astronomy, Babylonian, 289, 311 ff.
Astyages, king of Media, 282
Aten, Egyptian cult of, 219, 223
Athene, 309
Atlila, in Zamua, 259
Atreus, 292 f.
Attica, 290
Aushpia, founder of temple of Ashir, 139
Aÿ, Egyptian priest, 223
Azariah, 14
Aziru, Syrian prince, 234

BA'ALÎM, of Canaan, 126; of Khana, 131
Bâb Bêlti, 40
Bâb-ilî, 14, 28
Babel, Tower of, 15
Babil, mound of, 14 ff., 22, 27; in plan, 23
Babylon, strategic position of, 4 ff.;
remains of, 14 ff.; walls of, 21 ff.,
29 ff.; size of, 27; plans of, 10, 23
Babylonia, climate of, 40, 170; fertility
of, 167; names for, 244;
political centre of gravity in, 3, 9
Babylonian Chronicle, 265
Babylonian language, 1, 218 f.
Baghdad, 5, 11, 14 f., 17, 22
Bahrein, 6
Baka, in Sukhi, 266
Bakâni, 263
Ball, Rev. C. J., 37
Banti-shinni, Amorite prince, 237 f.
Bardiya, 285
Barges, Babylonian, 180 f.
Barter, 196
Barzi, 149
Baṣra, 9 f., 11
Baṣu, 153, 155
Battlements, in architecture, 67
Bau, goddess, 297
Bau-akhi-iddina, king of Eighth
Dynasty, 265; in List, 321
Bavian, 112
Be'er-sheba', 307
Bees, in Sukhi, 260 f.
Behistun, 286
Bêl, taking hands of, 38, 296
Bêl-aplu-iddin, Babylonian general, 260
Bêl-ibni, king of Babylon, 270; in List, 321
Bêl-nadin-[akhi], king of Third
Dynasty, 245; in List, 320
Bêl-shalti-Nannar, daughter of Nabonidus, 281
Bêl-shar-usur, see Belshazzar
Bêl-shemeà, 212
Bêl-shum-ishkun, father of Neriglissar, 280
Bêl-simanni, rebel leader, 286
Belshazzar, 282 ff.
Benjamin, of Tudela, 14 f.
Bentresh, Hittite princess, 240
Berossus, 47, 280, 301; history of, 106;
dynasties of, 114 ff.
Beuyuk Kale, 230
Bevan, Prof. A. A., 305
Bevan, E. R., 7, 280
Bewsher, Lieut. J. B., 17
Bezold, Prof. C., 72, 107, 110, 219
Bird, of Bau, 297
Birds, as foundation-deposits, 63
Birizzarru, West-Semitic month, 131
Birs-Nimrûd, 15, 22; see also El-Birs,
Borsippa
Bismâya, 20, 138
Bît-Adini, 260
Bît-Bazi, 257
Bît-Iakin, 269
Bît-Karkara, 152 f., 159
Bît-Karziabku, 253
Bît-Khadippi, 260
Bît-Pir-Shadû-rabû, 248
Bît-rêsh, 287
Bît-Sikkamidu, 249
Bitti-Dagan, of Khana, 132
Black Sea, 5
Bliss, F. J., 125
Bloomfield, Prof. M., 227
Boat-builders, 180
Boatmen, 180 f.
Boats, of Khonsu, 238 ff.
Boghaz Keui, 219, 230; letters from,
219 f., 239 f.; see also Khatti
Boissier, A., 133, 154 f., 286
Borsippa, 60, 159, 259, 263 f.;
temple-tower of, 77 f.; plan of, 16
Bosanquet, R. H. M., 106
Botta, Emil, 18
Boundary-stones, 241, 244 ff., 252
Breach of promise, of marriage, 186
Breasted, Prof. J. H., 111, 133, 219,
222, 235, 240
Breccia, for paving, 59
Bribery, punishment for, 189
Bride-price, 186
Bridge, over Euphrates, 47, 60, 74 f.,
81; over canal, 37
Bridge-building, 249
Bridges-of-boats, 81, 202, 264
Bronze age, at Carchemish, 128
Bronze-casting, 207
Bronze step, from E-zida, 27, 77
Budge, Dr. E. A. Wallis, 111, 150,
176, 219, 235, 241
Builders, responsibilities of, 184
Building, art of, 19 f.
Bull, in mythology, 55, 294, 303; in
symbolism, 298
Bulls, enamelled, 50 f.
Bunutakhtun-ila, vassal-ruler of Sippar, 143
Bûr-Sin II., king of Nîsin, 147; in List, 318
Burial, Neo-Babylonian, 66 f.
Burna-Burariash, see Burna-Buriash, 217
Burna-Buriash, king of Third Dynasty, 242 f.;
date of, 110 f.; letters
from, 220 ff.; in List, 320
Burna-Buriash, Kassite chieftain, 217
Burney, Prof. C. F., 290, 292, 307
Burrows, Prof. R. M., 293
Burusha, jewel-worker, 259
Bury, G. W., 121
Byblos, 290

CALENDAR, regulation of, 189 f.
Callisthenes, 116
Cambyses, king of Persia, 285; in List, 321
Camel, introduction of, 122
Canaan, 1; inhabitants of, 119 f.,
124 ff.; civilization of, 124 ff.;
Egyptian conquest of, 219
Canaanites, and Babylon, 224 f.
Canals, repair of, 170 f.
Cancer, constellation, 301
Cappadocia, 3 f., 227
Capricorn, constellation, 301
Caravans, 182 f., 225, 237
Carchemish, 128 f., 182, 227, 260, 262;
Battle of, 277; excavations at, 127
Carchemisian, pottery-name, 128
Castor, star, 310
Cedar, in construction, 40, 52, 141, 263
Central Citadel, of Babylon, 28
Cerealia, 307
Chaldea, 262 ff., 270
Chaldeans, 257, 263 f.; of Nagitu, 270
Chedorlaomer, king of Elam, 159
Chief-baker, 305
Chief-butler, 305
Chiera, E., 92 ff., 102, 104, 150 ff., 155 f., 204
China, Great Wall of, 21; city-sites in, 22
Chronicles, 210
Chronology, 87 ff., 117 f.
Cilicia, 230, 262
Cilician Gates, 4 f.
Cimmerians, 269, 275, 279
Citadel, character of Babylonian, 27
Class privileges, 164 f.
Clay, Prof. A. T., 91, 99, 150, 156,
245, 254, 282; discoveries of,
89 f., 94 ff., 148, 163, 287
Code, of Hammurabi, 160 f., 252;
Prologue of, 158 f.; Sumerian, 163, 299
Collingwood, Lieut. W., 17
Columns, in decoration, 44
Combe, E., 164
Commercial life, 181 f., 195, 207, 237, 285 ff.
Condamin, Père A., 129
Contracts, 41, 109, 163, 183
Copper, ratio of, to silver, 211
Corvée, 193, 249, 253
Courts, of justice, 40 f.; of palace, 28, 30, 40 f.
Cowell, P. H., 257
Craig, Prof. J. A., 107
Creation legends, 195, 306
Crœsus, king of Lydia, 282
Ctesias, 21, 24, 47
Ctesiphon, 5, 9, 11
Cult-images, of kings, 206
Cumont, Prof. Franz, 292, 312
Cuq, Prof. Edouard, 247, 250
Curses, on boundary-stones, 246 f.
Curtius Rufus, 47, 49
Cuthah, 146 f., 149, 159, 263
Cyaxares, king of Media, 276, 278 f.
Cylinder-seals, 127 f., 261, 271, 298 f.
Cyprus, 290
Cyrene, 278
Cyrus, king of Persia, 282 ff., 286; in List, 321

DAGAN, 131, 136, 159
Dagan-takala, Canaanite prince, 132
Dagon, god of Ashdod, 132;
as Ba'al of Khana, 131;
cult of, on Euphrates, 132
Damascus, 11, 120, 262
Damik-Adad, in Akkad, 249
Damik-ilishu, king of Nîsin, 93 f.,
97, 101, 153 ff., 209; in List, 319
Damki-ilishu, king of Second Dynasty, 208 f.;
in List, 320
Darius I. Hystaspis, king of Persia, 7, 285 f.;
in List, 321
Darius II., king of Persia, 321
Darius III., Codomanus, king of
Persia, 287; in List, 321
Date-formulæ, 190
Date-palm, cultivation of, 177
David, 307
Davies, N. de G., 223
De Sarzec, E., 138
Deification of kings, 206
Dêlem, 141
Delitzsch, Prof. Friedrich, 6, 33, 35,
139, 151, 244
Deluge, 114 f.
Deportation, Assyrian policy of, 267 f.
Dêr, or Dûr-ilu, 145, 244, 253, 269
Dêr ez-Zôr, 129 f.
Dhorme, Père Paul, 281
Diarbekr, 5
Dieulafoy, Marcel, 80 f.
Dilbat, 141 f., 159; site of, 141
Dilmun, 6
Diodorus, 48 f., 81
Diorite, from Magan, 6
Dioscuri, 303
Disease, Babylonian conception of, 194
Divination, 299; lamb for, 206
Divorce, laws of, 185 f.
Dog, of Gula, 297; votive figure of, 147
Double-dates, at Nîsin, 94 ff.
Draco, constellation, 292
Dragon, of Marduk, 55, 261;
of Nabû, 79; of the deep, 195
Dragon-combat, 306
Dragons, of chaos, 195, 306;
enamelled, 51 f.; bronze, 52
Drainage, Babylonian system of, 45
Driver, Prof. S. R., 126
Drowning, as penalty, 185
Dudkhalia, Hittite king, 160, 240
Dungi, king of Ur, 145
Dûr-Abi-eshu', on Tigris, 205
Dûr-Ammi-ditana, on Zilakum Canal, 207
Dûr-Ammi-zaduga, on Euphrates, 209
Dûr-Ashur, in Zamua, 259
Dûr-Enlil, in Sea-Country, 217
Dûr-Cula-dûru, in Akkad, 148
Dûr-gurgurri, on Tigris, 151, 189, 191
Dûr-Iabugani, in Akkad, 148
Dûr-ilu; see Dêr
Dûr-Kurigalzu, in Akkad, 218, 256
Dûr-Lagaba, in Akkad, 148
Dûr-muti, 149
Dûr-Padda, in Akkad, 148
Dûr-Papsukal, 204
Dûr-Sin-muballit, 153
Dûr-Sin-muballit-abim-walidia, 158
Dûr-uṣi-ana-Ura, in Akkad, 148
Dûr-Zakar, fortress of Nippur, 147 f., 204
Dushratta, king of Mitanni, 221, 234
Dwellings, arrangement of, 41 f.

E-ANNA, temple of Anu and Ishtar at Erech, 159, 211, 287
E-anna-shum-iddina, governor of
Sea-Country, 255 f.
E-apsû, temple of Enki at Eridu, 158
E-babbar, temple of Shamash at
Sippar, 110, 149, 159, 201
E-babbar, temple of Shamash at
Larsa, 151, 159
E-galmakh, temple at Nîsin, 159
E-gishshirgal, temple of Sin at Ur, 159, 200;
temple of Sin at Babylon, 206
E-ibianu, temple, 149
E-kankal, temple of Lugal-banda and
Ninsun at Erech, 211
E-khulkhul, temple of Sin at Harran, 276
E-kiku, temple of Ishtar at Babylon, 149
E-kua, shrine of Marduk in E-sagila, 72
E-kur, temple of Enlil at Nippur, 158
E-kur-shum-ushabshi, priest, 261
E-makh, temple of Nininakh in
Babylon, 61 ff., 65;
ground-plan of, 64;
in plans, 23, 83
E-makh, temple at Adab, 159
E-malga-uruna, temple of Enlil at
Dûr-Enlil, 217 f.
E-meslam, temple of Nergal at
Cuthah, 149, 159
E-mete-ursag, temple of Zamama at Kish, 159
E-mishmish, temple of Ishtar at Nineveh, 159
E-namkhe, temple of Adad at Babylon, 155
E-namtila, temple, 209
E-ninnû, temple of Ningirsu at
Lagash, 153, 159, 299
E-patutila, temple of Ninib at Babylon, 23;
ground-plan of, 71
E-sagil-shadûni, reputed father of usurper, 250
E-sagila, temple of Marduk at Babylon,
28 f., 37, 80 f., 142, 149, 158,
283 f., 280 f.; remains of, 71 ff.;
excavation of, 20 f.; orientation
of, 69; plan of, 74; restoration of,
75; in plan, 23
E-temen-anki, temple-tower of E-sagila,
38, 60, 73 ff.; plan of, 74;
restoration of, 75; in plan, 23;
see also Tower of Babylon
E-ugalgal, temple of Adad at Bît-Karkara, 159
E-ulmash, temple of Ishtar at Akkad (Agade), 159
E-ulmash-shakin-shum, king of Sixth
Dynasty, 257; in List, 321
E-zida, temple of Nabû at Borsippa,
16, 78 f., 159, 279; plan of, 78;
bronze step from, 27, 77
Ea, 73, 297; see also Enki
Ea-gamil, king of Second Dynasty,
211 f., 217; in List, 320
Ea-mukîn-zêr, king of Fifth Dynasty, 257;
in List, 321
Ea-nadin-[...], possibly king of
Fourth Dynasty, 255
Ecbatana, 8, 286
Eclipses, solar, 257, 279
Ecliptic constellations, 310 f.
Edina, in S. Babylonia, 255
Egypt, 1, 4, 38, 41, 219 ff.; and
Canaan, 120 f., 219; and Syria,
276 f.; and Assyria, 269, 272; and
Lydia, 283; and Persia, 285; and
the Hittites, 234 ff.; as Asiatic
power, 219 ff.; irrigation in, 172;
boundary-records of, 247; in early
Christian writings, 305
Ekallâti, 256
Ekron, 270
El-Birs, 15; see also Birs-Nimrûd
El-Ohêmir, see Aḥimer
Elam, 7 f., 133, 315; and the Western
Semites, 7, 150 ff.; and the later
Kassites, 244, 252; in alliance
with Babylon, 264, 269, 272; trade
of, 5, 181; importations from,
207; goddesses of, 296; systems
of writing in, 2; in astrology, 140
Eldred, John, 14 f.
Electra, of Euripides, 293
Elijah, 307
Eltekeh, 270
Emblems, divine, 55, 79, 297;
on boundary-stones, 246 f.
Emisu, king of Larsa, 89 f., 134; in List, 318
Emutbal, 150, 154, 157, 198, 200
Enamelled brickwork, 43
Enamelling, process of, 57
Enannatum, chief priest in Ur, 135
Enki, 95, 155, 297; see also Ea, E-apsû
Enlil, 95, 194; cult of, at Babylon,
155, 206; see also E-kur,
E-malga-uruna, Nippur
Enlil-bani, king of Nîsin, 148, 150; in List, 319
Enlil-kudur-usur, king of Assyria, 244; in List, 320
Enlil-nadin-apli, king of Fourth
Dynasty, 112, 254 f.; in List, 320
Enlil-nadin-shum, king of Third
Dynasty, 244; tablets of time of, 84; in List, 320
Enlil-nirari, king of Assyria, 243; in List, 320
Entemena, patesi of Lagash, 246;
cult of deified, 206
Ephesus, 5
Equinoxes, precession of, 312
Erba-Marduk, king of Eighth
Dynasty, 264, 269; in List, 321
Erech, 11, 113, 135, 147, 155, 159,
198 f., 287; local dynasty of, 211;
Neo-Babylonian letter from, 281
Ereshkigal, 304
Eridu, 135, 147, 152 f., 155, 158;
oracle of, 153, 158
Esarhaddon, 139, 269, 271 f.;
Babylonian policy of, 271, 273;
Black-Stone of, 176; in List, 321
Etana, 290 f.
Ethics, Babylonian, 2
Euphrates, 4 f., 185; change in course
of, 30, 37 f., 58; West Semitic
settlements on, 157, 159; canalization
of, 156; irrigation on, 173 f.
Euphrates route, 4, 8
Euripides, 293, 311
Europe, Babylonian influence on, 12, 289
Euryalus, 309
Eurymachus, 309
Eusebius, 114 f., 116, 276, 280
Evil spirit, possession by, 240
Exchange, medium of, 196
Exorcist, Babylonian, 240
Expansion-joint, in building, 19
Ezekiel, 62, 304

FAÇADE, of Nebuchadnezzar's Throne Room, 43 f.
Faluja, 14
Family-life, in Babylonia, 184 ff.
Fâra, 85, 300
Farming, Babylonian, 168 f.
Farnell, Dr. L. R., 314
Feast, of New Year, 190, 254, 259, 296, 302, 308
Fetish, 294
Fillets, in temple-decoration, 63
Fishes, constellation, 310 f.
Fishing-rights, 171
Flocks, tribute of, 168
Fortification-walls, 28, 32 f.; drainage of, 46
Foundation-deposits, 63
Foxes, with firebrands, 307
Frank, O., 176
Frankincense, 62
Frazer, Sir J. G., 290, 305, 307
Fries, Dr. Carl, 308 f.

GABBARU-IBNI, in Sukhi, 266
Gagûm, Cloister of Sippar, 154, 207
Gate-house, of palace, 40
Gate-sockets, 63, 246
Gates, of Babylon, 27
Gaddash, see Gandash
Gandash, founder of Third Dynasty, 216; in List, 320
Garrison-duty, 192
Garstang, Prof. John, 230
Gaugamela, 287
Gaumata, the Magian, 285 f.
Gaza, 282
Genesis, 159 f., 305
Geshtinna, goddess of the plough, 176
Gezer, 126
Gift, deeds of, 129 ff.
Gilead, 305
Gilgamesh, 212, 308; legends of, 290
Gimil-ilishu, king of Nîsin, 134; in List, 318
Girsu, 152 f., 155, 159
Glacial epoch, 124
Goat-fish, of Enki, 297
Gobryas, governor of Gutium, 283;
see also Gubaru
Golden Age, of Hesiod, 302
Golden Lamb, legend of, 292 f., 311
Goliath, 307
Grain-drill, Babylonian, 176
Granary, at Babylon, 158
Greece, and Babylon, 12, 287, 290, 314;
and Persia, 286 f.
Greek mythology, Babylonian influence on, 12, 289, 315
Greek names, as privilege, 287
Greek theatre, at Babylon, 287
Grooves, stepped, in temple-decoration, 63
Gubaru, Babylonian general, 281;
governor of Gutium, 281, 283 f.
Gudea, patesi of Lagash, 6, 298
Gufa, prototype of, 179 f.
Gula, 297
Gulkishar, king of Second Dynasty,
112 f., 202, 212; in List, 320
Gungunum, king of Larsa, 89 f.,
135 f.; in List, 318
Gunkel, Prof. H., 300
Gutium, 139, 218, 283
Gutschmid, A. von, 115
Gypsum-plaster, as decoration, 43

HADES, 308
Hagen, O. E., 283
Hakluyt, Richard, 15
Halius, 309
Hall, H. R., 111, 126, 160, 219, 235, 277
Halys, 5, 229, 279
Hammam, in Syria, 127
Hammurabi, king of First Dynasty,
89 f., 99 ff., 103, 128, 130, 153 ff.,
156 f., 290; character of, 100 f.;
empire of, 158 f.; Babylon of, 20,
84 ff.; palace of, 86; Code of, 154,
158 f., 161 ff., 252; letters of, 181,
188 ff.; date of, 94, 110 f.; period
of, 39, 162 ff., 315; in Lists, 319 f.
Hammurabi-khegallum Canal, 155
Hammurabi-nukhush-nishi Canal, 158
Hammurabih, king of Khana, 130
Hananiah, 14
Handcock, P. S. P., 120
Hanging Gardens, of Babylon, 40 ff., 279
Harbour, of Babylon, 30
Harp, Sumerian, 298
Harran, 276, 282
Harûn-ar-Rashîd, 11
Hastings, Dr. James, 102
Haverfield, Prof. F. J., 22
Hebrew religion, 12; traditions, 159; law, 299
Hebrews, altars of, 62; and Babylonian mythology, 280
Helios, 307
Hera, 290
Heracles, 290
Herds, tribute of, 168
Herdsmen, Babylonian, 168 f.
Herodotus, 4 f., 15, 21 f., 24, 26 f.,
38, 61 f., 72, 76 f., 81, 85, 167, 177,
179, 270, 279
Hesiod, 302
Heuzey, Léon, 298 f.
Hezekiah, king of Judah, 270
High places, Canaanite, 126
Hilla, 14, 23
Hilprecht, Prof. H. V., 91 f., 112,
134, 150, 208, 212, 242
Himyarite period, 121
Hincke, Prof. W. J., 246, 250
Hindîya Canal, 16
Hipparchus, of Nicæa, 312
Hire of land, system of, 167
Hit, 174
Hittite correspondence, character of, 239 f.;
invasion, 3, 84, 210;
states, 230; migration, 128
Hittite Empire, rise of, 220;
history of, 229 ff.; fall of, 241;
communications of, 5; as barrier, 314
Hittites, 3, 128, 234 ff., 243;
racial character of, 226 f.;
civilization of, 227 f.;
art of, 228, 233;
inscriptions and records of, 226 ff.
Hogarth, D. G., 4 f., 120, 128, 276, 278, 282, 314;
Carchemish excavations of, 127
Homer, see Odyssey
Homera, mound of, 29, 31, 35; in plan, 23
Horse, introduction of, 122, 198, 215 f.
Horses, export of, 224
House-property, in Babylon, 84
Houses, Babylonian, 184
How, Walter W., 5, 7, 21
Hrozný, F., 97 f., 150
Huber, E., 134
Humped cattle, 175, 202
Huntington, Ellsworth, 121
Hydra, constellation, 292
Hydraulic machine, 48
Hyksos, in Egypt, 132 f.

IADI-KHABUM, antagonist of Samsu-iluna, 204
Iakhzir-ilum, of Kazallu, 146
Iakin, king of Sea-Country, 263
Ialman, Mt., 259
Ia'mu-Dagan, in Khana, 132
Iashma(?)-Dagan, 132
Iawium, vassal-ruler of Kish, 145
Iazi-Dagan, of Khana, 131
Ibkushu, priest, 94, 101
Icarus, 290
Idamaraz, 198
Idin-Dagan, king of Nîsin, 132, 134; in List, 318
Igi-kharsagga, 155
Igitlim, possibly a king of Khana, 130
Iluma-ila, vassal-ruler of Sippar, 143
Iluma-ilum, founder of Second
Dynasty, 104 f., 199 f., 205; in List, 320
Ilu-shûma, king of Assyria, 136
Image-worship, Babylonian, 294 ff.
Imgur-Bêl, wall of Babylon, 30 ff., 51
Immer, suggested reading of Adad's name, 150
Immerum, vassal-ruler of Sippar, 143
Incantations, 194
Incense, 314
India, and the Persian Gulf, 7;
village communities of, 250
India Office, Babylonian map issued by, 16
Indra, Aryan god, 227
Infant-sacrifice, 127
Inheritance, laws of, 185
Intercalary months, 189 f.
Ionia, cities of, 279
'Irâk, 9 f., 11
Iranian plateau, 5
Iranians, groups of, 282
Irnina Canal, 171
Irrigation, method of, 176 f.
Irrigation-machines, 172 ff.
Irsit-Bâbili, city-square of Babylon, 28
Isaiah, 292
Isharlim, king of Khana, 129 f.
Ishbi-Ura, founder of Dynasty of
Nîsin, 132 ff.; in List, 318
Ishhi-aswad, mound, 84; in plan, 23
Ishkhara, goddess, 297
Ishkibal, king of Second Dynasty, 202; in List, 320
Ishkun-Marduk, city, 207
Ishkur, suggested reading of Adad's name, 150
Ishme-Dagan, king of Nîsin, 132, 134 f.; in List, 318
Ishtar, of Akkad (Agade), 23, 69 f.,
83 f., 159; of Ashur, 20, 137; of
Babylon, 80, 149; of Bît-Karkara,
159; of Erech, 159; of Khallabu,
159; of Kibalbarru, 155; of Kish,
143; of Nineveh, 159, 221 f.; and
Tammuz, 290; Descent of, 304;
lion of, 55, 58 f.; representation
of, 266; see also E-anna
Ishtar Gate, at Babylon, 33, 51 ff.,
57; beasts on, 50 f., 54 ff.; section
of, 53; restoration of, 28; ground-plan
of, 52; in plans, 30, 57
Isin, Dynasty of, 254 ff.; original
form of name of, 91, 254; see also Nîsin
Islam, 10, 120
Israel, 12, 290
Itêr-pîsha, king of Nîsin, 148; in List, 318
Itti-ili-nibi, king of Second Dynasty, 208; in List, 320
Itti-Marduk-balâtu, Kassite chief minister, 237
Itti-Marduk-balâtu, the Aramean, 256
Iturmer, local god of Tirḳa, 131

JASTROW, Prof. Morris, 101, 312
Jensen, Prof. P., 112, 308
Jerablus, 127
Jeremiah, 277, 280
Jeremias, Dr. Alfred, 292, 304 f., 307
Jericho, 126
Jerusalem, 277, 280
Jewish traditions, 313 f.
Jews, captivity of, 277; of Baghdad, 14
Johns, Canon C. H. W., 131, 145, 162, 190, 304
Johns, Mrs. C. H. W., 18
Johnson, C. W., 179
Jones, Capt. J. Felix, 17
Jordan, 306
Joseph, 305
Josephus, 278, 280, 307
Joshua, 305 f.
Josiah, king of Judah, 276
Judah, 270, 276 f.
Judges, Babylonian, 188
Jumjumma, 23
Jupiter Ammon, 303
Justi, Prof. Ferdinand, 215
Justice, administration of, 188 f.

KADASHMAN-ENLIL I., king of Third Dynasty, 220 f., 241 f.; in List, 320
Kadashman-Enlil II., king of Third Dynasty, 236 ff.,
240, 243; in List, 320
Kadashman-Kharbe I., king of Third Dynasty, 241; in List,
320
Kadashman-Kharbe II., king of Third Dynasty, 244; in List,
320
Kadashman-turgu, king of Third Dynasty, 236 f.; in List,
320
Kadesh, Battle of, 227, 235
Kagmum, 157
Kandalanu, king of Babylon, 273; in List, 321;
see Ashur-bani-pal
Kâr-bêl-mâtâti, 259
Kâr-Irnina, 171
Kâr-Ishtar, 243
Kâr-Shamash, 149; on Tigris, 158;
on Euphrates, 207
Kâr-Sippar, 204
Kara-indash I., king of Third Dynasty, 221, 241 f.; in List,
320
Kara-indash II., king of Third Dynasty, 243; in List, 320
Kara-Kuzal, 127
Karashtu, Babylonian general, 255
Karduniash, 244
Karnak, 220, 235
Kashbaran, 154
Kashdakh, in Khana, 130
Kashshû-nadin-akhi, king of Fifth Dynasty, 257; in List,
321
Kashtiliash I., king of Third Dynasty, 217; in List, 320
Kashtiliash II., king of Third Dynasty, 243 f.; in List, 320
Ḳaṣr, mound, 14, 16 f., 21 ff., 24,
27 f., 30 ff.; buildings on, 28 ff.
Kassites, 3, 130 f., 197 f., 214 ff.;
racial character of, 214 f.; introduction
of horse by, 215 f.;
Babylon of, 29
Kazallu, 144 ff., 149
Keleks, early, 178 f.
Kesh, 155, 159
Khabilu, river, 150
Khabkha-tribe, 207
Khâbûr, 129 ff., 260
Khabur-ibal-bugash Canal, 130
Khalambû tribe, 140
Khalium, vassal-ruler of Kish, 145
Khallabu, 152, 159
Khalule, 271
Khana, kingdom of, 129 ff., 157, 210 f., 218
Khanî, 210; see also Khana
Khanirabbat, 222
Khatti, Hittite capital, 229 ff.; site
of, 219; communications of, 5;
use of term, 210; see also Hittites
Khattusil I., Hittite king, 230
Khattusil II., Hittite king, 236 ff., 243
Khinnatuni, in Canaan, 225
Khonsu, Egyptian Moon-god, 222, 238 ff.
Khorsabad, 176
Khumbanigash, king of Elam, 269
Khurpatila, king of Elam, 243
Khurshitu, 212
Khuṣṣi, 254
Kibalbarru, 141, 144, 155
Kidin-Khutrutash, king of Elam, 244
Kikia, early ruler of Ashur, 139
Kinunu, West-Semitic month, 131
Kiriath-arba, 307
Kirmanshah, 5
Kish, 143 f., 159, 203 f.
Kisurra, 85, 155, 199, 212 f.
Knudtzon, Prof. J. A., 219, 221 f., 224 f., 230, 243
Kohler, Prof. J., 102
Koldewey, Dr. Robert, 17 f., 23, 25,
30, 32 f., 35, 46 ff., 50, 52 f., 67 f.,
74, 76 f., 80, 83
Kudur-Enlil, king of Third Dynasty, 243; in List, 320
Kudur-Mabuk, ruler of Western Elam, 89,
113, 150 ff., 154, 156, 159;
Adda of Amurru, 152
Kudur-Nankhundi, king of Elam, 113
Kudurrus, or boundary-stones, 241, 244, 245 ff.
Kûfa, 9 f., 11
Kugler, Dr. F. X., 106 ff., 116, 310 ff.
Kurigalzu I., king of Third Dynasty, 241, 243; in List, 320
Kurigalzu II., king of Third Dynasty, 221, 224, 242; in
List, 320
Kurigalzu III., king of Third Dynasty, 243; in List, 320
Kussar, Hittite city, 230
Kutir-Nakhkhunte, Elamite prince, 244, 252
Kweiresh, 23

LABASHI-MARDUK, king of Babylon, 281; in List, 321
Labynetus, 279; see Nebuchadnezzar II.
Lachish, 270
Lagamal, goddess of Dilbat, 142
Lagash, 147, 152 f., 155, 159, 212 f.
Land, sale of, 195 f.
Land-tenure, system of, 167, 249 ff.
Landowners, Babylonian, 167 f.
Langdon, S., 37, 40, 52, 72, 92, 111,
145, 276, 280, 282, 290
Laodamas, 309
Lapis-lazuli, at time of First Dynasty, 207;
Kassite export of, 224
Larsa, Dynasty and kings of, 89 ff.,
110, 133 f., 147 f., 150 ff., 158 f.,
198, 200; Sun-temple at, 135;
tablets from, 156
Law, Babylonian, 299; systematization
of, 196; spread of, 237 f.
Lawrence, T. E., 127
Layard, Sir A. H., 17 f., 106, 295 f.
Le Strange, G., 10 f.
Lebanon, 72, 225; monolith from, 203 f.
Legends, 195
Legislation, 2; see also Code, Law
Lehmann-Haupt, Prof. C. F., 116
Leo, constellation, 310
Libil-khegalla Canal, 30, 37
Libit-Ishtar, king of Nîsin, 134 ff.; in List, 318
Libit-Ishtar, governor of Sippar, 136
Light-wells, 28, 44
Lightning-fork, of Adad, 297
Lion, of Ishtar, 55
Lion Frieze, at Babylon, 30, 57 ff.
Lions, of the Sun-god, 298 f.; enamelled, 44
Lirish-gamium, daughter of Rîm-Sin, 156
Literature, Babylonian, 2, 194 f., 299
Liver-markings, 297; see also Divination
Lot, 305
Lugal-banda, 211
Lugal-diri-tugab, 148
Lukhaia, on Arakhtu Canal, 205
Lulubu, 255
Lunar observations, 140