Changing Reference Frames (30 Points)
Nearly every robot task requires operating in multiple reference frames. Download and open lidarScan.mat,
containing a sample lidar scan (as returned by LidarSensorCreate.m). The 681 data points correspond to
the measured ranges (in meters), equally spaced over the angles [−120°, 120°].
1. Edit the file lidar_range2xy.m to convert the raw lidar scan into 2D local robot coordinates. Assume
the lidar sensor is located at the front of the robot pointing forward (i.e. the sensor pose in local
coordinates is [x_l, y_l, θ_l] = [r, 0, 0], where r is the robot radius). The robot's x-axis points forward
and its y-axis points left. Assume the robot radius is 0.2 meters. A minimal sketch of this conversion
(and of the frame transforms in parts 2 and 3) appears after this list. Submit this function in the
autograded assignment Homework 1 lidar range2xy on Canvas.
2. Edit the file robot2global.m to transform any 2D point from the local robot coordinates into global
coordinates, given the robot’s pose. Submit this function in the autograded assignment Homework
1 robot2global on Canvas.
3. Edit the file global2robot.m to transform any 2D point from global coordinates into local robot
coordinates, given the robot’s pose. Submit this function in the autograded assignment Homework
1 global2robot on Canvas.
4. In the same figure, plot the lidar scan (points in 2D) in the global reference frame using
lidar_range2xy.m, assuming the robot pose is
• [3, −4, 2π/3]
• [0, 3, π/2]
Plot each set of points (corresponding to the scan as viewed by the robot) in a different color. Make
sure to label your axes and include a legend.
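The geometry behind parts 1–3 is a standard 2D rigid-body transform. The sketch below is only a reference: the argument names, function signatures, and assumed scan ordering are placeholders, and your submitted functions must match the interfaces defined in the provided template files.

% Illustrative sketch only; argument names, signatures, and the assumed scan
% ordering (first element at +120 degrees) may differ from the templates.
function xyL = lidar_range2xy_sketch(ranges, robotRad, angRangeDeg)
% Convert raw lidar ranges to 2D points in the local robot frame.
% The sensor sits at [robotRad, 0, 0] in local coordinates.
angles = linspace(angRangeDeg/2, -angRangeDeg/2, numel(ranges))*pi/180;
xyL = [robotRad + ranges(:).*cos(angles(:)), ranges(:).*sin(angles(:))];
end

function xyG = robot2global_sketch(pose, xyL)
% Transform a point [x y] from the local robot frame into the global frame,
% given the robot pose [x y theta] in the global frame.
th  = pose(3);
R   = [cos(th) -sin(th); sin(th) cos(th)];   % rotation: local -> global
xyG = (R*xyL(:) + pose(1:2)')';
end
% global2robot is the inverse transform: xyL = (R'*(xyG(:) - pose(1:2)'))'.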
Getting to Know the Simulator and Running a Simple Control
Program (20 Points)
Download the MATLAB CreateSim toolbox (as a zip) from
https://github.com/autonomousmobilerobots/iRobotCreateSimulatorToolbox
and the latest version of the MATLAB Toolbox for the Create from
https://github.coecis.cornell.edu/AMR/MatlabiRobotCreateRaspPi
and add both to your MATLAB path. Review pages 4–10 of the iRobot Create Simulator User Guide and
pages 11–12 of the MATLAB Toolbox for the iRobot Create documentation (available on Canvas).
The file turnInPlace.m is a simple control program that reads the robot’s sensor data and makes the
robot turn in place. You can use this function as a template for later functions you will write.
Make sure the CreateSim toolbox is in your MATLAB path and run SimulatorGUI.m. This will open
the simulator interface. Click the Load Map button and select box map from the files provided for this
homework. Place the simulated robot anywhere on the map by clicking the Set Position button (click once
to set the center location, click again to set the heading direction). Run the control program by clicking
the Start button in the Autonomous group (bottom right side of the interface) and selecting turnInPlace.m.
1. What does the command SetFwdVelAngVelCreate (line 73 in turnInPlace.m) do? What are the
inputs to this function?
2. The left and right wheel velocities are limited to ±0.5 m/s (observe what happens when you increase
cmdV and cmdW on lines 66–67 in turnInPlace.m). Edit the program limitCmds.m to appropriately
scale the forward and angular velocity commands so the wheel motors do not saturate. The shape of
the trajectory should be maintained, i.e. do not simply clip the commands with a max function without
making sure the trajectory shape remains the same; a minimal sketch of one approach appears after
this list. Submit this function in the autograded assignment Homework 1 limitCmds on Canvas.
3. Edit turnInPlace.m to use limitCmds.m in the appropriate place, with wheel2Center=0.13 and maxV
a value that is less than 0.5. From now on, you should always call limitCmds before sending commands
to the robot via SetFwdVelAngVelCreate.m.
4. Write a function backupBump.m to have the robot drive forward at constant velocity until it bumps into
something. If a bump sensor is triggered, command the robot to back up 0.25 m and turn clockwise
30°, before continuing to drive forward again. Run the simulator and plot the robot trajectory.
Note: To access the sensor data collected during simulation, type global dataStore in the MATLAB
Command Window. The ground truth pose of the robot is available in dataStore.truthPose. The
sensor data is collected by readStoreSensorData.m; we recommend you take a look at the function
to see what sensor information is stored.
5. Write a control function, properly named, to make the robot do something new. What does the
function do? Run the simulator using your new control function and plot the robot trajectory.
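As referenced in part 2, one way to keep the wheel motors from saturating is to scale both commands by a common factor. The sketch below assumes the usual differential-drive relation (right/left wheel speeds V ± ω·wheel2Center); the signature is illustrative and should be adapted to the provided template.

function [cmdV, cmdW] = limitCmds_sketch(fwdVel, angVel, maxV, wheel2Center)
% Scale [fwdVel, angVel] by a common factor so neither wheel speed exceeds
% maxV. Scaling both commands equally preserves the ratio V/omega, and
% therefore the shape of the trajectory.
vR = fwdVel + angVel*wheel2Center;   % right wheel speed
vL = fwdVel - angVel*wheel2Center;   % left wheel speed
scale = min(1, maxV/max(abs([vR, vL])));
cmdV = fwdVel*scale;
cmdW = angVel*scale;
end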
Feedback Linearization (30 Points)
The iRobot Create is a non-holonomic vehicle, so we can’t send arbitrary Vx and Vy commands to the robot.
Instead we must send V and ω commands, as with SetFwdVelAngVelCreate.m. Feedback linearization approximately converts [Vx, Vy] → [V, ω] via a linear transformation.
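One common form of this transformation places a control point a distance ε ahead of the robot center along its heading. The sketch below uses that convention as a reference only; the exact formulation presented in class may differ, and your feedbackLin.m should follow it.

function [cmdV, cmdW] = feedbackLin_sketch(cmdVx, cmdVy, theta, epsilon)
% Map desired inertial-frame velocities [cmdVx, cmdVy] of a point epsilon
% ahead of the robot center to forward/angular velocity commands [V, omega].
cmdV =  cos(theta)*cmdVx + sin(theta)*cmdVy;
cmdW = (-sin(theta)*cmdVx + cos(theta)*cmdVy)/epsilon;
end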
1. Write a function feedbackLin.m to transform Vx and Vy commands into corresponding V and ω
commands using the feedback linearization technique presented in class for a differential drive robot.
(In other words, do not use a turn-then-move control strategy.). Assume the Vx and Vy commands
are given with respect to the inertial frame. Submit this function in the autograded assignment
Homework 1 feedbackLin on Canvas.
2. In the SimulatorGUI, place the robot near [0, 0] by clicking the Set Position button. For a desired
local (body-fixed) control input [Vx, Vy]B = [1, 1], plot the robot trajectory for different values of ε.
3. Write a function visitWaypoints.m to calculate the desired [V, ω] controls to drive the robot along a
series of waypoints (using feedbackLin.m appropriately). The waypoints are given as an n×2 matrix
where each row is the (x, y) coordinate of a waypoint. Keep track of which waypoint is being driven
toward using the index gotopt, and consider a waypoint reached if the robot is within closeEnough
of the waypoint location. A minimal sketch of this logic appears after this list.
4. Using visitWaypoints.m as the control function with ε = r (the robot radius, 0.2 m) and closeEnough =
0.1, plot the robot trajectories for the following sets of waypoints: [-3 0; 0 -3; 3 0; 0 3] and [-1 0; 1 0].
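The waypoint-switching logic referenced in part 3 might look like the sketch below. The interface (how the pose is obtained and where gotopt is stored between calls) is an assumption; adapt it to however your control functions receive data.

function [cmdV, cmdW, gotopt] = visitWaypoints_sketch(pose, waypoints, gotopt, closeEnough, epsilon)
% Drive toward waypoints(gotopt,:); advance gotopt once the robot is within
% closeEnough of the current waypoint. pose = [x y theta] in the global frame.
if norm(waypoints(gotopt,:) - pose(1:2)) < closeEnough && gotopt < size(waypoints,1)
    gotopt = gotopt + 1;
end
vX = waypoints(gotopt,1) - pose(1);   % desired inertial-frame velocity,
vY = waypoints(gotopt,2) - pose(2);   % proportional to the position error
% Feedback linearization (see the previous section)
cmdV =  cos(pose(3))*vX + sin(pose(3))*vY;
cmdW = (-sin(pose(3))*vX + cos(pose(3))*vY)/epsilon;
end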
Dead Reckoning (45 Points)
The iRobot Create encoder information can be read from the distance and angle packets (using functions
DistanceSensorRoomba and AngleSensorRoomba, respectively). These packets provide the distance the
robot traveled and the angle it rotated since the sensor was last read.
1. Given a known initial configuration (x, y, θ) of the robot within a global frame, the distance traveled d,
and the angle the robot turned φ, compute the new configuration of the robot. Explain your calculation.
Note that you cannot assume a “turn then move” scheme. (Hint: assume that the robot’s wheels
turn at a constant rate between sensor readings; a sketch of one possible derivation appears after this list.)
2. Edit the function integrateOdom.m to integrate the robot odometry as calculated in part 1 (this
is known as “dead reckoning”). Submit this function in the autograded assignment Homework 2
integrateOdom.m on Canvas.
3. Generate a trajectory and gather data: No need to submit the control code or the simulator
map and config files.
(a) Define a map for the simulator. You may use the map from Homework 1 or define a new one.
(b) Write a control program that will drive the robot in the simulator. The robot motion should
include simultaneous non-zero forward and angular velocity (meaning the robot motion should include arcs). The program should read the odometry, either by periodically calling DistanceSensorRoomba
and AngleSensorRoomba and storing the data or by periodically calling the function readStoreSensorData.
Make sure the robot runs into a wall at least once. The program should be deterministic; that is,
the trajectory should be the same in repeated runs.
(c) Using ConfigMakerGUI, create a config file that defines errors on the odometry.
(d) Run the simulation without a config file (no noise is added to the sensors). Save the data.
(e) From the same initial pose as the previous run, run the simulation with a config file. Save the
data.
4. Plot in the same figure the true trajectory as captured by the overhead localization, the trajectory for the integrated noiseless odometry (using the function integrateOdom.m with data from the
first run) and the trajectory for the integrated odometry which contains errors (using the function
integrateOdom.m with data from the second run). Specify what error parameters were used.
5. Did the integrated trajectory from the first run match the overhead localization? Did you expect it
to? Explain.
6. How does sensor noise affect the quality of the localization?
7. What might cause errors in odometry measurements?
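For reference, the constant-wheel-speed assumption in part 1 implies that between readings the robot travels along a circular arc of length d with heading change φ (a straight line when φ is close to zero). One possible implementation is sketched below; treat it as a starting point and verify the derivation yourself.

function newPose = integrateOdom_sketch(pose, d, phi)
% Integrate one odometry reading (distance d, heading change phi), assuming
% constant wheel speeds so the robot follows a circular arc. pose = [x; y; theta].
x = pose(1); y = pose(2); th = pose(3);
if abs(phi) < 1e-6
    newPose = [x + d*cos(th); y + d*sin(th); th];   % (nearly) straight-line motion
else
    R = d/phi;                                      % turning radius
    newPose = [x + R*(sin(th + phi) - sin(th));
               y - R*(cos(th + phi) - cos(th));
               th + phi];
end
end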
Expected depth measurement (30 points)
In this section you will write a function that, given the robot pose and the map, predicts the depth
measurements. Note that the RealSense sensor provides a depth image that corresponds to the depth of
obstacles along the sensor-fixed x-axis, as shown in Fig. 1.
[Figure: the Create robot, an object, and the plane of the camera, with the sensor-fixed x and y axes; the
range is the straight-line distance from the sensor to the object, while the depth is its component along
the sensor-fixed x-axis.]
Figure 1: Depth information
1. Edit the file depthPredict.m to calculate the expected depth measurements (based on the pose) for a
robot operating in a known map. These depth measurements correspond to the distance between the
sensor and the closest obstacle, measured along the sensor-fixed x-axis (see the short sketch after this
list). Assume the sensor reference frame is not rotated with respect to the robot-fixed reference frame.
You may want to use the provided function, intersectPoint.m, which calculates the intersection point
of two line segments. Hint: why do we need to provide the sensor-fixed frame origin?
Note that the Field of View (FOV) of the depth image is centered around the sensor-fixed x-axis; the
first element in the vector is the leftmost point and the last element in the vector is the rightmost
point. We recommend looking at the documentation of the RealSenseDist function regarding the
format of the depth measurements. Submit this function in the autograded assignment Homework 2
depthPredict.m on Canvas.
2. You may continue to work with the same control program and config file from the previous section, or
create new ones. Your config file should have non-zero values for the RealSense noise (in the “St. Dev.”
column). Make sure you are collecting and saving the following data (in dataStore):
• truthPose (measurements from overhead localization)
• rsdepth
3. In the simulator, load a map (you may use box map), load the config file and run your control function.
Save the data.
4. Based on the map and dataStore.truthPose, and given that the sensor (in the simulator) is positioned
0.16m along the X-axis of the robot-fixed frame, calculate the expected depth measurements. Plot the
actual (simulated) depth measurement and the expected depth measurements on the same graph. Are
they identical? Do you expect them to be? Explain.
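As noted in part 1, the sensor reports depth along its x-axis rather than Euclidean range, so an intersection point found with intersectPoint.m still has to be projected. The snippet below is illustrative only; the variable names (sensorPoseG, xG, yG) are placeholders.

% Illustrative range-to-depth conversion for one ray. sensorPoseG = [xs ys thetas]
% is the sensor frame expressed in global coordinates; [xG yG] is the closest
% intersection point along that ray.
dx    = xG - sensorPoseG(1);
dy    = yG - sensorPoseG(2);
range = hypot(dx, dy);                   % straight-line distance to the obstacle
ang   = atan2(dy, dx) - sensorPoseG(3);  % ray angle relative to the sensor x-axis
depth = range*cos(ang);                  % component along the sensor x-axis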
Localizing a stationary robot (80 Points)
A point robot is standing in place in the map described in HW3map.mat, which contains the coordinates of
all walls in the environment: [x1, y1, x2, y2]. The file stationary.mat contains the sensor measurements for
the robot. Each line contains the range measurements in the global North, East, South, and West directions.
The maximum range of the sensor is less than 3 meters (a measurement of NaN indicates no obstacle within
the sensing range).
Assume that there is no uncertainty in the angle and that the North and South ranges have noise that
is distributed N(0,0.1) and the East and West ranges have noise that is distributed N(0,0.3). All range
measurements are independent of each other.
Grid localization (30 points)
1. Write the function gridLocalizationStationary.m that returns the pdf representing the location of
the robot. This function should accept as input the size of the grid (n × m), where n is the number
of cells along the X dimension and m the number of cells along the Y dimension. Explain how you
addressed the measurement issue discussed in class (with respect to where the measurement is taken).
A sketch of one possible measurement-likelihood computation appears after this list.
2. Assume a 10×10 grid. What is your initial distribution?
3. Plot the pdf – the initial one and the final one (after incorporating all measurements). You may find
the function plotGridBelief.m useful.
4. Calculate the pdf for a grid size of 40×22 and plot the final pdf. Do you get the same possible
robot location(s) as in part 3? Explain.
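As referenced in part 1, one way to weight a grid cell is to evaluate the Gaussian likelihood of each non-NaN range measurement against the range predicted from that cell to the nearest wall. The sketch below is illustrative: expectedRange, z, sigma, and belief are placeholder names, and it ignores the information carried by NaN (out-of-range) readings, which a more careful model could also use.

% Measurement update for one grid cell (i, j). expectedRange(k) is the predicted
% range from the cell center to the nearest wall in direction k (N, E, S, W),
% z(k) is the measured range, and sigma(k) is the corresponding noise standard
% deviation (check whether the values given in the problem are variances or
% standard deviations and convert accordingly).
likelihood = 1;
for k = 1:4
    if ~isnan(z(k))
        likelihood = likelihood * exp(-(z(k) - expectedRange(k))^2/(2*sigma(k)^2)) ...
                                  / (sqrt(2*pi)*sigma(k));
    end
end
belief(i, j) = belief(i, j) * likelihood;   % normalize over all cells afterwards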
Kalman filter (30 points)
1. Write the function KFStationary.m that estimates the position of the robot using a Kalman Filter.
2. What is your prediction step? Do you need one?
3. How is the update performed? (write the equation)
4. What are you choosing as your initial distribution? Why those values?
5. Plot the initial and final position and covariance estimates (1 σ) on the map. (Hint: for plotting the
ellipse, do not forget the functions posted at the beginning of the semester on Canvas)
6. Try a different initial distribution. How does that affect the position estimate?
7. Would an EKF be more appropriate for this problem? Explain.
Test function (autograded) (10 points)
1. Edit the test function TestFunHW3.m on Canvas; look for comments that begin with ‘STUDENTS’ and
follow the directions. Course staff will use this function to test your code with different inputs, and we
are using the autograder as a way to ensure the test function runs. We are NOT testing for correctness
of the algorithm.
Make sure the zip file you submit on Canvas contains this function and all of the functions
necessary to run it.
Location estimate (10 points)
1. For both filters, would you be able to estimate the location exactly given perfect range information?
Explain. (5 points)
2. You are told that the robot’s y coordinate is less than 8. Based on the various pdfs found in the
previous sections, can you estimate the true position of the robot? Explain (no need to run the filters)
(5 points)
Dynamics and measurement functions (40 points)
In lab 2, you will be localizing the robot using different filters and different measurements. In this section
you will set up the required dynamics and measurement functions.
1. Given that the control action/inputs are the odometry information (distance traveled, angle turned),
what function would you use as g(xt−1, ut)? Explain. (Hint: you already wrote the function in
Homework 2).
2. Given that the measurement is a depth measurement, what function would you use as h(xt) to predict
the measurement? Explain. (Hint: you already wrote the function in Homework 2).
3. Given that the measurement is “GPS”-like information (given by the overhead localization/true pose,
i.e. [x, y, θ]), explain and write a function hGPS.m that predicts the measurement (this should be very
simple). Submit this function in the autograded assignment Homework4 hGPS.m on Canvas.
4. In order to use an EKF, one needs to calculate the Jacobians of the update (dynamics) and measurement
functions at a given state. Write out the Jacobian G_t = ∂g/∂x_{t−1}. Write the function GjacDiffDrive.m
to output G_t for a particular pose x. Submit this function in the autograded assignment Homework4
GjacDiffDrive.m on Canvas.
5. Write the file HjacDepth.m to output H_t^depth at a particular x_t. Explain how you calculate this.
(Hint: differentiating the function analytically is difficult; think about using finite differences instead.
A sketch appears after this list.) Submit this function in the autograded assignment Homework4
HjacDepth.m on Canvas.
6. Write out the Jacobian H_t^GPS = ∂h/∂x. Write the function HjacGPS.m to output H_t^GPS at a
particular x_t (again, this should be VERY simple). Submit this function in the autograded assignment
Homework4 HjacGPS.m on Canvas.
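A finite-difference Jacobian, as hinted in part 5, can be written generically for any measurement function handle. The sketch below uses central differences; the signature and name are illustrative.

function H = numericalJacobian_sketch(h, x, delta)
% Approximate the Jacobian of h at x by central finite differences.
% h: function handle mapping an n-by-1 state to an m-by-1 measurement vector.
if nargin < 3, delta = 1e-6; end
n = numel(x);
m = numel(h(x));
H = zeros(m, n);
for j = 1:n
    dx      = zeros(n, 1);
    dx(j)   = delta;
    H(:, j) = (h(x + dx) - h(x - dx))/(2*delta);
end
end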
Extended Kalman Filter (15 points)
In the lab, you will be using the EKF with different measurements. To make switching between measurement
types easy, in this section you will write a generic EKF that takes as input the measurement and update
functions and as such, can be reused.
Write the file EKF.m to perform one prediction & update step of the EKF. To make this file very generic,
the functions from the previous section should be inputs to this function and should be passed as anonymous
functions (details at the end of the problem set).
What are the inputs to the function EKF? What are the outputs?
This function is evaluated in the autograded assignment Homework4 testEKF.m on Canvas, where you will
have to call your filter to produce the estimated location at time t given all the information at time t − 1
(i.e. one time step).
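For reference, one prediction-and-update step with function-handle inputs could be structured as below. The argument list is an assumption; make sure your EKF.m matches whatever interface testEKF.m on Canvas expects.

function [mu, Sigma] = EKF_sketch(mu, Sigma, u, z, g, Gjac, h, Hjac, R, Q)
% One EKF step with the dynamics g, its Jacobian Gjac, the measurement model h,
% and its Jacobian Hjac all passed in as function handles.
muBar    = g(mu, u);                          % predicted mean
G        = Gjac(mu, u);
SigmaBar = G*Sigma*G' + R;                    % predicted covariance
H     = Hjac(muBar);
K     = SigmaBar*H'/(H*SigmaBar*H' + Q);      % Kalman gain
mu    = muBar + K*(z - h(muBar));             % updated mean
Sigma = (eye(numel(mu)) - K*H)*SigmaBar;      % updated covariance
end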
Particle Filter (15 points)
As with the EKF, it is best to write a generic particle filter that can be reused later on.
Write a function PF.m that takes in an initial particle set, odometry and depth measurements, and prediction
and update functions, and outputs a new particle set.
Assume there is an infinite wall at y = 1, the robot moved one meter forward (d = 1, φ = 0), and all
9 depth measurements are 2 meters. Create an initial particle set of size 30 with uniform distribution in
x ∈ [−1, 1], y ∈ [−3, 1] and θ = 0, plot the wall, the initial particle set (x,y position only for each particle),
and the particle set after one iteration of the particle filter on the same plot. Use one marker shape for the
initial particles and another marker shape for the final particles.
Running the filters in the simulator (115 points)
1. Write a function motionControl.m. This function should drive the robot in different directions while
reacting to the bump sensor (so that the robot does not try to drive through a wall). The control
should be independent of the location and deterministic. Furthermore, this function should:
(a) Call the sensor information (using the function readStoreSensorData.m provided in HW 1)
(b) Call integrateOdom.m and store the dead reckoning information in dataStore.deadReck
(c) Have a clearly identifiable and well commented section describing how to switch between
the three methods described below (EKF with GPS data, EKF with depth data, PF with depth
data). You will need to be able to switch between these three methods for lab 2. See Questions
4, 5, and 6 for more details.
2. In motionControl.m, create new fields dataStore.ekfMu and dataStore.ekfSigma (robot pose mean
and covariance, respectively).
3. Initialize dataStore.deadReck and dataStore.ekfMu to the true initial pose (from dataStore.truthPose)
and dataStore.ekfSigma to [2, 0, 0; 0, 2, 0; 0, 0, 0.1]. Define a new variable R (process noise covariance
matrix) and set it to 0.01I. Define a new variable Q (measurement noise covariance matrix) and set it
to 0.001I. What are the dimensions of R and Q?
4. EKF with GPS data:
(a) Create noisy GPS measurements by adding Gaussian noise to the true pose. What should the noise
distribution be? Record this data in dataStore.GPS.
(b) In the simulator, load cornerMap and place the robot somewhere near [−4; 2; 0]. Create and load
a config file with noise on the depth measurements and the odometry. What values did you choose
for the noise? Make sure the values make sense with respect to R and Q.
(c) In motionControl.m, call the EKF with the noisy GPS data and the appropriate functions
(dynamics and measurement).
(d) Run motionControl.m in the simulator. On the same figure, plot the map and three final trajectories
(x, y): dataStore.truthPose (the data from overhead localization), dataStore.deadReck
(from integrating the odometry only), and dataStore.ekfMu (estimate from the EKF). Also plot
the 1-σ ellipses from dataStore.ekfSigma. Note: you only need to plot the x, y position and
not the orientation. (You may use the provided function plotCovEllipse.m or any other function
that plots ellipses. If you do, make sure to cite the source.)
(e) Place the robot somewhere near [−4; 2; 0] again. Repeat 4(b–d) using a different initialization
for the filter (we want to see what happens to the estimate if the initial belief is different):
µ0 = [−2; −1.5; π/2] and Σ0 = [4, 0, 0; 0, 4, 0; 0, 0, 0.02]. (Initialize dataStore.deadReck to the
same initial pose estimate µ0 here in (e).)
(f) Compare the results of 4(d) and 4(e) – How does the initial estimate affect the robot’s estimate?
5. EKF with depth data: In motionControl.m, call the EKF with the depth data and the appropriate
functions (dynamics and measurement). Make sure to write comments in motionControl.m on how
to switch between using the GPS and depth measurements.
(a) Repeat 3 and 4(b,d). Make sure the dimensions of R and Q are correct.
(b) Repeat 5(a) using R= 0.001I and Q= 0.01I.
(c) Compare the results from 5(a) and 5(b). How do the process and measurement noise covariances
affect the robot’s estimate?
(d) Does the filter behave as you’d expect? Why or why not?
(e) What are some of the problems with using an EKF with depth measurements?
(f) What errors do you expect to see when you run the EKF on the physical robot in the lab?
6. PF with depth data: In motionControl.m, create a new field dataStore.particles. Generate an
initial particle set of size 20, with x-location sampled from a uniform distribution over [−5, 0],
y-location sampled from a uniform distribution over [−5, 5], and θ sampled from a uniform distribution
over [−0.2, 0.2]. Initialize the particle weights to be uniform. Call the PF with the depth
data and the appropriate functions (dynamics and measurement). Make sure to write comments in
motionControl.m on how to switch between using the EKF and using the PF.
(a) In the simulator, load cornerMap and place the robot somewhere near [−4; 2; 0]. Load your noisy
config file and run motionControl.m.
(b) Plot the initial particles on the map (plot the x,y positions as well as headings).
(c) Plot the final particles on the map (x,y position only).
(d) For each time step, choose the particle with the highest weight and plot it; this gives the “best”
trajectory. Do this for 5 additional particles and plot the 5 trajectories on the same map. Also
plot the true trajectory (from dataStore.truthPose).
(e) Repeat 6(a–d) using a particle set of size 500.
(f) Does the filter behave as you’d expect? Why or why not?
(g) How does the size of the particle set affect the robot’s estimate?
(h) How else could you represent the robot’s estimate (other than just looking at the highest weight
particle)? Plot the trajectory of that estimate. Is it closer to the truth?
Comparing Filters (30 points)
Fill out the following table:
Filter                  | Assumptions about the system | This filter is appropriate when | Advantages | Disadvantages
Kalman filter           |                              |                                 |            |
Extended Kalman filter  |                              |                                 |            |
Particle filter         |                              |                                 |            |
Using functions as inputs
As previously mentioned, for re-usability we can pass the dynamics, measurement, and Jacobian functions
as parameters to our filter. There are two ways to do this, but only one is accepted by the autograder
on Canvas. Below is a simple example and a link with extra information (assume robot2global is defined
elsewhere):
Anonymous functions –
r2g = @(pose, p_xy) robot2global(pose, p_xy);
y = MyComplicatedFilter(r2g);

function y = MyComplicatedFilter(converter)
% some calculations to get my_pose, my_xyR
y = converter(my_pose, my_xyR);
end
More information: https://www.mathworks.com/help/matlab/matlab_prog/anonymous-functions.html
Introduction and Setup
In the Simulator, load loopMap and run a control program that will drive the robot into walls (you may use
the function backupBump.m or write a new function). Make sure you are collecting the following sensor data
in dataStore: truthPose, bump and depth. Allow the robot to move around the environment and bump into
most (if not all) walls. Access the sensor data in the MATLAB workspace by typing global dataStore, and
make sure the data was collected properly. You will use this data in the two mapping sections. Remember
that in the simulator, the robot radius is 0.16 m and the sensor origin is [0.13, 0].
As you have seen, each bump sensor of the Create returns a value of 1 if the sensor is triggered, and 0
otherwise (with no false alarms). Initialize a 25 × 25 occupancy grid of the environment with p0(occ) = 0.5
as the prior occupancy probability for all cells (i.e. ℓ0 = 0). The boundaries of the environment in the x and
y directions are [−2.5, 2.5]. You may find the MATLAB function meshgrid.m helpful.
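A minimal sketch of this initialization (cell centers via meshgrid and a zero log-odds prior), with the cell counts and boundaries given above; the variable names are illustrative.

NumCellsX = 25;  NumCellsY = 25;
boundaryX = [-2.5, 2.5];  boundaryY = [-2.5, 2.5];
cellSizeX = diff(boundaryX)/NumCellsX;
cellSizeY = diff(boundaryY)/NumCellsY;
% Coordinates of the cell centers
[cellX, cellY] = meshgrid(boundaryX(1) + cellSizeX*((1:NumCellsX) - 0.5), ...
                          boundaryY(1) + cellSizeY*((1:NumCellsY) - 0.5));
logOdds = zeros(size(cellX));   % p0(occ) = 0.5  <=>  l0 = 0 for every cell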
Mapping – Occupancy Grid with Bump Sensor (45 points)
1. Plot the occupancy grid boundaries and the robot’s trajectory. Indicate with a marker (e.g. star,
triangle) the locations where a bump sensor was triggered.
2. Write a function logOddsBump.m to calculate the log-odds ℓt(occ) of cells in the grid using only bump
sensor measurements. Make sure you allow non-square grids (n × m, where n is the number of cells
along the X dimension and m the number of cells along the Y dimension). What are the inputs? What
are the outputs? A sketch of one possible update appears after this list.
3. Explain the sensor model you used, assumptions you made in logOddsBump.m, and why they make
sense.
4. Write a function plotOccupancyGrid.m that, given the log odds and the grid, plots unoccupied cells in
white, occupied cells in black, and uncertain cells in shades of gray. You may use colormap(flipud(gray)),
pcolor, or image to help plot the grid cells. Plot the occupancy grid at three different time steps using
the log odds obtained from dataStore.truthPose and dataStore.bump.
5. If you had to choose for each cell whether it is occupied or free (for example, to use the map in a
planning algorithm), what would you do? Apply this method to the grid at the last timestep and plot
the result (each cell is assigned either 0 or 1).
6. Repeat (4,5) with a 50 × 50 grid. What do you observe?
7. What differences might you expect to observe between the map generated using simulated bump sensors
and one using real bump sensor data? Why?
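As referenced in part 2, a simple inverse sensor model for the bump sensor might raise the log odds of the cell just in front of the robot when a bump occurs and lower the log odds of cells the robot has driven through. The sketch below is illustrative only: the increments lOcc and lFree are design choices you should justify, and xy2cell is a hypothetical helper that maps a point to its grid indices.

% One-time-step log-odds update with a simple bump-sensor inverse sensor model.
lOcc  =  1.0;    % increment for cells believed occupied (design choice)
lFree = -0.5;    % increment for cells believed free (design choice)
if bumped
    xc = pose(1) + robotRad*cos(pose(3));   % contact point, roughly one radius ahead
    yc = pose(2) + robotRad*sin(pose(3));
    [i, j] = xy2cell(xc, yc);               % hypothetical helper: point -> grid indices
    logOdds(i, j) = logOdds(i, j) + lOcc;
end
[i, j] = xy2cell(pose(1), pose(2));         % the cell the robot occupies is free
logOdds(i, j) = logOdds(i, j) + lFree;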
Mapping – Occupancy Grid with Depth Information (45 points)
1. Write a function logOddsDepth.m to calculate the log-odds ℓt(occ) of cells in the grid using only depth
sensor measurements. What are the inputs? What are the outputs?
2. Explain the sensor model you used, assumptions you made in logOddsDepth.m, and why they make
sense.
3. Plot the occupancy grid at three different timesteps using the log odds obtained from dataStore.truthPose
and dataStore.rsdepth.
4. Repeat (2,3) with a 50 × 50 grid. What do you observe?
5. What are the pros and cons of using a finer grid resolution?
6. Compare the maps generated using depth with those using the bump sensor. Is one map “better” than
the other? Why or why not?
7. What differences might you expect to observe between the map generated using simulated depth sensors
and one using real depth sensor data? Why?
Test function (10 points)
Edit the function TestOccupancyGrid.m that takes as inputs
• dataStore
• ℓ0 (scalar value for the prior occupancy probability of grid cells)
• NumCellsX (number of cells in the X axis)
• NumCellsY (number of cells in the Y axis)
• boundaryX (1×2 vector representing the interval of possible values for X, for example [-2.5 2.5])
• boundaryY (1×2 vector representing the interval of possible values for Y, for example [-2.5 2.5])
outputs,
• lFinalBump (final occupancy grid using bump sensor)
• lFinalDepth (final occupancy grid using depth sensor)
and plots final occupancy grids, one for the bump (as in part 4 of the mapping with bump sensor section) and
one for the depth information (as in part 3 of the mapping with depth information section). Plots generated
should have meaningful titles and labels.
Submit this function in the autograded assignment Homework 5 TestOccupancyGrid.m on Canvas. Course
staff will use this function to test your code with different inputs, and we are using the autograder as a way
to ensure that the test function runs. We are NOT testing for correctness of the algorithm. Make sure the
zip file you submit on Canvas contains this function and all of the functions necessary to run it.