Development of path planning, controls and Gazebo simulation for autonomous parking

1.0) Background Information
Automatic parking, also called autonomous parking, is a car manoeuvring system that moves a car from a traffic lane into a parking slot autonomously. Some cars still require the driver to step on the brake for safety reasons, while others can park by themselves at the press of a Park button in the car or through an application on a mobile device.
A car is a non-holonomic system, which means that the number of control commands available is less than the number of coordinates that represent its position and orientation. In particular, a non-holonomic robot cannot move laterally.
Automatic parking requires the car to use sensors to obtain information and measurements about its environment during parking, for example measuring the size of the parking space.
The first prototypes of automatic parallel parking were developed on a Ligier electric car at INRIA in the mid-1990s. Nowadays many car companies have developed their own automatic parking systems, among them Toyota, Volkswagen and Volvo.
2.0) Introduction
2.1) Introduction to Path Planning
Path planning is the production of a path from one state to another. When a path planning task is requested, the planner needs to know the start position and the goal position in order to plan a path. It has to produce a path that avoids obstacles while finding the shortest, most optimal path from the start position to the goal position. Some examples of path planning algorithms are Dijkstra, A* and Rapidly-exploring Random Trees (RRT).
2.2) Introduction to Auto Parking
Auto parking is an autonomous car manoeuvring system that can move a vehicle from a traffic lane into a parking spot. Auto parking can perform three types of parking: parallel, perpendicular and angle parking.
The objective of auto parking is to enhance the comfort, safety and convenience of driving, especially in constrained environments, for example in urban cities where parking spaces are small. Auto parking requires the use of sensors; such systems go by names like Park Distance Control, Park Assist and EPS.
Sensors are placed at the front and back of the car to detect objects around it.
These sensors emit signals that bounce off nearby objects and return to the car. The system measures the time taken for each signal to return, which lets it estimate the positions of objects around the car: for a wave travelling at speed v with round-trip time t, the distance to the object is d = v*t/2. With this information from the sensors, the computer is able to control the steering wheel and the brake pedals of the car.
2.3) Introduction to Types of Sensor

There are two types of sensors: ultrasonic sensors and electromagnetic sensors.
The ultrasonic sensor emits ultrasonic waves to detect objects around the vehicle. However, some objects that are too small or too narrow, such as a thin pole, cannot be detected, as the emitted waves do not reflect back to the sensor.
The returned waves are registered and analysed by a computer inside the car. The computer notes two factors to determine the size and proximity of the object behind the car: the time taken for the wave to return to the sensor and the change in the wavelength. If a wave returns quickly, the car is close to an object, and the computer produces an audible alarm to inform the driver.
An ultrasonic system requires at least 4 to 6 individual sensors mounted on the outside of the car bumpers, i.e. placed at different parts of the car. However, the accuracy and detection capability of ultrasonic sensors are affected by dirt and grime on the sensor, which disrupts the signals, and there are blind spots that ultrasonic sensors may not be able to cover.
Electromagnetic sensors use electromagnetic waves. They use a transceiver strip that generates an elliptical field behind the car. When an object that meets a certain mass requirement enters this field, it disturbs the field; the detector picks up the voltage change and sends this information to the computer in the car.
The computer analyses this data to determine the distance of the object. When the car is near the object, the computer produces an audible alarm to alert the driver, and the alarm tone grows louder as the car gets closer.
Electromagnetic sensors are mounted inside the car bumper instead of outside, which means dirt cannot interfere with the system's accuracy and detection capability.
Both ultrasonic and electromagnetic sensors are able to detect moving objects as well. If a moving object, such as a person, comes near the sensory area, the sensors produce a continuous audible alarm, which stops when the object leaves the sensory area. A tow bar within the sensory area, however, will not trigger the alarm.
In conclusion, I think using both ultrasonic and electromagnetic sensors together would be better, as the electromagnetic sensor compensates for the problem of dirt and grime on the ultrasonic sensor. Using the two sensor types together provides greater detection capability and better accuracy.
However, for the NYP car we are not using ultrasonic or electromagnetic sensors; we are actually using a 2D LiDAR and a 3D Velodyne to detect obstacles in the surroundings.

2.4) Introduction to Parallel and Perpendicular Parking
The picture below shows the movement of parallel parking. Parallel parking is harder than perpendicular parking, as it manoeuvres the car in an S-shaped movement. The car first has to be parallel with a car already parked in the car park, and it must not be too close to the parked car: the two cars are separated by a distance y, which prevents both cars from getting scratched during parking.
There are two ways of perpendicular parking. One I call L-shape parking, as the below-left picture shows; the other I call vertical parking, as the below-right picture shows. There is no single right way of perpendicular parking, but vertical parking is simply how a human would park the car in daily life.

2.5) Introduction to path planning algorithms
There are two types of path planning: dynamic path planning and static path planning. Dynamic path planning aims to avoid unexpected obstacles along the robot's trajectory to the goal; there are algorithms such as R*, the Bug algorithms and APF (Artificial Potential Fields), and you can search online to see how they work if needed. However, for my auto-parking project, static path planning is the most suitable: based on a map and a goal location, it finds a geometric path. The method I use in my project is the geometric method, which requires more mathematical calculation. https://www.slideshare.net/dare2kreate/dynamic-path-planning
2.6) Introduction to Geometric method
Geometric path planning is a robust algorithm for computing a collision-free path connecting the initial and final configurations of a robot with a minimal number of waypoints in 2D and 3D environments. It provides a geometric description of the robot motion given a map and a description of the obstacles in the workspace. The algorithm outputs the (x, y, z) coordinates of the path in the workspace, which are then passed to a custom inverse kinematics block to compute the revolute and prismatic joint variables in the configuration space. I illustrate the application of this algorithm by designing fixed paths for perpendicular parking and parallel parking. In my project the car follows the path I designed and parks into the parking slot.
3.0) Simulators
3.1) Stage Simulator
The Stage Simulator is a 2D simulator. I chose to use Stage because many online users suggested that beginners use Stage instead of Gazebo, as it is easier to use.
With the Stage Simulator we can move a robot using teleop or by sending a 2D Navigation Goal. Stage can also be visualized in RViz, and the map used in Stage can be changed by drawing it or by editing the configuration file (the world file).
Please refer to the picture on the lower right: the robot in Stage detects the obstacle in front of it, and this can be visualized in RViz (zoomed-in version). The teleop panel allows us to move the robot by pressing keys on the keyboard.
I think the Stage Simulator is a very good platform for beginners to learn what a 2D simulator looks like: the teleop panel lets a beginner move the robot around the map, and the map itself can be changed, which gives beginners a feel for what a 2D simulator is like.

3.2) Gazebo Simulator
The Gazebo Simulator that I used was found in the Learning ROS for Robotics Programming book. Robot simulation is an essential tool in every roboticist's toolbox. A well-designed simulator makes it possible to rapidly test algorithms, design robots, perform regression testing, and train AI systems using realistic scenarios. Gazebo offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. At your fingertips is a robust physics engine, high-quality graphics, and convenient programmatic and graphical interfaces. Best of all, Gazebo is free and has a vibrant community. Companies that do path planning use this simulator because it provides a more realistic environment.

3.3) Stage Simulator VS Gazebo Simulator
I have tried both the Stage and Gazebo simulators. I think Stage, being a 2D simulator, is a good platform for beginners to start on; once familiar with Stage, we can upgrade to Gazebo, a 3D simulator that provides a more realistic environment and better visualization. For example, the Gazebo car robot and car park environment are in 3D, whereas Stage only provides a car robot and car park environment in 2D.
I compared the Stage and Gazebo simulators and found that Stage, the 2D simulator, takes 2 seconds to open when I run it, with about 2,000 processor wakeups per second, while Gazebo, the 3D simulator, takes 6 seconds to open, with about 7,000 processor wakeups per second.
Stage opens quickly and does not crash often; even if we add more obstacles to the 2D map, it will not crash or noticeably increase computation time. I would recommend beginners use Stage, as it is easier to use and learn and its tutorials are easy to understand.
Gazebo requires more time to open, and sometimes it crashes when I run it. Adding more obstacles to the map definitely requires more computation time, since it is a 3D simulator, and it crashes more often, which means a better PC is needed, which costs more money.
I would recommend Gazebo if we want to simulate a more realistic environment. Obstacles in Gazebo cannot be drawn as in Stage; they must be added in the simulator itself, which means going through the Gazebo tutorials, and that takes some time to learn.
In conclusion, we have decided to use the Stage Simulator for my project: it is easier to use, requires less computation time and does not crash often. Most importantly, the map in Stage can be drawn to simulate obstacles.

4.0) Path planning
4.1) Parallel parking
First, set up your ROS environment: install from this website and follow the steps: http://wiki.ros.org/indigo/Installation/Ubuntu
After that, create your own workspace folder by following this website:
http://wiki.ros.org/catkin/Tutorials/create_a_workspace
Before starting this project, I advise you to go through all the beginner-level ROS tutorials and to read chapters 2 and 9 of the book.
This is the paper from which the code for parallel parking using the geometric method was developed (below left):
http://ac.els-cdn.com/S1474667016436852/1-s2.0-S1474667016436852-main.pdf?_tid=a0d3f6c0-7598-11e7-8b2b-00000aab0f26&acdnat=1501468585_14c9cf64aa26c892e872aa474d4d474f

Following what the previous students had done, I already had a path as shown in the upper-right picture, which is exactly the same as the reference. My first job was to modify this path and make some enhancements.
As you can see, the turning angle for this path is about 59 degrees. The turning angle is defined as the angle between the tangent line at the intersection point of the two circles and the horizontal line.
Normally for human driving this turning angle should be 45 degrees: if the angle is too large, the vehicle cannot turn that fast, and it needs to stop at the intersection point to make a full turn in the other direction, which is not a continuous process. One way to reduce the turning angle is to reduce the y distance shown in the graph.

The y distance is the safe distance between the two cars. The image below shows that the distance before modification is 2.4 m.
To modify this distance or any other parameter, we can use parameter reconfiguration, which is much more convenient than digging into the code to change values and recompiling every time. To use parameter reconfiguration, we first need a yaml file containing all the parameters we want to modify later. In my case I put two parameters inside: the radius and the safety distance, as in the left picture below. One thing to be very careful about is the yaml format: there must be at least one space before each parameter name, and the first characters of the parameter names have to be aligned consistently, otherwise there will be strange errors.
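Since the screenshot of the yaml file is not reproduced here, a minimal sketch of what it could look like is shown below (the parameter names follow the text, the values are illustrative, and note the leading space before each name as described above):

 radius: 6.0
 safety_distance: 2.4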

After we have generated the yaml file, we add these two lines in the source code. You can find my code in /catkin_workspace/src/FYP/geometric/src/geometric.cpp
These two lines make the parameters inside the yaml file take effect. You may notice there are two names within the brackets: the first is the name I defined in the yaml file, the second is the name I defined in the header file shown below, and the number that follows is the default value for the parameter.
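As the code screenshot is not reproduced here, the two lines were presumably of the following form (a sketch assuming ros::NodeHandle::param is used; the member names and default values are illustrative):

ros::NodeHandle private_nh("~");
private_nh.param("radius", radius_, 6.0);           // name in yaml, variable in header, default
private_nh.param("safety_distance", safety_distance_, 2.4);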

Finally, to test this parameter configuration, just add this line at the end of the source code to see the difference.
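The line in question is not reproduced here, but it was presumably a print of the parameter value, something like:

ROS_INFO("radius: %f", radius_);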
Now just modify the value in the yaml file, save it, and run
roslaunch teb_local_planner_tutorials geometric.launch
There is no need to catkin_make again; the value of the radius is shown on the terminal as in the picture below, which means the parameter reconfiguration is working.

With the help of parameter reconfiguration I can now easily change the value. I found that, to reduce the angle to 45 degrees, the safety distance would have to be 1.46 m.
However, this does not make sense, as the distance between the two cars would then be only 0.06 m, which is impossible.
Hence, I came up with another solution. With reference to the paper, we know that parking a car into a parallel parking lot requires two circles. Now I separate these two circles and connect them with one straight line at 45 degrees, as the picture below shows.
Graph 1

To generate this path, there are some key quantities you must calculate first. I suggest you draw this path on paper and finish all the calculations, as I did here.
Before starting the calculations, you may need to know the parametric equations for circles and straight lines. In mathematics, parametric equations define a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, in which case the equations are collectively called a parametric representation or parameterization (alternatively spelled parametrization) of the object.
http://doubleroot.in/lessons/straight-line/parametric-form/#.WhruE7SZ0dU
https://www.mathopenref.com/coordparamcircle.html
For circles:
A circle can be defined as the locus of all points that satisfy the equations
x = r*cos(t), y = r*sin(t)
where (x, y) are the coordinates of any point on the circle, r is the radius of the circle, and t is the parameter: the angle subtended by the point at the circle's centre.
For straight lines:
The parametric equation of a straight line passing through (x1, y1) and making an angle θ with the positive X-axis is given by
(x - x1)/cos θ = (y - y1)/sin θ = r
where r is a parameter denoting the distance between (x, y) and (x1, y1).

The key quantities are the intersection points a and b, the centres of both circles, the constant of the straight line, the starting point position, and the starting and ending angles (which determine when the car gets onto a circle and when it leaves it).
As the turning angle is now 45 degrees, all the triangles are isosceles right triangles. Hence the intersection point a is: Xa = x1 + R/sqrt(2), Ya = y1 - R/sqrt(2).
The centre of the left circle depends on the goal position, which we give every time. The centre of the left circle: X1 = goalX - left_adj, Y1 = goalY + left_opp (left_adj and left_opp are defined as shown in the graph below).

The straight line is at 45 degrees, so we know its equation has the form y = x + C (C is a constant). Points a and b are both on this straight line, so we get two relationships: C = Ya - Xa and Xb = Yb - C. Hence the centre of the right circle is: X2 = Xb - R/sqrt(2), Y2 = goalY - (R - 2.4). Because point b is also on the right circle, Yb = Y2 + R*sin(0.75PI). From the graph, to make the turning angle 45 degrees, the starting angle is 0.75PI (when the car gets onto the circle) and the ending angle is 1.75PI (when the car leaves the circle). The code is shown below.

Based on all the information above, we can now generate the equations. The code is shown below.
Right Circle:
Straight line:
Left Circle:
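Since the code screenshots are not reproduced here, below is a compact sketch of how the three segments can be sampled using the parametric equations and the key quantities derived above. It is illustrative only: R, left_adj, left_opp (chosen here so the goal sits at the bottom of the left circle), the sampling step and the exact arc ranges are placeholders, and the real geometric.cpp also stores the orientation and applies the rotation matrix described later.

#include <cmath>
#include <vector>

struct Waypoint { double x, y; };

int main() {
  const double PI = M_PI;
  const double R = 6.0, left_adj = 0.0, left_opp = 6.0;  // placeholder values
  double goalX = 0.0, goalY = 0.0;                       // goal given by the user

  // Centre of the left circle depends on the goal position.
  double X1 = goalX - left_adj, Y1 = goalY + left_opp;
  // Intersection point a (45-degree tangent, isosceles right triangle).
  double Xa = X1 + R / std::sqrt(2.0), Ya = Y1 - R / std::sqrt(2.0);
  double C  = Ya - Xa;                        // line y = x + C through a and b
  double Y2 = goalY - (R - 2.4);              // right circle centre (2.4 m safety distance)
  double Yb = Y2 + R * std::sin(0.75 * PI);   // point b lies on the right circle
  double Xb = Yb - C;
  double X2 = Xb - R / std::sqrt(2.0);

  std::vector<Waypoint> path;
  // Right circle: swing in along the arc until point b (range is a placeholder).
  for (double t = 0.5 * PI; t > 0.25 * PI; t -= 0.01)
    path.push_back({X2 + R * std::cos(t), Y2 + R * std::sin(t)});
  // Straight 45-degree segment from b to a on y = x + C.
  double step = (Xa > Xb) ? 0.01 : -0.01;
  for (double x = Xb; std::fabs(x - Xa) > 0.01; x += step)
    path.push_back({x, x + C});
  // Left circle: enter at a and stop at the goal (range is a placeholder).
  for (double t = 1.75 * PI; t > 1.5 * PI; t -= 0.01)
    path.push_back({X1 + R * std::cos(t), Y1 + R * std::sin(t)});
  return 0;
}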
Don't forget to define the orientation for each equation, otherwise the model will not follow the path well. You may ask what the for loop does; this part was actually done by the previous student.

For the turtlebot/car we only make use of yaw, so we set roll and pitch to 0 and then calculate the yaw angle. See the image below for an explanation of how to get the yaw angle:
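As a small concrete example (a sketch assuming the standard tf helper, not necessarily the exact code used in the project):

#include <tf/transform_datatypes.h>
#include <geometry_msgs/Quaternion.h>

// Roll and pitch are zero for a ground vehicle, so only yaw is needed;
// tf builds the quaternion directly from the yaw angle.
geometry_msgs::Quaternion yawToQuaternion(double yaw) {
  return tf::createQuaternionMsgFromYaw(yaw);
}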

This is the example code for the left circle; the orientation for it is:
while the orientation for the straight line is 0.25PI.
After defining the orientation, the code at the end rotates the path according to the orientation of the goal we set. The idea is to use a rotation matrix: we set a pivot point, which is the goal position in this case, and then rotate the whole trajectory with respect to that pivot point. The image below explains the rotation matrix formula and how to rotate a point.

The rotation matrix formula is from this website:
https://stackoverflow.com/questions/2259476/rotating-a-point-about-another-point-2d/15109215#15109215
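In code, rotating one waypoint about the goal position might look like the following sketch (the names are illustrative):

#include <cmath>

struct Point { double x, y; };

// Rotate point p about the pivot (here, the goal position) by angle theta,
// following the rotation matrix formula from the link above.
Point rotateAbout(Point p, Point pivot, double theta) {
  double s = std::sin(theta), c = std::cos(theta);
  double dx = p.x - pivot.x, dy = p.y - pivot.y;
  return { pivot.x + dx * c - dy * s,
           pivot.y + dx * s + dy * c };
}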
To start, we need to store the positions (x, y, z) and the orientations in a vector (also called an array), so we declare the vector first. The image below shows how to declare the vector.
Position.x = x coordinate of the points on the circle
Position.y = y coordinate of the points on the circle
Tangent_to_circle_angle = orientation of the car
After that, we create for loops to fill the vector with the information we want; each equation must have its own for loop, as in the code shown above.
After we have put all the positions and orientations into the array, we apply the rotation matrix to each and every point: we create a for loop and include the rotation matrix formula (refer to page 60 for the formula). The image below shows an example of how to add in the formulas.

Now the path becomes like this.
After that I made an enhancement: straighten the car before entering the path and after leaving the path. As you can see from the lower-left graph, the two blue lines are designed to straighten the car. The equations for these two lines are also shown there, and the code is shown at the lower right.
The path planning for parallel parking ends here. You can try this code using "roslaunch teb_local_planner geometric.launch" and you will see a path like:

4.2) Perpendicular parking
Based on the theory of parallel parking, we now have a good understanding of geometric path planning. Next we can continue with perpendicular parking using the same idea but a different path. For perpendicular parking I generated two different paths: one is L-shape parking, while the other I call vertical parking. There is no right or wrong between these two paths; they are just two different parking styles.
4.3) L-shape parking
First, let's have a look at the L-shape path. As the lower-left picture shows, this path consists of two straight lines and one circular arc. One very important point: because the orientation of perpendicular parking is 90 degrees different from parallel parking, when doing the calculations we need to consider the path rotated 90 degrees from the real condition, as the lower-right picture shows.
The key factor is the position of the centre of the circle, which depends on the goal we set: x3 = x2 + length of line 2, y3 = y2 - length of line 1. Once we know the centre of the circle, we can get its equation. After this we need to determine how long the two straight lines are, to get their equations. Line 1 can simply be assumed to be 2-4 m long; its exact length is not that important. The second line, however, needs care, as it must provide a minimum safety distance to avoid collision while the car is parking. Normally we keep at least 1 m between two cars; based on this I set the length of the second line to 4 m. With all this information we can generate the equations, as the picture above shows. The code is shown below; you can find it under /catkin_workspace/src/geometric/L-shape.cpp. You just need to copy it, substitute the geometric.cpp file with this code, then catkin_make and launch. A schematic sketch of the idea follows below.
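Since the code screenshot is not reproduced here, the following is a schematic sketch of the L-shape idea only (straight line 1, a quarter circle, then straight line 2). The layout is assumed (approach along +x, slot along -y), and the radius, lengths and step size are placeholders; the real code derives the circle centre from the goal as described above:

#include <cmath>
#include <vector>

struct Waypoint { double x, y; };

int main() {
  const double PI = M_PI;
  const double R = 6.0;                  // placeholder turning radius
  const double line1 = 3.0, line2 = 4.0; // 2-4 m approach and the 4 m safety segment
  double ax = 0.0, ay = 0.0;             // where line 1 ends and the arc begins

  std::vector<Waypoint> path;
  // Straight line 1: approach along +x.
  for (double d = -line1; d < 0.0; d += 0.01)
    path.push_back({ax + d, ay});
  // Quarter circle: heading turns from +x to -y (clockwise).
  double cx = ax, cy = ay - R;           // arc centre below the approach line
  for (double t = 0.5 * PI; t > 0.0; t -= 0.01)
    path.push_back({cx + R * std::cos(t), cy + R * std::sin(t)});
  // Straight line 2: descend into the slot along -y.
  for (double d = 0.0; d < line2; d += 0.01)
    path.push_back({cx + R, cy - d});
  return 0;
}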

4.4) Vertical parking
Vertical parking is another path I designed for perpendicular parking. I did this because it is actually how human drivers normally park: it is easier to adjust the turning angle and distance for parking. This path is more complex, containing three straight lines and two circular arcs. Just as before, we need to rotate the path 90 degrees when doing the calculations, as the pictures below show.
The key quantities for this path are:
1. Intersection point a, which is in the middle of the two circles; hence Xa = (x1+x2)/2, Ya = (y1+y2)/2.
2. The centre of the left circle, which depends on the goal position: x2 = goalX + 4, y2 = goalY - R.
3. The centre of the right circle, obtained from the position of the left circle: x1 = x2 + 2*R/sqrt(2), y1 = y2 + 2*R/sqrt(2).
4. The intersection point is on the straight line, hence Xa + Ya = C.
5. The starting angle for the left circle is 0.25PI, while for the right circle it is PI.
6. The ending angle for the left circle is 0.5PI, while for the right circle it is 1.25PI.
Based on that, we can generate the equations for the circles and straight lines, as shown in the picture above left. I set both straight lines to be 4 m long, as the minimum safety distance to avoid collisions.
The code is shown below. You can find this code under /catkin_workspace/src/geometric/L-shape.cpp. You just need to copy it, substitute the geometric.cpp file with this code, then catkin_make and launch.

5.0) Local planner control
5.1) Introduction to local planner control
Generating the global plan alone is not enough: if the car cannot follow the path we generated, it does not make sense. The local planner is the key for the car to follow the path. Note that a local planner finds its own short-term path toward the goal, which means it may not exactly follow the global plan produced by the global planner. The local planner is also useful for taking dynamic obstacles into account as early as possible. Everything we have done so far is based on a local planner called the pose follower, which is also the most suitable of all the local planners I have tried. There is another local planner I tried before: the DWA local planner.
5.2) DWA Local Planner
The dwa_local_planner package provides a controller that drives a mobile base in the plane. This controller serves to connect the path planner to the robot. Using a map, the planner creates a kinematic trajectory for the robot to get from a start to a goal location. Along the way, the planner creates, at least locally around the robot, a value function, represented as a grid map. This value function encodes the costs of traversing through the grid cells. The controller’s job is to use this value function to determine dx,dy,dtheta velocities to send to the robot. The basic idea of the Dynamic Window Approach (DWA) algorithm is as follows:

  1. Discretely sample in the robot’s control space (dx,dy,dtheta)
  2. For each sampled velocity, perform forward simulation from the robot’s current state
    to predict what would happen if the sampled velocity were applied for some (short)
    period of time.
  3. Evaluate (score) each trajectory resulting from the forward simulation, using a
    metric that incorporates characteristics such as: proximity to obstacles, proximity to the goal, proximity to the global path, and speed. Discard illegal trajectories (those that collide with obstacles).
  4. Pick the highest-scoring trajectory and send the associated velocity to the mobile base.
  5. Rinse and repeat.
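To make these five steps concrete, here is an illustrative C++ outline of the loop. It is not the dwa_local_planner source code: the types, the scoring and collision-check callbacks, and the sim_time/sim_granularity values (taken from the defaults listed below) are placeholders:

#include <cmath>
#include <limits>
#include <vector>

struct State    { double x, y, theta; };
struct Velocity { double vx, vy, vth; };

// Step 2: forward-simulate a sampled velocity for sim_time seconds.
std::vector<State> simulate(State s, const Velocity& v,
                            double sim_time, double dt) {
  std::vector<State> traj;
  for (double t = 0.0; t < sim_time; t += dt) {
    s.x     += (v.vx * std::cos(s.theta) - v.vy * std::sin(s.theta)) * dt;
    s.y     += (v.vx * std::sin(s.theta) + v.vy * std::cos(s.theta)) * dt;
    s.theta += v.vth * dt;
    traj.push_back(s);
  }
  return traj;
}

// Steps 1, 3 and 4: score every sampled velocity and keep the best one.
Velocity chooseVelocity(const State& current,
                        const std::vector<Velocity>& samples,
                        double (*score)(const std::vector<State>&),
                        bool (*collides)(const std::vector<State>&)) {
  Velocity best{0.0, 0.0, 0.0};
  double best_score = -std::numeric_limits<double>::infinity();
  for (const Velocity& v : samples) {
    std::vector<State> traj = simulate(current, v, 1.7, 0.025);
    if (collides(traj)) continue;   // step 3: discard illegal trajectories
    double sc = score(traj);        // proximity to path, goal and obstacles
    if (sc > best_score) { best_score = sc; best = v; }
  }
  return best;  // step 4: send to the mobile base; step 5: repeat each cycle
}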
Parameters

There are a large number of ROS Parameters that can be set to customize the behavior of the dwa_local_planner::DWAPlannerROS wrapper. These parameters are grouped into several categories: robot configuration, goal tolerance, forward simulation, trajectory scoring, oscillation prevention, and global plan. Most of these parameters can also be changed using dynamic_reconfigure to facilitate tuning the local planner in a running system.

Robot Configuration Parameters

~/acc_lim_x (double, default: 2.5)
• The x acceleration limit of the robot in meters/sec^2
~/acc_lim_y (double, default: 2.5)
• The y acceleration limit of the robot in meters/sec^2
~/acc_lim_th (double, default: 3.2)
• The rotational acceleration limit of the robot in radians/sec^2
~/max_trans_vel (double, default: 0.55)
• The absolute value of the maximum translational velocity for the robot in m/s
~/min_trans_vel (double, default: 0.1)
• The absolute value of the minimum translational velocity for the robot in m/s
~/max_vel_x (double, default: 0.55)
• The maximum x velocity for the robot in m/s
~/min_vel_x (double, default: 0.0)
• The minimum x velocity for the robot in m/s, negative for backwards motion
~/max_vel_y (double, default: 0.1)
• The maximum y velocity for the robot in m/s
~/min_vel_y (double, default: -0.1)
• The minimum y velocity for the robot in m/s
~/max_rot_vel (double, default: 1.0)
• The absolute value of the maximum rotational velocity for the robot in rad/s
~/min_rot_vel (double, default: 0.4)
• The absolute value of the minimum rotational velocity for the robot in rad/s

Goal Tolerance Parameters

~/yaw_goal_tolerance (double, default: 0.05)
• The tolerance in radians for the controller in yaw/rotation when achieving its goal
~/xy_goal_tolerance (double, default: 0.10)
• The tolerance in meters for the controller in the x & y distance when achieving a goal
~/latch_xy_goal_tolerance (bool, default: false)
• If goal tolerance is latched, if the robot ever reaches the goal xy location it will simply rotate in place, even if it ends up outside the goal tolerance while it is doing so

Forward Simulation Parameters

~/sim_time (double, default: 1.7)
• The amount of time to forward-simulate trajectories in seconds
~/sim_granularity (double, default: 0.025)
• The step size, in meters, to take between points on a given trajectory
~/vx_samples (integer, default: 3)
• The number of samples to use when exploring the x velocity space
~/vy_samples (integer, default: 10)
• The number of samples to use when exploring the y velocity space
~/vth_samples (integer, default: 20)
• The number of samples to use when exploring the theta velocity space
~/controller_frequency (double, default: 20.0)
• The frequency at which this controller will be called in Hz. Uses searchParam to read the parameter from parent namespaces if not set in the namespace of the controller. For use with move_base, this means that you only need to set its "controller_frequency" parameter and can safely leave this one unset.

Trajectory Scoring Parameters

The cost function used to score each trajectory has the following form:

cost = path_distance_bias * (distance to path from the endpoint of the trajectory in meters)
     + goal_distance_bias * (distance to local goal from the endpoint of the trajectory in meters)
     + occdist_scale * (maximum obstacle cost along the trajectory in obstacle cost (0-254))

~/path_distance_bias (double, default: 32.0)
• The weighting for how much the controller should stay close to the path it was given
~/goal_distance_bias (double, default: 24.0)
• The weighting for how much the controller should attempt to reach its local goal; also controls speed
~/occdist_scale (double, default: 0.01)
• The weighting for how much the controller should attempt to avoid obstacles
~/forward_point_distance (double, default: 0.325)
• The distance from the center point of the robot to place an additional scoring point, in meters
~/stop_time_buffer (double, default: 0.2)
• The amount of time that the robot must stop before a collision in order for a trajectory to be considered valid, in seconds
~/scaling_speed (double, default: 0.25)
• The absolute value of the velocity at which to start scaling the robot's footprint, in m/s
~/max_scaling_factor (double, default: 0.2)
• The maximum factor to scale the robot's footprint by

5.3) Pose Follower Local Planner
The pose follower is the only suitable local planner for our geometric global planner, hence I looked into its theory. As the picture below shows, there are many waypoints on the global path. Take two of them as an example: both the yellow and purple points have their own orientation and position. The angle difference between the two orientation arrows I call the "error angle", while the position difference I call the "error distance", which is equal to sqrt(dx^2 + dy^2). The translational velocity is equal to the error distance multiplied by a constant called k_trans, while the angular velocity is equal to the error angle multiplied by a constant called k_rot. Hence, you can control the velocity by modifying these two constants in the yaml file, using the parameter reconfiguration method mentioned before. Of course, there are maximum and minimum limits for the velocity: once the velocity goes above or below a limit, it is set to the limit value itself. A minimal sketch of this control law is shown below.
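This sketch assumes the control law exactly as described above; the struct and function names are illustrative, and the real pose follower (see the GitHub link below) also handles frames, timeouts and in-place rotation:

#include <algorithm>
#include <cmath>

struct Pose     { double x, y, yaw; };
struct Velocity { double linear, angular; };

// Proportional control toward the next waypoint on the global path.
Velocity computeCommand(const Pose& current, const Pose& target,
                        double k_trans, double k_rot,
                        double max_lin, double max_rot) {
  double dx = target.x - current.x;
  double dy = target.y - current.y;
  double error_distance = std::sqrt(dx * dx + dy * dy);
  // Wrap the heading error into [-pi, pi] before scaling.
  double error_angle = std::atan2(std::sin(target.yaw - current.yaw),
                                  std::cos(target.yaw - current.yaw));
  Velocity cmd;
  cmd.linear  = std::min(k_trans * error_distance, max_lin);
  cmd.angular = std::max(-max_rot, std::min(k_rot * error_angle, max_rot));
  return cmd;
}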
You can download the pose follower local planner source code from GitHub here to look into the code properly:
https://github.com/ros-planning/navigation_experimental

6.0) Gazebo Simulation
6.1) Installation of Gazebo
For Gazebo there are also multiple options for installation. As I use Ubuntu, I selected the installation with precompiled binaries. Make sure you can launch "gzserver" and "gzclient" after the installation of Gazebo.
Gazebo is split up in two parts. The server part computes all the physics and the world, while the client is the graphical frontend for Gazebo. So if you want to save performance on your computer, you can also execute all tests without the graphical interface; although it looks very nice, it consumes a lot of resources.
Your Gazebo should be installed in:
/usr/bin/gzserver
/usr/bin/gzclient
or, if you installed Gazebo from sources:
/usr/local/bin/gzserver
/usr/local/bin/gzclient
After Gazebo and ROS have been installed it is time to install the bridge between them. With this bridge you can launch gazebo within ROS and dynamically add models to Gazebo. Depending on your Gazebo installation, there are different methods to continue.
If you have ROS Hydro you probably want to follow this guide to install the ROS Packages for Gazebo and look at the ‘Install Pre-Built Debians’ section.
If you do not have the ROS Version “Hydro” installed, you have to manually “git clone” the “gazebo_ros_pkgs”. The git url can be found on http://www.ros.org/wiki/gazebo_ros_pkgs. If git is not installed:
sudo apt-get install git
If you have some missing dependencies, the following two packages may help:
sudo apt-get install ros-hydro-pcl-conversions
sudo apt-get install ros-hydro-control-msgs
If the cmake_modules are missing, “git clone” them in the sources of your catkin directory.
git clone https://github.com/ros/cmake_modules
If everything worked you should be able to start Gazebo and ROS with (remember to source your environment):
roscore & rosrun gazebo_ros gazebo

You can also start them individually with gzserver and gzclient. If Gazebo is properly connected to ROS, you should be able to see some published topics. Just type
rostopic list
in one of your favorite terminals; if gzserver is running, you should see Gazebo topics such as:
/gazebo/link_states
/gazebo/model_states
/gazebo/parameter_descriptions
/gazebo/parameter_updates
/gazebo/set_link_state
/gazebo/set_model_state
6.2) Structure of Gazebo
In my case, the Gazebo simulation uses many files at the same time, but we can classify them into three stacks. The first is the description stack, which includes the URDF file to generate the 3D model, the gmapping file to build the map, and RViz for visualization. The second stack is Gazebo; this stack sets up the Gazebo 3D simulation environment. Finally, we need the navigation stack to move the model. There are two types of navigation: one uses the teleop package to control the model manually with the keyboard, while the other uses the move_base stack to navigate the model autonomously.

6.3) Introduction to URDF
The Universal Robotic Description Format (URDF) is an XML file format used in ROS to describe all elements of a robot. To use a URDF file in Gazebo, some additional simulation-specific tags must be added to work properly with Gazebo. This tutorial explains the necessary steps to successfully use your URDF-based robot in Gazebo, saving you from having to create a separate SDF file from scratch and duplicating description formats. Under the hood, Gazebo will then convert the URDF to SDF automatically.
Background
While URDFs are a useful and standardized format in ROS, they are lacking many features and have not been updated to deal with the evolving needs of robotics. URDF can only specify the kinematic and dynamic properties of a single robot in isolation. URDF can not specify the pose of the robot itself within a world. It is also not a universal description format since it cannot specify joint loops (parallel linkages), and it lacks friction and other properties. Additionally, it cannot specify things that are not robots, such as lights, heightmaps, etc.
On the implementation side, the URDF syntax breaks proper formatting with heavy use of XML attributes, which in turn makes URDF more inflexible. There is also no mechanism for backward compatibility.
To deal with this issue, a new format called the Simulation Description Format (SDF) was created for use in Gazebo to solve the shortcomings of URDF. SDF is a complete description for everything from the world level down to the robot level. It is scalable, and makes it easy to add and modify elements. The SDF format is itself described using XML, which facilitates a simple upgrade tool to migrate old versions to new versions. It is also self-descriptive.
It is the intention of this author to make URDFs as fully documented and supported in Gazebo as possible, but it is relevant to the reader to understand why the two formats exist and the shortcomings of both. It would be nice if more work was put into URDFs to update them to the current needs of robotics.
Overview of Converting to Gazebo
There are several steps to get a URDF robot properly working in Gazebo. The following is an overview of steps, which are then elaborated on in the rest of this tutorial:
Required
• An <inertia> element within each <link> element must be properly specified and configured.
Optional
• Add a <gazebo> element for every <link>
◦ Convert visual colors to Gazebo format
◦ Convert stl files to dae files for better textures
◦ Add sensor plugins
• Add a <gazebo> element for every <joint>
◦ Set proper damping dynamics
◦ Add actuator control plugins
• Add a <gazebo> element for the <robot> element
• Add a <link name="world"/> link if the robot should be rigidly attached to the world/base_link

The <gazebo> Element
The <gazebo> element is an extension to the URDF used for specifying additional properties needed for simulation purposes in Gazebo. It allows you to specify the properties found in the SDF format that are not included in the URDF format. None of the elements within a <gazebo> element are required because default values will be automatically included. There are three different types of <gazebo> elements: one for the <robot> tag, one for <link> tags, and one for <joint> tags. We will discuss the attributes and elements within each type of <gazebo> element throughout this tutorial.
Header of a URDF File
There have been many API changes in Gazebo and the required URDF format, one of which is that Gazebo xml-schema namespaces are no longer needed. If your URDF has something like:

You can remove them. All you need in your root element tag is the name of the robot and, optionally, the xml namespace for xacro if you are using that:
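For example (the robot name here is illustrative):

<robot name="mybot" xmlns:xacro="http://www.ros.org/wiki/xacro">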

Rigidly Fixing A Model to the World
If you would like your URDF model to be permanently attached to the world frame (the ground plane), you must create a “world” link and a joint that fixes it to the base of your model. RRBot accomplishes this with the following:
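The RRBot snippet is not reproduced here, but from the Gazebo tutorial it has the following form (the child link name depends on your model):

<link name="world"/>

<joint name="fixed" type="fixed">
  <parent link="world"/>
  <child link="link1"/>
</joint>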




If however you have a mobile base or some other moving robot, you do not need this link or joint.


Links

The link element describes a rigid body with an inertia, visual features, and collision properties.

A link can contain the following elements:

<inertial> (optional)
• The inertial properties of the link.
◦ <origin> (optional: defaults to identity if not specified)
▪ This is the pose of the inertial reference frame, relative to the link reference frame. The origin of the inertial reference frame needs to be at the center of gravity. The axes of the inertial reference frame do not need to be aligned with the principal axes of the inertia.
◦ xyz (optional: defaults to zero vector)
▪ Represents the offset.
◦ rpy (optional: defaults to identity if not specified)
▪ Represents the fixed axis roll, pitch and yaw angles in radians.
◦ <mass>
▪ The mass of the link is represented by the value attribute of this element.
◦ <inertia>
▪ The 3x3 rotational inertia matrix, represented in the inertia frame. Because the rotational inertia matrix is symmetric, only 6 above-diagonal elements of this matrix are specified here, using the attributes ixx, ixy, ixz, iyy, iyz, izz.

<visual> (optional)
• The visual properties of the link. This element specifies the shape of the object (box, cylinder, etc.) for visualization purposes. Note: multiple instances of <visual> tags can exist for the same link. The union of the geometry they define forms the visual representation of the link.
◦ name (optional)
▪ Specifies a name for a part of a link's geometry. This is useful to be able to refer to specific bits of the geometry of a link.
◦ <origin> (optional: defaults to identity if not specified)
▪ The reference frame of the visual element with respect to the reference frame of the link.
◦ xyz (optional: defaults to zero vector)
▪ Represents the offset.
◦ rpy (optional: defaults to identity if not specified)
▪ Represents the fixed axis roll, pitch and yaw angles in radians.
◦ <geometry> (required)
▪ The shape of the visual object. This can be one of the following:
▪ <box>: the size attribute contains the three side lengths of the box. The origin of the box is in its center.
▪ <cylinder>: specify the radius and length. The origin of the cylinder is in its center.
▪ <sphere>: specify the radius. The origin of the sphere is in its center.
▪ <mesh>: a trimesh element specified by a filename, and an optional scale that scales the mesh's axis-aligned bounding box. The recommended format for best texture and color support is Collada .dae files, though .stl files are also supported. The mesh file is not transferred between machines referencing the same model; it must be a local file.
◦ <material> (optional)
▪ The material of the visual element. It is allowed to specify a material element outside of the 'link' object, in the top level 'robot' element. From within a link element you can then reference the material by name.
▪ name: the name of the material.
▪ <color> (optional): rgba, the color of a material specified by a set of four numbers representing red/green/blue/alpha, each in the range of [0,1].
▪ <texture> (optional): the texture of a material is specified by a filename.

<collision> (optional)
• The collision properties of a link. Note that this can be different from the visual properties of a link; for example, simpler collision models are often used to reduce computation time. Note: multiple instances of <collision> tags can exist for the same link. The union of the geometry they define forms the collision representation of the link.
◦ name (optional)
▪ Specifies a name for a part of a link's geometry. This is useful to be able to refer to specific bits of the geometry of a link.
◦ <origin> (optional: defaults to identity if not specified)
▪ The reference frame of the collision element, relative to the reference frame of the link.
◦ xyz (optional: defaults to zero vector)
▪ Represents the offset.
◦ rpy (optional: defaults to identity if not specified)
▪ Represents the fixed axis roll, pitch and yaw angles in radians.
◦ <geometry>
▪ See the geometry description in the above visual element.
Joints

The joint element describes the kinematics and dynamics of the joint and also specifies the safety limits of the joint.

The joint element has two attributes:
• name (required)
◦ Specifies a unique name of the joint.
• type (required)
◦ Specifies the type of joint, where type can be one of the following:
▪ revolute: a hinge joint that rotates along the axis and has a limited range specified by the upper and lower limits.
▪ continuous: a continuous hinge joint that rotates around the axis and has no upper and lower limits.
▪ prismatic: a sliding joint that slides along the axis, and has a limited range specified by the upper and lower limits.
▪ fixed: this is not really a joint because it cannot move; all degrees of freedom are locked. This type of joint does not require the axis, calibration, dynamics, limits or safety_controller.
▪ floating: this joint allows motion for all 6 degrees of freedom.
▪ planar: this joint allows motion in a plane perpendicular to the axis.

The joint element has the following elements:
• <origin> (optional: defaults to identity if not specified)
◦ This is the transform from the parent link to the child link. The joint is located at the origin of the child link, as shown in the figure above.
◦ xyz (optional: defaults to zero vector)
▪ Represents the offset.
◦ rpy (optional: defaults to zero vector if not specified)
▪ Represents the rotation around fixed axis: first roll around x, then pitch around y and finally yaw around z. All angles are specified in radians.
• <parent> (required)
◦ Parent link name with mandatory attribute link:
▪ The name of the link that is the parent of this link in the robot tree structure.
• <child> (required)
◦ Child link name with mandatory attribute link:
▪ The name of the link that is the child link.
• <axis> (optional: defaults to (1,0,0))
◦ The joint axis specified in the joint frame. This is the axis of rotation for revolute joints, the axis of translation for prismatic joints, and the surface normal for planar joints. The axis is specified in the joint frame of reference. Fixed and floating joints do not use the axis field.
◦ xyz (required)
▪ Represents the components of a vector. The vector should be normalized.
• <calibration> (optional)
◦ The reference positions of the joint, used to calibrate the absolute position of the joint.
◦ rising (optional)
▪ When the joint moves in a positive direction, this reference position will trigger a rising edge.
◦ falling (optional)
▪ When the joint moves in a positive direction, this reference position will trigger a falling edge.
• <dynamics> (optional)
◦ An element specifying physical properties of the joint. These values are used to specify modeling properties of the joint, particularly useful for simulation.
◦ damping (optional, defaults to 0)
▪ The physical damping value of the joint (N*s/m for prismatic joints, N*m*s/rad for revolute joints).
◦ friction (optional, defaults to 0)
▪ The physical static friction value of the joint (N for prismatic joints, N*m for revolute joints).
• <limit> (required only for revolute and prismatic joints)
◦ An element that can contain the following attributes:
◦ lower (optional, defaults to 0)
▪ An attribute specifying the lower joint limit (radians for revolute joints, meters for prismatic joints). Omit if the joint is continuous.
◦ upper (optional, defaults to 0)
▪ An attribute specifying the upper joint limit (radians for revolute joints, meters for prismatic joints). Omit if the joint is continuous.
◦ effort (required)
▪ An attribute for enforcing the maximum joint effort. See safety limits.
◦ velocity (required)
▪ An attribute for enforcing the maximum joint velocity. See safety limits.
• <mimic> (optional) (new with ROS Groovy, see issue)
◦ This tag is used to specify that the defined joint mimics another existing joint. The value of this joint can be computed as value = multiplier * other_joint_value + offset. Expected and optional attributes:
◦ joint (required)
▪ This specifies the name of the joint to mimic.
◦ multiplier (optional)
▪ Specifies the multiplicative factor in the formula above. Defaults to 1.
◦ offset (optional)
▪ Specifies the offset to add in the formula above. Defaults to 0.
• <safety_controller> (optional)
◦ An element that can contain the following attributes:
◦ soft_lower_limit (optional, defaults to 0)
▪ An attribute specifying the lower joint boundary where the safety controller starts limiting the position of the joint. This limit needs to be larger than the lower joint limit (see above). See safety limits for more details.
◦ soft_upper_limit (optional, defaults to 0)
▪ An attribute specifying the upper joint boundary where the safety controller starts limiting the position of the joint. This limit needs to be smaller than the upper joint limit (see above). See safety limits for more details.
◦ k_position (optional, defaults to 0)
▪ An attribute specifying the relation between position and velocity limits. See safety limits for more details.
◦ k_velocity (required)
▪ An attribute specifying the relation between effort and velocity limits. See safety limits for more details.

For more information about links and joints, see these two pages:
http://wiki.ros.org/urdf/XML/link
http://wiki.ros.org/urdf/XML/joint
6.4) Create your own model
Firstly, we can use a simple two-wheel model to achieve these functions. You can download the model by typing git clone -b navigation https://github.com/richardw05/mybot_ws.git in the terminal. The URDF file is shown below:

The model looks like the picture shown below left: it has two cylinder wheels, two sphere casters, one box body, one camera and one 3D laser scanner.
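As the URDF screenshot is not reproduced here, an illustrative excerpt for one wheel of such a model might look like this (the dimensions, names and inertial values are placeholders, not the actual mybot_ws file):

<link name="left_wheel">
  <visual>
    <geometry>
      <cylinder radius="0.1" length="0.05"/>
    </geometry>
  </visual>
  <collision>
    <geometry>
      <cylinder radius="0.1" length="0.05"/>
    </geometry>
  </collision>
  <inertial>
    <mass value="0.5"/>
    <inertia ixx="0.001" ixy="0" ixz="0" iyy="0.001" iyz="0" izz="0.001"/>
  </inertial>
</link>

<joint name="left_wheel_joint" type="continuous">
  <parent link="chassis"/>
  <child link="left_wheel"/>
  <axis xyz="0 1 0"/>
</joint>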
6.5) Build the map
After we have this model, we can start building the map. First, open the Gazebo simulator in Terminal 1 with roslaunch mybot_gazebo mybot_world.launch; you will see some obstacles in the Gazebo 3D environment, as in the picture below:
In Terminal 2, start map building with roslaunch mybot_navigation gmapping_demo.launch

In Terminal 3, launch RViz with roslaunch mybot_description mybot_rviz_gmapping.launch
Then you can see the model in RViz as shown below. The camera view in the lower-left corner shows exactly what is in front of the model in the Gazebo environment, and you can also see the laser scan around the car. If there is no camera or laser scan at the beginning, you can add them by topic ("camera" and "laser scan") and then you will see them.
In Terminal 4, start teleop with roslaunch mybot_navigation mybot_teleop.launch. Now you can move the model using the keyboard, but the teleop terminal must be in front (focused) when you try to move. Drive the model around to scan the whole 3D environment until you see a proper map. Be careful when scanning: if the scanned map is not clean, there may be errors when you save it.

6.6) Navigate the model
After that we need to save the map. First create a folder called maps, then type rosrun map_server map_saver -f ~/mybot_ws/src/mybot_navigation/maps/test_map to save the map image and yaml files. You can also save the .rviz file by clicking File >> Save Config As into the description/rviz folder. Finally, open mybot_ws/src/mybot_description/launch/mybot_rviz.launch and change the relevant line to point to the rviz file you just created.
The last step is loading the map.
In Terminal 1, launch the Gazebo world
roslaunch mybot_gazebo mybot_world.launch
In Terminal 2, start localization (AMCL)
roslaunch mybot_navigation amcl_demo.launch
In Terminal 3, launch rviz
roslaunch mybot_description mybot_rviz_amcl.launch
With that, we have now added the navigation stack (amcl_demo.launch) and we can navigate the model autonomously.
You may ask: what is the navigation stack?

The navigation stack is a set of algorithms that allows the robot to move. Please refer to the image above: the navigation stack contains the Global Planner, Local Planner, Global Costmap, Local Costmap, Recovery Behaviours, AMCL and the Map Server. The navigation stack uses external information from the Sensor Transform, Odometry Source, Sensor Source and Base Controller to make a robot move; these four things (sensors, transforms, odometry and a base controller) are required for a real robot to move. The Local Costmap is just like the local costmap (the colourful box) in the Turtlebot Stage Simulator: it allows the robot to detect and avoid obstacles. AMCL is Adaptive Monte Carlo Localization, a localization method for robots moving in 2D; it uses a particle filter to track the pose of the robot in a map.
The Map Server is not strictly required by the navigation stack, but to visualize the robot in a map we use the Map Server to publish the map so that RViz can subscribe to it and we can see the robot and the map in RViz.
The Sensor Transform, also known simply as the transform, publishes information about the relationships between the robot's coordinate frames. This is important as it allows the robot to know its estimated position in the map.
The Sensor Source sends sensor information to the navigation stack so that the robot can avoid obstacles. A sensor source publishes only two types of message: LaserScan or PointCloud. If a laser sensor is used, it publishes LaserScan messages; an example of a LaserScan sensor used in ROS is the Hokuyo laser device.
The Odometry Source publishes information to the navigation stack using both a transform and a nav_msgs/Odometry message. Odometry lets the robot estimate its velocity and know its position in the map.
The Base Controller receives velocity commands from the navigation stack in the form of geometry_msgs/Twist messages and converts each velocity command into motor commands, which makes the robot move.
This was actually one of the navigation stack in the gazebo example. The navigation stack is in a launch file. The launch file is able to run multiple node at a time, normally in ROS

we use a $ rosrun [package] [node name], but with a launch file we can run multiple node just by using one command $ roslaunch [package] [launch file name].
This navigation stack in the Gazebo example, as the move_base file above shows, consists of the map_server, move_base, the common costmap configuration, the local costmap, the global costmap and the base_local_planner.
The map_server lets us publish a map; RViz subscribes to it, which is how we see the map in RViz. In the launch file the map_server has an argument, args="$(find chapter9_tutorials)/maps/map.yaml", meaning it loads the map from the file maps/map.yaml inside the package chapter9_tutorials. This matters because to publish a map we have to point the map_server at the map file.
The common costmap stores configuration shared by both the local and the global costmap. It contains the following parameters (a consolidated example follows this list):

  • Obstacle_range is the maximum distance at which sensor readings introduce new obstacle information into the costmap. With an obstacle_range of 2.5, obstacles are added to the common costmap once the robot is within about 2.5 meters of them.

  • Raytrace_range controls how far the costmap clears free space based on sensor readings; a raytrace_range of 3.0 means that as the robot moves, space the sensor sees as free up to 3 meters around the robot is marked free in the common costmap.

  • Footprint describes the geometry of the robot to the navigation stack (for a circular robot a single radius value can be given instead). A footprint of 2.0 meters means the robot is treated as a 2.0 meter by 2.0 meter square.

  • Inflation_radius keeps a minimum distance between the robot and obstacles: cells within this radius of an obstacle are treated as obstacles too. With an inflation_radius of 0.5 meters, the robot stays at least 0.5 meters away from real obstacles.

  • Cost_scaling_factor modifies how the robot behaves around obstacles; a value of 10.0 makes the robot's behaviour near obstacles more aggressive.

  • Observation_sources is a list of sensors that pass information to the costmap. observation_sources: scan means the only sensor used is named scan. The scan: {...} block then sets the parameters of that sensor: sensor_frame: base_link means the sensor's coordinate frame is base_link; data_type: LaserScan means the sensor publishes LaserScan messages (PointCloud is the other possibility, as a sensor source publishes one of these 2 message types); topic: /scan means the sensor publishes on the topic /scan; marking: true means the sensor may add obstacles to the costmap; and clearing: true means the sensor may clear obstacles from it.
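Putting the parameters above together, the common costmap configuration file would look roughly like this (the footprint corners are my reading of the 2.0 m by 2.0 m size; everything else comes from the values above):

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[-1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
inflation_radius: 0.5
cost_scaling_factor: 10.0
observation_sources: scan
scan: {sensor_frame: base_link, data_type: LaserScan, topic: /scan, marking: true, clearing: true}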
    The global costmap has 3 important things: the global frame, the robot base frame and the static map.

  • Global_frame determines which frame the global costmap runs in; global_frame: /map means our global costmap runs in the /map frame.

  • Robot_base_frame: /base_footprint means our global costmap is linked to the /base_footprint frame.

  • Static_map: true means the global costmap is initialized from the static map published by the map_server.
    The local costmap has 3 important things: the rolling window, the width and the height.

  • Rolling_window: true means the local costmap is kept at the center of the robot. This is important because it ensures the local costmap always stays centred on the robot as it moves.

  • Width and height set the size of the local costmap (the colourful box), just like the one we saw in the Turtlebot Stage Simulator.
    The Base Local Planner has 1 important thing, the holonomic robot flag; in the Base Local Planner configuration file we can also set the velocity and acceleration limits of the robot (see the sketch after this list).

  • holonomic_robot: false means that we are using a non-holonomic robot.
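As a sketch, a Base Local Planner configuration file might look like this; only holonomic_robot comes from the text above, while the velocity and acceleration limits are illustrative values:

TrajectoryPlannerROS:
  holonomic_robot: false
  max_vel_x: 0.5         # illustrative velocity limit in m/s
  min_vel_x: 0.1
  max_vel_theta: 1.0     # illustrative turn rate limit in rad/s
  acc_lim_x: 1.0         # illustrative acceleration limits
  acc_lim_theta: 1.5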
    I think the Learning ROS for Robotics Programming book is a very good starting point for beginners to learn what a URDF robot model is and what a navigation stack is.
    For the URDF robot model, Chapter 7 of the book has code and examples that teach beginners how to create a 3D robot model from scratch. If in the future we upgrade to the Gazebo simulator for a more realistic auto parking environment, the robot model has to be written in URDF, which Chapter 7 covers.
    For the navigation stack, Chapters 8 and 9 explain what a navigation stack is about: gmapping, publishing goals from code (a minimal sketch of this is shown after the link below) and using the 2D Nav Goal tool in RViz. These 2 chapters are recommended for beginners learning how to move a robot in RViz, as they provide many navigation examples; after reading them, a beginner will be able to create their own navigation stack that lets the robot move in a map.
    For more information about the move_base stack, visit: http://wiki.ros.org/move_base
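As mentioned above, goals can also be published from code. A minimal sketch using actionlib (the goal coordinates and use of the map frame are illustrative):

#!/usr/bin/env python
# Sketch: send one navigation goal to move_base and wait for the result.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal_example')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'        # plan in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0          # illustrative goal position
goal.target_pose.pose.orientation.w = 1.0       # face along +x
client.send_goal(goal)
client.wait_for_result()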
    After all the above is done, you can set the goal position in RViz and make the model move.

By adding geometric.cpp as the global planner and the pose follower as the local planner inside the amcl_demo.launch file, and choosing the corresponding topic names in RViz, you can see the path we designed before; as the picture below shows, the vertical parking path appears and the car can start to follow it. The control in the 3D environment is not great, but it is closer to real-life conditions; my work focuses more on path planning than on controls, so this may be solved in the future.
7.0) Ackermann Steering
7.1) Introduction to Ackermann Steering

The two-wheel model is not exactly the same as our NYP car, hence I tried to modify the model to use Ackermann steering. Ackermann steering geometry is a geometric arrangement of linkages in the steering of a car or other vehicle, designed to solve the problem of wheels on the inside and outside of a turn needing to trace out circles of different radii.
It was invented by the German carriage builder Georg Lankensperger in Munich in 1817, then patented in England in 1818 by his agent, Rudolph Ackermann (1764–1834), for horse-drawn carriages. Erasmus Darwin may have a prior claim as the inventor, dating from 1758.
Advantages
The intention of Ackermann geometry is to avoid the need for tyres to slip sideways when following the path around a curve. The geometrical solution to this is for all wheels to have their axles arranged as radii of circles with a common centre point. As the rear wheels are fixed, this centre point must be on a line extended from the rear axle. Intersecting the axes of the front wheels on this line as well requires that the inside front wheel is turned, when steering, through a greater angle than the outside wheel.
Rather than the preceding "turntable" steering, where both front wheels turned around a common pivot, each wheel gained its own pivot, close to its own hub. While more complex, this arrangement enhances controllability by avoiding large inputs from road surface variations being applied to the end of a long lever arm, as well as greatly reducing the fore-and-aft travel of the steered wheels. A linkage between these hubs pivots the two wheels together, and by careful arrangement of the linkage dimensions the Ackermann geometry could be approximated. This was achieved by making the linkage not a simple parallelogram, but by making the length of the track rod (the moving link between the hubs) shorter than that of the axle, so that the steering arms of the hubs appeared to "toe out". As the steering moved, the wheels turned according to Ackermann geometry, with the inner wheel turning further. If the track rod is placed ahead of the axle, it should instead be longer in comparison, thus preserving this same "toe out".
7.2) Designing My Own Ackermann Steering Model
To design an Ackermann steering model we need two things: a URDF file and a corresponding controller. I tried to figure out how Ackermann steering works and to write the URDF file, but there is a problem: in URDF each link can have only one parent joint, so a link cannot hang off two joints at the same time, as the picture below shows. I also did not have a corresponding controller for Ackermann steering, because the two-wheel model's controller is not suitable.
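To illustrate the tree constraint (the link and joint names here are made up, not the project's actual model), an Ackermann front wheel is usually chained as base link -> steering hinge -> wheel spin, so that each link has exactly one parent joint:

<!-- Sketch only; the referenced links would be defined elsewhere in the URDF -->
<joint name="front_left_steer_joint" type="revolute">
  <parent link="base_link"/>
  <child link="front_left_steer_link"/>
  <axis xyz="0 0 1"/>  <!-- steering rotates about the vertical axis -->
  <limit lower="-0.6" upper="0.6" effort="10.0" velocity="1.0"/>
</joint>
<joint name="front_left_wheel_joint" type="continuous">
  <parent link="front_left_steer_link"/>
  <child link="front_left_wheel"/>
  <axis xyz="0 1 0"/>  <!-- the wheel spins about its own axle -->
</joint>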

7.3) Searching for Existing Ackermann Steering Models
I searched for existing Ackermann models online; some, like golf_car and forklift, did not work on my computer at all. Finally I found a model called Erle-Rover. These are the steps to use it. It is recommended to install Ubuntu 14.04 64-bit.
First of all, install the base packages
sudo apt-get update
sudo apt-get install gawk make git curl cmake -y
Then install the dependencies for MAVProxy
sudo apt-get install g++ python-pip python-matplotlib python-serial python-wxgtk2.8 python-scipy python-opencv python-numpy python-pyparsing ccache realpath libopencv-dev -y
Now, install MAVProxy
sudo pip install future
sudo apt-get install libxml2-dev libxslt1-dev -y
sudo pip2 install pymavlink catkin_pkg --upgrade
sudo pip install MAVProxy==1.5.2
Download and install ArUco.

  1. Download ArUco 1.3.0 from here
  2. Install ArUco
    cd ~/Downloads   # replace this with your download directory
    tar -xvzf aruco-1.3.0.tgz
    cd aruco-1.3.0/
    mkdir build && cd build
    cmake ..
    make
    sudo make install
APM/ArduPilot
The ArduPilot project is an open source autopilot for drones and ground vehicles. We'll be using its code to simulate the rover:
Compile a specific branch of ardupilot
mkdir -p ~/simulation; cd ~/simulation
git clone https://github.com/erlerobot/ardupilot -b gazebo
Get the latest version of JSBSim (optional step)
cd ~/simulation
git clone git://github.com/tridge/jsbsim.git

Additional dependencies required

sudo apt-get install libtool automake autoconf libexpat1-dev
cd jsbsim
./autogen.sh --enable-libraries
make -j2
sudo make install

Install ROS Indigo
Setup your computer to accept software from packages.ros.org, setup your keys and install (make sure your Debian package index is up-to-date):
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
sudo apt-get update
Install the ROS package, build, and communication libraries (no GUI tools):
sudo apt-get install ros-indigo-ros-base -y
Initialize rosdep. Before you can use ROS you need to initialize rosdep, which makes it easy to install system dependencies for source you want to compile and is required to run some core components in ROS.
sudo rosdep init
rosdep update
It’s convenient if the ROS environment variables are automatically added to your bash session every time a new shell is launched:
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
Get rosinstall and some additional dependencies
sudo apt-get install python-rosinstall \
    ros-indigo-octomap-msgs \
    ros-indigo-joy \
    ros-indigo-geodesy \
    ros-indigo-octomap-ros \
    ros-indigo-mavlink \
    ros-indigo-control-toolbox \
    ros-indigo-transmission-interface \
    ros-indigo-joint-limits-interface \
    unzip -y
Create ROS workspace
mkdir -p ~/simulation/ros_catkin_ws/src
Initialize the workspace
cd ~/simulation/ros_catkin_ws/src
catkin_init_workspace
cd ~/simulation/ros_catkin_ws
catkin_make
source devel/setup.bash
Download these repositories into src:
cd src/

git clone https://github.com/erlerobot/ardupilot_sitl_gazebo_plugin
git clone https://github.com/tu-darmstadt-ros-pkg/hector_gazebo/
git clone https://github.com/erlerobot/rotors_simulator -b sonar_plugin
git clone https://github.com/PX4/mav_comm.git
git clone https://github.com/ethz-asl/glog_catkin.git
git clone https://github.com/catkin/catkin_simple.git
git clone https://github.com/erlerobot/mavros.git
git clone https://github.com/ros-simulation/gazebo_ros_pkgs.git -b indigo-devel
# Add Python and C++ examples
git clone https://github.com/erlerobot/gazebo_cpp_examples
git clone https://github.com/erlerobot/gazebo_python_examples
Install Gazebo
Option 1: Install Gazebo using Ubuntu packages
Setup your computer to accept software from packages.osrfoundation.org:
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" > /etc/apt/sources.list.d/gazebo-stable.list'
Setup keys
wget http://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -
Install gazebo7
sudo apt-get update
sudo apt-get remove '*gazebo*' '*sdformat*' '*ignition-math*' && sudo apt-get update && sudo apt-get install gazebo7 libgazebo7-dev drcsim7 -y
Option 2: Install Gazebo from source
Follow these instructions to install Gazebo from source.
Compile the workspace
Then compile everything together:
cd ~/simulation/ros_catkin_ws
catkin_make --pkg mav_msgs mavros_msgs gazebo_msgs
source devel/setup.bash
catkin_make -j 4
Download Gazebo models
mkdir -p ~/.gazebo/models
git clone https://github.com/erlerobot/erle_gazebo_models
mv erle_gazebo_models/* ~/.gazebo/models
Executing APMrover2
source ~/simulation/ros_catkin_ws/devel/setup.bash
cd ~/simulation/ardupilot/APMrover2
../Tools/autotest/sim_vehicle.sh -j 4 -f Gazebo

Once MAVProxy has launched completely, load the parameters:

param load /[path_to_your_home_directory]/simulation/ardupilot/Tools/Frame_params/3DR_Rover.param

NOTE: replace [path_to_your_home_directory] with the actual path to your home directory.

Example: param load /home/john/simulation/ardupilot/Tools/Frame_params/3DR_Rover.param

In another terminal
source ~/simulation/ros_catkin_ws/devel/setup.bash
roslaunch ardupilot_sitl_gazebo_plugin rover_spawn.launch
Erle-Rover model in Gazebo simulator
Controlling Erle-Rover using MAVProxy
Make the rover move forward. In the first terminal (in the MAVProxy prompt) execute:
mode MANUAL
param set SYSID_MYGCS 255
rc 3 1900
Or backwards (in the MAVProxy prompt):
rc 3 1200

What we are doing here is overriding the 3rd channel of the RC, which corresponds to the throttle. Values go from 1100 to 1900; 1500 stops the throttle, so values above 1500 make the rover move forward and values below 1500 make it move backwards. The same principle applies to the yaw, which is on the 1st channel of the RC: values above 1500 turn it right, and values below 1500 turn it left. For instance:

In the MAVProxy prompt:
rc 1 1400

I have only controlled the car manually so far, but it should be possible to make it autonomous as well. You can continue working on it by following the guidance here: http://docs.erlerobotics.com/simulation/vehicles/erle_rover/tutorial_2

8.0) Future work
1. Continue improving and completing the path planning, since the control in Gazebo and in real-life conditions is currently not good enough for the geometric planner.
2. Implement the local planner for the actual NYP car. If time allows, also refine the model design to bring it closer and closer to the real NYP car.
9.0) Conclusion
This is a very practical and interesting project from which I learnt a lot about programming. To be honest, I was a little afraid of programming because I had not studied much of it before this project. However, I started learning ROS and Linux during these 3 months. I spent 2-3 weeks going through many different websites, doing different tutorials and trying to become familiar with everything as fast as possible. From this I know that programming can be learnt in many ways, whether from books or from the internet.
Before, I was used to solving problems on paper, but after these 3 months I know how to translate mathematical calculations into computer language and code, which can turn static values on paper into dynamic robots.
I really like robotics now and I hope I can learn more and become an expert in the future.
10.0) References
http://docs.erlerobotics.com/simulation/vehicles/erle_rover/tutorial_2 (Ackermann model)
http://moorerobots.com/blog (Gazebo tutorial)
http://gazebosim.org/tutorials/?tut=ros_urdf (URDF)
https://www.generationrobots.com/blog/en/2015/02/robotic-simulation-scenarios-with-gazebo-and-ros/ (Gazebo installation)
http://wiki.ros.org/urdf/XML/link (link)
http://wiki.ros.org/urdf/XML/joint (joint)
http://wiki.ros.org/move_base (move_base)
https://en.wikipedia.org/wiki/Ackermann_steering_geometry (Ackermann steering)
https://www.google.com.sg/search?q=car+parallel+parking&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjthbL9wYrVAhXBrI8KHbZrAfAQ_AUIBigB&biw=1093&bih=530#imgrc=WiNHnZtl-_J8KM: (parallel parking)
https://www.google.com.sg/search?q=car+perpendicular+parking&source=lnms&tbm=isch&sa=X&ved=0ahUKEwihi_qUworVAhXJsY8KHQjtC2wQ_AUIBigB&biw=1093&bih=530#tbm=isch&q=car+perpendicular+parking+step&imgrc=hDON42RErnbC4M: (perpendicular parking)
https://www.youtube.com/watch?v=199QsWyBhpo (parallel parking video)
https://www.youtube.com/watch?v=HYXjDyFtBJw (perpendicular parking video)
http://www.iri.upc.edu/files/scidoc/1658-Autonomous-navigation-framework-for-a-car-like-robot.pdf (DWA)
