Scientific Background

 

Obstacle Avoidance and Path Planning

Obstacle avoidance and path planning are active research topics in the field of autonomous and intelligent robots. Obstacle avoidance refers to the ongoing process by which a robot handles anticipated and unanticipated obstacles on the route to its goal and alters its path to avoid collision while continuing toward the target point. This process ranges from simple measures (stopping the robot before a collision) to sophisticated ones (dodging obstacles without reducing the robot's forward speed). The advanced levels naturally attract more attention, and handling obstacles at maximum speed is currently in the limelight. Since the obstacle-avoidance process varies from one environment to another, it is one of the serious challenges in the design and construction of robots. An imperfect obstacle-avoidance algorithm limits an autonomous robot's overall operational capability.

In marine projects, diverse sensors and transducers such as RADAR, LIDAR, and stereo vision are used to detect obstacles. Obstacles can be divided into two categories: static (such as walls) and dynamic (such as other boats and robots). The primary purpose of new projects is the design and evaluation of an obstacle-avoidance algorithm that can fuse data from various sources and process them at an acceptable rate. This algorithm must be capable of neglecting obstacles that would not affect the robot's trajectory, i.e., obstacles with which the robot would not collide. There are different choices for avoiding a collision, such as a fast stop, increasing or decreasing speed, changing direction, and backward movement. Take a dynamic obstacle (like a moving boat) as an example. If the algorithm, after estimating the speed of the obstacle relative to the robot's own speed, anticipates a probable collision, it starts to change direction. After the change in direction, if the trajectories of the robot and the obstacle no longer intersect, the robot continues its forward movement while keeping both the goal and the obstacle in view. However, in some situations an object approaches the robot directly. In this case the robot stops, and if the object continues its movement, the robot begins moving backward to avoid a crash.
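The decision procedure described above can be sketched in code. The following Python fragment is a minimal illustration, not the project's actual algorithm; the function name, the thresholds, and the straight-line constant-velocity motion model are all assumptions made for the example.

```python
import math

def avoidance_action(robot_pos, robot_vel, obstacle_pos, obstacle_vel,
                     safe_distance=5.0, horizon=10.0, dt=0.5):
    """Choose a maneuver for one dynamic obstacle by forward-simulating both
    trajectories over a short horizon (straight-line motion is assumed)."""
    rel = (obstacle_pos[0] - robot_pos[0], obstacle_pos[1] - robot_pos[1])
    dist = math.hypot(rel[0], rel[1])
    # Relative velocity of the obstacle with respect to the robot
    rel_vel = (obstacle_vel[0] - robot_vel[0], obstacle_vel[1] - robot_vel[1])
    closing = rel[0] * rel_vel[0] + rel[1] * rel_vel[1] < 0
    t = 0.0
    while t <= horizon:
        rx = robot_pos[0] + robot_vel[0] * t
        ry = robot_pos[1] + robot_vel[1] * t
        ox = obstacle_pos[0] + obstacle_vel[0] * t
        oy = obstacle_pos[1] + obstacle_vel[1] * t
        if math.hypot(ox - rx, oy - ry) < safe_distance:
            # Already inside the safety margin and still closing: back away;
            # otherwise a change of direction is enough.
            if dist < safe_distance and closing:
                return "reverse"
            return "change_direction"
        t += dt
    return "continue"
```

A distant static obstacle yields "continue", a predicted head-on intersection yields "change_direction", and an object already inside the safety margin and still approaching yields "reverse", mirroring the behaviours listed in the text.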

 


 

Kalman Filter

Rudolf (Rudi) Emil Kálmán (Hungarian: Kálmán Rudolf Emil; May 19, 1930) was a Hungarian-born American electrical engineer, mathematician, and inventor, educated at M.I.T. and Columbia University. He is most noted for his co-invention and development of the Kalman filter (or Kalman-Bucy filter), a mathematical algorithm widely used in signal processing, control systems, navigation systems, avionics, and outer-space vehicles to extract a signal from a long sequence of noisy and/or incomplete measurements, usually those made by electronic and gyroscopic systems.

Kalman's ideas on filtering were initially met with vast skepticism, so much so that he was forced to publish his results first in mechanical engineering rather than in electrical or systems engineering. He eventually had more success presenting his ideas, which led to the use of Kalman filters during the Apollo program and, later, in the NASA Space Shuttle, in Navy submarines, and in unmanned aerospace vehicles. For this work, U.S. President Barack Obama awarded Kálmán the National Medal of Science on October 7, 2009.

The Kalman filter is a powerful recursive data-processing algorithm for estimating the state of noisy systems. Its basic idea is simple: noisy data in, hopefully less noisy data out. For linear systems with white Gaussian errors, the Kalman filter gives the best estimate based on all previous measurements; for nonlinear systems, optimality is qualified. It does not need to store all previous measurements and reprocess all the data at each time step. Many physical processes, such as a vehicle driving along a road, can be modeled as linear systems. Applying a Kalman filter involves several steps: a) understand the situation and break the problem down to its mathematical basics; b) model the state process; c) model the measurement process (the measurement space may not be the same as the state space, e.g., using an electrical diode to measure weight); d) model the noise (the basic Kalman filter assumes Gaussian white noise); e) test the filter; and f) refine the filter by adjusting the noise parameters. A linear system can be described by the following two equations, where Equ. (1) is the state equation and Equ. (2) is the output equation:

 

x_(k+1) = A x_k + B u_k + w_k                     (1)

y_k = C x_k + z_k                                 (2)

 

In the above equations A, B, and C are matrices derived from the system's model, k is the time index, x is the state of the system, u is a known input to the system, y is the measured output, and w and z are the process noise and the measurement noise, respectively. Each of these quantities is a vector and may therefore contain more than one element. The vector x contains all of the information about the present state of the system (e.g., in vehicle navigation the state would be a vector of position, velocity, and acceleration), but it is not possible to measure x directly. Instead, y is measured, which is a function of x corrupted by the noise z. It is further assumed that no correlation exists between w and z; that is, at any time k, w_k and z_k are independent random variables.

The process noise covariance matrix S_w and the measurement noise covariance matrix S_z are then defined as:

S_w = E(w_k w_k^T)                                 (3)

S_z = E(z_k z_k^T)                                 (4)
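Equations (3) and (4) define the covariances as expectations of outer products, which in practice are estimated by averaging over noise samples. A small NumPy sketch, using a hypothetical 2-dimensional process noise drawn from a known Gaussian purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D process noise with a known covariance, drawn only to
# illustrate the definition S_w = E(w_k w_k^T).
true_Sw = np.array([[0.04, 0.01],
                    [0.01, 0.09]])
w = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_Sw, size=20000)

# Sample estimate of the expectation: the average outer product w_k w_k^T,
# computed here in one vectorized step as (1/N) * W^T W.
Sw_hat = w.T @ w / len(w)
```

With enough samples, Sw_hat approaches the true covariance matrix; the same averaging applies to the measurement noise z_k and S_z.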

 

where the superscript T indicates matrix transposition. The Kalman filter equations are then given as follows:

K_k = A P_k C^T (C P_k C^T + S_z)^(-1)                       (5)

x_(k+1) = (A x_k + B u_k) + K_k (y_(k+1) - C x_k)            (6)

P_(k+1) = A P_k A^T + S_w - A P_k C^T S_z^(-1) C P_k A^T     (7)
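Equations (5)-(7) translate almost line for line into code. The sketch below is a minimal NumPy rendering of these update equations, applied to a hypothetical scalar system that holds a constant value; the matrices, noise levels, and initial covariance are invented for the example.

```python
import numpy as np

def kalman_step(x, P, y_next, u, A, B, C, Sw, Sz):
    """One iteration of Equations (5)-(7)."""
    # Eq. (5): Kalman gain
    K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + Sz)
    # Eq. (6): propagate the estimate, then correct it with the innovation
    x_next = (A @ x + B @ u) + K @ (y_next - C @ x)
    # Eq. (7): propagate the estimation-error covariance
    P_next = A @ P @ A.T + Sw - A @ P @ C.T @ np.linalg.inv(Sz) @ C @ P @ A.T
    return x_next, P_next

# Hypothetical scalar system: the state holds a constant value of 5,
# measured directly with unit measurement-noise covariance.
A = np.array([[1.0]]); B = np.array([[0.0]]); C = np.array([[1.0]])
Sw = np.array([[0.01]]); Sz = np.array([[1.0]])
x = np.array([0.0]); u = np.array([0.0]); P = np.array([[0.1]])
for _ in range(100):
    x, P = kalman_step(x, P, np.array([5.0]), u, A, B, C, Sw, Sz)
# x[0] converges toward the true value 5 as the measurements accumulate
```

Note that the estimate converges even though it starts at zero, and the covariance P settles to a steady value determined by S_w and S_z.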

 

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance.

 

 


 

Stereo Vision

Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two images. This is similar to the biological process of stereopsis.

In traditional stereo vision, two cameras, displaced horizontally from one another are used to obtain two differing views on a scene, in a manner similar to human binocular vision. By comparing these two images, the relative depth information can be obtained in the form of a disparity map, which encodes the difference in horizontal coordinates of corresponding image points. The values in this disparity map are inversely proportional to the scene depth at the corresponding pixel location.

For a human to compare the two images, they must be superimposed in a stereoscopic device, with the image from the right camera being shown to the observer's right eye and from the left one to the left eye.

In a computer vision system, several pre-processing steps are required.

1. The image must first be undistorted, so that barrel distortion and tangential distortion are removed. This ensures that the observed image matches the projection of an ideal pinhole camera.

2. The image must be projected back to a common plane to allow comparison of the image pairs, a step known as image rectification.

3. An information measure which compares the two images is minimized. This gives the best estimate of the position of features in the two images and creates a disparity map.

4. Optionally, the resulting disparity map is projected into a 3D point cloud. By utilising the cameras' projective parameters, the point cloud can be computed such that it provides measurements at a known scale.
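The inverse relationship between disparity and depth noted above takes the form Z = f·B/d for a rectified horizontal camera pair, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A small NumPy illustration with invented camera parameters and a toy disparity map:

```python
import numpy as np

# Hypothetical rectified rig: focal length in pixels and baseline in metres.
focal_px = 700.0
baseline_m = 0.12

# A toy disparity map (in pixels); 0 marks pixels with no stereo match.
disparity = np.array([[56.0, 28.0],
                      [14.0,  0.0]])

# Depth Z = f * B / d; unmatched (zero-disparity) pixels become infinite depth.
with np.errstate(divide="ignore"):
    depth_m = np.where(disparity > 0,
                       focal_px * baseline_m / disparity,
                       np.inf)
```

Halving the disparity doubles the recovered depth, which is the inverse proportionality described in the text; knowing f and B is what puts the point cloud at a known metric scale.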

 


 

Autonomous Boats


Over the past decade, there has been intense scientific work on autonomous boats. As hardware becomes smaller, cheaper, and better performing, the possibilities for autonomous vessels grow.
Autonomous boats can easily be equipped with several sensors measuring all kinds of data. When they are energy self-sufficient, their operation time is not limited. They are therefore very cost-efficient for surveys, mapping, and ecological studies of oceans and lakes.
Ocean sampling and marine mammal research are two research areas that already use robot boats to facilitate their work.
In the World’s Robot Championship, boats are divided into three classes by length:
Microtransat class for boats up to 4 m long
Bot class for boats up to 2 m long
MicroMagic class (0.53 m long)
Embedded intelligence is the principal component of any autonomous boat. Depending on the available space in the boat, this component can be a microcontroller, a PDA, or an x86 computer. For communication, several different components can be used: wireless LAN controllers for short distances, GSM transmitters for medium distances near the coast and on lakes, and satellite communication for very large distances. Depending on the purpose of the boat, one or more of these communication systems can be used.
The energy supply can be as simple as a battery pack in small boats or as sophisticated as solar panels with backup fuel cells in energy self-sustaining boats.
GPS systems are widely used to determine the position and speed of the boat.


Contact Us

Address: Department of Mechanical Engineering of Biosystems, College of Agriculture and Natural Resources, University of Tehran, Karaj, Iran
Telephone: +98-2632801011
Fax: +98-2632808138
E-mail: hmousazade@ut.ac.ir