One of the most important capabilities for building an autonomous driving car is the ability to recognize objects: other cars, people, and signs. This kind of vision will be built on cameras. We need to see the color of a sign to tell whether it is a recommendation, a warning, or a permission. YOLO (You Only Look Once) is one of the best open-source algorithms for detecting objects even while they are in motion.
One defining characteristic of a good image-recognition system is the ability to detect salient regions; that means we need to capture the most important parts of the view for the car. For example, if the car detects a red color, it needs to work out what information it should take from it. The decision process calculates a weighted sum of its inputs, where each weight determines how influential a particular feature is. This process becomes much more capable when we add deep learning to it: the features can become much more complex and abstract by arranging the computations in layers, where each layer feeds on the layer below it and sends its output to the layer above.
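The weighted-sum decision and the layer-on-layer arrangement described above can be sketched in a few lines of NumPy (a minimal illustration, not any specific production network; the layer sizes and random weights are arbitrary assumptions):

```python
import numpy as np

def layer(x, W, b):
    # Each output is a weighted sum of the inputs plus a bias,
    # passed through a nonlinearity (ReLU here).
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                       # e.g. 4 features from the image
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

h = layer(x, W1, b1)                         # the lower layer feeds...
y = layer(h, W2, b2)                         # ...the layer above it
```

Stacking more such layers is exactly what lets the network build increasingly abstract features out of raw pixels.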
For some applications it is necessary to speed up processing with a saliency-detection phase. Saliency means that the more important parts of the image are chosen first, for example a region whose shape differs sharply from its neighboring regions. Deep-learned features also end up detecting salient features from the training samples; they are important for both object recognition and localization.
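As a rough sketch of the saliency idea, a region that differs sharply from its neighborhood can be highlighted with a simple gradient-magnitude map (this is a toy stand-in for real saliency detectors, which are considerably more sophisticated):

```python
import numpy as np

def saliency_map(img):
    # Crude saliency: pixels whose intensity differs sharply from their
    # neighbours (high gradient magnitude) are marked as salient.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0            # a bright square on a dark background
sal = saliency_map(img)          # high only around the square's border
```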
Such an algorithm is mostly built from deep learning, but we can also hand-craft some of the features. The first feature could describe the area around a key point, such as the stand of a traffic sign or its shape. This has many possible solutions and must encode a lot of surrounding information. That is the reason we would add an indexing structure to it, such as a tree-based algorithm or locality-sensitive hashing, for fast retrieval and matching. Finally, we need a process that recognizes an object by its geometric shape by fitting it to stored samples. We can use a transformation matrix to separate inliers from outliers and compute the probability that the object is present.
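The inlier/outlier idea behind this geometric verification can be illustrated with a tiny RANSAC-style loop. For simplicity this sketch estimates only a 2D translation between matched key points, whereas real systems estimate full affine transforms or homographies:

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=0.5, seed=0):
    # Hypothesise a translation from one random match, then count how many
    # other matches agree with it (inliers) versus disagree (outliers).
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                         # candidate transform
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

src = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.]])
dst = src + np.array([3., 1.])                      # true shift of the object
dst[3] = [9., 9.]                                   # one bad match (outlier)
t, n = ransac_translation(src, dst)                 # recovers the shift
```

The ratio of inliers to total matches is what gives a confidence score for the object's presence.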
Nowadays, deep learning methods are the most likely choice for object recognition. The trained feature detectors adapt automatically to the training data. This results in a very well performing recognition system, because the features are adapted to the given problem, unlike hand-crafted features, which may be good for one or a few problems and then break for others. We need to find the right balance between these two approaches and implement them in self-driving cars to gain faster processing and reliable recognition for maximal safety on the roads.
What’s the first step to making a fully autonomous vehicle? No doubt it is lane detection. It has made great steps forward in the past few years, but the problem still remains open. We are facing some hard challenges such as shadows, varying light conditions, and worn-out lane lines.
Lane-line detection can be divided into four key steps. First, candidate lane-line pixels are pooled with a ridge detector. Then an effective noise-filtering mechanism goes through them and removes spurious pixels to a large extent. A modified sequential fitting procedure ensures that each lane line is captured correctly. Finally, if lane lines exist on both sides of the road, a parallelism-reinforcement technique is imposed to improve the model’s accuracy. A model of this kind can detect lane lines with a high success rate. It can also provide precise and consistent vehicle localization with respect to the road lane lines.
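To illustrate the first step, a one-dimensional ridge detector can be sketched as follows: a lane marking shows up as a bright, narrow stripe, so the response compares each pixel with pixels a few columns to its left and right. This is a toy version; actual detectors work on full images at multiple scales:

```python
import numpy as np

def ridge_response(row, width=3):
    # A lane line is a bright, narrow stripe: the centre pixel should be
    # brighter than the pixels `width` columns to its left and right.
    c = row.astype(float)
    r = np.zeros_like(c)
    r[width:-width] = 2 * c[width:-width] - c[:-2 * width] - c[2 * width:]
    return np.maximum(r, 0.0)    # keep only ridge-like (bright) responses

row = np.zeros(20)
row[9:12] = 1.0                  # a 3-pixel-wide bright lane marking
resp = ridge_response(row)       # peaks exactly on the marking
```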
Most lane-line detection systems start by extracting different image features such as edges and color changes. Machine learning methods, such as support vector machines and boosting classifiers, then build up the road lane lines. The model fits nice straight lines or parametric curves. Some algorithms also include an additional check on the fitted lane line, such as a Kalman filter or a particle filter.
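Fitting a parametric curve through detected lane pixels is commonly done with least squares. Here is a minimal sketch using a second-order polynomial; the data and coefficients are synthetic, chosen purely for illustration:

```python
import numpy as np

# Fit a parametric curve (2nd-order polynomial) through detected lane pixels.
ys = np.arange(0, 10, dtype=float)        # row index (distance along road)
xs = 0.1 * ys**2 + 2.0 * ys + 5.0         # synthetic lane-pixel columns
coeffs = np.polyfit(ys, xs, deg=2)        # least-squares curve fit
lane_x = np.polyval(coeffs, ys)           # reconstructed lane-line positions
```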
The noise-filtering mechanism is often neither robust nor effective. It is mainly done through thresholding the gradient magnitude. Nevertheless, some high gradient values come from shadows and the objects surrounding them, while low gradient values are caused by poor lighting conditions or worn-out road lane lines. Neural networks should help with the detection: we can build a training set that has enough samples from a variety of scenarios, including the common situations in which the algorithm has failed, so that the same mistake should not be repeated in the future.
To conclude, we described the key algorithms for lane-line detection. These algorithms should provide reliable vision-based road-line detection that works even in challenging road situations, thanks to their robustness and noise cancellation. To improve these algorithms, we can add a particle filter, which makes the detected line even more consistent and smooth. Similar algorithms can use stereo vision to get depth information. The road-curvature estimate can be fused with the map and the GPS module in our car to further improve the consistency of road lane-line detection.
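To show how a particle filter can smooth a noisy lane estimate, here is a minimal one-dimensional sketch that tracks the lane's lateral offset. All parameters here are illustrative assumptions, not values from any real system:

```python
import numpy as np

def particle_filter(measurements, n=500, motion_std=0.05, meas_std=0.3, seed=0):
    # Track a lane line's lateral offset: particles predict, get weighted by
    # how well they explain the noisy measurement, then are resampled.
    rng = np.random.default_rng(seed)
    particles = rng.normal(measurements[0], meas_std, size=n)
    estimates = []
    for z in measurements:
        particles += rng.normal(0.0, motion_std, size=n)       # predict
        w = np.exp(-0.5 * ((particles - z) / meas_std) ** 2)   # weight
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)                       # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_offset = 1.0
rng = np.random.default_rng(1)
zs = true_offset + rng.normal(0.0, 0.3, size=50)   # noisy raw detections
smooth = particle_filter(zs)                       # much smoother track
```

The filtered track jumps around far less than the raw detections, which is exactly the consistency gain mentioned above.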
Here is a video of how a particle filter changes the raw neural-network output for road lane-line detection.
Vehicle autonomy and driving assistance rely on many technologies. One of them is LiDAR (Light Detection And Ranging). LiDAR is a key technology in an autonomous driving car; nevertheless, it is not the only system in the car that keeps you safe and sound on the road.
LiDAR sensors measure the distance to an object by calculating the time taken by a pulse of light to travel to the object and back to the sensor. A simple equation, d = c * t / 2 (where c is the speed of light and t the round-trip time), gives what needs to be calculated for every single beam from transmitter to receiver.
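Using the d = c * t / 2 relation, the per-beam computation is a one-liner (the example round-trip time is a made-up figure chosen to land near the 100 m range discussed below):

```python
# Distance from LiDAR time of flight: the pulse covers the sensor-to-object
# distance twice (out and back), so d = (c * t) / 2.
C = 299_792_458.0                 # speed of light in m/s

def lidar_distance(time_of_flight_s):
    return C * time_of_flight_s / 2.0

# A pulse returning after roughly 667 ns corresponds to a target ~100 m away.
d = lidar_distance(667e-9)
```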
The 3D LiDAR sits on top of the car because from there it can provide a 360° 3D view of the obstacles the vehicle must avoid or take notice of. 3D LiDARs use a 905 nm wavelength, which gives a sight range of almost 100 m in the field of view. Some companies are building 3D LiDARs with a 1550 nm wavelength to provide longer range and much higher accuracy.
Using LiDARs brings some special requirements, such as removing sensitivity to ambient light and preventing spoofing from other LiDARs. Another really important one is that the sensor emits a beam that can be dangerous for our eyes, so every single transmitter has to be “eye-safe”. Lately, companies have been using 3D LiDARs that physically spin around to get a 360° field of view. These sensors are expensive, so the trend is toward solid-state 3D LiDARs (SSLs) because of their long-term reliability, price, and size. SSLs currently have smaller field-of-view coverage, but they are much cheaper. It is possible to place multiple sensors on the car to cover a larger area and still spend less money than on a spinning 3D LiDAR.
There are some disadvantages and weaknesses to using a 3D LiDAR. They are large and expensive and must be mounted outside on the roof of the vehicle, which gives no information about objects close to the vehicle. The system Google is using weighs 80 kg and costs $70,000.
There is also a happy medium: the multi-beam 2D LiDAR. These give us only 2D scans of the surrounding world (with one or a few beams, i.e. scan planes), often with a smaller field of view (around 180°). Most of these LiDARs contain 4 to 6 beam transmitters to be as precise as possible. They are much cheaper, and more start-up companies are using them as a substitute for a classic 3D LiDAR.
LiDAR works well in all light conditions, but not when it starts to rain or snow, or when the air is filled with dust. Because of the light wavelengths it uses, no beams are reflected back from the actual road or the objects around it; the sensor loses its vision, and self-driving can fail. Snow is a big issue for autonomous cars because the beams are absorbed in it. The company that manages to drive a car in these conditions will be successful on the self-driving car market. LiDAR also cannot detect color or contrast and cannot provide optical character recognition. We will still need that information to drive, for example the color of signs, so we will need to add one more system to the car. On the other hand, LiDAR is one of the most reliable and simple-to-use systems to keep us on track and to recognize people and objects so we can avoid them while driving.
In conclusion, here is a table with different types of LiDARs, their prices, and links.
Is it possible to build a car on camera vision only? Such a system basically follows the human model: all you need for driving are your eyes. Will that be enough to build a fully autonomous driving car, or are Elon Musk and AImotive wrong?
Cameras are inexpensive. They are not complicated hardware, and you can put a lot of them on a car. There are many different types of cameras. Visible-light cameras use reflected light; they can see to an arbitrary distance in the daytime. During the night, they must rely on emitted light (like headlights) to see objects at a distance. Cameras can see colors, which gives them a big advantage over LiDARs: we can use them to recognize signs next to the road and adapt the car’s driving accordingly. Some of them have high resolution, which gives us more surface detail to detect than a LiDAR does; this helps with recognizing distant objects.
Computer vision plays a huge part in autonomous cars with cameras. We can divide it into two categories: machine vision and computer vision. Machine vision basically analyzes a single digital image. This includes finding edges and detecting motion and motion parallax, and it would be used for detecting and reading signs on the road. Computer vision refers to a harder set of problems, such as object recognition. It is necessary to recognize a human body under all light conditions, and it needs to be done fast; algorithms are not yet reliable enough for this. A particular problem for computer vision is the variability of lighting and shadow conditions: objects can be lit from any direction, which can make them much harder to recognize.
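The edge-finding part of machine vision can be illustrated with the classic Sobel operator. This naive double-loop version is written for clarity only; real pipelines use optimized library routines:

```python
import numpy as np

def sobel_edges(img):
    # Machine-vision edge finding: convolve with Sobel kernels to estimate
    # horizontal and vertical intensity gradients, then take the magnitude.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((10, 10))
img[:, 5:] = 1.0                  # vertical edge between columns 4 and 5
edges = sobel_edges(img)          # responds only along that edge
```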
There is a special camera called a thermal camera. It uses light emitted by objects rather than light reflected from them. This does not solve the problem with shadows, but in thermal images shadows do not move; they always stay in the same place. Thermal cameras work well for spotting a human body thanks to our body temperature. Unfortunately, their price is much higher than that of normal cameras, and there are no reports of anyone using this type of camera on a self-driving car.
This kind of vision will need the now-popular computing technique called neural networks. These networks are meant to mimic many capabilities of a biological brain. The technology needs extreme accuracy for self-driving cars, and it is still some distance away. A network has to be trained to fix its mistakes, but you never really know whether a mistake is truly fixed or not, much like with a human brain. Some people say that because we do not know exactly how this computer works, it is not safe to let it drive our car.
No matter which technology we use, LiDARs or cameras, both need to be extremely accurate and safe. Additionally, we will still need more than just this. For safety, it will be necessary to add more sensors to the car, such as a blind-spot sensor based on the Doppler effect, parking sensors, or cameras that complement the self-driving features of the car.