Robot Roadway Vision: How Autonomous Cars See The World

Self-Driving Cars Are Speeding Into The Present

Driverless vehicles used to feel “futuristic,” something you only saw in sci-fi movies. Times have changed, however, and as autonomous car technology improves, so do the ways humans interact with their environment.

Car companies are also beginning to see what the future holds, and how autonomous vehicles could become a primary mode of transportation.

They are now collaborating with tech giants such as Uber and Google, as well as prominent start-ups, to develop the next generation of autonomous vehicles.

Image Source: https://www.nytimes.com/interactive/2016/12/14/technology/how-self-driving-cars-work.html?_r=0

These companies are competing to build the first fully autonomous vehicle, one that could not only transform our roads but also help make smart cities safer and more advanced.

Instead of merely dreaming about the future of autonomous car technology, companies like Uber and Google are applying advanced technology to modern cars so they can drive themselves.

Even as more news stories feature advancements and mishaps involving self-driving cars, many people still wonder how humans train robots to spot and interpret dangers in their environment.

The Issue with On-Board Devices

For autonomous cars to work smoothly, manufacturers must develop and evaluate approaches centered on mapping, on-board sensors, and other technologies that let autonomous cars navigate public roads safely.

To understand their environment, autonomous cars are equipped with highly specialized sonar, radar, and camera systems.

The issue is that while these technologies can record and interpret the general attributes of an environment, they cannot assess that environment at a granular level.

This means that even with an array of on-board devices, self-driving cars cannot fully understand their surroundings and can miss details that lead to injuries and deaths.

While the technologies are not perfect, technologists and engineers are working together to find ways to make on-board systems support each other.

Here are the primary on-board sensors self-driving cars use to understand their surroundings.

Lidar

Lidar demands more data processing and computing power than traditional sensors like cameras. However, it also produces more accurate distance measurements, especially in dense traffic or low-light conditions.

According to experts, although lidar may not be the most energy-efficient solution to autonomous driving problems, it remains the leading technology because of its high degree of accuracy.
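To make the idea concrete, here is a minimal sketch of how raw 2D lidar returns (angle and distance pairs) might be converted into Cartesian points and screened for nearby obstacles. The scan values and the five-metre safety radius are illustrative only, not drawn from any production system:

```python
import math

def polar_to_cartesian(scan):
    """Convert (angle_rad, distance_m) lidar returns to (x, y) points."""
    return [(d * math.cos(a), d * math.sin(a)) for a, d in scan]

def nearby_obstacles(points, radius=5.0):
    """Return points closer than `radius` metres to the vehicle origin."""
    return [p for p in points if math.hypot(p[0], p[1]) < radius]

# Example: three returns from a single simplified 2D sweep
scan = [(0.0, 2.0), (math.pi / 2, 10.0), (math.pi, 4.0)]
points = polar_to_cartesian(scan)
close = nearby_obstacles(points)  # the 2 m and 4 m returns qualify
```

A real perception stack works with dense 3D point clouds and clustering, but the same polar-to-Cartesian step sits at the bottom of that pipeline.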

Spatial Mapping Technologies

When auto companies put a self-driving car on the road, they must build real-time three-dimensional maps to situate that vehicle in.

Mapping technology is among the most critical systems for navigating the road, because it provides the spatial context the car uses to interpret input from its other sensors.

Engineers have the choice between two primary mapping systems for their on-board navigation processors. Here are the two primary options, and how they impact autonomous car technologies.

1. High-Definition (HD) 3D Mapping

HD maps are made using lidar and cameras mounted on vehicles, which must travel along specific roads to build 3D maps with 360-degree coverage, including depth information about the environment.

2. Static 2D Feature Mapping

Unlike granular HD maps, feature mapping doesn’t need lidar; instead, it uses advanced cameras to map specific road features that aid navigation. Lane markings are easily captured, for instance, along with bridges, road signs, and other traffic signs along the way.

The drawback is that it offers far less granularity, but this mapping solution updates and processes faster than 3D mapping with lidar.
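The granularity trade-off between the two mapping styles can be illustrated with a toy occupancy grid: finer cells capture more detail (HD-style), coarser cells yield a smaller, faster-to-update map (feature-style). The points and cell sizes below are invented purely for illustration:

```python
def build_occupancy_grid(points, cell_size):
    """Quantize (x, y) obstacle points into grid cells. Smaller cells
    capture more detail; larger cells give a compact, faster map."""
    return {(int(x // cell_size), int(y // cell_size)) for x, y in points}

points = [(0.2, 0.3), (0.4, 0.1), (5.1, 5.2)]   # hypothetical detections
fine   = build_occupancy_grid(points, cell_size=0.5)   # HD-style: more cells
coarse = build_occupancy_grid(points, cell_size=10.0)  # feature-style: fewer
```

Here the fine grid distinguishes two occupied regions while the coarse grid merges everything into one cell, mirroring the detail-versus-speed trade-off described above.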

GPS Systems For Precise Positioning

Another method of precision mapping uses GPS to obtain an approximate location, while the driverless vehicle’s sensors evaluate spatial changes in the surrounding environment.

GPS location information is combined with images taken by on-board cameras, and each frame helps reduce the errors in the GPS signal.

Engineers believe this approach can be the best solution for built-in mapping since it layers in spatial data to help align cars in all conditions, while also taking in data about the immediate surroundings.

Both approaches also depend on inertial navigation methods and odometry data.
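A drastically simplified, one-dimensional sketch of this kind of fusion is a single Kalman-style update that blends a dead-reckoned (odometry) position with a noisy GPS fix, weighting each by its variance. The variance values below are assumptions chosen for illustration:

```python
def fuse(gps_pos, gps_var, pred_pos, pred_var):
    """One Kalman-style update: blend a dead-reckoned position estimate
    with a noisy GPS fix, trusting whichever has the lower variance."""
    k = pred_var / (pred_var + gps_var)      # gain: how much to trust GPS
    pos = pred_pos + k * (gps_pos - pred_pos)
    var = (1 - k) * pred_var                 # fused estimate is less uncertain
    return pos, var

# Odometry predicts 100.0 m (little drift so far); GPS reads 103.0 m (noisy)
pos, var = fuse(gps_pos=103.0, gps_var=9.0, pred_pos=100.0, pred_var=1.0)
```

Because the odometry estimate has the smaller variance here, the fused position lands much closer to it than to the GPS reading, which is exactly the error-correction effect described above.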

Preliminary research suggests the HD-map approach offers more precise localization, while the GPS-based approach is more convenient because 3D maps are not mandatory.

Sharing Data To Help Improve Autonomous Car Performance

Since autonomous cars are a connected group of computers talking to each other, engineers and automakers can harness the collective data from different vehicles to help strengthen the abilities of the system as a whole.

Leveraging a network approach to machine learning, automakers can help each self-driving car “learn” from the experiences of other vehicles.

As more autonomous cars are used on public roads, an increasing amount of data will be collected and interpreted. With wireless technology, autonomous car companies can bring together data sources from various environments to benefit the entire autonomous car experience.

Here are a few ways that automakers are leveraging machine learning to improve the capabilities and safety of autonomous cars.

Shared Data Networks

Using advanced computing techniques, car companies can simulate situations across their network of self-driving cars with neural networks. This system of data transfer and simulation is complicated, but it allows the collective fleet to evaluate specific scenarios and make informed decisions. Because these networks can be quite complex, however, it can be challenging to interpret the reasoning behind a specific action.
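One common way such fleet-wide learning is sketched is federated-style averaging, where parameters learned by individual vehicles are combined into a shared model that every car then benefits from. The tiny two-parameter “models” here are purely illustrative:

```python
def federated_average(weight_sets):
    """Average model parameters collected from several vehicles, so each
    car's model reflects the whole fleet's driving experience."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

fleet = [[0.2, 0.8], [0.4, 0.6], [0.6, 0.4]]  # per-vehicle parameters
shared = federated_average(fleet)              # roughly [0.4, 0.6]
```

Real systems average gradients or weights from models with millions of parameters and add privacy and bandwidth safeguards, but the averaging step is conceptually the same.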

Rule-based Choices

To ensure everyone’s safety, engineers develop sets of explicit “if-then” rules, and the vehicles act on them directly. However, this approach can be quite challenging because of the time and effort needed to write and maintain the rules.
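A hand-written rule set of this kind might look like the following sketch, where rules are evaluated in priority order and the first match wins. The sensor fields, action names, and thresholds are all hypothetical:

```python
def decide(obstacle_ahead, distance_m, light_state):
    """Evaluate hand-written safety rules in priority order;
    the first matching rule determines the action."""
    if obstacle_ahead and distance_m < 10.0:
        return "emergency_brake"       # highest priority: imminent collision
    if light_state == "red":
        return "stop"
    if light_state == "yellow":
        return "slow_down"
    return "proceed"                   # default when no rule fires

decide(obstacle_ahead=True, distance_m=5.0, light_state="green")
```

The difficulty the article mentions is clear even at this scale: every new scenario (cyclists, construction zones, emergency vehicles) demands more rules, and the interactions between rules must be re-verified each time.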

Hybrid Approach

Combining neural networks with rule-based encoding might be the best way to go. It lets developers address the opacity of neural networks by introducing rule-based redundancy for specific processes, with “if-then” logic serving as a fallback.

When accompanied by statistical-inference models, the hybrid approach is also considered the most favored, because it gives engineers the ability to control and adjust how data is interpreted and reacted to.
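A hybrid controller could be sketched as a learned policy whose output is vetoed by deterministic rules when the model’s confidence is low or its proposed action is classified as unsafe. The action names and confidence threshold below are hypothetical:

```python
UNSAFE_ACTIONS = {"accelerate_through_red"}  # illustrative deny-list

def hybrid_decide(nn_action, nn_confidence, rule_action, threshold=0.9):
    """Use the learned policy when it is confident, but let hand-written
    rules veto low-confidence or unsafe actions (redundancy)."""
    if nn_action in UNSAFE_ACTIONS or nn_confidence < threshold:
        return rule_action   # defer to the deterministic rule system
    return nn_action         # confident and safe: trust the learned policy
```

This is the redundancy the article describes: the flexible learned component handles the common cases, while the auditable “if-then” layer bounds its worst-case behavior.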

Addressing the Most Common Issues Faced by Driverless Vehicles

There’s no denying that driverless vehicles will usher in a new era of transportation, but a great deal still needs to improve.

One of the main concerns of safety advocates is that semi-autonomous vehicles might actually trigger accidents, because drivers may assume it’s fine to pursue activities such as reading or texting.

As drivers re-engage, it’s imperative that they learn to reevaluate their surroundings to ensure overall safety. Lawmakers and engineers will need to work together to ensure autonomous cars are produced with the safety of the general public in mind, and on-board technology will need to be fine-tuned to provide the level of detail needed for ongoing success.
