Autonomous/Robotic Car

A self-driving car, sometimes called an autonomous car or driverless car, is a vehicle that uses a combination of sensors, cameras, radar and artificial intelligence (AI) to travel between destinations without a human operator. To qualify as fully autonomous, a vehicle must be able to navigate to a predetermined destination without human intervention, over roads that have not been adapted for its use.


Audi, BMW, Ford, Google, General Motors, Tesla, Volkswagen, and Volvo are among the companies developing and testing driverless cars.
Six levels of automation have been defined, ranging from Level 0, where the human performs all driving tasks, up to Level 5 (the division of responsibility is summarized in the sketch after this list):
• Level 1:  Advanced driver assistance systems (ADAS) aid the human driver with either steering, braking or accelerating, though not simultaneously. ADAS includes rearview cameras and features such as a vibrating seat warning that alerts drivers when they drift out of the traveling lane.
• Level 2:  An ADAS can steer and either brake or accelerate simultaneously while the driver remains fully engaged behind the wheel and continues to act as the driver.
• Level 3:  An automated driving system (ADS) can perform all driving tasks under certain circumstances, such as parking the car. In those circumstances, the human driver must be ready to retake control and is still considered the main driver of the vehicle.
• Level 4: An ADS can perform all driving tasks and monitor the driving environment in certain circumstances. In those circumstances, the ADS is reliable enough that the human driver need not pay attention.
• Level 5:  The vehicle's ADS acts as a virtual chauffeur and does all the driving in all circumstances. The human occupants are passengers and are never expected to drive the vehicle.
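As a rough illustration, the division of responsibility at each level can be captured in a small lookup table. This is a simplified Python sketch; the level names and task splits paraphrase the descriptions above and are not an official SAE data structure.

# Simplified sketch of the automation levels described above; the task
# splits paraphrase the list and are not an official SAE schema.
SAE_LEVELS = {
    0: {"name": "No automation",          "system_drives": False, "driver_monitors": True},
    1: {"name": "Driver assistance",      "system_drives": False, "driver_monitors": True},   # steering OR speed
    2: {"name": "Partial automation",     "system_drives": True,  "driver_monitors": True},   # steering AND speed
    3: {"name": "Conditional automation", "system_drives": True,  "driver_monitors": False},  # must retake on request
    4: {"name": "High automation",        "system_drives": True,  "driver_monitors": False},  # limited circumstances
    5: {"name": "Full automation",        "system_drives": True,  "driver_monitors": False},  # all circumstances
}

def driver_attention_required(level: int) -> bool:
    """Return True if the human must actively supervise the drive at this level."""
    return SAE_LEVELS[level]["driver_monitors"]

print(driver_attention_required(2))  # -> True: Level 2 still needs a fully engaged driver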


As of now, car manufacturers have reached Level 3, but a number of technical and regulatory issues must be addressed before autonomous vehicles can be purchased and used on public roads.
Working of self-driving cars
AI technologies power self-driving car systems. Developers of self-driving cars use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously.
The neural networks identify patterns in the data that is fed to the machine learning algorithms. That data includes images from cameras on self-driving cars, from which the neural networks learn to identify traffic lights, trees, curbs, pedestrians, street signs and other parts of a given driving environment.
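As a hedged illustration of the training step this describes, the sketch below wires together a deliberately tiny image classifier in Python with PyTorch. The class list, network shape and dummy data are illustrative assumptions, not any manufacturer's actual model.

import torch
import torch.nn as nn

# Illustrative classes a perception network might distinguish (from the text above).
CLASSES = ["traffic_light", "tree", "curb", "pedestrian", "street_sign", "other"]

# A deliberately tiny convolutional network; production models are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch of 8 RGB camera frames (64x64 pixels).
images = torch.randn(8, 3, 64, 64)             # stand-in for real camera data
labels = torch.randint(0, len(CLASSES), (8,))  # stand-in for human annotations
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()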
Breaking Down the Technicals
Mapping and Localization
Prior to making any navigation decisions, the vehicle must first build a map of its environment and precisely localize itself within that map. The most frequently used sensors for map building are laser rangefinders and cameras. A laser rangefinder scans the environment using swaths of laser beams and calculates the distance to nearby objects by measuring the time it takes for each laser beam to travel to the object and back. Whereas video from the camera is ideal for extracting scene color, an advantage of laser rangefinders is that depth information is readily available to the vehicle for building a three-dimensional map. The vehicle filters and discretizes the data collected from each sensor and often aggregates the information to create a comprehensive map, which can then be used for path planning.
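The time-of-flight calculation and the discretization step can both be sketched in a few lines of Python. This is a simplified 2D version; real vehicles build three-dimensional maps and filter noisy returns.

import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_seconds):
    """Distance to an object: the beam travels out and back, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def discretize_returns(returns, cell_size=0.5):
    """Bin (angle_rad, range_m) laser returns into a 2D grid of occupied cells.

    A real map would also track free space and fuse many scans over time.
    """
    occupied = set()
    for angle, rng in returns:
        x, y = rng * math.cos(angle), rng * math.sin(angle)
        occupied.add((int(x // cell_size), int(y // cell_size)))
    return occupied

# Example: one return roughly 15 m straight ahead, one 7.5 m to the left.
scan = [(0.0, range_from_time_of_flight(1e-7)), (math.pi / 2, 7.5)]
print(discretize_returns(scan))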
Obstacle Avoidance
A vehicle’s internal map includes the current and predicted locations of all static (e.g. buildings, traffic lights, stop signs) and moving (e.g. other vehicles and pedestrians) obstacles in its vicinity. Obstacles are categorized depending on how well they match a library of pre-determined shape and motion descriptors. The vehicle uses a probabilistic model to track the predicted future path of each moving object based on its shape and prior trajectory.
 This process allows the vehicle to make more intelligent decisions when approaching crosswalks or busy intersections. The previous, current and predicted future locations of all obstacles in the vehicle’s vicinity are incorporated into its internal map, which the vehicle then uses to plan its path.
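As a minimal sketch of that kind of prediction, the Python below extrapolates an obstacle's future positions, with a constant-velocity assumption standing in for the full probabilistic model.

def predict_path(trajectory, steps, dt=0.1):
    """Extrapolate an obstacle's future positions from its prior trajectory.

    `trajectory` is a list of observed (x, y) positions sampled every `dt`
    seconds. A constant-velocity model stands in here for the richer
    probabilistic (e.g. Kalman-filter) trackers used in practice.
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # estimated velocity
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A pedestrian observed moving 0.1 m per tick along x:
observed = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
print(predict_path(observed, steps=3))  # -> [(0.3, 0.0), (0.4, 0.0), (0.5, 0.0)]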


Path Planning
The goal of path planning is to use the information captured in the vehicle’s map to safely direct the vehicle to its destination while avoiding obstacles and following the rules of the road.
Manufacturers’ planning algorithms generate a rough long-range plan for the vehicle to follow while continuously refining a short-range plan (e.g. change lanes, drive forward 10 m, turn right). Planning starts from a set of short-range paths that the vehicle would be dynamically capable of completing given its speed, direction, and angular position, and removes all those that would either cross a static obstacle or come too close to the predicted path of a moving one. Once the best path has been identified, a set of throttle, brake and steering commands is passed on to the vehicle’s on-board processors and actuators. Altogether, this process takes 50 ms on average, although it can be longer or shorter depending on the amount of collected data, available processing power, and the complexity of the path planning algorithm.
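That generate-filter-select loop can be sketched as follows in Python. The collision check, safety margin and cost function here are illustrative stand-ins for the manufacturer-specific logic described above.

import math

SAFE_MARGIN = 1.5  # metres of clearance to keep from any obstacle (illustrative)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_short_range(candidates, obstacles, goal):
    """Pick the best dynamically feasible short-range path.

    `candidates` are paths (lists of (x, y) points) the vehicle could
    physically execute given its speed and heading.
    """
    def collides(path):
        # Reject paths that cross, or come too close to, any obstacle's
        # current or predicted position.
        return any(any(dist(p, o) < SAFE_MARGIN for p in path) for o in obstacles)

    def cost(path):
        # Prefer paths whose endpoint makes the most progress toward the goal.
        return dist(path[-1], goal)

    safe = [p for p in candidates if not collides(p)]
    return min(safe, key=cost) if safe else None  # None -> brake and replan

# Two candidate paths; an obstacle sits on the first one, so the second wins.
paths = [[(0, 0), (5, 0), (10, 0)], [(0, 0), (5, 2), (10, 2)]]
print(plan_short_range(paths, obstacles=[(5, 0)], goal=(20, 0)))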
To complete the journey, this process of localization, mapping, obstacle detection, and path planning is repeated until the vehicle reaches its destination.
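Put together, the pipeline is a loop that repeats until arrival. In the runnable Python toy below, the sensing and planning functions are trivial placeholders for the stages described above.

import math

def read_sensors(pose):
    """Placeholder: a real vehicle would return lidar/camera/radar data here."""
    return {"pose_estimate": pose}

def plan_step(pose, goal, step=1.0):
    """Placeholder short-range plan: move up to `step` metres toward the goal."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return goal
    return (pose[0] + step * dx / d, pose[1] + step * dy / d)

def drive_to(goal, pose=(0.0, 0.0)):
    """Repeat sense -> localize -> plan -> act until the vehicle arrives."""
    while pose != goal:
        scan = read_sensors(pose)               # sensing
        pose_estimate = scan["pose_estimate"]   # localization (trivially exact here)
        pose = plan_step(pose_estimate, goal)   # plan and execute one short-range step
    return pose

print(drive_to((3.0, 4.0)))  # reaches (3.0, 4.0) after a few iterations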
Google Waymo
Waymo LLC is a self-driving technology development company.
In April 2017, Waymo started a limited trial of a self-driving taxi service in Phoenix, Arizona.
As of 2018, Waymo had tested its system in six states and 25 cities across the U.S. over a span of more than nine years.
 
How Google Waymo vehicles work:
• The driver (or passenger) sets a destination. The car's software calculates a route.
• A rotating, roof-mounted Lidar sensor monitors a 60-meter range around the car and creates a dynamic 3D map of the car's current environment.
• A sensor on the left rear wheel monitors sideways movement to detect the car's position relative to the 3D map.
• Radar systems in the front and rear bumpers calculate distances to obstacles.
• AI software in the car is connected to all the sensors and collects input from Google Street View and video cameras inside the car.
• The AI simulates human perceptual and decision-making processes and controls actions in driver-control systems such as steering and brakes.
• The car's software consults Google Maps for advance notice of things like landmarks, traffic signs and lights.
• An override function is available to allow a human to take control of the vehicle at any time (a simplified control-loop sketch follows this list).
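Here is a heavily simplified Python sketch of how these inputs might feed a single control decision, with the override check taking priority. Every name and threshold is hypothetical, since Waymo's actual software is not public.

def control_step(lidar_map, wheel_pose, radar_distances, human_override):
    """One illustrative control tick: check the override, then react to radar.

    `lidar_map` and `wheel_pose` mirror the inputs listed above; a real
    system would use them to plan the route, which this toy reduces to
    a fixed action string.
    """
    if human_override:
        return "human_in_control"  # the override function wins unconditionally
    if radar_distances and min(radar_distances) < 5.0:  # metres; illustrative threshold
        return "brake"
    return "steer_along_planned_route"

# Example tick: no override, nearest radar return 12 m ahead.
print(control_step(lidar_map={}, wheel_pose=(0.0, 0.0),
                   radar_distances=[12.0, 30.0], human_override=False))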
In June 2015, Google confirmed that there had been 12 collisions as of that time: in eight, the vehicle was rear-ended by another driver at a stop sign or traffic light; in two, it was side-swiped by another driver; one involved another driver rolling through a stop sign; and one occurred while a Google employee was manually driving the car.
The Intelligent Grid Vehicle Cloud
The idea of an “intelligent grid” of autonomous cars was pioneered by the Google car project.
 
The grid goes beyond the improvements autonomous vehicles bring on their own and introduces the idea of a distributed transport fabric capable of making its own decisions about moving customers and goods to their destinations. The idea is similar to the Internet of Things (IoT), hinging on a vehicle network (a vehicle cloud) with communication, storage, intelligence and learning capabilities (AI and big data).
 
In addition, sharing and “gig” economies applied to vehicle usage can be paired with the advantages of autonomous vehicle technology to create economic value and streamline the transportation system built on existing road infrastructure.
 
The “vehicle cloud” adds another dimension to this, made possible by networked intelligence in which vehicles can communicate and self-organize. One example is increased freeway utilization through “car platooning,” in which vehicles cooperate and form groups while driving, minimizing the distance between vehicles in a platoon while matching speeds automatically, using less road space and traveling more predictably than human drivers ever could.
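As a toy Python sketch of the speed-matching rule a platoon member might apply (the gain and gap values are illustrative assumptions):

def platoon_speed(leader_speed, gap, desired_gap=8.0, gain=0.5):
    """Toy platooning rule: match the leader's speed while nudging the
    following distance toward `desired_gap` metres.

    Real platoons coordinate over vehicle-to-vehicle links with far more
    careful control; the gain and gap values here are illustrative only.
    """
    return leader_speed + gain * (gap - desired_gap)

# Following 10 m behind a leader doing 25 m/s: speed up slightly to close the gap.
print(platoon_speed(leader_speed=25.0, gap=10.0))  # -> 26.0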
 
Inter-vehicle communication will also help minimize the fuel spent searching for parking spaces and give the vehicle cloud the ability to learn from data gathered about driving patterns.


The road ahead…
There are various barriers in the way of fully automated vehicles. GPS can be unreliable, computer vision systems have limitations in understanding road scenes (see http://www.cs.cmu.edu/%7Ezkolter/pubs/levinson-iv2011.pdf), and variable weather conditions can adversely affect the ability of on-board processors to adequately identify or track moving objects. Self-driving vehicles have also yet to demonstrate the same capability as human drivers in understanding and navigating unstructured environments such as construction zones and accident areas.
These barriers, though, are not insurmountable. The amount of road and traffic data available to these vehicles is increasing, newer range sensors are capturing more data, and the algorithms for interpreting road scenes are evolving. The transition from human-operated vehicles to fully self-driving cars will be gradual, with vehicles at first autonomously performing only a subset of driving tasks, such as parking and driving in stop-and-go traffic. As the technology improves, more driving tasks can be reliably handed off to the vehicle.
In fairness, the technology for fully self-driving cars is simply not ready yet.