
Is it “autopilot” when nothing goes wrong, but merely “assisted driving” once something does?

Correspondingly, on August 20, 2021, China’s Ministry of Industry and Information Technology issued the “Automotive Driving Automation Classification” standard, which takes effect on March 1, 2022. The detailed classification is shown in Figure 1.


Figure 1 National Standard “Automotive Driving Automation Classification”

According to these two standards, the “sit in the car and don’t worry about anything” scenario imagined by the public can only be realized at L5. At present, however, even Tesla’s Autopilot, the industry pioneer, is only L2, and domestic offerings such as NIO’s NOP (Navigate on Pilot), Li Auto’s AD (advanced driver assistance), and Xiaopeng’s NGP (Navigation Guided Pilot) are likewise L2. At this level, whenever driver-assistance functions are in use, the driver must remain in the driving seat and stay aware of the situation at all times. This is, in fact, the technical upper limit of what mass-produced cars can currently carry.
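For readers who prefer code to tables, here is a minimal Python sketch of the six-level taxonomy. The one-line descriptions paraphrase common summaries of the national standard (the authoritative wording is in Figure 1), and the helper function simply encodes the point above: at L2 and below, the human driver must keep supervising.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Simplified encoding of the six driving-automation levels (L0-L5)."""
    L0 = 0  # emergency assistance: driver performs all driving tasks
    L1 = 1  # partial driver assistance (e.g. adaptive cruise OR lane keeping)
    L2 = 2  # combined driver assistance: system steers and accelerates,
            # but the driver must supervise at all times
    L3 = 3  # conditionally automated: system drives within its design domain,
            # driver must take over on request
    L4 = 4  # highly automated: no takeover needed within the design domain
    L5 = 5  # fully automated: no driver needed in any scenario

def driver_must_supervise(level: AutomationLevel) -> bool:
    """At L2 and below the human driver remains responsible for monitoring."""
    return level <= AutomationLevel.L2

# Tesla Autopilot, NIO NOP, Li Auto AD and Xiaopeng NGP are all L2 today:
assert driver_must_supervise(AutomationLevel.L2)
```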

So, building on current technology, how can intelligent driving be iteratively upgraded to achieve true “vehicle-road coordination” and thereby better ensure safety?

Scenario-based Characteristics of Digital Transportation

Traffic problems are complex system problems; this is already an industry consensus. Whether it is the rapidly developing intelligent transportation business, the currently booming Internet of Vehicles and vehicle-road collaboration, or digital transportation in general, none of them can be covered or solved by a single product or technology. Does that mean there is no way into complex traffic problems? Based on our more than ten years of project-implementation experience in intelligent transportation, although no single product or technology can win with one stroke, we can extract an indispensable method of analysis: the scenario.

First, video is an important technology for building traffic-scene perception, but no camera product can adapt to every environment, and no general-purpose intelligence can recognize every target behavior. For segmented scenarios, however, it is entirely feasible to refine the business objectives and environmental models; with appropriate products and intelligent modeling, the expected objectives can be achieved, provided that non-target intelligence or business is deliberately given up.

Second, the complexity of traffic lies in the complexity of the environment and in the uncertainty of traffic participants and their expected behavior. By subdividing and classifying traffic scenes at multiple levels, we can abstract models in which the environment is relatively deterministic and the traffic participants and expected behaviors can be solidified.

We have preliminarily subdivided the traffic scenarios to be digitalized into thirteen classes and 45 models (a non-exhaustive set) under four categories: urban roads, urban parking, bridges and tunnels, and highways. For each scene model, we take full-time, all-domain, all-element perception as the construction goal and necessity, effectiveness, and intensiveness as the construction principles, analyze the traffic problems under the corresponding scene model, and build a traffic evaluation index system (see Figure 2).
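One way to make such a taxonomy concrete is as a nested mapping from categories to scene classes to scene models. The sketch below is purely illustrative: the four category names come from the paragraph above, while the class and model names are hypothetical placeholders, since the full 13-class / 45-model set is not listed in this article.

```python
# Hypothetical skeleton of the scenario taxonomy: four top-level categories,
# each subdivided into scene classes and concrete scene models. The class and
# model names are illustrative placeholders, not the original project's set.
SCENARIO_TAXONOMY: dict[str, dict[str, list[str]]] = {
    "urban_roads": {
        "signalized_intersection": ["four_way", "t_junction"],
        "arterial_segment": ["with_bus_lane", "mixed_traffic"],
    },
    "urban_parking": {
        "on_street": ["parallel_bay"],
        "off_street": ["surface_lot", "garage"],
    },
    "bridges_and_tunnels": {
        "tunnel": ["single_bore", "twin_bore"],
    },
    "highways": {
        "mainline": ["basic_segment"],
        "interchange": ["merge_area", "diverge_area"],
    },
}

def count_models(taxonomy: dict[str, dict[str, list[str]]]) -> int:
    """Total number of concrete scene models across all categories."""
    return sum(len(models) for classes in taxonomy.values()
               for models in classes.values())
```

The point of the structure is the one made above: each leaf model fixes a relatively deterministic environment, so perception goals and evaluation indices can be defined per model rather than for “traffic” as a whole.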


Figure 2 Digital transportation system architecture

Roadside situation awareness and multi-dimensional perception build the base of digital traffic

A vehicle-road collaboration system comprises intelligent vehicles, high-precision maps, and roadside perception, supplemented by V2X communication, edge computing, and cloud decision-making, which work together to bring the business to the ground. I abstract this as a five-element formula: “vehicle, road, condition, communication, policy”. A minimal data-model sketch follows the five definitions below.

Vehicle: The intelligent vehicle itself carries a power-control system and an environmental perception system. The position, speed, and dynamic parameters of the vehicle during operation are called the “vehicle state”, while on-board sensing and AI organize and reconstruct the environment the vehicle is in, for example the positions of vehicles ahead and behind, obstacles, traffic signs, and the status of signal lights; this is called “the scene the vehicle is in”, or “vehicle scene” for short.

Road: In simple terms, the high-precision map. It corresponds to the road in our daily speech and represents the traffic infrastructure as solidified over a period of time; the digitalized road is the basis for presentation and decision-making in the vehicle-road coordination system.

Condition: The traffic-participant information and overall traffic-situation description obtained by the roadside perception system for a single scene at a single moment; it is the digital mapping of the instantaneous situation in the actual traffic process. A comprehensive description of different spatial scenes at the same moment characterizes the utilization of global traffic resources, while the situation sequence of the same scene over a continuous time span dynamically presents the traffic efficiency within that period.

Communication: The communication technology that completes the necessary data transmission across V2V, V2I, and V2X.

Policy: The cloud system collects all kinds of information across all times and all domains, organizes it comprehensively, and uses it for traffic-situation assessment; the edge system makes semi-real-time decisions and pushes traffic instructions.
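To make the five elements concrete, here is a hypothetical data-model sketch in Python. Every type and field name is illustrative, chosen only to mirror the definitions above; none of it is drawn from a real V2X message standard.

```python
from dataclasses import dataclass, field

@dataclass
class VehicleState:          # "vehicle": ego state plus on-board perception
    position: tuple[float, float]
    speed_mps: float
    perceived_objects: list[str] = field(default_factory=list)

@dataclass
class RoadModel:             # "road": the high-precision map layer
    map_id: str
    version: str             # the map is solidified for a period of time

@dataclass
class TrafficCondition:      # "condition": roadside snapshot of one scene
    timestamp: float
    scene_id: str
    participants: list[dict] = field(default_factory=list)

@dataclass
class V2XMessage:            # "communication": data exchanged over V2V/V2I/V2X
    sender: str
    payload: dict

@dataclass
class PolicyDecision:        # "policy": instruction pushed by edge or cloud
    scene_id: str
    instruction: str         # e.g. "reduce speed", "change lane"
```

Separating “condition” (a roadside snapshot of a scene) from “vehicle” (the ego view) mirrors the text: the roadside system observes the scene from outside, while the vehicle state travels with the car.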

Situation awareness is the first input of the vehicle-road coordination system and its only real-time dynamic data. The timeliness, completeness, and accuracy of this data determine the accuracy of the system’s final decision output; we therefore call the situation awareness system the base of digital traffic.

Multi-dimensional fusion, in turn, improves the effectiveness of system operation.

To achieve the maximum utility of the digital transportation system, the roadside situation awareness system must be built on the principle of “all elements, all spatial domains, and all time periods”.

The roadside situation awareness system at any given scene point may include video, microwave, millimeter-wave, laser, RFID, meteorological, and other sensors. Among them, video cameras, millimeter-wave radars, and lidars should be the main perception devices for constructing the traffic situation. Video can identify the characteristics of traffic participants, especially color information, but its range is limited and it is easily disturbed by lighting conditions; millimeter-wave radar can detect the main traffic participants and measure speed, distance, and traffic behavior over a wider area; lidar can achieve more accurate (centimeter-level) target perception, but at a higher cost.

Neither millimeter-wave radar nor lidar can identify vehicle identity features, and the various sensing technologies complement or overlap one another in target detection, motion tracking, identification, and calibration. For roadside situation awareness systems at adjacent scene points, a certain overlap in perception coverage is required during construction to avoid losing track of traffic objects.
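As an illustration of why these sensors complement each other, the sketch below fuses hypothetical camera detections (identity features, no speed) with radar returns (speed and range, no identity) by greedy nearest-neighbour association. A production system would work in a calibrated common coordinate frame and use a proper multi-target tracker; treat this as a toy model only, with both detection lists assumed already projected into road coordinates.

```python
import math

def associate(camera_dets: list[dict], radar_dets: list[dict],
              max_dist_m: float = 3.0) -> list[dict]:
    """Greedily pair each camera detection with the nearest radar return."""
    fused = []
    unused = list(radar_dets)
    for cam in camera_dets:
        best, best_d = None, max_dist_m
        for rad in unused:
            d = math.dist(cam["xy"], rad["xy"])
            if d < best_d:
                best, best_d = rad, d
        if best is not None:
            unused.remove(best)   # each radar return is used at most once
            fused.append({**cam, "speed_mps": best["speed_mps"]})
    return fused

cams = [{"xy": (10.0, 2.0), "class": "car", "colour": "red"}]
rads = [{"xy": (10.5, 2.2), "speed_mps": 13.9}]
print(associate(cams, rads))
# [{'xy': (10.0, 2.0), 'class': 'car', 'colour': 'red', 'speed_mps': 13.9}]
```

The fused record carries both what the camera knows (class, colour) and what the radar knows (speed), which is exactly the complementarity the paragraph above describes.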

Only by achieving more accurate traffic-situation perception over the widest possible range can we construct more accurate digital-twin data for the vehicle-road coordination system; and only by fully understanding the different purposes behind traffic participants’ behavior in different scenarios can we provide more reasonable decision support for autonomous vehicles.
