Li Deyi, academician of the Chinese Academy of Engineering: How does autonomous driving get from L2 to L3?

At the China Artificial Intelligence Conference held on July 21, Li Deyi, an academician of the Chinese Academy of Engineering and chairman of the Chinese Association for Artificial Intelligence, raised a pointed question: "If a car cannot even park itself automatically, how can it achieve Level 3 autonomous driving?" The remark sparked a wider discussion on the challenges and future of self-driving technology.

Automated parking dates back to 2005, when Citroën introduced its "City Park" system. Initially it only took over the steering; over time it evolved into fully automated parking that lets even novice drivers park with ease. Today such systems are commonly referred to as automatic parking assist.

The Society of Automotive Engineers (SAE) classifies driving automation into six levels: L0 (no automation), L1 (driver assistance), L2 (partial automation), L3 (conditional automation), L4 (high automation), and L5 (full automation). Assisted driving has advanced rapidly in recent years, from L1 to L2 and now toward L3, but the transition from L2 to L3 remains a significant hurdle. Li Deyi pointed out that most self-driving cars on the road today, including Tesla models, are still at L2 and require constant human supervision. According to the California Department of Motor Vehicles' 2016 disengagement report, these vehicles required human intervention 0.2 times per thousand miles on average, indicating that full autonomy is still far off.

Moving from L2 to L3 means transferring responsibility from the driver to the vehicle, which raises several critical questions: How is the point of transition defined? How is the handover of control measured? And who is liable for accidents that occur during the handover? Meanwhile, Audi is set to launch its L3-capable A8 by the end of the year, a major milestone for the industry.
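The SAE taxonomy above can be sketched in a few lines of Python. This is only an illustration of the classification, not any standard's reference implementation; the `driver_must_supervise` helper is a simplification reflecting the article's point that everything up to L2 still needs a human watching the road.

```python
from enum import IntEnum

# Sketch of the SAE J3016 driving-automation levels mentioned above.
# Level names follow the standard; the supervision rule below is a
# simplification for illustration.
class SAELevel(IntEnum):
    L0 = 0  # no automation
    L1 = 1  # driver assistance
    L2 = 2  # partial automation: driver must supervise at all times
    L3 = 3  # conditional automation: driver must take over on request
    L4 = 4  # high automation: no takeover needed within the design domain
    L5 = 5  # full automation

def driver_must_supervise(level: SAELevel) -> bool:
    """Up to L2, the human remains responsible for monitoring the road."""
    return level <= SAELevel.L2

print(driver_must_supervise(SAELevel.L2))  # True
print(driver_must_supervise(SAELevel.L3))  # False
```

The L2/L3 boundary encoded in `driver_must_supervise` is exactly the handover problem the article discusses: crossing it shifts monitoring responsibility from the human to the machine.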
A key issue remains, however: will traffic authorities license such vehicles, and will ordinary drivers feel confident using them? Li Deyi stressed that L3 faces not only technological hurdles but also regulatory and societal ones. A real breakthrough beyond L2 requires deep integration of artificial intelligence: once the conditions for autonomous operation, such as geofence boundaries, weather limits, or driver-interaction constraints, are exceeded, the system must hand control back immediately, and such transitions can be riskier than manual driving itself.

He further questioned whether L3 development should focus on improving the car or on replacing the driver's cognition. The "car" here means a software-defined machine capable of performing driving tasks; the "human" means replicating the driver's memory, decision-making, and behavioral skills with AI. During driving, a human driver's ability to predict and react in real time is still irreplaceable.

This reasoning led to the concept of the "driving brain", a system that goes beyond sensors such as radar and lidar. It must handle memory, computation, and interaction, making it a crucial component of the smart-car industry. Li Deyi emphasized the role of microelectronics, advocating architectures that combine CPU, GPU, FPGA, and ASIC to build dedicated chips and boards for the driving brain, systems that must cope with the uncertainties of real-world driving environments. Different driving brains may vary in cognition, skill, and experience, but they all share the fundamental ability to drive; in effect, each is "licensed" to operate autonomously.

In closing, Li Deyi revealed that his team is collaborating with companies such as Yutong, Chery, and SAIC to develop and industrialize the driving brain on the basis of real-world autonomous driving tests.
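The handover rule described above, where the system must return control the moment any operating condition is no longer met, can be sketched as a simple predicate. All names and conditions here are illustrative assumptions for this article's scenario (geofence, weather, driver readiness), not any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical operating conditions for an L3 system, per the
# article's examples: geofenced area, weather limits, and a driver
# available to take over. Field names are illustrative assumptions.
@dataclass
class OperatingConditions:
    inside_geofence: bool
    weather_ok: bool
    driver_responsive: bool

def should_request_takeover(c: OperatingConditions) -> bool:
    """Return True once any condition for autonomous operation fails,
    i.e. the vehicle must immediately ask the human to take over."""
    return not (c.inside_geofence and c.weather_ok and c.driver_responsive)

# Example: leaving the geofenced area triggers a takeover request.
print(should_request_takeover(OperatingConditions(False, True, True)))  # True
print(should_request_takeover(OperatingConditions(True, True, True)))   # False
```

The point the sketch makes concrete is that L3 safety hinges on this boundary check: the hard engineering problems, measuring the handover and handling failures during it, all live around this one predicate.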
As the industry moves forward, the path from L2 to L3 remains complex, but with continued innovation the future of autonomous driving looks promising. Stay tuned for more coverage of automotive electronics and intelligent driving.

Industrial Touch Screen

Industrial touch screens are specialized display devices that combine advanced touch-sensitive technology with rugged construction to meet the demanding requirements of industrial environments. These touch screens are designed to provide intuitive human-machine interfaces (HMIs), enabling operators to interact with industrial control systems, machinery, and processes more efficiently.

Key Features of Industrial Touch Screens:

Durability and Ruggedness. Industrial touch screens are built to withstand harsh environmental conditions such as extreme temperatures, dust, moisture, and vibration. They are often housed in rugged enclosures made of materials like metal or industrial-grade plastics to protect against physical damage.

High Sensitivity and Responsiveness. These touch screens offer high sensitivity and responsiveness, ensuring accurate and quick touch detection. They can handle multiple touch points simultaneously, allowing for gestures such as pinch-to-zoom and swipe, which are common in modern touch interfaces.

Wide Viewing Angles. Industrial touch screens are designed to provide wide viewing angles, ensuring that operators can view the display clearly from different positions. This is particularly important in industrial settings where multiple operators may need to view the same screen.

Customization Options. Many industrial touch screens can be customized to meet the specific needs of different applications. This includes options for different screen sizes, resolutions, touch technologies, and mounting configurations.


