Shaoshan Liu

2019-2021 Distinguished Speaker

Dr. Shaoshan Liu is the Founder and Chairman of PerceptIn (www.perceptin.io), a company focused on providing visual perception solutions for robotics and autonomous driving. Since its inception, PerceptIn has attracted over 11 million USD of funding from top-notch venture capital firms such as Walden International, Matrix Partners, and Samsung Ventures.

Prior to founding PerceptIn, Dr. Shaoshan Liu was a founding member of Baidu U.S.A. as well as the Baidu Autonomous Driving Unit, where he led system integration for the company’s autonomous driving systems. Dr. Shaoshan Liu received his Ph.D. in Computer Engineering from the University of California, Irvine, and executive education from Harvard Business School.

His research focuses on Computer Architecture, Deep Learning Infrastructure, Robotics, and Autonomous Driving (www.linkedin.com/in/shaoshanliu). Dr. Shaoshan Liu has published over 40 research papers and holds over 150 U.S. and international patents on robotics and autonomous driving. He is also the lead author of the best-selling textbook “Creating Autonomous Vehicle Systems,” the first technical overview of autonomous vehicles written for a general computing and engineering audience.

In addition, as a senior member of IEEE, Dr. Shaoshan Liu co-founded the IEEE Special Technical Community on Autonomous Driving Technologies and served as its Founding Vice President. Dr. Shaoshan Liu’s research work has made a major impact on the robotics and autonomous driving industry. His patented DragonFly visual perception technology, the “people’s autonomous vehicle,” is the world’s first safe, affordable, and reliable autonomous vehicle. The DragonFly enables reliable low-speed autonomous driving and costs under 10,000 USD when mass-produced, a breakthrough step towards the ubiquitous deployment of autonomous driving. Dr. Shaoshan Liu’s work has received international recognition both within and outside the technology community. Select media coverage includes Forbes, the L.A. Times, IEEE Spectrum, TechCrunch, ReadWrite, China Daily, Science and Technology Daily (in Chinese), Nikkei Robotics (in Japanese), and Wedge (in Japanese).

PerceptIn

Phone: 626 278 8145

Email: shaoshan.liu@perceptin.io

DVP term expires December 2021


Presentations

Edge Computing for Autonomous Driving: Opportunities and Challenges

Safety is the most important requirement for autonomous vehicles; hence, the ultimate challenge in designing an edge computing ecosystem for autonomous vehicles is to deliver enough computing power, redundancy, and security to guarantee their safety. Autonomous driving systems are extremely complex: they tightly integrate many technologies, including sensing, localization, perception, and decision making, as well as smooth interaction with cloud platforms for high-definition (HD) map generation and data storage. These complexities impose numerous challenges on the design of autonomous vehicle edge computing systems. First, edge computing systems for autonomous driving need to process an enormous amount of data in real time, and the incoming data from different sensors is often highly heterogeneous. Because autonomous vehicle edge computing systems are mobile, they also face very strict energy-consumption restrictions; it is therefore imperative to deliver sufficient computing power at reasonable energy consumption to guarantee safety, even at high speed. Second, beyond the edge system design itself, vehicle-to-everything (V2X) provides redundancy for autonomous driving workloads and alleviates stringent performance and energy constraints on the edge side; with V2X, more research is required to define how vehicles cooperate with each other and with the infrastructure. Last but not least, safety cannot be guaranteed when security is compromised, so protecting autonomous driving edge computing systems against attacks at different layers of the sensing and computing stack is of paramount concern. In this talk, we review state-of-the-art approaches in these areas and explore potential solutions to address these challenges.

DragonFly+: An FPGA-based quad-camera visual SLAM system for autonomous vehicles

In recent years, autonomous driving has become a popular topic in the research community, in industry, and even in the press. Nonetheless, large-scale adoption of autonomous vehicles has been held back by affordability: the major contributors to the high cost of autonomous vehicles include LiDAR sensors, which cost over $80,000 per unit, and computing systems, which cost over $20,000 each.

Shaoshan Liu explains how PerceptIn built a reliable autonomous vehicle, the DragonFly car, for under $10,000. The car was built for low-speed scenarios, such as university campuses, industrial parks, and areas with limited traffic. PerceptIn’s approach starts with low-speed scenarios to ensure safety and allow immediate deployment; as the technology improves and experience accumulates, higher-speed scenarios will follow, with the ultimate goal of matching a human driver’s performance in any driving scenario.

Instead of LiDAR, the DragonFly system uses computer-vision-based sensor fusion to achieve reliable localization. Specifically, DragonFly integrates four 720p cameras into one hardware module, with one pair of cameras facing the front of the vehicle and another pair facing the rear. Each pair functions like a set of human eyes, capturing the spatial structure of the environment from left and right two-dimensional images, and together the two pairs create a 360-degree panoramic view of the environment. With this design, visual odometry should never fail: at any moment in time, the system can extract 360-degree spatial information from the environment, and there are always enough overlapping spatial regions between consecutive frames.
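
As a rough illustration of the stereo principle each camera pair relies on (a minimal sketch, not PerceptIn’s production pipeline), the code below recovers a depth map from a rectified left/right image pair using OpenCV’s block matcher; the focal length and baseline are placeholder values rather than DragonFly’s actual calibration.

    # Minimal stereo-depth sketch; assumes a calibrated, rectified stereo pair.
    # Focal length and baseline are placeholder values, not DragonFly's calibration.
    import cv2
    import numpy as np

    FOCAL_LENGTH_PX = 700.0   # assumed focal length in pixels
    BASELINE_M = 0.12         # assumed camera-to-camera baseline in meters

    def depth_from_stereo(left_path: str, right_path: str) -> np.ndarray:
        left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
        right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

        # Block matching: for each pixel, find the horizontal shift (disparity)
        # that best aligns the left and right image patches.
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

        # Triangulation: depth = focal_length * baseline / disparity.
        depth = np.full_like(disparity, np.inf)
        valid = disparity > 0
        depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
        return depth

In the quad-camera module, the front pair and the rear pair each perform this kind of depth recovery, and combining their fields of view yields the 360-degree coverage described above.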

To achieve affordability and reliability, PerceptIn set four basic requirements for the DragonFly system design:

  • Modular: an independent hardware module for computer-vision-based localization and map generation.
  • SLAM-ready: hardware synchronization of the four cameras and the IMU.
  • Low power: a total power budget of less than 10 W.
  • High performance: processing of four-way 720p YUV images at more than 30 fps.

Note that, with this design, at 30 fps the system generates more than 100 MB of raw image data per second, which imposes tremendous stress on the computing system. After initial profiling, PerceptIn found that the image processing frontend (e.g., image feature extraction) accounts for more than 80% of the processing time.
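
As a back-of-the-envelope check of the quoted data rate (the abstract does not state which YUV subsampling DragonFly uses, so both common formats are shown here as assumptions):

    # Rough data-rate estimate for four 720p YUV streams at 30 fps.
    # The exact figure depends on the YUV subsampling, which is assumed here.
    WIDTH, HEIGHT = 1280, 720
    CAMERAS, FPS = 4, 30

    for fmt, bytes_per_pixel in (("YUV420 (1.5 B/px)", 1.5), ("YUV422 (2 B/px)", 2.0)):
        rate_mb_s = WIDTH * HEIGHT * bytes_per_pixel * CAMERAS * FPS / 1e6
        print(f"{fmt}: {rate_mb_s:.0f} MB/s")
    # YUV420 (1.5 B/px): 166 MB/s
    # YUV422 (2 B/px): 221 MB/s

Either way, the raw stream comfortably exceeds 100 MB per second, consistent with the figure quoted above.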

To achieve these design goals, PerceptIn designed and implemented DragonFly+, an FPGA-based real-time localization module. The DragonFly+ system includes hardware synchronization among the four image channels and the IMU; a direct I/O architecture that reduces off-chip memory communication; and a fully pipelined architecture that accelerates the image processing frontend of the localization system. In addition, it employs parallel and multiplexing processing techniques to strike a good balance between bandwidth and hardware resource consumption.

PerceptIn thoroughly evaluated the performance and power consumption of the proposed hardware against an NVIDIA Jetson TX1 GPU SoC and an Intel Core i7 processor. The results demonstrate that, for processing four-way 720p images, DragonFly+ achieves 42 fps while consuming only 2.3 W of power, exceeding the design goals. By comparison, the NVIDIA Jetson TX1 achieves 9 fps at 7 W, and the Intel Core i7 achieves 15 fps at 80 W. DragonFly+ is therefore 3x more power efficient while delivering 5x the computing power of the NVIDIA TX1, and 34x more power efficient while delivering 3x the computing power of the Intel Core i7.
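
For clarity, the “power efficient” figures above compare raw power draw and the “computing power” figures compare achieved frame rate; the short script below simply reproduces that arithmetic from the numbers quoted in this paragraph.

    # Reproduce the comparison ratios from the figures quoted above.
    platforms = {
        "DragonFly+":        {"fps": 42, "watts": 2.3},
        "NVIDIA Jetson TX1": {"fps": 9,  "watts": 7.0},
        "Intel Core i7":     {"fps": 15, "watts": 80.0},
    }

    base = platforms["DragonFly+"]
    for name in ("NVIDIA Jetson TX1", "Intel Core i7"):
        other = platforms[name]
        print(f"vs {name}: {other['watts'] / base['watts']:.1f}x less power, "
              f"{base['fps'] / other['fps']:.1f}x the frame rate")
    # vs NVIDIA Jetson TX1: 3.0x less power, 4.7x the frame rate
    # vs Intel Core i7: 34.8x less power, 2.8x the frame rate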

Enabling Computer-Vision-Based Autonomous Driving with Affordable and Reliable Sensors

Autonomous driving technology consists of three major subsystems: algorithms, including sensing, perception, and decision making; the client system, including the robotics operating system and hardware platform; and the cloud platform, including data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from raw sensor data to understand the environment and decide on the vehicle’s actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous cars and can be used to test new algorithms, update the HD map, and train better recognition, tracking, and decision models.

Autonomous cars, like humans, need good eyes and a good brain to drive safely. Traditionally, LiDAR has been the main sensor in autonomous driving and the critical piece in both localization and obstacle recognition. However, LiDAR has several major drawbacks, including extremely high cost (over US$80,000), limited information (even 64-line LiDAR captures only a relatively sparse representation of the space), and inconsistent behavior in changing weather conditions. As a result, PerceptIn investigated whether cars could drive themselves with computer vision.

The argument against this concept is that cameras do not provide accurate localization or a good obstacle-detection mechanism, especially when the object is far away (more than 30 meters). But do we actually need centimeter-accurate localization all the time? RTK and PPP GPS already provide centimeter-accurate positioning, and if humans can drive cars with meter-accurate GPS, driverless cars should be able to do the same. If this is achievable, high-definition maps may not be needed for localization; Google Maps and Google Street View may suffice, which would be a leap forward in autonomous driving development. And a combination of stereo vision, sonar, and millimeter-wave radar could be used to achieve high-fidelity obstacle avoidance.

Shaoshan Liu explains how PerceptIn designed and implemented its high-definition, stereo 360-degree camera sensors targeted at computer-vision-based autonomous driving. The sensor has an effective range of over 30 meters with no blind spots and can be used for obstacle detection as well as localization. Shaoshan discusses the sensor along with the obstacle detection and localization algorithms that come with this hardware.

π-BA: Bundle Adjustment Acceleration on Embedded FPGAs with Co-Observation Optimization

Bundle adjustment (BA) is a fundamental optimization technique used in many crucial applications, including 3D scene reconstruction, robotic localization, camera calibration, autonomous driving, space exploration, and street view map generation. Essentially, BA is a joint non-linear optimization problem, one which can consume a significant amount of time and power, especially for large problems. Previous approaches to optimizing BA performance rely heavily on parallel processing or distributed computing, which trade higher power consumption for higher performance. In this talk we introduce π-BA, the first hardware-software co-designed BA engine on an embedded FPGA-SoC that exploits custom hardware for higher performance and power efficiency. Specifically, based on our key observation that not all points appear in all images of a BA problem, we designed and implemented a Co-Observation Optimization technique to accelerate BA operations with optimized usage of memory and computation resources. Experimental results confirm that π-BA outperforms existing software implementations in terms of performance and power consumption.
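
For context, the textbook bundle adjustment objective (shown here as a general formulation, not necessarily π-BA’s exact notation) jointly optimizes camera parameters C_i and 3D points X_j to minimize the total reprojection error over the set of observed camera-point pairs:

    % Standard BA objective: pi(C_i, X_j) projects point X_j into image i,
    % x_ij is the measured 2D feature location, and O is the set of co-observations.
    \min_{\{C_i\},\,\{X_j\}} \sum_{(i,j) \in \mathcal{O}} \left\| \pi(C_i, X_j) - x_{ij} \right\|^2

Because each point is co-observed by only a handful of cameras, the set O is sparse, and that sparsity is precisely the structure a co-observation-aware accelerator can exploit.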

Π-RT: A Runtime Framework to Enable Energy-Efficient Real-Time Robotic Applications on Heterogeneous Architectures

Enabling full robotic workloads with diverse behaviors on mobile systems with stringent resource and energy constraints remains a challenge. In recent years, attempts have been made to deploy single-accelerator-based computing platforms (such as GPU, DSP, or FPGA) to address this challenge, but with little success. The core problems are two-fold: first, different robotic tasks require different accelerators; second, managing multiple accelerators simultaneously is overwhelming for developers. In this talk, we present Π-RT, the first robotic runtime framework to efficiently manage dynamic task execution on mobile systems with multiple accelerators, as well as on the cloud, to achieve better performance and energy savings. With Π-RT, we enable a robot to simultaneously perform autonomous navigation with localization at 25 fps and obstacle detection at 3 fps, along with route planning, large map generation, and scene understanding, while traveling at a maximum speed of 5 miles per hour, all within an 11 W computing power envelope.
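
As a conceptual sketch of the scheduling idea only (not Π-RT’s actual API; the backend names and simulated workloads below are placeholders), a runtime of this kind can route each task to a worker pool for its preferred accelerator and fall back to the CPU when that accelerator is unavailable:

    # Conceptual sketch of affinity-based task dispatch across accelerators.
    # Backend names and workloads are placeholders, not the framework's real interface.
    import time
    from concurrent.futures import ThreadPoolExecutor

    # One worker pool per accelerator; a real runtime would wrap vendor APIs here.
    BACKENDS = {
        "gpu": ThreadPoolExecutor(max_workers=1),
        "dsp": ThreadPoolExecutor(max_workers=1),
        "cpu": ThreadPoolExecutor(max_workers=2),
    }

    def submit(backend, fn, *args):
        """Route a task to the worker pool of its preferred accelerator."""
        pool = BACKENDS.get(backend, BACKENDS["cpu"])  # fall back to the CPU pool
        return pool.submit(fn, *args)

    def fake_workload(name, seconds):
        time.sleep(seconds)  # stand-in for the real kernel
        return f"{name} done"

    if __name__ == "__main__":
        futures = [
            submit("dsp", fake_workload, "localization", 0.04),
            submit("gpu", fake_workload, "obstacle_detection", 0.3),
            submit("cpu", fake_workload, "route_planning", 0.1),
        ]
        for f in futures:
            print(f.result())

A real runtime would additionally manage data movement, task priorities, and the energy budget across accelerators, which is where a framework like Π-RT comes in.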
