Bing Li, Ph.D.

Associate Professor

Clemson University, USA

Dr. Bing Li is an Associate Professor at Clemson University, where he joined the Department of Automotive Engineering in August 2018. At the Clemson University International Center for Automotive Research (CU-ICAR), he founded and directs the AutoAI Lab research group.

The central focus of Li’s research is Spatial and Embodied AI for dynamic and interactive environments, spanning sensing and perception, 3D vision/mapping/SLAM, visual recognition, robot/agentic learning, and human-centered AI. His team also develops intelligent visual (language) navigation technologies to assist individuals who need mobility and wayfinding support. His research has broad applications across automotive, robotics, transportation, agriculture, and manufacturing.

Li received his Ph.D. in Electrical Engineering from The City College of New York (CCNY), The City University of New York (CUNY), in 2018. Drawing on his professional experience with IBM and HERE North America, he has also been translating his fundamental research into applied innovations through industry partnerships.

The lab currently has an opening for a self-motivated Ph.D. student with a full scholarship! Please send your application package to Dr. Li if you are interested.

Selected Publications

View the full list at All Publications, or on Google Scholar.

SAM-Guided Masked Token Prediction for 3D Scene Understanding

Conference on Neural Information Processing Systems (NeurIPS), 2024, DOI: 10.5555/3737916.3740554, Code

An Update on International Robotic Wheelchair Development

International Conference on Applied Human Factors and Ergonomics (AHFE), 2024, DOI: 10.54941/ahfe1005007
Best Paper Award

Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with Foundation Models

Conference on Neural Information Processing Systems (NeurIPS), 2023, DOI: 10.5555/3666122.3669589, Code

Rethinking 3D Geometric Feature Learning for Neural Reconstruction

International Conference on Computer Vision (ICCV), 2023, DOI: 10.1109/ICCV51070.2023.01627, Code

Disentangling Object Motion and Occlusion for Unsupervised Multi-frame Monocular Depth

European Conference on Computer Vision (ECCV), 2022, DOI: 10.1007/978-3-031-19824-3_14, Code

Advancing Self-Supervised Monocular Depth Learning with Sparse LiDAR

Conference on Robot Learning (CoRL), 2021, PDF, Code

Teaching

AuE 3990/4990 CI: Embodied AI for Smart Manufacturing

This Creative Inquiry research project investigates how Embodied AI technologies can innovate and enhance manufacturing through intelligent collaboration with human workers. Embodied AI agents, also known as interactive agents, are smart systems that perceive, learn, and act within their environment through a physical form, such as robots or augmented reality devices. Students gain hands-on experience in researching and developing innovative technologies that are shaping the future of smart manufacturing.

AuE 4930/6930 Agentic AI for Robot & Auto

This course introduces the fundamental concepts of Agentic AI computing and its integration into robotic and automotive systems. Key topics include agentic frameworks, memory, context engineering, tool utilization, and interaction with environments and humans in simulated and real-world settings. The course emphasizes practical methodologies for integrating, developing, and deploying AI agents for robot and automotive applications (a minimal agent-loop sketch follows the topic list below).

  • Robot Learning
  • Agentic AI Frameworks
  • Agent Evaluation
  • Vibe Coding and Tool Use
  • Interaction with Environments and Humans
  • Simulation and Applications with Robot/Automotive Systems
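
To give a rough, framework-agnostic flavor of the agentic loop the course builds on, here is a minimal Python sketch; the Tool and Agent classes, the keyword-based decide() policy, and the example tools are illustrative assumptions, not the frameworks actually used in class.

    # Minimal, framework-agnostic sketch of an agentic loop with tool use.
    # The Tool/Agent classes, the keyword-based decide() policy, and the example
    # tools are illustrative assumptions, not the frameworks used in the course.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Tool:
        name: str
        description: str
        run: Callable[[str], str]          # the callable that executes the tool

    @dataclass
    class Agent:
        tools: Dict[str, Tool]
        memory: List[str] = field(default_factory=list)   # simple episodic memory

        def decide(self, observation: str) -> str:
            # Placeholder policy: a real agent would query an LLM with the
            # observation, its memory, and the tool descriptions.
            return "lookup" if "?" in observation else "log"

        def step(self, observation: str) -> str:
            self.memory.append(observation)               # perceive and remember
            result = self.tools[self.decide(observation)].run(observation)
            self.memory.append(result)                    # remember the outcome
            return result

    if __name__ == "__main__":
        tools = {
            "lookup": Tool("lookup", "answer a question", lambda q: "answer to: " + q),
            "log": Tool("log", "record an event", lambda e: "logged: " + e),
        }
        agent = Agent(tools=tools)
        print(agent.step("What is the battery level?"))
        print(agent.step("Arrived at waypoint 3"))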

AuE 8200 Machine Perception and Intelligence

This course introduces the fundamental technologies for autonomous vehicle sensing, perception, and machine learning, spanning electromagnetic spectrum characteristics and signal acquisition, extrospective vehicle sensor data analysis, perspective geometry models, image and point cloud processing, and machine/deep learning approaches. Students also gain hands-on programming experience with vehicle perception problems through homework and class projects (a short illustration follows the topic list below).

  • Electromagnetic spectrum characteristics and 1D Radar signal processing;
  • Visual perception using 2D image processing and machine learning recognition;
  • 3D LiDAR and point cloud data representation and processing;
  • Visual/LiDAR/IMU for vehicle simultaneous localization and mapping (SLAM);
  • Deep learning for vehicle perceptual sensor data processing.
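
As a taste of the hands-on component, here is a minimal Python/NumPy sketch of voxel-grid downsampling for a LiDAR-style point cloud, one common preprocessing step in vehicle perception; the synthetic cloud and the chosen voxel size are illustrative assumptions rather than actual course material.

    # Minimal sketch of one hands-on topic: voxel-grid downsampling of a
    # LiDAR-style point cloud with NumPy. The synthetic cloud and the 0.5 m
    # voxel size are illustrative assumptions, not actual assignment code.
    import numpy as np

    def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
        """Keep one representative point (the centroid) per occupied voxel."""
        voxel_idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index per point
        _, inverse, counts = np.unique(voxel_idx, axis=0,
                                       return_inverse=True, return_counts=True)
        inverse = inverse.reshape(-1)
        sums = np.zeros((counts.size, 3))
        np.add.at(sums, inverse, points)                  # sum coordinates per voxel
        return sums / counts[:, None]                     # centroid of each voxel

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cloud = rng.uniform(-20.0, 20.0, size=(100_000, 3))   # synthetic points (meters)
        sparse = voxel_downsample(cloud, voxel_size=0.5)
        print(cloud.shape, "->", sparse.shape)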

Contact