Building upon a lifelong fascination with making and innovating, my research centers on advancing manufacturing to be Smart, Scalable, and Sustainable through the integration of cutting-edge digital technologies.
My core interests encompass three themes: explainable and physics-informed AI for manufacturing, visual AI for robotic autonomous systems, and digital twins for immersive cyber-physical systems.
Figure 1. An explainable and physics-informed AI framework for fault diagnosis in the robotic spot-welding (RSW) process.
Figure 2. An interpretable AI framework for metal powder flow state monitoring in cold spray additive manufacturing.
The growing reliance on Artificial Intelligence (AI) in manufacturing underscores the need for reliable and adaptable solutions, as the complexity of AI models often limits their direct application in dynamic environments. Explainable AI (XAI) addresses these challenges by enhancing the interpretability of AI systems, enabling engineers to understand decision-making processes for tasks like fault diagnosis and predictive maintenance. Physics-Informed AI complements this by integrating human knowledge and domain-specific principles, ensuring robust and accurate performance across diverse manufacturing scenarios.
One innovative approach in my research is Fuzzy-based Energy Pattern Imaging (FEPI), a novel sensor data processing technique that intuitively visualizes critical energy dynamics in a target system in real time (Figure 1). This method enabled robust and explainable fault diagnosis in processes such as Robotic Spot-Welding (RSW) and Direct Energy Deposition (DED) metal 3D printing by incorporating Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretation and Fault Tree Analysis (FTA) for failure causality modeling.
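The Grad-CAM interpretation step mentioned above can be sketched in a few lines. The sketch below is illustrative rather than the exact FEPI pipeline: it assumes the activations and gradients of a chosen convolutional layer have already been extracted from the diagnosis network.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from one convolutional layer.

    feature_maps: (C, H, W) activations of the chosen layer
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations
    """
    # Global-average-pool the gradients to get one importance weight per channel
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep only positive evidence
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting heatmap highlights which regions of the energy-pattern image drove the fault classification, which is what makes the diagnosis explainable to a process engineer.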
Similarly, I introduced a highly reliable AI framework that interprets deep learning models through frequency spectrum analytics, significantly enhancing predictive robustness in diagnosing metal powder flow states in Cold Spray Additive Manufacturing (Figure 2). These efforts highlight my commitment to advancing manufacturing AI solutions that are both interpretable and practically applicable, bridging the gap between theoretical models and industrial applications.
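As a simplified illustration of frequency-spectrum analytics (a minimal sketch, not the published framework), a raw sensor trace can be reduced to its dominant spectral component before interpretation:

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, fs: float) -> float:
    """Return the strongest frequency component (Hz) in a 1-D sensor trace."""
    # Remove the DC offset so the zero-frequency bin does not dominate
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float(freqs[np.argmax(spectrum)])
```

For example, a 50 Hz vibration sampled at 1 kHz for one second is recovered as a peak at the 50 Hz bin; in practice, shifts in such peaks are the kind of spectral signature that distinguishes powder flow states.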
Video 1. A vision-guided autonomous robotic system for fabric handling in garment manufacturing.
Figure 3. Demonstration of an optimization algorithm for 6-DOF Object Pose Estimation (OPE) in a manufacturing environment.
In advanced manufacturing, robotic autonomous systems are critical to addressing the increasing demand for precision and efficiency across diverse materials and production environments. A key challenge lies in enabling robots to handle complex and variable tasks with the adaptability of human operators.
A notable advancement in my research is the development of an adaptive robotic fabric gripper system capable of dynamically adjusting its grasping method based on visual recognition, addressing the challenges of handling flexible, permeable fabrics in garment manufacturing (Video 1). This system, enhanced with AI-modeled grasping characteristics tailored to material properties, achieved exceptional performance, including a transfer time of ≤ 3 seconds and 97.16% accuracy in handling flexible fabrics without deformation or damage, far surpassing traditional manual methods.
Further innovations include optimization algorithms for 6-DOF Object Pose Estimation (OPE) that integrate RGB-D vision data with 3D CAD-based geometry information. Leveraging open-source visual AI models such as Mask R-CNN and Segment Anything (SAM), these systems deliver robust performance in noisy manufacturing environments, accurately estimating the positions and orientations of machined or 3D-printed parts with up to 85.7% accuracy (Figure 3). These advancements highlight the transformative role of Visual AI in enabling autonomous decision-making and advancing robotic manufacturing toward fully automated and versatile solutions for diverse and complex production challenges.
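The depth-to-geometry step at the core of such OPE pipelines can be illustrated with standard pinhole back-projection. The sketch below is a simplified stand-in: the intrinsics are hypothetical placeholders, and the object mask would in practice come from a segmentation model such as Mask R-CNN or SAM.

```python
import numpy as np

def backproject_mask(depth: np.ndarray, mask: np.ndarray,
                     fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Lift masked depth pixels into an (N, 3) camera-frame point cloud.

    depth: (H, W) depth image in meters; mask: (H, W) boolean object mask.
    """
    v, u = np.nonzero(mask)      # pixel rows (v) and columns (u) inside the mask
    z = depth[v, u]
    x = (u - cx) * z / fx        # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

The resulting object point cloud can then be registered against the 3D CAD model (e.g., via an ICP-style optimization) to recover the full 6-DOF pose.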
Video 2. Digital twin-augmented real-time monitoring interface for robotic spot-welding (RSW) system.
Video 3. Digital twin synchronization for direct energy deposition (DED) metal 3D printing process.
Integrating Digital Twin (DT) technology with smart manufacturing enables an immersive and interactive cyber-physical system (I2CPS), allowing operators to engage directly with digital technologies. I have focused on developing real-time synchronized DT implementations that intuitively visualize information inaccessible in physical space. A dedicated data pipeline was established to seamlessly integrate machine and sensor data into the DT system, ensuring live synchronization. Built in Unity 3D, the full-scale twin models synchronize component-level operations with their physical counterparts. The DT framework employs scalable middleware to create a platform that integrates multiple data-driven applications, allowing AI modules to be flexibly added or removed.
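The live-synchronization idea can be sketched as a message-to-state mapping. The JSON schema and field names below are hypothetical, standing in for the actual machine/sensor data pipeline that feeds the Unity twin:

```python
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class RobotTwinState:
    """Joint angles (degrees) mirrored by the digital twin model."""
    joints: List[float] = field(default_factory=lambda: [0.0] * 6)

def apply_telemetry(state: RobotTwinState, message: str) -> RobotTwinState:
    """Fold one JSON telemetry sample into the twin state.

    Hypothetical schema: {"joints": {"0": 12.5, "3": -40.0}}
    """
    sample = json.loads(message)
    # Update only the joints the sample reports; keep the rest unchanged
    for idx, angle in sample.get("joints", {}).items():
        state.joints[int(idx)] = float(angle)
    return state
```

In a real deployment, each applied sample would also drive the corresponding transform in the Unity scene, keeping the twin's component-level motion locked to the physical machine.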
One key innovation is a DT-augmented real-time monitoring interface in which the views of a physical camera and the Unity camera are aligned. Applied to the Robotic Spot-Welding (RSW) system, the interface enables users to selectively monitor real-time information (Video 2). Additionally, a DT of the Direct Energy Deposition (DED) system not only synchronizes machine operations but also visualizes the additively manufactured geometry in real time by integrating an AI model that predicts the geometry of the deposited material (Video 3). These efforts highlight the effectiveness of DT systems tailored to manufacturing machines and processes, emphasizing real-time synchronization, interactivity, and scalability.