Driverless cars are an increasingly popular topic, and the traffic sign recognition (TSR) system, which automatically identifies traffic signs for a driverless car, also generates major discussion. Such a system may save drivers' lives when they make mistakes in identifying traffic signs. The traffic sign recognition system can therefore be used not only in driverless cars but also as an aid that helps drivers identify traffic signs on the road. To achieve this goal, a deep learning technique, the convolutional neural network (CNN), is used to identify traffic signs, and the model must offer both high accuracy and high efficiency. The lightweight CNN model MobileNetV2 is therefore adopted as the core of this project, since it requires less computation and reduces latency. In addition, classifier layers are attached to better fit the dataset. The aim of this article is to explore the process of building a lightweight CNN model for traffic sign identification, trained on the Chinese traffic sign dataset, and to design a graphical user interface (GUI) so that the model can be more easily understood and operated. The resulting model achieves a training accuracy of 88.10% and a validation accuracy of 93.75%, with a training loss of 40.78% and a validation loss of 17.05%.
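MobileNetV2's computational advantage comes from replacing standard convolutions with depthwise-separable ones. The following sketch, using hypothetical layer shapes not taken from this project, counts multiply-accumulate operations (MACs) for both variants to illustrate the saving:

```python
# Rough MAC counts showing why depthwise-separable convolutions
# (the building block of MobileNetV2) are cheaper than standard ones.
# The layer shapes below are illustrative examples, not from the paper.

def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k conv followed by a 1x1 pointwise conv."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example: 56x56 feature map, 64 -> 128 channels, 3x3 kernel.
std = standard_conv_macs(56, 56, 64, 128, 3)
sep = depthwise_separable_macs(56, 56, 64, 128, 3)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, "
      f"ratio: {std / sep:.1f}x")
```

For these example shapes the separable form needs roughly an order of magnitude fewer operations, which is what enables low-latency inference on modest hardware.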
The project demonstrates that a lightweight MobileNetV2-based CNN can effectively classify traffic signs with high validation accuracy (93.75%) and moderate training accuracy (88.10%). The designed GUI enhances accessibility and usability for non-technical users, supporting real-world applications in autonomous and assisted driving systems. The approach provides a practical and computationally efficient solution to real-time TSR challenges, making it suitable for embedded systems and driver assistance applications.
Certain traffic sign classes (e.g., rare signs like “Speed Limit 5”) are underrepresented, affecting overall model generalizability. Although performance is high, it does not outperform more advanced models like YOLOv5 in detection tasks. MobileNetV2 sacrifices some accuracy for speed and lightness, which may limit its performance on more complex or cluttered scenes.
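One common mitigation for underrepresented classes such as "Speed Limit 5" is inverse-frequency class weighting during training. The sketch below uses hypothetical label counts (the real counts would come from the Chinese traffic sign dataset) to show how such weights are derived:

```python
from collections import Counter

# Hypothetical label list; real counts would come from the Chinese
# traffic sign dataset used in this project.
labels = (["stop"] * 500
          + ["speed_limit_40"] * 300
          + ["speed_limit_5"] * 20)   # rare class

counts = Counter(labels)
n_samples = len(labels)
n_classes = len(counts)

# Inverse-frequency weighting: rare classes receive larger weights,
# so their training losses count for more.
class_weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
print(class_weights)
```

The resulting dictionary can be passed to most training loops (e.g. the `class_weight` argument of Keras' `Model.fit`) so that mistakes on rare signs are penalized more heavily.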
Explore hybrid models or attention mechanisms to further improve accuracy without compromising speed. Test the system in real driving conditions with video streams to evaluate real-time inference speed and reliability.
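As a concrete illustration of the attention mechanisms mentioned above, a squeeze-and-excitation block reweights feature-map channels at negligible extra cost, which fits the project's lightweight constraint. The NumPy sketch below is a minimal, framework-free illustration with randomly initialized weights, not code from this project:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel attention (squeeze-and-excitation style) on an (H, W, C) map.

    w1: (C, C // r) reduction weights; w2: (C // r, C) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pool over spatial dimensions -> (C,)
    z = feature_map.mean(axis=(0, 1))
    # Excite: bottleneck MLP with ReLU, then a sigmoid gate per channel
    s = np.maximum(z @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))
    # Rescale each channel by its learned importance in (0, 1)
    return feature_map * gate

rng = np.random.default_rng(0)
fmap = rng.standard_normal((7, 7, 16))     # toy 7x7 map with 16 channels
w1 = rng.standard_normal((16, 4)) * 0.1    # reduction ratio r = 4
w2 = rng.standard_normal((4, 16)) * 0.1
out = squeeze_excite(fmap, w1, w2)
print(out.shape)  # same shape as the input: (7, 7, 16)
```

Because the gate only scales existing channels, the block preserves tensor shapes and can be inserted after any MobileNetV2 bottleneck without changing the rest of the architecture.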