An ASL learning app that uses computer vision to help users learn and practice American Sign Language
Design
Designed and developed an iOS app that uses computer vision to help users learn and practice American Sign Language (ASL). The app runs a pre-trained YOLOv8 model to recognize and classify hand gestures in real time, giving users instant feedback on their signing accuracy. It features a user-friendly interface with interactive lessons, quizzes, and progress tracking to enhance the learning experience. OpenCV is integrated with Swift for seamless image processing and on-device model inference.
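For illustration, here is a minimal sketch of what per-frame sign recognition could look like in Swift, assuming the YOLOv8 model has been exported to Core ML and run through Apple's Vision framework. The ASLSignDetector class name is a hypothetical placeholder, not a detail from the project:

```swift
import CoreML
import Vision

// Minimal sketch of per-frame sign recognition. Assumes the YOLOv8 model
// has been exported to Core ML; "ASLSignDetector" is a hypothetical name
// for the Xcode-generated model class.
final class SignRecognizer {
    private let request: VNCoreMLRequest

    init() throws {
        let coreMLModel = try ASLSignDetector(configuration: MLModelConfiguration()).model
        request = VNCoreMLRequest(model: try VNCoreMLModel(for: coreMLModel))
        // Stretch each frame to the model's input size (a simple default for a sketch).
        request.imageCropAndScaleOption = .scaleFill
    }

    /// Runs the detector on one camera frame and returns the most confident label.
    func topSign(in frame: CVPixelBuffer) throws -> String? {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame, orientation: .right)
        try handler.perform([request])
        guard let detections = request.results as? [VNRecognizedObjectObservation],
              let best = detections.max(by: { $0.confidence < $1.confidence }) else {
            return nil
        }
        return best.labels.first?.identifier
    }
}
```

Feeding each camera frame (e.g., from an AVCaptureSession) through a class like this on a background queue is one way to produce the real-time feedback loop described above, with all inference staying on-device.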
Tech Stack: Swift, OpenCV, YOLOv8