Vision-Based Motor Assessment in Autism: Deep Learning Methods for Detection, Classification, and Tracking


Student Name: Shailesh Pandey
Defense Date:
Location: Zoom defense; please email jgrisafe@ku.edu for defense information
Chair: Sumaiya Shomaji

Shima Fardad
Zijun Yao
Cuncong Zhong
Lisa Dieker

Abstract:

Motor difficulties show up in as many as 90% of people with autism, but surprisingly few (somewhere between 13% and 32%) ever get motor-focused help. A big part of the problem is that the tools we have for measuring motor skills either rely on a clinician's subjective judgment or require expensive lab equipment that most families will never have access to. This dissertation tries to close that gap with three projects, all built around the idea that a regular webcam and some well-designed deep learning models can do much of what costly motion-capture labs do today.

The first project asks a straightforward question: can a computer tell the difference between how someone with autism moves and how a typically developing person moves, just by watching a short video? The answer, it turns out, is yes. We built an ensemble of three neural networks, each one tuned to notice something different. One focuses on how joints coordinate with each other spatially, another zeroes in on the timing of movements, and the third learns which body-part relationships matter most for a given clip. We tested the system on 582 videos from 118 people (69 with ASD and 49 without) performing simple everyday actions like stirring or hammering. The ensemble correctly classifies 95.65% of cases. The timing-focused model on its own hits 92%, which is nearly 10 points better than a standard recurrent network baseline. And when all three models agree, accuracy climbs above 98%.
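
To make the ensemble idea concrete, here is a minimal PyTorch sketch, assuming pose clips of shape (batch, channels, frames, joints), of how three branches with different inductive biases, spatial, temporal, and attention-based, could be combined by averaging their class probabilities. The layer sizes, module names, and input dimensions are illustrative assumptions, not the dissertation's exact architectures.

    # Minimal sketch of a three-branch ensemble over pose clips (PyTorch).
    # All sizes below (25 joints, 64 frames, layer widths) are assumptions
    # for illustration, not the dissertation's exact architectures.
    import torch
    import torch.nn as nn

    NUM_JOINTS, NUM_FRAMES, NUM_CLASSES = 25, 64, 2  # assumed input dimensions

    class SpatialBranch(nn.Module):
        """Mixes all joints within each frame: spatial coordination cues."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=(1, NUM_JOINTS)),  # per-frame joint mixing
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, NUM_CLASSES),
            )

        def forward(self, x):  # x: (batch, 3, frames, joints)
            return self.net(x)

    class TemporalBranch(nn.Module):
        """Recurrent model over time: movement timing cues."""
        def __init__(self):
            super().__init__()
            self.gru = nn.GRU(3 * NUM_JOINTS, 64, batch_first=True)
            self.head = nn.Linear(64, NUM_CLASSES)

        def forward(self, x):  # x: (batch, 3, frames, joints)
            seq = x.permute(0, 2, 1, 3).reshape(x.size(0), NUM_FRAMES, -1)
            _, h = self.gru(seq)
            return self.head(h[-1])

    class AttentionBranch(nn.Module):
        """Self-attention across joints: learns which body-part relations matter."""
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(3 * NUM_FRAMES, 64)
            self.attn = nn.MultiheadAttention(64, num_heads=4, batch_first=True)
            self.head = nn.Linear(64, NUM_CLASSES)

        def forward(self, x):  # x: (batch, 3, frames, joints)
            tokens = self.proj(x.permute(0, 3, 1, 2).reshape(x.size(0), NUM_JOINTS, -1))
            out, _ = self.attn(tokens, tokens, tokens)
            return self.head(out.mean(dim=1))

    def ensemble_probs(x, branches):
        """Soft voting: average the per-branch class probabilities."""
        probs = [torch.softmax(m(x), dim=-1) for m in branches]
        return torch.stack(probs).mean(dim=0)

    if __name__ == "__main__":
        clip = torch.randn(4, 3, NUM_FRAMES, NUM_JOINTS)  # stand-in pose clips
        branches = [SpatialBranch(), TemporalBranch(), AttentionBranch()]
        print(ensemble_probs(clip, branches).argmax(dim=-1))

Averaging probabilities is one simple fusion rule; the "all three models agree" figure above corresponds to checking whether the three branch predictions are unanimous before trusting the ensemble.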

The second project deals with stimming, the repetitive behaviors like arm flapping, head banging, and spinning that are common in autism. Working with 302 publicly available videos, we trained a skeleton-based model that reaches 91% accuracy using body pose alone. That is more than double the 47% that previous work managed on the same benchmark. When we combine the pose stream with the raw video frames through a late-fusion approach, accuracy jumps to 99.9%. Across the entire test set, only a single video was misclassified.
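
As a sketch of what that late-fusion step might look like, assuming each stream has already been trained and produces per-class probabilities, the snippet below simply takes a weighted average of the two probability vectors before picking a class. The equal fusion weight and the four example classes are assumptions for illustration, not the study's tuned values.

    # Minimal sketch of late fusion between a pose-based and a raw-video
    # classifier: each stream outputs class probabilities, and only those
    # outputs are combined. The 0.5 weight and four classes are assumptions.
    import torch

    def late_fusion(pose_probs: torch.Tensor,
                    rgb_probs: torch.Tensor,
                    pose_weight: float = 0.5) -> torch.Tensor:
        """Weighted average of per-class probabilities, then argmax.

        pose_probs, rgb_probs: (batch, num_classes) softmax outputs of the
        skeleton-based and raw-video models, respectively.
        """
        fused = pose_weight * pose_probs + (1.0 - pose_weight) * rgb_probs
        return fused.argmax(dim=-1)

    if __name__ == "__main__":
        # Stand-in outputs for 3 clips and 4 behavior classes (class names
        # such as flapping / head banging / spinning / other are assumed).
        pose = torch.softmax(torch.randn(3, 4), dim=-1)
        rgb = torch.softmax(torch.randn(3, 4), dim=-1)
        print(late_fusion(pose, rgb))

The appeal of late fusion here is that the skeleton-based and raw-video classifiers stay independent; only their outputs are combined, so either stream can be retrained or swapped without touching the other.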

The third project is E-MotionSpec, a web platform designed for clinicians and researchers who want to track motor development over time. It runs in any browser, uses MediaPipe to estimate body pose in real time, and extracts 44 movement features grouped into seven domains covering things like how smoothly someone moves, how quickly they initiate actions, and how coordinated their limbs are. We validated the platform on the same 118-participant dataset and found 36 features with statistically significant differences between the ASD and typically developing groups. Smoothness and initiation timing stood out as the strongest discriminators. The platform also includes tools for comparing sessions over time using frequency analysis and dynamic time warping, so a clinician can actually see whether someone's motor patterns are changing across weeks or months.
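
To give a flavor of how such measurements can come out of ordinary video, here is a minimal Python sketch, assuming a recorded session file and MediaPipe's Pose solution, that tracks one wrist landmark, scores its smoothness with a mean-squared-jerk proxy, and compares two sessions with classic dynamic time warping. The landmark choice, the smoothness definition, and the file names are illustrative; this is not E-MotionSpec's actual 44-feature, in-browser pipeline.

    # Minimal sketch: track one wrist with MediaPipe Pose, score smoothness
    # with a mean-squared-jerk proxy, and compare two sessions with DTW.
    # Landmark choice, feature definition, and file names are illustrative.
    import cv2
    import mediapipe as mp
    import numpy as np

    def wrist_trajectory(video_path: str, landmark_id: int = 16) -> np.ndarray:
        """Return (num_frames, 2) normalized x,y positions of one landmark
        (index 16 is the right wrist in MediaPipe Pose)."""
        pose = mp.solutions.pose.Pose(static_image_mode=False)
        cap = cv2.VideoCapture(video_path)
        points = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                lm = result.pose_landmarks.landmark[landmark_id]
                points.append((lm.x, lm.y))
        cap.release()
        pose.close()
        return np.array(points)

    def mean_squared_jerk(traj: np.ndarray, fps: float = 30.0) -> float:
        """Smoothness proxy: mean squared third derivative of position.
        Lower values mean smoother movement."""
        jerk = np.diff(traj, n=3, axis=0) * fps**3
        return float(np.mean(np.sum(jerk**2, axis=1)))

    def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Classic O(n*m) dynamic time warping between two 1-D series,
        e.g. the same feature tracked in two different sessions."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return float(cost[n, m])

    if __name__ == "__main__":
        session_a = wrist_trajectory("session_a.mp4")  # placeholder file names
        session_b = wrist_trajectory("session_b.mp4")
        print("smoothness A:", mean_squared_jerk(session_a))
        print("smoothness B:", mean_squared_jerk(session_b))
        print("DTW distance (x):", dtw_distance(session_a[:, 0], session_b[:, 0]))

Lower jerk values indicate smoother movement, and the DTW distance tolerates the two sessions being performed at slightly different speeds, which is what makes it useful for week-to-week comparisons.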

Taken together, these three projects offer a practical path toward earlier identification and better ongoing monitoring of motor difficulties in autism. Everything runs on a webcam and a web browser. No motion-capture suits, no force plates, no specialized labs. That matters for the families, schools, and clinics that need these tools the most and can least afford the alternatives.

Degree: PhD Comprehensive Defense (CS)
Degree Type: PhD Comprehensive Defense
Degree Field: Computer Science