
Center for Neuromusculoskeletal Research

The Center for Neuromusculoskeletal Research (CNMSR) at the University of South Florida focuses on understanding and treating chronic pain, movement disorders, and musculoskeletal dysfunction through advanced biomechanics, neuroscience, and rehabilitation science. Its researchers develop innovative diagnostic tools and personalized therapies using motion analysis, wearable technology, and translational clinical trials to improve long-term outcomes for patients.

Nathan Schilaty

DC, PhD

Director

Meet the Team

Latest News

Featured Publications

  • Madanian, Samaneh, Olayinka Adeleye, John Michael Templeton, Talen Chen, Christian Poellabauer, Enshi Zhang, and Sandra L. Schneider. 2025. “A Multi-Dilated Convolution Network for Speech Emotion Recognition.” Scientific Reports 15 (1): 8254. https://doi.org/10.1038/s41598-025-92640-2.

    Speech emotion recognition (SER) is an important application in affective computing and artificial intelligence. Recently, there has been significant interest in deep neural networks operating on speech spectrograms. Because the two-dimensional spectrogram representation captures more speech characteristics, convolutional neural networks (CNNs) and advanced image-recognition models are leveraged to learn deep patterns in a spectrogram and perform SER effectively. Accordingly, in this study, we propose a novel SER model based on learning from the utterance-level spectrogram. First, we use the Spatial Pyramid Pooling (SPP) strategy to remove the fixed input-size constraint of CNN-based image recognition. The SPP layer extracts both a global-level prominent feature vector and multi-local-level feature vectors, followed by an attention model that weights the feature vectors. Finally, we apply the ArcFace layer, typically used for face recognition, to the SER task, obtaining improved SER performance. Our model achieved an unweighted accuracy of 67.9% on the IEMOCAP dataset and 77.6% on the EMODB dataset.

  • De Silva, Upeka, Samaneh Madanian, Ajit Narayanan, John Michael Templeton, Christian Poellabauer, Sandra L. Schneider, and Rahmina Rubaiat. 2025. “A Proof-of-Concept Development on Speech Analysis for Concussion Detection.” Studies in Health Technology and Informatics 329: 1008-12. https://doi.org/10.3233/SHTI250991.

    Speech signal analysis to support objective clinical decision-making has gained immense interest, especially for neurological disorders. This research assessed the feasibility of speech analysis for detecting concussions. Using a speech dataset from 82 concussed and 82 healthy participants, we extracted two speech feature sets focusing on Mel-frequency cepstral coefficients (MFCCs) to characterize speech articulation. A machine learning pipeline was developed to discriminate concussed speech from healthy speech using Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT) classifiers. All three classifiers trained on the MFCC-based feature set achieved a Matthews correlation coefficient above 0.5 on the holdout dataset, and the DT model achieved 78% sensitivity and 75% specificity. The findings of this research serve as a proof of concept for speech-based concussion detection.
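The Spatial Pyramid Pooling step described in the first abstract is what lets a CNN accept spectrograms of any duration: the variable-size feature map is max-pooled at several pyramid levels into a fixed-length vector. A minimal NumPy sketch of that idea is below; the pyramid levels and array sizes are illustrative assumptions, and a plain 2-D array stands in for real CNN feature maps.

```python
# Sketch of Spatial Pyramid Pooling (SPP): pool a variable-size 2-D map
# into fixed-size level x level grids of maxima and concatenate them,
# so inputs of any size yield the same output length.
# Levels (1, 2, 4) are an illustrative choice, not the paper's setting.
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D array into l x l bins per level and concatenate."""
    h, w = feature_map.shape
    pooled = []
    for l in levels:
        # bin edges cover the whole map even when h or w is not divisible by l
        hs = np.linspace(0, h, l + 1, dtype=int)
        ws = np.linspace(0, w, l + 1, dtype=int)
        for i in range(l):
            for j in range(l):
                pooled.append(feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].max())
    return np.array(pooled)

# Spectrograms of different durations map to the same fixed-length vector.
short = spatial_pyramid_pool(np.random.rand(40, 120))
long_ = spatial_pyramid_pool(np.random.rand(40, 300))
print(short.shape, long_.shape)  # both (21,) = 1 + 4 + 16 bins
```

Because the output length depends only on the pyramid levels, a downstream dense or attention layer can be sized once regardless of utterance duration.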
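The pipeline in the second abstract — MFCC-style features fed to SVM, KNN, and Decision Tree classifiers and scored with the Matthews correlation coefficient — can be sketched with scikit-learn. The paper's speech dataset is not reproduced here, so the features below are synthetic stand-ins; the 13-coefficient dimensionality and the class separation are illustrative assumptions only.

```python
# Hedged sketch of the described pipeline: train SVM, KNN, and DT on
# MFCC-like feature vectors and report the Matthews correlation coefficient.
# Data is synthetic (the study's clinical dataset is not public).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
n, d = 164, 13                 # 82 concussed + 82 healthy; 13 MFCCs assumed
X = rng.normal(size=(n, d))
y = np.repeat([0, 1], n // 2)  # 0 = healthy, 1 = concussed
X[y == 1] += 0.8               # synthetic class shift so toy models can learn

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

scores = {}
for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier()),
                  ("DT", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    scores[name] = matthews_corrcoef(y_te, clf.predict(X_te))
    print(f"{name}: MCC = {scores[name]:.2f}")
```

MCC is a sensible headline metric here because it stays informative even when a holdout split is imbalanced, unlike raw accuracy.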

Our Collaborators

We gratefully acknowledge the following collaborators, whose expertise, insight, and continued partnership support and strengthen the lab’s research efforts:

Arts4All