Murtaza Bulut
Title · Cited by · Year
IEMOCAP: Interactive emotional dyadic motion capture database
C Busso, M Bulut, CC Lee, A Kazemzadeh, E Mower, S Kim, JN Chang, ...
Language resources and evaluation 42 (4), 335-359, 2008
Cited by 1303 · 2008
Analysis of emotion recognition using facial expressions, speech and multimodal information
C Busso, Z Deng, S Yildirim, M Bulut, CM Lee, A Kazemzadeh, S Lee, ...
Proceedings of the 6th international conference on Multimodal interfaces …, 2004
Cited by 995 · 2004
Emotion recognition based on phoneme classes
CM Lee, S Yildirim, M Bulut, A Kazemzadeh, C Busso, Z Deng, S Lee, ...
Eighth International Conference on Spoken Language Processing, 2004
Cited by 286 · 2004
An acoustic study of emotions expressed in speech
S Yildirim, M Bulut, CM Lee, A Kazemzadeh, Z Deng, S Lee, S Narayanan, ...
Eighth International Conference on Spoken Language Processing, 2004
Cited by 211 · 2004
Expressive speech synthesis using a concatenative synthesizer
M Bulut, SS Narayanan, AK Syrdal
Seventh International Conference on Spoken Language Processing, 2002
Cited by 156 · 2002
Expressive facial animation synthesis by learning speech coarticulation and expression spaces
Z Deng, U Neumann, JP Lewis, TY Kim, M Bulut, S Narayanan
IEEE transactions on visualization and computer graphics 12 (6), 1523-1534, 2006
Cited by 112 · 2006
Toward effective automatic recognition systems of emotion in speech
C Busso, M Bulut, S Narayanan, J Gratch, S Marsella
Social emotions in nature and artifact: emotions in human and human-computer …, 2013
Cited by 85 · 2013
Limited domain synthesis of expressive military speech for animated characters
WL Johnson, S Narayanan, R Whitney, R Das, M Bulut, C LaBore
Proceedings of 2002 IEEE Workshop on Speech Synthesis, 2002., 163-166, 2002
Cited by 64 · 2002
On the robustness of overall F0-only modifications to the perception of emotions in speech
M Bulut, S Narayanan
The Journal of the Acoustical Society of America 123 (6), 4547-4558, 2008
Cited by 62 · 2008
Constructing emotional speech synthesizers with limited speech database
R Tsuzuki, H Zen, K Tokuda, T Kitamura, M Bulut, S Narayanan
Proc. ICSLP 2, 1185-1188, 2004
Cited by 54 · 2004
Investigating the role of phoneme-level modifications in emotional speech resynthesis
M Bulut, C Busso, S Yildirim, A Kazemzadeh, CM Lee, S Lee, ...
Ninth European Conference on Speech Communication and Technology, 2005
Cited by 41 · 2005
Method and system for assisting patients
RS Jasinschi, M Bulut, L Bellodi
US Patent 9,747,902, 2017
Cited by 33 · 2017
Automatic dynamic expression synthesis for speech animation
Z Deng, M Bulut, U Neumann, S Narayanan
Proc. of IEEE Computer Animation and Social Agents 2004, 267-274, 2004
Cited by 26 · 2004
Camera-based heart rate monitoring in highly dynamic light conditions
V Jeanne, M Asselman, B den Brinker, M Bulut
2013 International Conference on Connected Vehicles and Expo (ICCVE), 798-799, 2013
Cited by 25 · 2013
A statistical approach for modeling prosody features using POS tags for emotional speech synthesis
M Bulut, S Lee, S Narayanan
2007 IEEE International Conference on Acoustics, Speech and Signal …, 2007
Cited by 22 · 2007
Recognition for synthesis: Automatic parameter selection for resynthesis of emotional speech from neutral speech
M Bulut, S Lee, S Narayanan
2008 IEEE International Conference on Acoustics, Speech and Signal …, 2008
Cited by 21 · 2008
Stress-measuring system
AAML Bruekers, M Bulut, V Mihajlovic, M Ouwerkerk, JHDM Westerink
US Patent 10,758,179, 2020
Cited by 19 · 2020
Spoken translation system using meta information strings
S Narayanan, P Georgiou, M Bulut, D Wang
US Patent 8,032,356, 2011
Cited by 17 · 2011
Speech recognition engineering issues in speech to speech translation system design for low resource languages and domains
S Narayanan, PG Georgiou, A Sethy, D Wang, M Bulut, S Sundaram, ...
2006 IEEE International Conference on Acoustics Speech and Signal Processing …, 2006
Cited by 14 · 2006
Signal selection for obtaining a remote photoplethysmographic waveform
AC Den Brinker, M Bulut, V Jeanne
US Patent 9,907,474, 2018
Cited by 12 · 2018