Our experience of the world is multimodal: we see objects, hear sounds, feel textures, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it involves multiple such modalities. For artificial intelligence to understand the world around us more completely, it needs to be able to capture and interpret these multimodal signals together. This talk provides a basic introduction to multimodal machine learning, followed by a practical example that offers deeper insight into multimodal learning. It is well suited to anyone who wants to start exploring the multimodal world around us.
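For a concrete flavor of what "interpreting multimodal signals together" can mean in practice, the short Python sketch below illustrates one common approach, early (feature-level) fusion: two toy modality-specific encoders produce feature vectors that are concatenated into a single joint representation. All functions, shapes, and names here are hypothetical illustrations, not material from the talk itself.

import numpy as np

rng = np.random.default_rng(0)

def encode_image(image):
    # Stand-in image encoder (hypothetical): flatten the image and
    # project it to a 16-dimensional feature vector.
    w = rng.normal(size=(image.size, 16))
    return image.flatten() @ w

def encode_audio(audio):
    # Stand-in audio encoder (hypothetical): project a 1-D waveform
    # to an 8-dimensional feature vector.
    w = rng.normal(size=(audio.size, 8))
    return audio @ w

image = rng.normal(size=(8, 8))   # toy "image" input
audio = rng.normal(size=100)      # toy "waveform" input

# Early fusion: concatenate per-modality features into one joint vector
# that a downstream classifier would consume.
fused = np.concatenate([encode_image(image), encode_audio(audio)])
print(fused.shape)  # (24,)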
Speaker: Dr. Pham Thi Viet Huong, AVITECH
Time: 15:30, Tuesday, April 28, 2020
Venue: Webinar – Microsoft Teams
Pham Thi Viet Huong obtained her B.Sc. in Electrical Engineering from Hanoi University of Science and Technology in 2007. She received her M.Sc. and Ph.D., both in Electrical Engineering, from the University of Massachusetts Lowell in the United States, in 2010 and 2012, respectively. From 2012 to 2015, she was a researcher at the Manning School of Business in Lowell, Massachusetts. Since 2017, she has been a faculty member at VNU University of Engineering and Technology (VNU-UET), Vietnam. Her research interests include data mining and analytics and machine learning methodologies, with applications in biomedical engineering and cybersecurity.