SMART




Research Engineer (Multi-Modal Interactions)

IRG_M3S_2023_010

Posted on 25 August 2023
Group: M3S

Project Overview

We are seeking one or more Research Engineers in computer science or computer engineering to join the “Supporting Interactive Online Learning” project. This project is part of Task “T4”, titled “AI for Human Capital Development in Future of Work”, of an ambitious five-year research program on "Mens, Manus and Machina—How AI Empowers People, Institutions and Cities in Singapore (M3S)."

Successful applicants for this position will have the opportunity, in partnership with other post-doctoral colleagues and PhD students, to work on cutting-edge research that builds new capabilities for AI-enhanced interaction between assistive agents/chatbots and humans, with a special focus on supporting multi-modal natural interactions for learning tasks. The end goal is to build exciting new capabilities for scalable adult pedagogy and learning that allow adult learners to enjoy the advantages of self-paced, online learning while still benefiting from the higher levels of engagement generated by the use of smart, multi-modal chatbots.

This research will specifically look at the integration of verbal queries, natural gestures (e.g., pointing to specific diagrams or content in a book or screen) and 2D (RGB) and 3D point-cloud (LIDAR) inputs to provide a richer contextualized understanding of doubts and queries issued by an adult learner. The work will build and optimize AI models that perform such integration of multi-sensory input to achieve “immersive visual grounding of human queries”. Such AI models will be integrated into prototypes of multi-modal chatbots that can be deployed as plug-ins to a variety of laptop/mobile-based learning platforms. Later, as the work progresses, such chatbots can be integrated with state-of-the-art LLMs and generative AI technologies to generate responses to the issued queries using a combination of text and images/video.

The T4 team is led by distinguished scholars, including Profs. Jinhua Zhao and Alex (Sandy) Pentland from MIT; and Prof. Archan Misra from Singapore Management University. The proposed research described above will be spearheaded by Prof. Misra.

Job Description

  • Implement, train and test embedded AI models for spatially-aware multi-modal instruction comprehension.
  • Develop mobile/laptop-based prototypes for learning-driven chatbots that integrate AI models and multi-modal sensory input.
  • Design, implement and evaluate experiments to characterize the performance of such prototypes.
  • Assist the overall team in publishing research results in top-tier conferences and journals, with a focus on venues associated with pervasive/ubiquitous computing applications, mobile/embedded systems, AI and education science.

Job Requirements

  • Bachelor’s or Master’s (preferred) degree in Computer Science/Engineering or Electronics Engineering
  • 1–2 years of project-level experience with embedded AI implementations (e.g., for vision or speech applications) on platforms such as Raspberry Pi or Nvidia Jetson is essential.
  • High proficiency with ML libraries and frameworks (e.g., Tensorflow, PyTorch) is essential.
  • Experience with implementing signal processing algorithms over data streams from sensors, such as IMU, microphone or LIDAR, is highly desired.
  • Record of co-authored publications in top-tier conferences and journals is a plus.
  • Strong communication and collaboration skills.

This role is primarily based in Singapore and appointment is on a two-year contract, with the possibility of extension.

If you want to find out more about the role, please contact Professor Archan Misra (archanm@smu.edu.sg).

Interested applicants are invited to send in their full CV/resume, cover letter and list of three references (to include reference names and contact information). We regret that only shortlisted candidates will be notified.

Apply

Please download and complete our SMART Job Application Form and upload it in the field below. Thank you.

Personal Details

First Name*
Last Name*
Phone No*
Email Address*
How did you hear about this job?
Notification on personal data protection

File Attachments

Please upload files in PDF or Word format only. File size is limited to 2MB.

Upload Completed SMART Job Application Form *
Upload Your CV *
Upload Cover Letter
Any Other Required Documents

Last Updated 01/03/2024 | Privacy Policy | Terms of Use | Open Access Articles | Sitemap | © 2016 All Rights Reserved. Singapore-MIT Alliance for Research and Technology