

Postdoctoral Associate (Reflex and Habitual Motor Control for Embodied AI)

IRG_M3S_TCRP_2026_003

Posted on 06 May 2026
Group: M3S

Project Overview

Mens, Manus and Machina (M3S), an Interdisciplinary Research Group (IRG) at the Singapore-MIT Alliance for Research and Technology (SMART), invites applications for a Postdoctoral Associate to join a newly funded research programme in Embodied Artificial Intelligence under the National Research Foundation Thematic Competitive Research Programme titled "Biologically Inspired Embodied AI Action Architecture."

The programme develops a new robotic foundation-model architecture that combines vision-language reasoning with high-frequency motor control grounded in force-torque and tactile feedback. The aim is to enable robots to act in real-world industrial settings — including aerospace MRO, semiconductor manufacturing, and offshore platform operations — even when continuous visual input is unavailable, and to learn the appropriate amount of force to exert for contact-rich tasks.

The programme is jointly led by the Institute for Infocomm Research (I²R) at A*STAR (Lead PI: Dr. Yau Wei Yun), the Singapore-MIT Alliance for Research and Technology (Team-PI for SMART: Dr. Alok Prakash), and Nanyang Technological University (Team-PI: Prof. Yang Jianfei). It is conducted in close collaboration with Prof. Daniela Rus (Director, MIT CSAIL; Co-Lead, M3S). The successful candidate will be based at SMART CREATE in Singapore and will work directly with all three partner institutions and with MIT CSAIL.

This position will contribute to the development of the high-frequency reflex and habitual motor-control layer of the new architecture, working in close coordination with the foundation-model and integration teams at I²R and NTU.

The key focus areas of this position include:

1. Reflex and Habitual Action Generation

Develop closed-loop motor-control policies that map robot proprioception, force-torque, and tactile contact signals to motor commands at high control frequencies, decoupled from continuous visual input. The successful candidate will draw on contemporary methods in robot learning and control to design, implement, and evaluate compact policies suitable for real-time deployment on physical hardware.
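To illustrate the shape of such a reflex policy, here is a minimal sketch in plain NumPy. The sensor dimensions, network size, and random (untrained) weights are all illustrative assumptions, not the project's actual architecture; the point is a compact, vision-free mapping from proprioceptive, force-torque, and tactile signals to bounded motor commands that is cheap enough to evaluate at high control rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the project's specification):
# 7-DoF arm proprioception (q, dq), 6-axis force-torque, 16 tactile taxels.
N_PROPRIO, N_FT, N_TACTILE, N_MOTOR = 14, 6, 16, 7
N_OBS = N_PROPRIO + N_FT + N_TACTILE
HIDDEN = 64

# A compact two-layer policy, small enough to evaluate well above 1 kHz.
W1 = rng.standard_normal((HIDDEN, N_OBS)) * 0.05
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((N_MOTOR, HIDDEN)) * 0.05
b2 = np.zeros(N_MOTOR)

def reflex_policy(obs: np.ndarray) -> np.ndarray:
    """Map one sensor reading to motor commands; no visual input required."""
    h = np.tanh(W1 @ obs + b1)
    return np.tanh(W2 @ h + b2)  # bounded commands in [-1, 1]

def control_step(proprio, ft, tactile):
    """One high-frequency control tick over the non-visual sensor stack."""
    obs = np.concatenate([proprio, ft, tactile])
    return reflex_policy(obs)

# One simulated tick with dummy (zero) sensor data.
tau = control_step(np.zeros(N_PROPRIO), np.zeros(N_FT), np.zeros(N_TACTILE))
print(tau.shape)  # (7,)
```

In practice the weights would come from imitation or reinforcement learning rather than random initialisation, and the loop would run on a real-time thread decoupled from the slower vision-language planner.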

2. Basic-Action Library and Composition Interface

Operationalize the project's basic-action set as a composable library with well-defined initiation and termination criteria grounded in force, contact, and joint-state feedback. Define and maintain the interface between this library and the higher-level planning and foundation-model components developed by partner teams at I²R and NTU, so that planning and reflex layers can run asynchronously and in parallel.
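A minimal sketch of how such a basic-action library and its composition interface might be structured. The primitive names, sensor fields, and numeric thresholds below are hypothetical illustrations, not the project's actual interface; the essential idea is that each primitive carries explicit initiation and termination predicates over force, contact, and joint-state feedback, so a planner can issue primitive names while all switching decisions are made locally from feedback.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

# Hypothetical sensor snapshot passed to every predicate; the field names
# are illustrative, not the project's real sensor interface.
State = Dict[str, float]

@dataclass(frozen=True)
class BasicAction:
    """One primitive with explicit, feedback-grounded start/stop criteria."""
    name: str
    can_start: Callable[[State], bool]  # initiation criterion
    is_done: Callable[[State], bool]    # termination criterion

# Example primitives for a contact-rich insertion task (assumed thresholds).
LIBRARY = {
    "approach": BasicAction(
        "approach",
        can_start=lambda s: s["contact_force"] < 0.5,   # free space
        is_done=lambda s: s["contact_force"] >= 0.5,    # first contact
    ),
    "insert": BasicAction(
        "insert",
        can_start=lambda s: s["contact_force"] >= 0.5,
        is_done=lambda s: s["insertion_depth"] >= 0.02,  # seated at 2 cm
    ),
}

def compose(plan: List[BasicAction], state_stream: Iterable[State]) -> List[str]:
    """Execute a planner-issued sequence of primitives over sensor states.

    The planner only names primitives; switching is decided here from
    force/joint feedback, so the two layers can run asynchronously.
    """
    executed = []
    actions = iter(plan)
    current = next(actions, None)
    for state in state_stream:
        if current is None:
            break
        if current.is_done(state):
            executed.append(current.name)
            nxt = next(actions, None)
            if nxt is not None and not nxt.can_start(state):
                break  # precondition violated; hand back to the planner
            current = nxt
    return executed

plan = [LIBRARY["approach"], LIBRARY["insert"]]
states = [
    {"contact_force": 0.0, "insertion_depth": 0.0},
    {"contact_force": 1.2, "insertion_depth": 0.0},    # contact made
    {"contact_force": 3.0, "insertion_depth": 0.025},  # fully seated
]
print(compose(plan, states))  # ['approach', 'insert']
```

A real interface would add message-queue or shared-memory plumbing so the planning and reflex layers run in separate processes at different rates, but the contract stays the same: primitives expose their own start/stop criteria.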

3. Real-Time Integration, Benchmarking, and Demonstration

Implement and validate the reflex/habitual stack on the team's primary hardware (industrial robot arm, with stretch goals on quadruped and humanoid platforms). Benchmark performance against state-of-the-art Vision-Language-Action (VLA) baselines on metrics including action latency, contact-force stability, data efficiency, and the ability to execute primitives without continuous visual input.
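Two of the benchmark metrics named above, action latency and contact-force stability, can be computed from logged data along these lines. This is a sketch with dummy numbers; the actual benchmarking harness and logging format are not specified in this posting.

```python
import statistics

def latency_stats(timestamps_cmd, timestamps_obs):
    """Per-step action latency (s): command time minus triggering observation."""
    lat = [c - o for c, o in zip(timestamps_cmd, timestamps_obs)]
    return statistics.mean(lat), max(lat)

def force_stability(forces, target):
    """RMS deviation of measured contact force (N) from the target force."""
    return (sum((f - target) ** 2 for f in forces) / len(forces)) ** 0.5

# Dummy log: observations at 1 kHz, commands issued ~0.8 ms later
# (assumed values for illustration only).
obs_t = [i * 0.001 for i in range(5)]
cmd_t = [t + 0.0008 for t in obs_t]
mean_lat, worst_lat = latency_stats(cmd_t, obs_t)

forces = [4.8, 5.1, 5.0, 4.9, 5.2]  # measured contact forces, N
rms = force_stability(forces, target=5.0)
print(mean_lat, worst_lat, rms)
```

Comparable statistics over a VLA baseline's logs would then support the latency and stability comparisons, alongside data-efficiency and vision-dropout evaluations.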

Responsibilities

  • Design, train, and deploy closed-loop manipulation or whole-body control policies on physical hardware, integrating force-torque, tactile, IMU, and proprioceptive signals.
  • Develop training data through teleoperation, simulation, and sim-to-real transfer, in close collaboration with the data-collection team at I²R.
  • Define and maintain the asynchronous interface between the reflex/habitual layer and the higher-level planning and foundation-model components developed in partnership with I²R and NTU.
  • Engage with Prof. Daniela Rus's group at MIT CSAIL and adjacent collaborators on the conceptual coherence of the reflex layer with current MIT robotics research; participate in joint MIT–SMART research discussions.
  • Publish in top robotics, learning, and AI venues; release reproducible code where appropriate.
  • Mentor graduate students and research engineers contributing to the project; assist with project reporting, manpower listing updates, and other administrative duties as required by the SMART PI.
  • Perform other duties as needed.

Requirements

  • Ph.D. in Robotics, Computer Science, Electrical Engineering, Mechanical Engineering, or a closely related field.
  • Demonstrated experience training and deploying closed-loop control or learned policies on physical robotic hardware.
  • Background in modern robot learning and policy learning methods, with hands-on experience in deep learning and reinforcement learning frameworks.
  • Working knowledge of real-time control concepts (e.g., compliance, impedance, hybrid force-position) sufficient to ensure safe contact behaviour on torque-controlled platforms.
  • Strong programming skills in Python and standard deep learning frameworks; fluency with at least one major robot simulation environment; and working knowledge of robot middleware.
  • Strong publication record in top-tier robotics, learning, or AI venues.
  • Excellent communication and collaboration skills, with the ability to work across institutions and time zones (Singapore–Boston).
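For reference, the kind of control law the compliance/impedance requirement refers to can be sketched in a few lines. Gains, dimensions, and the joint-space formulation are illustrative, not tuned for any real platform.

```python
import numpy as np

def impedance_torque(q, dq, q_des, K, D):
    """Joint-space impedance law: tau = K (q_des - q) - D dq.

    Rendering a spring-damper at the joints keeps contact forces bounded,
    which is the usual starting point for safe contact behaviour on
    torque-controlled arms.
    """
    return K @ (q_des - q) - D @ dq

n = 7                      # 7-DoF arm (assumption)
K = np.diag([50.0] * n)    # stiffness, N*m/rad (illustrative)
D = np.diag([5.0] * n)     # damping, N*m*s/rad (illustrative)
q = np.zeros(n)            # current joint positions
dq = np.zeros(n)           # current joint velocities
q_des = np.full(n, 0.1)    # small setpoint offset

tau = impedance_torque(q, dq, q_des, K, D)
print(tau)  # each joint: 50 * 0.1 = 5.0 N*m toward the setpoint
```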

Preferred experience in any of the following:

  • Sim-to-real transfer for contact-rich manipulation or legged/whole-body control.
  • Recent open Vision-Language-Action models and their fine-tuning, evaluation, or extension.
  • Multimodal sensor integration (force, tactile, vision) into policy learning pipelines.
  • GPU-parallel simulation, large-scale teleoperation, or differentiable physics.

Interested applicants are invited to send in their full CV/resume, a cover letter, and a list of three references (including names and contact information). We regret that only shortlisted candidates will be notified.

Apply

Please download and complete our SMART Job Application Form and upload it in the field below. Thank you.


File Attachments

Please upload PDF or Word documents only. File size is limited to 2 MB.

Upload Completed SMART Job Application Form *
Upload Your CV *
Upload Cover Letter
Any Other Required Documents

Last Updated 01/03/2024 © 2016 Singapore-MIT Alliance for Research and Technology. All Rights Reserved.