The perception and prediction team at Motional, authors of nuScenes (https://www.nuscenes.org), PointPillars (https://arxiv.org/abs/1812.05784), PointPainting (http://arxiv.org/abs/1911.10150), and CoverNet (https://arxiv.org/abs/1911.10298) is looking for engineers and researchers to join us in our mission to launch an autonomous taxi system.
The Motional global headquarters are located at 100 Northern Avenue in Boston, MA. Nestled in the bustling Seaport district with sweeping views of Boston Harbor and the downtown skyline, the offices are close to major transit lines and a quick walk from a variety of restaurants and popular attractions.
What You'll Be Doing:
- Research and develop neural networks for perception problems such as object detection, instance segmentation, sensor fusion, and traffic light detection.
- Develop clean C++ software for the perception modules that sit at the core of our autonomous vehicle and interface with all other key modules, such as localization, mapping, and planning.
- Own the full life cycle of new feature development, from proof of concept to on-car software integration, by coordinating with downstream teams.
- Develop the core deep learning codebase for efficient training and testing pipelines.
- Conduct deep learning experiments, write reports, and file patents.
- Mentor junior researchers by guiding research projects and reviewing design documents.
What We're Looking For:
- Master's or PhD in Machine Learning, Computer Science, Applied Mathematics, Statistics, Physics, or a related field.
- Experience designing, training, and analyzing neural networks for at least one of the following applications: object detection, image segmentation, sensor fusion, multi-task learning, traffic light detection, active learning, or one-shot learning.
- 5+ years of experience in C++ software development and fluency in Python.
- Experience with PyTorch or other deep learning frameworks.
- Experience in automotive or other real-time and embedded systems.
- Proven track record of publications at relevant conferences (CVPR, ICML, NeurIPS, ICCV, ICLR).