Sr Software Development Engineer
Amazon.com
AWS AI is looking for world-class software developers to join the Deep Learning cross-framework team. In this organization, you will contribute extensions to the TensorFlow and PyTorch machine learning frameworks and develop cross-framework solutions that support training Deep Learning models at scale, across thousands of accelerators. You will work in a fast-paced, cross-disciplinary team of engineers and researchers who are leaders in the field. You will take on challenging problems, elicit requirements, and deliver innovative solutions into production that establish the AI team as a thought leader in the space.
Key job responsibilities
As a Software Development Engineer in the SageMaker Engines team, you will be responsible for:
- Developing innovative solutions to support Large Language Model training across a cluster of nodes;
- Implementing model parallelism methods, such as pipeline and tensor parallelism, as extensions to the PyTorch framework;
- Implementing sharding of the model training state, activation checkpointing/offloading, and other memory-saving techniques;
- Optimizing distributed training by profiling, identifying bottlenecks, and addressing them through improved compute and network performance, as well as better compute/communication overlap;
- Optimizing communication collectives for the AWS network infrastructure.
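To illustrate the kind of work involved, here is a minimal sketch of ZeRO-style sharding of the training state, one of the memory-saving techniques named above. This is a hypothetical plain-Python helper for illustration only, not the team's actual implementation: it computes which contiguous slice of a flattened parameter list each data-parallel rank would own, so per-rank optimizer-state memory drops roughly by a factor of the world size.

```python
def shard_params(num_params: int, world_size: int, rank: int) -> range:
    """Return the index range of parameters owned by `rank`.

    Hypothetical helper: splits `num_params` parameters as evenly as
    possible across `world_size` ranks, giving earlier ranks one extra
    parameter when the division is not exact.
    """
    base, rem = divmod(num_params, world_size)
    start = rank * base + min(rank, rem)
    size = base + (1 if rank < rem else 0)
    return range(start, start + size)
```

In a real system, each rank would materialize optimizer state only for its own slice and use communication collectives (e.g., all-gather and reduce-scatter) to reconstruct full parameters and distribute gradients during each step.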
About the team
The SageMaker Engines team develops technology to support training of Deep Learning models at large scale. This entails implementing model parallelism and memory-saving techniques that allow models to be trained across accelerators, as well as network communication collectives optimized for the AWS infrastructure.
We are open to hiring candidates to work out of one of the following locations:
Santa Clara, CA, USA