Machine Learning Engineer II, Annapurna ML
Amazon.com
The Annapurna ML pathfinding team is a new function within the Annapurna ML go-to-market org that helps customers accelerate their adoption of Annapurna ML products, including AWS Trainium and AWS Inferentia. The team offers hands-on data science and coding services to our most strategic customer opportunities, helping them launch their training and inference workloads on AWS purpose-built ML silicon offerings.
Key job responsibilities
In this customer-facing role, you will be responsible for helping our most strategic customers port their models to the AWS Trainium and Inferentia platforms by delivering high-quality code and customizations that make the models functional and performant. You will use and provide feedback on the various Neuron SDK libraries and help prototype and develop new features based on the latest research findings and customer requests.
A day in the life
You will assist our most strategic customers in porting their models to AWS Trainium and Inferentia.
You will work directly with customer data scientists and ML engineering teams, writing code to make the models performant on AWS purpose-built silicon solutions. This may require low-level coding in C++ and writing custom kernels to achieve the best possible performance.
You will also be responsible for porting the latest open-source models to AWS Trainium/Inferentia, and you will contribute to popular open-source projects to help add support for AWS Trainium/Inferentia.
The role requires close collaboration with the Neuron engineering team to help drive the Neuron product roadmap and provide feedback on improving product quality.
About the team
Our team's mission is to provide the fastest, most cost-effective, and most user-friendly place to train and deploy generative AI workloads in the cloud. The team provides white-glove service to our most strategic customers to implement their models for both training and inference using the Neuron SDK and its associated libraries and APIs.