Palo Alto, CA, US
Sr Software Engineer, ML Infra, Amazon Search
The Amazon Search team owns the software that powers Search, a critical customer-facing feature of Amazon.com. Whenever you visit an Amazon site anywhere in the world, it's our technology that delivers your search results. Our services are used by millions of Amazon customers every day.

The Search Engine Infrastructure team is responsible for the large-scale distributed software systems that power those results. We design, build, and operate high-performance, fault-tolerant software services that apply the latest technologies to solve customer problems. As part of this vision, we are building the infrastructure to enable next-generation deep-learning-based relevance ranking that can be deployed quickly and reliably, with the ability to analyze model and system performance in production. We focus on high availability and on frugally serving billions of requests per day at low latency. We work alongside applied scientists and ML engineers to make this happen.

Joining this team, you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging the resources of Amazon.com (AMZN), one of the world's leading internet companies. We provide a highly customer-centric, team-oriented environment in our offices located in Palo Alto, California, with a team in San Francisco, California.


Key job responsibilities
As a senior engineer on this team, you will:

1. Evolve a sophisticated deep-learning ranking system and feature store deployed across thousands of machines in AWS, serving billions of queries at latencies of tens of milliseconds.
2. Immerse yourself in imagining and providing cutting-edge solutions to large-scale information retrieval and machine learning (ML/DL) problems.
3. Maintain a relentless focus on scalability, latency, performance, robustness, and cost trade-offs, especially those present in highly virtualized, elastic, cloud-based environments.
4. Conduct and automate performance testing of the model-serving system to evaluate different hardware options (including GPUs and specialized accelerators such as AWS Inferentia2), model architectures, and serving configurations.
5. Lead implementation and enhancement of a rapid experimentation framework to test ranking hypotheses.
6. Create mechanisms to ensure models work as expected in production.
7. Work closely with applied scientists to determine the requirements for deploying ranking models in production environments.
8. Work closely with Principal Engineers in Amazon Search to set the technical vision for this team.