Bengaluru, Karnataka, India
Lead Data Engineer

Are you ready to elevate your career in the dynamic world of data engineering? Join our Data Engineering Team at JPMorgan Chase in Bengaluru, where your skills and passion will drive innovation and make a significant impact. We offer unparalleled opportunities for career growth and a collaborative environment where you can thrive and contribute to meaningful projects.

As a Lead Data Engineer at JPMorgan Chase within the Data Engineering Team, you will design and deliver trusted, market-leading technology products in a secure, stable, and scalable manner. You will implement critical technology solutions across various technical areas to support the firm's business objectives. Collaborating closely with a diverse team, you will contribute to the enhancement of our data lake, a crucial tool for understanding our customers and informing business decisions.

Job Responsibilities:

- Design, develop, and manage ETL jobs, data marts, event collection, and processing tools.
- Build data pipelines and tooling to support stakeholders across the project.
- Create secure, high-quality production code and maintain algorithms that run in sync with the appropriate systems.
- Write and maintain documentation of the technical architecture.
- Participate in regular code reviews to maintain code quality and adhere to best practices.
- Identify quick wins that improve the end-user experience.
- Stay current with the latest trends and technologies in data engineering.
- Work effectively in a team environment and contribute to team goals.
- Add to a team culture of diversity, equity, inclusion, and respect.

Required Qualifications, Capabilities, and Skills:

- Formal training or certification in Python, Java, or Scala concepts and 5+ years of applied experience.
- Ability to design and implement scalable data pipelines for batch and real-time data processing.
- Experience with big data technologies such as Spark, Hadoop, Hive, and EMR.
- Experience as a data engineer with a track record of manipulating, processing, and extracting value from large datasets.
- Experience with modern data warehouse platforms such as Amazon Redshift, Google BigQuery, or Snowflake.
- Proficiency with both relational and NoSQL databases.
- Experience with cloud platforms such as AWS, GCP, or Microsoft Azure.
- Experience with distributed systems as they pertain to data storage and computing.
- Experience with Kafka components such as producers, consumers, Kafka Streams, and Kafka Connect.
- Experience developing, debugging, and maintaining code in a large corporate environment, with a solid understanding of agile practices such as CI/CD, application resiliency, and security.
- Overall knowledge of the Software Development Life Cycle, plus strong project management skills to oversee data engineering projects and ensure they are delivered on time and within scope.

Preferred Qualifications, Capabilities, and Skills:

- Knowledge of modern data lake table formats, e.g., Iceberg and Hudi.
- Experience with large-scale, high-throughput data systems.
- Experience with Kubernetes for container orchestration, including deploying, scaling, and managing containerized applications.
- Certifications in relevant technologies or platforms, such as AWS Certified Big Data – Specialty, Google Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate, are advantageous.