You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within the Payments Trust & Safety team, you are part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. Our goal is to keep JPMorgan Chase and our clients safe as they transact through ACH, wire, and credit card channels. The Data team is responsible for making large-scale data available across lines of business so that machine learning can be applied to our most critical and wide-ranging customer products, solving not only Trust & Safety problems (e.g., fraud) but also related problems (e.g., payment optimization, forecasting). As a member of this team, you will work with many lines of business and develop machine learning solutions that have a broader impact for the bank. We work closely with our engineering and product partners to develop and deploy solutions that reach our customers.
Job Responsibilities:
Collaborate with all of JPMorgan’s lines of business and functions to deliver software solutions.
Experiment, architect, develop, and productionize efficient data pipelines, data services, and data platforms that contribute to the business.
Design and implement highly scalable, efficient, and reliable data processing pipelines, and perform analysis to produce insights that drive and optimize business results.
Act on previously identified opportunities to converge physical, IT, and data security architecture to manage access.
Champion the firm’s culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities and skills:
Formal training or certification on large-scale technology program concepts and 2+ years of applied experience in data technologies.
Experienced programming skills in Java, Python, or equivalent languages.
Experience across the data lifecycle, building data frameworks and working with data lakes.
Experience with batch and real-time data processing with Spark or Flink.
Working knowledge of AWS Glue and EMR for data processing.
Experience working with Databricks.
Experience working with Python/Java and PySpark.
Working experience with both relational and NoSQL databases.
Experience with ETL data pipelines, both batch and real-time, data warehousing, and NoSQL databases.

Preferred qualifications, capabilities and skills:
Cloud computing: Amazon Web Services, Docker, Kubernetes.
Experience with big data technologies: Hadoop, Hive, Spark, Kafka.
Experience in distributed system design and development.