Bangalore
Lead I - Data Engineering - (Databricks + Python + PySpark)

Role Proficiency:

This role requires proficiency in data pipeline development, including coding and testing pipelines that ingest, wrangle, transform, and join data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
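
Because the role centers on coding pipelines in Python and PySpark, a minimal, illustrative sketch of the kind of ingest-wrangle-join step described above is shown below; the bucket paths, table name, and column names are assumptions made only for this example, not part of the role definition.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest_sketch").getOrCreate()

# Ingest: read raw CSV orders and curated customer reference data (paths are illustrative)
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
customers = spark.read.parquet("s3://example-bucket/curated/customers/")

# Wrangle/transform: cast types and drop malformed rows
orders_clean = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Join with reference data and land the result in a Delta table
enriched = orders_clean.join(customers, on="customer_id", how="left")
enriched.write.format("delta").mode("append").saveAsTable("analytics.orders_enriched")
```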

Outcomes:

• Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reuse of proven solutions.
• Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
• Document and communicate milestones/stages for end-to-end delivery.
• Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
• Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
• Validate results with user representatives, integrating the overall solution seamlessly.
• Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
• Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
• Influence and improve customer satisfaction through effective data solutions.

Measures of Outcomes:

• Adherence to engineering processes and standards
• Adherence to schedule / timelines
• Adherence to SLAs where applicable
• Number of defects post delivery
• Number of non-compliance issues
• Reduction in recurrence of known defects
• Quick turnaround of production bugs
• Completion of applicable technical/domain certifications
• Completion of all mandatory training requirements
• Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times)
• Average time to detect, respond to, and resolve pipeline failures or data issues
• Number of data security incidents or compliance breaches

Outputs Expected:

Code Development:

Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.


Documentation:

Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.


Configuration:

Define and govern the configuration management plan. Ensure compliance within the team.


Testing:

Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.


Domain Relevance:

Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.


Project Management:

Manage the delivery of modules effectively.


Defect Management:

Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.


Estimation:

Create and provide input for effort and size estimation for projects.


Knowledge Management:

Consume and contribute to project-related documents, SharePoint, libraries, and client universities. Review reusable documents created by the team.


Release Management:

Execute and monitor the release process to ensure smooth transitions.


Design Contribution:

Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.


Customer Interface:

Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.


Team Management:

Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.


Certifications:

Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples:

• Proficiency in SQL, Python, or other programming languages used for data manipulation.
• Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
• Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
• Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications.
• Experience in performance tuning of data processes (see the sketch below).
• Expertise in designing and optimizing data warehouses for cost efficiency.
• Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
• Capacity to clearly explain and communicate design and development aspects to customers.
• Ability to estimate time and resource requirements for developing and debugging features or components.
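
As a hedged illustration of the performance-tuning skills listed above, the sketch below shows two common PySpark optimizations: a broadcast join of a small dimension table and a date-partitioned Delta write. The table and column names carry over from the earlier example and are purely illustrative.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning_sketch").getOrCreate()

orders = spark.read.table("analytics.orders_enriched")   # large fact table (assumed name)
products = spark.read.table("analytics.dim_products")    # small dimension table (assumed name)

# Broadcast the small dimension so the join avoids shuffling the large side
joined = orders.join(F.broadcast(products), on="product_id", how="left")

# Partition the output by date so downstream queries can prune files
(
    joined
    .withColumn("order_date", F.to_date("order_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_by_day")
)
```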

Knowledge Examples:

• Knowledge of the various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLS.
• Proficiency in SQL for analytics, including windowing functions (see the sketch below).
• Understanding of data schemas and models relevant to various business contexts.
• Familiarity with domain-related data and its implications.
• Expertise in data warehousing optimization techniques.
• Knowledge of data security concepts and best practices.
• Familiarity with design patterns and frameworks in data engineering.
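
The windowing-function knowledge mentioned above can also be exercised directly from PySpark. The sketch below ranks orders per customer with a window specification; the table and column names are assumptions used only for illustration.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("windowing_sketch").getOrCreate()
orders = spark.read.table("analytics.orders_enriched")  # illustrative table name

# Rank each customer's orders by amount and keep the top three per customer
w = Window.partitionBy("customer_id").orderBy(F.col("amount").desc())
top_orders = (
    orders
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") <= 3)
)
top_orders.show()
```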

Additional Comments:

Senior Data Engineer

Job Summary:
We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you.

Key Responsibilities:
• Collaborate with agile teams to design and develop cutting-edge data engineering solutions.
• Build and maintain distributed, low-latency, and reliable data pipelines, ensuring high availability and timely delivery of data.
• Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities.
• Develop high-performance, real-time data ingestion solutions for streaming workloads.
• Adhere to best practices and established design patterns across all data engineering initiatives.
• Ensure code quality through elegant design, efficient coding, and performance optimization.
• Focus on data quality and consistency by implementing monitoring processes and systems.
• Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source to Target Mapping documents.
• Perform data analysis to troubleshoot and resolve data-related issues.
• Automate data engineering pipelines and data validation processes to eliminate manual interventions.
• Implement data security and privacy measures, including access controls, key management, and encryption techniques.
• Stay updated on technology trends, experimenting with new tools and educating team members.
• Collaborate with analytics and business teams to improve data models and enhance data accessibility.
• Communicate effectively with both technical and non-technical stakeholders.

Qualifications:
• Education: Bachelor's degree in Computer Science, Computer Engineering, or a related field.
• Experience: Minimum of 5 years in architecting, designing, and building data engineering solutions and data platforms.
• Proven experience in building Lakehouses or Data Warehouses on platforms like Databricks or Snowflake.
• Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks.
• Proficiency with data acquisition and transformation tools such as Fivetran and dbt.
• Strong experience in building efficient data engineering pipelines using Python and PySpark.
• Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink.
• Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming (see the sketch below).
• Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog.
• Expertise in advanced SQL programming and performance tuning.

Key Skills:
• Strong problem-solving abilities and perseverance in the face of ambiguity.
• Excellent emotional intelligence and interpersonal skills.
• Ability to build and maintain productive relationships with internal and external stakeholders.
• A self-starter mentality with a focus on growth and quick learning.
• Passion for operational products and creating outstanding employee experiences.
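
The qualifications above mention real-time stream processing with Kafka and Spark Structured Streaming. As a non-authoritative sketch under assumed broker, topic, schema, and table names, a minimal streaming ingestion into a Delta table might look like this:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming_sketch").getOrCreate()

# Assumed message schema for this illustration
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a Kafka topic as a stream (broker address and topic name are illustrative;
# requires the spark-sql-kafka connector on the classpath)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append the parsed events to a Delta table, with a checkpoint for fault tolerance
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .toTable("analytics.orders_stream")
)
query.awaitTermination()
```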
