Data Platforms Engineering Expert
HP
Job Overview
We are building cutting-edge Data Lake and data solutions, primarily with Databricks on AWS.
We are looking for an experienced Data Tech Lead to join our IT organization and be part of our data transformation program.
In this role, you will be responsible for leading the technical aspects of the program, including working with offshore teams and handling infrastructure and design concerns.
As part of the program's core team, you will design and implement the data architecture and technology requirements, ensuring alignment with our overall data strategy.
Responsibilities
Lead the design and implementation of the Data NextGen Platform architecture, leveraging modern technologies and best practices.
Build scalable data pipelines to integrate and model datasets from different sources that meet functional and non-functional requirements.
Evaluate and recommend technologies, tools, and frameworks to support our organization's data needs.
Manage the technical backlog of the Data NextGen Platform.
Work with the data infrastructure team to implement architecture decisions, investigate infrastructure issues, and drive them to resolution.
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and design appropriate solutions.
Drive automation initiatives to streamline data platform deployment, configuration, and maintenance tasks.
Help grow our data team with exceptional engineers.
Requirements
Strong programming skills in Python/Scala (Java is a plus).
Solid engineering foundations (good coding practices, strong architectural design skills).
Experience building cloud-scalable, real-time, high-performance Data Lake solutions.
Solid experience with Databricks.
Expert-level Spark knowledge and experience.
In-depth knowledge of Delta and its surrounding features (e.g. CDF, ACID and concurrency models for updates, federation with Iceberg).
Experience with Databricks features such as Autoloader and DLT.
Operational knowledge of Databricks, including job cluster optimization, monitoring, and control (e.g. auto-recovery of failed jobs).
Knowledge of and experience with DB-SQL.
3+ years of experience with large-scale data engineering.
2+ years of solid experience developing solutions on cloud services (Azure, AWS, GCP).
Experience with data stream processing technologies, including Kafka, Spark Streaming, etc.
Experience with ETL tools (e.g. Rivery, Fivetran, Stitch), pipeline tools (e.g. dbt, Airflow), and Reverse ETL.
Experience with data migrations and data warehousing.
Experience working with SQL in advanced scenarios that require heavy optimization.
Experience or knowledge in AI/ML/MLOps/LLM is a plus.
Certifications (Cloud / Databricks / other) are a big plus.
Excellent communication and people skills.
Team player, positive, likes to dream big and fulfill dreams all at the same time.
The position is defined for a two-year period with an option for extension.