About the Role: As the Data Engineer, you will be directly responsible for the availability, quality, and reliability of data to meet business needs across the organization. The primary focus of this role is maintaining our data warehouse in Microsoft Azure. As an Azure Data Engineer, you will design, build, and maintain our data infrastructure, ensuring that our internal and external systems are integrated seamlessly and that data flows smoothly. You will work closely with internal and external partners to align on data infrastructure and integrations, and you will partner with analytics and data science teams to understand business needs and ensure our data warehouse platform can support a wide range of analytics use cases.
Key Responsibilities:
- Work closely with business and D&T partners to deliver complex data integrations and projects that meet the business's data needs.
- Oversee the creation of data pipelines and data marts in real-time, near-real-time, and batch processing modes.
- Ensure accurate and efficient integration of a large variety of data (structured, semi-structured, and unstructured) from various sources.
- Own and manage all data warehouse deliverables, such as data models, ETL/ELT processes, and data quality assessments.
- Ensure consistent adherence to data warehouse architecture standards through governance, best practices, and automation.
- Manage the day-to-day operations of our Azure data, Power BI, and UC4 platforms.
- Monitor, audit, and measure data warehouse performance and recommend improvements.
- Manage the data lake environment, including blob storage.
- Guide the development and implementation of procedures for maintaining and monitoring the data warehouse and related systems.
- Ensure the availability, performance, and scalability of the data warehouse platform, including data pipelines, security controls, administration tools, and the warehouse database.
- Design, build, and maintain data pipelines using tools such as Azure Data Factory, Databricks, Azure Synapse, SQL, and Python.
- Partner with business and D&T teams to ensure adherence to data governance policies and standards and to maintain data integrity and compliance.
Basic Qualifications/Requirements:
- MS in computer science, engineering, or equivalent software engineering experience.
- 8+ years of experience in data warehousing, data modelling, and data transformation.
- Demonstrated extensive experience with Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
- Strong grasp of data ingestion, transformation, and delivery (ETL/ELT) processes.
- Expert knowledge of SQL Server environments for designing and developing databases.
- High proficiency in PySpark and SQL.
- Experience working with Power BI dataflows and the UC4 job scheduling platform.
- Extensive experience working with complex systems to retrieve, analyze, and synthesize data.
- Proven ability to communicate complex technical processes and data concepts to business stakeholders in an understandable manner.
- Prior experience implementing end-to-end data pipelines for large datasets using cloud and on-premises stacks.
Preferred Qualifications:
- Experience leading technology implementation projects.
- Experience working with Azure DevOps/Terraform and administering infrastructure as code.
- Familiarity with machine learning frameworks (e.g., Azure ML) and libraries.
- Proficiency with Power BI (data ingestion, dataflows, data modelling) and the UC4 job scheduling software.
- Experience designing complex and interdependent data models for analytics use cases.
Heineken USA is an equal opportunity employer. We believe the diversity of our people makes us as strong and unique as our brands. We do not discriminate on the basis of race, color, religion, age, or any other basis protected by law.
This position is not available for visa sponsorship.