Line of Service
Advisory
Industry/Sector
Not Applicable
Specialism
Data, Analytics & AI
Management Level
Senior Associate
Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to run their business more effectively and understand which business questions can be answered and how to unlock the answers.
Broad Role / Responsibilities

We are seeking a highly skilled and motivated Data Engineer Developer with 6 to 9 years of experience to join our dynamic team. The ideal candidate must have strong hands-on expertise in technologies such as Spark, Scala, Hadoop, and SQL, and demonstrated exposure to Azure cloud services. The Data Engineer Developer will play a crucial role in designing, implementing, and maintaining robust data pipelines, ensuring the efficient flow and processing of large datasets.

· Data Pipeline Development: Design, develop, and maintain scalable and efficient data pipelines using Spark and Scala. Implement ETL processes for ingesting, transforming, and loading data from various sources.
· Big Data Technologies: Work with Hadoop ecosystem components such as HDFS, Hive, and HBase for efficient storage and retrieval of large-scale datasets. Optimize and tune Spark jobs to ensure optimal performance and resource utilization.
· SQL Expertise: Use strong SQL skills to query, analyse, and manipulate data stored in relational databases and data warehouses.
· Security: Implement security and data protection measures at all levels, including the database and API services. Apply data masking and row-level and column-level security. Keep abreast of the latest security issues and incorporate necessary patches and updates.
· Testing and Debugging: Write and maintain test code to validate functionality. Debug applications and troubleshoot issues as they arise.
· Collaboration and Communication: Collaborate with cross-functional teams, including database engineers, data integration engineers, reporting teams, and product development. Communicate complex data findings in a clear and actionable manner to non-technical stakeholders.
· Continual Learning: Keep up to date with emerging tools, techniques, and technologies in the data space. Engage in self-improvement and continuous learning opportunities to maintain expertise in the data science domain.
· Maintain an end-to-end understanding of the project and its infrastructure, which involves multiple technologies (Big Data Analytics).
· Proactively identify problem areas and concerns related to data in the project; explore ways to tackle the issues and come up with optimal solutions.
· Create FRS/SRS/design documents and other technical documents.
· Prepare "lessons learned" documentation for projects/engagements. Develop best practices and tools for project execution and management.

Nice to Have:
· Exposure to Azure Cloud.
· Experience working in the travel and logistics domain is preferred.
· Familiarity with data streaming technologies (e.g., Apache Kafka).
· Exposure to containerization and orchestration tools (e.g., Docker, Kubernetes).
· Knowledge of machine learning concepts and frameworks.
Broad Experience & Expertise Requirements

6 to 9 years of hands-on experience in handling large data volumes and in Data Engineering using Big Data, Hadoop (HDFS, Hive, HBase), Scala, Spark (Spark Core, Spark SQL, Spark Streaming), Python, PySpark, SQL, ETL, Databricks, data modelling, Azure Cloud, data pipelines, CI/CD, Docker, containers, Git, etc. Knowledge of and experience in handling structured, semi-structured, and unstructured data sets.

Specific Past Work Experience Requirements
· 6+ years of relevant experience in the above technologies.
· 3 to 5 years of consulting experience in the technology domain, handling data projects.
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required:
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills
Apache Spark
Optional Skills
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date