Nice-to-have skills (Top 2 only):
1. Knowledge of design strategies for developing a scalable, resilient, always-on data lake.
2. Knowledge of Agile (Scrum) development methodology is a plus.

Strong development automation skills; must be very comfortable with reading and writing Scala, Python, or Java code.

Detailed Job Description:
- 5 years of experience with the Hadoop ecosystem and Big Data technologies.
- Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Impala, Spark, Kafka, Kudu, Solr.
- Experience building stream processing systems using solutions such as Spark Streaming, Storm, or Flink.
- Experience with other open-source technologies such as Druid, Elasticsearch, and Logstash, and with CI/CD and cloud-based deployments, is a plus.
- Ability to adapt quickly to conventional big data frameworks and tools as the use cases require.
Minimum years of experience*: 5+
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
1. Build data pipelines and ETL using heterogeneous sources: you will build data ingestion from various source systems to Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc. (a brief illustrative sketch follows at the end of this section).
2. Transform data using data mapping and data processing capabilities such as MapReduce and Spark SQL.
3. Expand and grow data platform capabilities to solve new data problems and challenges.

Interview Process (Is face-to-face required?): No
Does this position require Visa independent candidates only? No
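For illustration only and not part of the requisition: a minimal Scala sketch of the ingestion-and-transformation responsibility above, assuming Spark Structured Streaming reading JSON events from a Kafka topic and landing Parquet on HDFS. The broker address, topic name, schema, and paths are hypothetical placeholders, not project specifics, and the spark-sql-kafka connector is assumed to be on the classpath.

// Hypothetical sketch: Kafka -> Spark Structured Streaming -> Spark SQL transform -> Parquet on HDFS.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object KafkaToHdfsIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-ingest")
      .getOrCreate()
    import spark.implicits._

    // Placeholder event schema; a real pipeline would derive this from the source system.
    val eventSchema = new StructType()
      .add("event_id", StringType)
      .add("event_time", TimestampType)
      .add("amount", DoubleType)

    // Read the raw stream from Kafka (broker and topic are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .load()

    // Parse the JSON payload and apply a simple Spark SQL transformation.
    val events = raw
      .select(from_json($"value".cast("string"), eventSchema).as("e"))
      .select("e.*")
      .withColumn("event_date", to_date($"event_time"))

    // Land the data on HDFS as Parquet, partitioned by date (paths are placeholders).
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/lake/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .partitionBy("event_date")
      .start()

    query.awaitTermination()
  }
}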