Sr. Spark Developer
Posted: 3 months ago
· Must have 8+ years of industry experience in Big Data analytics, data modeling, profiling, and tuning using Hadoop ecosystem tools and the Spark framework.
· Strong experience and knowledge of real-time data analytics using streaming technologies such as Kafka, Spark Streaming, and Storm, and coordination services such as ZooKeeper.
· Hands-on experience developing Spark applications using RDDs, DataFrames, Datasets, Spark SQL, transformations, and actions, with IDEs such as Eclipse.
· Extensively worked on Spark with Scala on Hadoop clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
· Extensive experience with ecosystem tools such as Hadoop, Spark, Hive, MapReduce, AWS, and Apache NiFi.
· Extensive experience scheduling Hadoop and Spark jobs using Control-M.
· Experience with file formats such as JSON, multi-line JSON, ORC, Avro, Parquet, and CSV.
· Expertise in writing Unix shell scripts.
· Strong programming skills in Scala.
· Created Hive tables (internal, external, partitioned, and bucketed) to store structured data in HDFS and processed it using HiveQL.
· Extensive experience developing applications that perform data-processing tasks against Teradata, PostgreSQL, Oracle, and SQL Server databases.
· Involved in performance tuning and optimization of Spark and Hadoop applications.
Nice to have:
· Working knowledge of DevOps CI/CD processes and their tools.
· Working knowledge of data-migration tools such as Sqoop.
· Working knowledge of Amazon Elastic Compute Cloud (EC2) for computational tasks and Simple Storage Service (S3) for storage.
· Experience in supply chain within the telecom domain.