Job Description:

Spark, Scala/Python, Hive, Hadoop, Big Data developer with exposure to Cloud (Azure preferred).

- 4-5 years of experience building and implementing data ingestion and curation processes using Big Data tools such as Spark (Scala/Python), Hive, HDFS, Sqoop, HBase, Kerberos, Sentry, and Impala, handling huge volumes of data from various platforms for analytics needs, and writing high-performance, reliable, and maintainable ETL code.
- Strong SQL knowledge and data analysis skills for data anomaly detection and data quality.
- Experience writing shell scripts, complex SQL queries, and Hadoop commands.
- Hands-on experience creating databases, schemas, and Hive tables (external and managed) with various file formats (ORC, Parquet, Avro, Text, etc.), complex transformations, partitioning, bucketing, and performance tuning.
- Exposure to Cloud is good to have; Azure is preferred.
- Experience with complex transformations, data frames, semi-structured data, and utilities using Spark and Spark SQL; extensive experience with Spark and Scala/Python; performance tuning of production jobs and advising on any necessary infrastructure changes.
- Ability to write abstracted, reusable code; versioning experience using Bitbucket and CI/CD pipelines; ability to learn new technologies on the fly.
- Strong communication skills, both written and verbal.
