Job Description :

1. Hands-on experience installing, configuring, and using Microsoft Azure Databricks and Hadoop ecosystem components such as DBFS, Parquet, Delta tables, HDFS, MapReduce programming, Kafka, Spark, and Event Hubs.

2. In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib.

3. Hands-on experience with scripting languages such as Scala and Python.

4. Hands-on experience in the analysis, design, coding, and testing phases of the SDLC, following best practices.

5. Expertise in using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs.

6. Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala.

7. Experience migrating code from traditional data warehouse environments to Apache Spark and Scala using Spark SQL and RDDs.

8. Experience transferring data from RDBMS, Blob Storage, or ADLS to Databricks using Azure Data Factory (ADF).

9. Experience with Azure SQL Database (PaaS) or Azure SQL Data Warehouse.

10. Experience in orchestrating