Job Description:

Role & Responsibilities 

  1. 12+ years of IT experience in Data Engineering, Data Quality, Data Migrations, Data Architecture, Data Lake formation, and Data Analytics.
  2. 5+ years of solid hands-on experience with AWS services such as S3, EMR, VPC, EC2, IAM, EBS, RDS, Glue, Lambda, Lake Formation, etc.
  3. Must have experience producing architecture documents for small to large solution implementations.
  4. In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, Spark MLlib, etc. Experience handling very high-volume streaming data in formats such as JSON, XML, Avro, Snappy, etc.
  5. Good exposure to Kafka, including future capacity planning, partition planning, and read/write design.
  6. Must have worked with Big Data and have good knowledge of MapReduce and Spark.
  7. Must have very good working exposure to different kinds of databases: RDBMS, NoSQL (columnar, document), distributed databases, cloud databases, in-memory databases, etc.
  8. Python exposure is an added advantage.


Vamshi Kumar Kanneboyena

IT Recruiter – USA

Desk: Ext:317



Client: Dish/Tech M
