Job Description:
Job Title: Big Data Engineer (Spark/Scala)
Location: Plymouth Meeting, PA
Duration: Long term contract

Required Experience (in order of importance):
Hadoop, Hive, Impala, HBase, and related technologies
Spark, Scala
MPP, shared nothing database systems, NoSQL systems
Object-oriented and functional programming experience
Modern DevOps experience
Linux
Data Warehousing design and concepts
RDBMS Experience

Minimum Education, Experience, & Specialized Knowledge Required:

Computer Science Degree
3+ years of strong native SQL skills
3+ years of strong experience with database and data warehousing/data lake concepts and techniques
Understanding of relational and dimensional modeling, star/snowflake schema design, BI, data warehouse operating environments and related technologies, ETL, MDM, and data governance practices
2+ years’ experience with Hadoop, Hive, Impala, HBase, and related technologies
1+ years’ experience with Spark, Scala, Python, Java, and/or R
1+ years’ experience with MPP, shared nothing database systems, and NoSQL systems