Job Description:
Role: Big Data Engineer
Location: 100% Remote

Duration: 6+ months contract

Interview: Phone and Skype

Assignment Description:
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
 
Responsibilities:
Implementing and developing Spark Streaming applications in Scala with Kafka, based on business requirements (a minimal sketch follows this list)
Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
Implementing ETL processes where importing data from existing data sources is required
Monitoring performance and advising on any necessary infrastructure changes
Defining data retention policies
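
For illustration, a minimal Scala sketch of the kind of Spark Structured Streaming job with Kafka referenced in the first responsibility above; the broker address, topic name, and checkpoint path are placeholder assumptions, not details from this posting.

    import org.apache.spark.sql.SparkSession

    object KafkaStreamSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-stream-sketch")
          .getOrCreate()

        // Read a stream of records from a Kafka topic; broker address and
        // topic name ("events") are placeholders.
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka keys and values arrive as bytes; cast to strings for downstream use.
        val events = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

        // Write to the console for demonstration; a real pipeline would write
        // to HDFS, HBase, Cassandra, etc.
        val query = events.writeStream
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/events")
          .outputMode("append")
          .start()

        query.awaitTermination()
      }
    }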
 
Skills and Qualifications:
Experience building stream-processing systems using solutions such as Storm or Spark Streaming
Experience with Spark & Scala
Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
Experience with integration of data from multiple data sources
Experience with various messaging systems, such as Kafka
Proficient understanding of distributed computing principles
Proficiency with Hadoop v2, MapReduce, HDFS
Management of a Hadoop cluster and all of its included services
Ability to solve any ongoing issues with operating the cluster
Nice to have: hands-on experience with Apache Airflow to programmatically author, schedule, and monitor workflows
Good understanding of Lambda Architecture, along with its advantages and drawbacks