Job Description:
Title: Big Data Engineer
Location: Phoenix, Arizona (Remote)
Interview: Phone and Skype
Duration: 6 months

Skills: Hadoop, Spark, Kafka, HBase, Solr (remote project)
We are seeking Senior Cloudera Big Data Consultants who can implement end-to-end big data BI solutions and manage big data projects for clients across industries, dealing with various types of challenging data. Candidates should be driven to help clients gain a competitive advantage by implementing big data solutions that deliver the business insights they need to optimize operational performance. Candidates must have advanced knowledge of Hadoop (Cloudera or Hortonworks), including supporting projects such as Spark, Kafka, HBase, and Solr. Programming experience with Python, R, or Scala is required, as well as implementation experience with DevOps practices. Knowledge of Data Science and Machine Learning is a plus. Candidates must have strong communication skills, as they will regularly work with clients and their staff as well as with specialists from related fields, including database developers and software engineers. Candidates must have outstanding analytical and problem-solving skills, a good grasp of both the technical and business side of business intelligence, and a commitment to keeping up to date with the latest big data tools and technologies.
A successful candidate will possess the following skills and capabilities:
Experience in big data development in the Hadoop stack with either the Cloudera (preferred) or Hortonworks platform.
Experience with base Hadoop components including HDFS, YARN, and Zookeeper
Experience designing and securing storage, including data lake design patterns
Experience with data orchestration using Cloudera Director/Apache NiFi and messaging with Kafka
Experience developing data ingestion scripts with Spark and Hive/HBase
Experience using Solr to implement enterprise search
Experience with big data DevOps, including deployment of configurations and code and monitoring of jobs
Experience tuning big data workloads including Spark and YARN
Experience identifying data quality issues
Provides project management and leadership as appropriate to drive timelines and deliverables
Trains others on skills and competencies required
Programming knowledge in Python, R, or Scala is required
Utilize ad-hoc techniques to perform on-the-fly analysis of data
Ability to gather requirements and conduct focus groups to determine the processes needed to assist and grow the business
