Job Description:
Job Title: Big Data Engineer
Job Location: Greer, SC 29651
Contract Duration: 11 Months

Note: The ideal candidate will have experience as a data engineer working with Hadoop in a production, practical environment, with hands-on experience rather than only a theoretical understanding. Data prep, egress, predictive analytics, and R programming in RStudio are very important in this role.
Description:
This position will provide complete application lifecycle development, deployment, and operations support for Big Data solutions and infrastructure.
In this role, you will partner with product owners, data scientists, solutions engineers, and business analysts to facilitate the development, automation, and seamless delivery of analytics solutions into Big Data clusters.
Major duties/accountabilities to achieve the position's key objectives:
Implement and enhance complex big data solutions with a focus on collecting, parsing, managing, analyzing, and visualizing large data sets that produce valuable business insights and discoveries.
Determine the infrastructure, services, and software required to build advanced analytics solutions, both on premises and in the cloud.
Develop prototypes and proof of concepts for specified business challenges.
Assist data scientists with exploration and analysis activities.
Understand advanced algorithms and apply problem solving experience to build high-performance, parallel, and distributed solutions.
Perform code and solution review activities, then recommend enhancements that improve efficiencies, performance, stability, and lower support costs.
Configure and conduct tuning exercises on Hadoop environments.
Quickly understand and apply new technologies and solution approaches.
Apply the latest DevOps and Agile methodologies to improve delivery time.
Document requirements and configurations, as well as clarify ambiguous specs.
Requirements:
Bachelor's degree in Computer Science, Mathematics, or Engineering
4-7 years of enterprise Big Data engineering experience within any Hadoop environment
5+ years of enterprise software engineering experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructures
Professional Hadoop training preferred
Hortonworks or Cloudera certifications preferred
Enterprise support experience with the following items:
Hadoop, Hive, HBase, Spark, Kafka, and Sqoop services and clients
Languages: Python, R, SQL, Java, Scala
Jupyter or Zeppelin, Hue, RStudio
Spark RDDs and DataFrames, machine learning algorithms
Enterprise Big Data security and management operations experience using tools and services such as Ambari, Knox, Ranger, HDFS, Oozie, and Kerberos
Elasticsearch and Kibana development and operations
Container management and development using Docker
Automation/configuration management tools like Puppet, Chef
Continuous integration tools similar to Jenkins
Visualization tools like Tableau or Qlik
Ingestion tools like NiFi, HDF, and Talend