Job Description:
Big Data Engineer
Travel: 100%
Full-time / Permanent Position
Location: NATIONWIDE (Atlanta, Dallas, Denver, or Chicago)

Responsibilities
Perform data engineering, data modeling, and implementation of Big Data platform and analytic applications
Analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings; bring these insights and best practices to the organization
Develop highly scalable and extensible Big Data platforms which enable collection, storage, modeling, and analysis of massive data sets including those from IoT and streaming data
Construct big data pipelines, both real-time and batch
Implement data access and processing frameworks
Support data users, data scientists, and analytic applications

Technical Experience
Cloud platform technologies such as Microsoft Azure, Amazon Web Services, and Google Cloud
Hadoop distributions such as Cloudera, Hortonworks
Big Data analytic frameworks and query tools such as Spark, Storm, Hive, HBase, Impala, Hue
Streaming data tools and techniques such as Kafka, AWS Kinesis, Microsoft Azure Stream Analytics, StreamSets, StreamAnalytix
ETL (Extract-Transform-Load) tools such as Pentaho, Talend, Informatica; also experience with ELT
Infrastructure setup using container and orchestration technologies such as Kubernetes and Docker
Continuous Integration and Continuous Delivery (CI/CD)
Data Warehouse and Data Mart design and implementation
NoSQL environments such as MongoDB, Cassandra
Metadata management, data lineage, and data governance, especially as related to Big Data
Structured, unstructured, and semi-structured data techniques and processes

Minimum Requirements
10+ years of engineering and/or software development experience, with demonstrable experience in a large organization
5+ years of experience building data engineering (ETL and Big Data) pipelines, both real-time and batch
5+ years of consulting experience desired
3+ years of hands-on experience developing with Big Data components and frameworks, including Hadoop/HDFS, Spark, Storm, HBase, Pig, Hive, Scala, Kafka, Python scripts, and Unix shell scripts
3+ years of hands-on experience configuring and implementing solutions on cloud platforms such as Azure, AWS, or Google Cloud.
Experience with Continuous Integration / Continuous Delivery (CI/CD)
Experience implementing large and highly complex projects
History of working successfully with cross-functional engineering teams
5+ years of experience in one of the following business domains: Manufacturing, Cable/Telecom, Finance, or Supply Chain
Demonstrated ability to communicate highly technical concepts in business terms
             
