Job Description:
This is a contract-to-hire opportunity.

Candidates must be local to the Dallas, Texas area and available for an
onsite interview.
Candidates must be US citizens (USC), green card holders (GC), or otherwise
able to work for any employer without sponsorship.
Location: Irving, Texas.

Rate: DOE (depending on experience)

*TOP SKILLS*

- 3+ years of hands-on experience with Big Data (Hadoop/Hortonworks). The
team uses Hortonworks, a distribution of open-source Hadoop, so actual
hands-on experience is required, not just POC (proof of concept) work.
- Python
- Sqoop
- Data Lake
- Oracle 10g
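
For context on the kind of hands-on work these skills imply, here is a
minimal, purely illustrative PySpark sketch (the database, table, path, and
job names are hypothetical, not from this posting) that reads a Hive table
and lands it in a data lake zone as Parquet:

```python
# Illustrative sketch only: database, table, and path names are hypothetical.
from pyspark.sql import SparkSession

# Hive support lets Spark read tables registered in the Hive metastore,
# as is typical on a Hortonworks (HDP) cluster.
spark = (
    SparkSession.builder
    .appName("orders-to-datalake")  # hypothetical job name
    .enableHiveSupport()
    .getOrCreate()
)

# Read a Hive table (hypothetical database/table).
orders = spark.table("sales.orders")

# Write to a data lake zone as Parquet, partitioned by date
# for efficient downstream consumption.
(
    orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///datalake/curated/orders")  # hypothetical path
)

spark.stop()
```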

*RESPONSIBILITIES*

- As a Big Data Engineer, you will provide technical expertise and
guidance on Hadoop technologies as they relate to the development of
analytics.
- Responsible for the planning and execution of big data analytics,
predictive analytics and machine learning initiatives.
- Assist in leading the plan, build, and run states within the
Enterprise Analytics Team, and act in a lead role driving user story
analysis.
- By bringing optimization and stability to the platforms, you will play
a key role in the architecture design and data modeling of the platform and
analytic applications.
- Apply your knowledge of Hadoop distributed systems and open-source
frameworks to solve and support real business issues.
- Perform detailed analysis of business problems and technical
environments, and use that analysis to design solutions and maintain the
data architecture.
- The focus will be on creating strategy, researching emerging
technologies, and applying technology to enable business solutions within
the organization.
- Design and develop software applications, tests, and build-automation
tools.
- Design efficient and robust Hadoop solutions that improve performance
and the end-user experience.
- Work on Hadoop ecosystem implementation and administration, installing
software patches and performing system upgrades and configuration.
- Conduct performance tuning of Hadoop clusters while monitoring and
managing Hadoop cluster job performance, capacity forecasting, and
security.
- Define compute (storage & CPU) estimation formulas for ELT and data
consumption workloads from reporting tools and ad-hoc users (an
illustrative sizing sketch follows this list).
- Analyze Big Data analytics technologies and applications for both
business intelligence analysis and new service offerings, adopting and
implementing these insights and standard methodologies.
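
As a rough illustration of the sizing responsibility above, a
back-of-the-envelope storage estimate might look like the sketch below;
aside from the replication factor (the HDFS default of 3), every constant
is an assumed value for demonstration, not a figure from this posting.

```python
# Illustrative back-of-the-envelope HDFS sizing. The compression ratio and
# headroom values are assumptions for demonstration only.

def estimate_hdfs_storage_tb(
    raw_data_tb: float,
    replication_factor: int = 3,     # HDFS default replication
    compression_ratio: float = 0.5,  # assumed columnar/Snappy savings
    headroom: float = 0.25,          # assumed buffer for growth and temp space
) -> float:
    """Estimate provisioned storage (TB) for a given raw data volume."""
    compressed = raw_data_tb * compression_ratio
    replicated = compressed * replication_factor
    return replicated * (1 + headroom)

# Example: 100 TB of raw data from ELT plus reporting/ad-hoc workloads.
print(f"{estimate_hdfs_storage_tb(100):.1f} TB")  # -> 187.5 TB
```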

*REQUIREMENTS*

- 8+ years of experience supporting various enterprise platforms,
including performance tuning and application performance optimization.
- 5+ years of experience working with Linux servers and platform
optimization.
- 3+ years of experience in architecture and implementation of large and
highly complex projects using Hortonworks (Hadoop Distributed File System)
with Isilon commodity hardware.
- 4+ years of experience with Big Data platforms and tools, including
Hadoop implementation experience in the following:
  - Hands-on experience in platform operations.
  - Performance and delivery of the Hadoop ecosystem (Hadoop, Hive, Spark,
    HBase, Ambari, Kafka, PySpark & R).
  - Application performance tuning (see the configuration sketch after
    this list).
- Experience as a DBA on any RDBMS or Hive database.
- Experience managing Big Data platform operations or any other large
platform.
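
For a flavor of the application performance tuning mentioned above, tuning
often starts at the Spark session configuration. The settings below are
real Spark configuration keys, but the specific values are illustrative
assumptions, not recommendations from this posting.

```python
# Illustrative Spark tuning knobs; values are assumptions for demonstration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuned-etl")  # hypothetical job name
    # Size shuffle parallelism to the cluster instead of the 200 default.
    .config("spark.sql.shuffle.partitions", "400")
    # Fit executor heap and cores to the worker nodes' resources.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    # Broadcast small dimension tables to avoid shuffling large joins.
    .config("spark.sql.autoBroadcastJoinThreshold", str(64 * 1024 * 1024))
    .getOrCreate()
)
```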