Job Description:
Data Scientist
Location: Irving, TX
Duration: 10 mos w/option to extend


Job Description:
Responsible for developing strategies for effective data analysis and reporting. Selects, configures, and implements analytical solutions. Develops and implements data analytics, data collection systems, and other strategies that optimize statistical efficiency and quality. Identifies, analyzes, and interprets trends or patterns in complex data sets.
Manages computer systems in a business environment and is responsible for resolving technical issues. Knowledgeable in programming, data structures, computer systems, and software engineering. Bachelor's or Master's degree in computer science, software engineering, or other related field. Ability to manage multiple assignments. Superior written and oral communication skills. 6-10+ years of experience.
Requirements:
Programming experience: at least 6 years of Oracle PL/SQL, 3 years of Hadoop, and 1 year of Java.
Data scientist experience building statistical models and AI/ML-driven intelligence around them.
Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume, MapReduce, and Apache Spark, as well as Unix, core Java programming, Scala, and shell scripting.
Hands-on experience with Oozie job scheduling, ZooKeeper, Solr, Elasticsearch, Storm, Logstash, or other similar technologies.
Must have experience with MQ/messaging technologies (e.g., Kafka, RabbitMQ); a minimal Spark-with-Kafka sketch follows this list.
Solid experience writing SQL and stored procedures and tuning query performance, preferably on Oracle 12c.
Experience working with CI/CD and DevOps.
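
By way of illustration, the sketch below shows the kind of Kafka-to-Spark ingestion this role touches on, written in Scala; the broker address, topic name, and output paths are placeholder assumptions rather than details from this posting.

// Minimal sketch: consuming a Kafka topic with Spark Structured Streaming.
// Broker, topic, and paths below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object KafkaIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest-sketch")
      .getOrCreate()

    // Read raw events from a Kafka topic as a streaming DataFrame.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092") // placeholder broker
      .option("subscribe", "billing-events")             // placeholder topic
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Land the stream as Parquet files; a real job would transform further
    // and load into Hive/HDFS structures downstream.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/data/landing/billing_events")              // placeholder path
      .option("checkpointLocation", "/data/checkpoints/billing_events")
      .start()

    query.awaitTermination()
  }
}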
Responsibilities:
Participate in Agile development on a large Hadoop-based data platform as a member of a distributed team.
Develop statistical models and build data-driven insights into possible revenue leakage.
Code programs to load data from diverse sources into Hive structures using Sqoop and other tools.
Translate complex functional and technical requirements into detailed design.
Analyze vast data stores.
Code business logic using Scala on Apache Spark (see the sketch after this list).
Create workflows using Oozie.
Code and test prototypes.
Code to existing frameworks where applicable.
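
As a rough illustration of the Scala-on-Spark responsibilities above, the sketch below reads a Sqoop-loaded Hive table, applies one simple business rule around revenue leakage, and writes the result back to Hive; the table and column names are placeholder assumptions, not actual schema from this posting.

// Minimal sketch: business logic in Scala on Apache Spark over Hive tables.
// billing.invoices, amount_billed, and amount_collected are placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RevenueLeakageSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("revenue-leakage-sketch")
      .enableHiveSupport() // needed to read and write Hive-managed tables
      .getOrCreate()

    // Read a Hive table previously loaded by Sqoop.
    val invoices = spark.table("billing.invoices")

    // Simple business rule: flag invoices where the collected amount is below the billed amount.
    val leakage = invoices
      .withColumn("leakage_amount", col("amount_billed") - col("amount_collected"))
      .filter(col("leakage_amount") > 0)

    // Persist the candidates to a Hive table for downstream reporting.
    leakage.write.mode("overwrite").saveAsTable("billing.revenue_leakage_candidates")

    spark.stop()
  }
}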
EDUCATION/CERTIFICATIONS:
Bachelor's/Master's in Computer Engineering or Information Technology
Hadoop and AI/ML certification is an added advantage.