Job Description:
Title: Big Data Analytics Developer
Location: Irving, TX
Duration: 12+ months

Responsible for developing strategies for effective data analysis and reporting. Selects, configures, and implements analytical solutions. Develops and implements data analytics, data collection systems, and other strategies that optimize statistical efficiency and quality. Identifies, analyzes, and interprets trends or patterns in complex data sets. Monitors performance and quality control plans to identify opportunities for improvement. Works on problems of moderate and varied complexity where analysis of data may require adaptation of standardized practices. Works with management to prioritize business and information needs. Bachelor's degree in computer science, information systems, statistics, or a related field. Ability to manage multiple assignments. Superior written and oral communication skills. 6-10+ years of experience.
What you’ll be doing:
Primary responsibility is to design, develop, and implement a data service architecture that ingests real-time and batch data using IBM Streams
Develop IBM Streams applications to implement new use cases from business requirements
Integrate new data sources into the IBM Streams platform
Develop Streams applications to deploy models on the Streams platform
Conduct proofs of concept to expand current platform capabilities
Collaborate closely with cross-functional teams
The successful candidate will have an understanding of coding and strong architecture, deployment, and operational skills.
Candidates must possess 6+ years of overall software development experience.
Four or more years of relevant work experience in data ingestion.
Experience with and good knowledge of at least one of these languages: Java or C++
Experience in application development with IBM Streams, Flink, or Spark Streaming
Experience implementing multiple connectors (IBM MQ, JDBC, RabbitMQ, Kafka, Apache Pulsar, S3, etc.)
Experience with data warehousing and data lakes
Exposure to NoSQL-based, SQL-like technologies (e.g., Hive, Pig, Spark SQL/Shark, Impala, BigQuery)
Good knowledge of NoSQL technologies such as Redis, Cassandra, and Memcached
Must have experience handling large volumes of data (terabytes)
Experience using Scala with Spark, especially building ETL and complex query models
Working experience with Linux and cloud platforms
Experience with IBM Streams preferred
Good knowledge of the Hadoop platform, including Big Data tools