Job Description

Job Title: Hadoop Data Engineer with Strong Scala Experience
Job Location: North Carolina

Required Qualifications:

  • Two to five years of experience working in the Hadoop/Big Data field
  • Hands-on experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
  • Two to three years of hands-on programming experience in Java, Scala, Python, and shell scripting
  • Experience with the end-to-end design and build of near-real-time and batch data pipelines
  • Strong experience with SQL and data modeling
  • Experience working in an Agile development process and a good understanding of the phases of the Software Development Life Cycle
  • Experience using source code and version control systems such as SVN and Git
  • Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components
  • Self-starter who works with minimal supervision
  • Ability to work in a team of diverse skill sets
  • Ability to comprehend customer requests and provide the correct solution
  • Strong analytical mind to help solve complicated problems
  • Willingness to dive into and resolve potential issues

Preferred Qualifications:

  • Bachelor of Science or Master of Science in Computer Science/Engineering or equivalent experience
  • Good interpersonal skills and excellent written and spoken English communication
  • Good team player who shares knowledge with other team members and is interested in learning new technologies and products
  • Ability to think outside the box and provide innovative solutions

QBH#: 2013

Application Instructions

Please click the link below to apply for this position. A new window will open and direct you to our corporate careers page, where you can apply. We look forward to hearing from you!

Apply Online