Cloudera Data Engineer Jobs | NETS International Jobs

NETS International

Job Description

Job Summary: We are seeking an experienced Data Engineer with 8-10 years of hands-on experience in data engineering, particularly on big data projects. The ideal candidate has a strong background in big data technologies such as Hadoop, HBase, Kafka, Hive, and Spark; is proficient in programming languages such as Python, Scala, or Java; and has extensive experience building and managing data pipelines, migrations, and analytics pipelines in cloud environments such as AWS, Azure, or Google Cloud. Exposure to artificial intelligence, machine learning, and natural language processing, together with strong leadership qualities, is also expected.

Responsibilities:

  • Design, develop, and maintain big data solutions using technologies such as Hadoop, HBase, Kafka, Hive, and Spark, together with languages and frameworks such as Scala, Python, R, and TensorFlow.
  • Develop and manage data pipelines, migrations, and analytics pipelines in on-premises and cloud environments (Cloudera, AWS, Azure, Google Cloud).
  • Work with open-source kernels and apply an understanding of distributed compute, distributed storage, serverless, and highly scalable architectures.
  • Collaborate with cross-functional teams to design and implement scalable and efficient data solutions.
  • Stay updated with the latest trends and advancements in big data technologies and integrate them into existing systems where applicable.
  • Conduct data modeling and analysis to optimize data storage and retrieval processes.
  • Provide leadership and mentorship to junior team members.
  • Communicate effectively with stakeholders to understand requirements and present solutions.

Requirements:

  • Bachelor’s degree in Computer Science, Information Systems, Business, or a related field.
  • 8-10 years of experience in data engineering with a focus on big data projects.
  • Proficiency in programming languages like Python, Scala, or Java.
  • Strong hands-on experience with big data technologies such as Hadoop, HBase, Kafka, Hive, Spark, etc.
  • Experience working with open-source kernels and a solid understanding of distributed compute, distributed storage, serverless, and highly scalable architectures.
  • Experience managing industry-standard programs for building data pipelines, migrations, and analytics pipelines in on-premises and cloud environments.
  • Exposure to cloud platforms such as AWS, Azure, or Google Cloud.
  • Knowledge and exposure to artificial intelligence, natural language processing, machine learning, statistical analysis, predictive modeling, and time series analysis.
  • Familiarity with CI/CD practices.
  • Excellent leadership, communication, and presentation skills.
  • Relevant certifications in big data technologies or cloud platforms are a plus.

To apply for this job, please visit reapk.xyz.