
Hadoop platform engineer
As part of a DevOps product team, you will take end-to-end care of the Hadoop platform within the Data
Lake ecosystem; this includes operating, scaling, support, and engineering challenges.

• Develop: further engineer and automate the platform across technologies and infrastructures,
with a strong focus on network, servers, and monitoring.
• Scale & harden: help scale the platform to meet rapidly growing demand and load.
• Operate: oversee daily operations, maintenance, monitoring, and capacity planning for a 24x7
business-critical platform.
• Support: assist users, troubleshoot and consult on use cases, resolve incidents, and coordinate changes.

You should have:
• A master's degree in computer science or a related field.
• Hands-on experience running 24x7 critical, high-load, large-scale production platforms.
• Deep expertise in Hadoop, preferably the MapR (HPE) distribution.
• In-depth knowledge of Linux, preferably Red Hat Enterprise Linux.
• Solid experience with network and infrastructure administration.
• Practical knowledge of infrastructure automation, preferably using Ansible, Jenkins, and Git.
• Hands-on experience with monitoring frameworks, preferably Prometheus and Grafana.
• Some working experience in Docker / Kubernetes.
• Fair knowledge of Elasticsearch / Kibana.
• Some knowledge of Microsoft Azure / GCP (nice to have).
• Ability to use English in daily communication.
• Readiness to learn quickly in a very agile, fast-paced environment. #1164584


Job Type
Supply Chain & Logistics

Talk to a consultant

Talk to Ondrej Litavsky, the specialist consultant managing this position, located in Prague
Hays Czech Republic, Olivova 4/2096

Telephone: +420 773 746 532