
Data Engineer w/ ML & GenAI
- Remote
- Prague, Czechia
- €40 - €45 per hour
- Jimmy Technologies
If you are passionate about Data Engineering, cloud-native architectures, and AI applications, this role with our client offers an exciting opportunity to work on impactful projects!
Job description
We are looking for Data Engineers with ML skills to join the team of our Fortune 50 client, which is building a new platform for a famous US sports team. The main objective of the project is to build a platform that enhances fan engagement, optimizes monetization, and delivers seamless stadium and remote experiences.
This is a remote-first position with a required overlap of US working hours (2-6 PM CET).
Responsibilities
Data Modeling, Migration & ETL Development
Implement ETL processes and real-time data pipelines for efficient data handling and integration.
Develop and maintain scalable, cloud-based data architecture using AWS services, including Glue, Lambda, S3, and Redshift.
Design data models.
Manage ML pipelines and maintain a model registry for version control and efficient model deployment.
Transform raw structured and unstructured data into clean, well-modeled graph inputs (nodes, edges, metadata).
Implement CI/CD practices using CodePipeline and CodeBuild to streamline development and deployment processes.
Employ Docker for containerization and efficient resource management across development environments.
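As an illustration of the kind of work involved in turning raw records into graph inputs (nodes, edges, metadata), here is a minimal sketch; the record shape and field names are hypothetical, not taken from the project:

```python
# Minimal sketch: turning raw fan-event records into graph inputs.
# The record shape and field names are illustrative assumptions.

def to_graph_inputs(events):
    """Extract deduplicated nodes and typed edges from raw event records."""
    nodes, edges = {}, []
    for ev in events:
        fan_id = f"fan:{ev['fan_id']}"
        item_id = f"item:{ev['item_id']}"
        # Deduplicate nodes by id; keep a type label as metadata.
        nodes.setdefault(fan_id, {"id": fan_id, "type": "fan"})
        nodes.setdefault(item_id, {"id": item_id, "type": "item"})
        # One edge per interaction, with the event kind as metadata.
        edges.append({"src": fan_id, "dst": item_id, "kind": ev["kind"]})
    return list(nodes.values()), edges

events = [
    {"fan_id": 1, "item_id": "ticket-42", "kind": "purchase"},
    {"fan_id": 1, "item_id": "jersey-7", "kind": "view"},
]
nodes, edges = to_graph_inputs(events)
```

In practice the node and edge schemas would follow the target graph store, but the shape of the task (deduplicate entities, attach metadata, emit one edge per interaction) is the same.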
Collaboration & Agile Development
Gather requirements, set targets, define interface specifications, and conduct design sessions.
Work closely with data consumers to ensure proper integration.
Adapt and learn in a fast-paced project environment.
Work Conditions
Start Date: ASAP
Location: Remote
Working hours: US time zone overlap required (2-6 PM CET)
Contract: long-term, contract-based role (6+ months)
Job requirements
Strong SQL skills for ETL, data modeling, and performance tuning.
Experience with ML models.
Proficiency in Python, especially for handling and flattening complex JSON structures.
Production experience with deploying ML models and expertise in ML frameworks and tools.
Extensive experience with AWS cloud services.
Understanding of software engineering and testing practices within an Agile environment.
Experience with Data as Code: version control, small and regular commits, unit tests, CI/CD, and packaging; familiarity with containerization tools such as Docker (must have) and Kubernetes (a plus).
Excellent teamwork and communication skills.
Proficiency in English, with strong written and verbal communication skills.
Experience building efficient, high-performance data pipelines for both real-time and batch processing.
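To illustrate the "handling and flattening complex JSON structures" requirement above, a minimal pure-Python sketch (the record contents are hypothetical):

```python
# Minimal sketch of flattening nested JSON into dot-separated keys;
# the record and its field names are illustrative assumptions.

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into a single flat dict."""
    flat = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        # List positions become numeric path segments.
        for i, value in enumerate(obj):
            flat.update(flatten(value, f"{prefix}{i}."))
    else:
        flat[prefix[:-1]] = obj  # drop the trailing dot
    return flat

record = {"fan": {"id": 7, "tags": ["vip", "season"]}}
flatten(record)
# {'fan.id': 7, 'fan.tags.0': 'vip', 'fan.tags.1': 'season'}
```

Libraries such as pandas (`json_normalize`) cover the common cases, but interview discussions often expect the hand-rolled version above.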
