Data Engineer - Singapore - Kerry Consulting Pte Ltd
Description
Roles & Responsibilities
Data Engineer for a Leading MNC (Day Rate Contract)
Job posting done by Sheralynn Tjioe, Head of Interim and Contracting Solutions (Technology) Recruitment at Kerry Consulting
Email:
My Client is a leading, stable firm in Singapore.
Responsibilities:
- Develop and maintain data pipelines responsible for extracting, transforming, and loading (ETL) data from various sources into a central storage system, such as a data warehouse or data lake.
- Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers.
- Create data transformation routines to clean, normalize, and aggregate data, ensuring it is ready for analysis, reporting, or machine learning tasks.
- Contribute to the development of common frameworks and best practices for code development, deployment, and automation/orchestration of data pipelines.
- Implement data governance in accordance with company standards.
- Collaborate with Data Analytics and Product leaders to establish best practices and standards for developing and deploying analytic pipelines.
- Work with Infrastructure leaders to advance the data and analytics platform's architecture, exploring new tools and techniques that leverage the cloud environment (e.g., Azure, Databricks, others).
- Monitor data pipelines and systems, promptly detecting and resolving issues. Develop monitoring tools, alerts, and automated error-handling mechanisms to ensure data integrity and system reliability.
Requirements:
Must Haves
- Minimum 3 years of proven experience in a Data Engineering role, with a strong record of delivering scalable data pipelines.
- Extensive experience designing data solutions, including data modeling.
- Hands-on experience developing data processing jobs (PySpark/SQL), demonstrating a solid understanding of software engineering principles.
- Experience orchestrating data pipelines using technologies such as ADF or Airflow.
- Experience working with both real-time and batch data.
- Experience building data pipelines on Azure, with AWS data pipeline experience considered beneficial.
- Proficiency in SQL (any flavor), with experience using window functions and other advanced features.
- Understanding of DevOps tools, Git workflow, and building CI/CD pipelines.
Nice to Have
- Domain knowledge of commodities, encompassing Sales, Trading, Risk, Supply Chain, Customer Interaction, etc.
- Familiarity with Scrum methodology and experience working in a Scrum team.
- Experience with streaming data processing technologies such as Apache Kafka, Apache Flink, or AWS Kinesis.
To Apply
For a confidential chat regarding your next Technology role, please submit your resume (in MS Word format) to Sheralynn Tjioe at , quoting the job title. We regret that only shortlisted candidates will be contacted.
Registration No.: R1878306
License No.: 16S8060