- Create and maintain an optimal data pipeline framework and supporting procedures.
- Assemble complex data sets to meet specific requirements.
- Create data systems that ingest data from various data providers via both ETL and ELT processes.
- Implement data flows using distributed systems and cloud architecture.
- Write efficient, well-documented, and highly readable code.
- Schedule/automate data pipelines and monitor their performance.
- Write ad-hoc queries to perform analysis.
- Interact with teams to gather requirements and produce functional and technical specification documents.
- Experience with AWS cloud services (EC2, EMR, RDS, Redshift, Glue, Lambda) and with data platforms such as Snowflake, Teradata, or Cloudera.
- Experience with object-oriented and functional scripting languages such as Python, plus SQL, data formats such as Parquet, and open-source stacks.
- Advanced working knowledge of SQL, including query authoring and experience with both on-premises and cloud relational databases.
- Advanced experience building processes to support data transformation, data structures, metadata, data modelling, dependency, and workload management.
- Experience building user data repositories and BI reporting services (e.g. Qlik Sense).
- An AWS data certification is a plus.