Description:
As a Data Engineer, you’ll help our clients deploy and maintain data pipelines in a production-safe manner, using the latest technologies within a DataOps culture. You’ll work in a fast-moving, agile environment, within multi-disciplinary teams, delivering modern data platforms into large organisations.
Responsibilities:
- Develop pipelines that bring data to life for decision making, whether delivering insight directly or powering more sophisticated data science applications
- Be confident being hands-on with data, managing the practicalities of quality, flow, and organisation of information
Qualifications:
- Bring experience coding in Python and SQL
- Have solid experience of Snowflake or Databricks
- Have exposure to at least one cloud platform (AWS, Azure or GCP) and experience with its associated technologies (for example, with AWS: DynamoDB, Redshift, Kinesis, Glue, Lambda, QuickSight, etc.)
- Have experience with DataOps concepts