Job description


  • Entry level
  • No education requirements
  • Salary £50,000.00 - £80,000.00 gross per year
  • London


As a data engineer specialising in data technology, you'll join the data engineering team to design and implement data solutions for our customers.

- You have a background in data engineering or a related field
- You have a strong interest in and passion for all things data-related
- You are comfortable talking with stakeholders and communicating complex solutions clearly and concisely
- You are driven to develop high-quality solutions
- You are a motivated self-learner who can work with a degree of autonomy when needed


Responsibilities

- Collaborate with data scientists and other data engineers on data problems
- Work with senior data engineers and scientists to understand their data problems and develop solutions
- Work on data integration solutions
- Work with big data components and migration strategies
- Work with cloud-based infrastructure for hosting data solutions and applications
- Participate in the pre-sales engagement process

Essential Requirements

- Experience developing ETL and data integration solutions

- Strong SQL skills and experience working with a relational database (MySQL, Postgres, Oracle, etc.)
- Experience working with data warehouse solutions, extracting and processing data using off-the-shelf and open-source tools
- 3+ years' experience with a general-purpose programming language (Python a plus; JVM languages, JavaScript, or Ruby)
- 3+ years working with Linux or Windows operating systems and version control systems (Git)

Desirable Requirements

- Experience migrating on-premise data stores to cloud-based data stores (Hadoop cloud distributions a plus)

- Experience with various ETL strategies and architectures
- Hands-on experience with cloud environments (AWS preferred)
- Experience of data security, data governance and quality assessment
- Building APIs and apps using Python/JS or an alternative language
- Knowledge of Hadoop technologies (MapReduce, HDFS, Hive)
- Experience with non-relational database solutions (e.g. MongoDB, Google BigQuery)
- Experience with AWS Data Pipeline, Azure Data Factory, or Google Cloud Dataflow
- Working with containerisation technologies (Docker, Kubernetes, etc.)
- Knowledge of R and other statistical languages
