Last modified: 29 Aug 2025 13:16
The aim of this course is to provide students with the specialist knowledge, understanding and skills required to develop modern data engineering applications. The course builds on core computer science subjects such as software engineering, distributed systems, and enterprise computing, along with AI, to engineer efficient data pipelines that process real-time and streaming data at scale.
| Study Type | Undergraduate | Level | 4 |
|---|---|---|---|
| Term | First Term | Credit Points | 15 credits (7.5 ECTS credits) |
| Campus | Aberdeen | Sustained Study | No |
| Co-ordinators | | | |
Data Engineering is the design of automated workflows that reduce the human effort involved in processing big data, whether as an end user, data analyst, or data scientist.
This includes the consideration of cloud-based and edge-based technologies, tools and techniques to solve complex computational problems found within real-world data science applications.
As well as core data engineering concepts, principles and theories, the course covers key aspects of associated disciplines such as visualisation, data science and computational science, with the intention of building usable pipelines for data scientists.
Students will explore a range of topics in data engineering so that they can build practical, real-world applications. The topics covered include:
Information on contact teaching time is available from the course guide.
| Assessment Type | Summative | Weighting | 50% |
|---|---|---|---|
| Assessment Weeks | | Feedback Weeks | |
| Feedback | 1,200-word individual report worth 50% of the overall grade. | | |
| Knowledge Level | Thinking Skill | Outcome |
|---|---|---|
| Conceptual | Analyse | Demonstrate the use of techniques for cleaning, anomaly detection and pre-processing of big data. |
| Procedural | Analyse | Analyse and visualise organised data for patterns and trends based on analytics, metrics, segments, aggregates, features and training data. |
| Procedural | Apply | Manage the collection of raw data from instrumentation, logging, sensors, external data, and user-generated content. |
| Procedural | Apply | Build computer systems to handle big data that provide reliable data flow, infrastructure, pipelines, ETL (extract, transform, and load), and structured and unstructured data storage. |
| Reflection | Create | Build and evaluate complex data pipelines using A/B testing and experimentation approaches. |
| Assessment Type | Summative | Weighting | 50% |
|---|---|---|---|
| Assessment Weeks | | Feedback Weeks | |
| Feedback | 3,000-word group report worth 50% of the overall grade. Peer assessment will form part of students' individual marks. | | |
| Knowledge Level | Thinking Skill | Outcome |
|---|---|---|
| Conceptual | Analyse | Demonstrate the use of techniques for cleaning, anomaly detection and pre-processing of big data. |
| Procedural | Analyse | Analyse and visualise organised data for patterns and trends based on analytics, metrics, segments, aggregates, features and training data. |
| Procedural | Apply | Manage the collection of raw data from instrumentation, logging, sensors, external data, and user-generated content. |
| Procedural | Apply | Build computer systems to handle big data that provide reliable data flow, infrastructure, pipelines, ETL (extract, transform, and load), and structured and unstructured data storage. |
| Reflection | Create | Build and evaluate complex data pipelines using A/B testing and experimentation approaches. |
There are no formative assessments for this course.
| Assessment Type | Summative | Weighting | 100% |
|---|---|---|---|
| Assessment Weeks | | Feedback Weeks | |
| Feedback | A resit individual task will be provided in place of groupwork. | | |
| Knowledge Level | Thinking Skill | Outcome |
|---|---|---|
| Procedural | Apply | Manage the collection of raw data from instrumentation, logging, sensors, external data, and user-generated content. |
| Procedural | Apply | Build computer systems to handle big data that provide reliable data flow, infrastructure, pipelines, ETL (extract, transform, and load), and structured and unstructured data storage. |
| Procedural | Analyse | Analyse and visualise organised data for patterns and trends based on analytics, metrics, segments, aggregates, features and training data. |
| Conceptual | Analyse | Demonstrate the use of techniques for cleaning, anomaly detection and pre-processing of big data. |
| Reflection | Create | Build and evaluate complex data pipelines using A/B testing and experimentation approaches. |