NET Capstone Projects

Come and see our latest projects at the NET Capstone Fair

Projects of 2024

Cyber Gymnasium for Security Courses

The Cyber Gymnasium is a training and simulation platform designed to teach cyber operators, technicians, and analysts how to defend their network systems from adversary attacks and techniques. GDMS-C, in partnership with Field Effect, will provide the Field Effect Cyber Range platform, along with training and support, to Carleton University for the duration of this project. A breach-and-attack scenario is represented on the platform via a dataset that simulates the network, background user activity, and the breach and attack as it is applied against the network. For this project, students will create an experimental lab for a chosen Carleton course (NET 4010 or NET 4011) using Cyber Range. Students will work with GDMS-C and Field Effect to identify the course for which they will characterize and create a lab exercise. They will have the opportunity to learn from real cyber courses designed by Field Effect and, using that experience, will create their own lab exercise in consultation with Field Effect and GDMS-C.

Network Datasets on the RANGE Platform

RANGE (RApid Network Generation and modElling tool) is a high-fidelity network simulator developed by GDMS as part of the ARMOUR (Automated Computer Network Defence) program. RANGE is used to create realistic network simulations that are used, in part, to train cyber defence algorithms that utilize ML/AI. The simulated network is represented on RANGE via a dataset. Students will work with GDMS to identify a set of representative networks for which Carleton University will characterize and create simulations using the RANGE platform. These datasets can then be used to further enhance other areas of work within ARMOUR and other GDMS programs. As part of this project, GDMS will provide RANGE to Carleton University, with training and support, for the duration of the project. Carleton University will also identify areas of the workflow/process that could be optimized to reduce the overall time required to generate a dataset.

Analysis and Implementation of Simulation Tools for Fog Computing with Microservice Support

Fog/Edge computing is the prevalent technology for bringing a cloud-like environment as close as possible to the end user. It provides high-bandwidth, low-latency connectivity because of its proximity to the end user. Hence, a wide variety of IoT applications, such as health monitoring, smart and driverless vehicles, home safety, industrial automation, and robotics, can leverage the benefits of Fog computing.

The microservice architecture is a new approach that structures an application as a collection of small, single-task services which communicate with each other through APIs. This paradigm improves scalability, fault tolerance, programming-language independence, data security, and hardware resource utilization. In recent years, monolithic IoT applications have been moving towards microservice-based architectures deployed in Fog environments.
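
As a minimal illustration of the pattern (not a project deliverable: it assumes the Flask and requests packages are installed, and every service, port, and data name is made up), the two small files below would run as separate processes and cooperate only through a tiny HTTP API:

    # inventory_service.py -- one single-task microservice (illustrative names)
    from flask import Flask, jsonify

    app = Flask(__name__)
    STOCK = {"sensor-a": 12, "sensor-b": 3}   # this service owns only stock data

    @app.route("/stock/<item>")
    def stock(item):
        return jsonify({"item": item, "available": STOCK.get(item, 0)})

    if __name__ == "__main__":
        app.run(port=5001)

    # order_service.py -- a second service that consumes the inventory API
    import requests

    def can_fulfil(item, quantity):
        # The order service never touches the stock data directly; it only calls
        # the inventory service over HTTP, so either side can be scaled, replaced,
        # or rewritten in another language independently.
        reply = requests.get(f"http://localhost:5001/stock/{item}", timeout=2).json()
        return reply["available"] >= quantity

    if __name__ == "__main__":
        print(can_fulfil("sensor-a", 5))

Because each service can fail, scale, or be redeployed independently, this decomposition is where the fault-tolerance and language-independence benefits come from.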

Over the years, various simulation tools have been proposed for evaluating cloud and fog environments. In this project, we therefore plan to evaluate and compare the different fog simulators that support microservices. The project will be divided into the following phases.

  • Phase 1 – Getting familiar with the various concepts (cloud, fog, microservices, simulators, etc.) and identifying, from the literature, potential fog simulators that support microservices.
  • Phase 2 – Installing and getting familiar with the various fog simulators, paying special attention to how microservices are supported.
  • Phase 3 – Deploying a microservice-based application in the selected simulators and performing a detailed comparison of their features. More specifically, we are interested in analyzing mobility support, placement support, resource utilization support, etc. (a toy example of this kind of metric comparison is sketched after this list).
  • Phase 4 – Selecting one of the fog simulators and proposing new features that could be added to improve the simulation results for microservice scenarios. Options range from adding new evaluation parameters to adding new choices during the initial configuration of a scenario. Time permitting, the suggested features can also be implemented; however, this will require customizing the simulator source code.
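
Purely as an illustration of the Phase 3 style of comparison (it is not tied to any particular simulator, and the latencies and capacity below are invented placeholders), the sketch contrasts a cloud-only placement with a fog-first placement on a stream of synthetic requests and reports the mean latency of each:

    import random

    random.seed(42)

    # Illustrative numbers only: device->fog vs device->cloud round-trip times (ms)
    # and how many concurrent requests the fog node can host.
    FOG_RTT, CLOUD_RTT = 5.0, 80.0
    FOG_CAPACITY = 50

    def mean_latency(placement, n_requests=10_000):
        """Average latency of a naive placement policy:
        'cloud_only', or 'fog_first' (spill to the cloud when the fog node is full)."""
        in_fog, total = 0, 0.0
        for _ in range(n_requests):
            in_fog = max(0, in_fog - random.randint(0, 2))   # some requests finish
            if placement == "fog_first" and in_fog < FOG_CAPACITY:
                in_fog += 1
                total += FOG_RTT + random.uniform(0, 2)
            else:
                total += CLOUD_RTT + random.uniform(0, 10)
        return total / n_requests

    for policy in ("cloud_only", "fog_first"):
        print(f"{policy:10s} mean latency = {mean_latency(policy):6.1f} ms")

A real Phase 3 comparison would replace this toy loop with the chosen simulators and add mobility and resource-utilization metrics.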

Phases 3 and 4 will be the most challenging as you will have to learn different simulation environments and brainstorm to propose new features. To successfully complete this project, you will require the following skills:

  • Linux (Ubuntu distribution)
  • Bash scripting – automating the process, running simulations, etc.
  • Java/Python – most simulation tools are developed in Java or Python

Students are expected to take ownership of this project and contribute ideas to its development (especially in Phases 3 and 4) as the year progresses. This project calls for motivated, curious, and resourceful students.

Resource Allocation in Dynamic 5G Networks Using Reinforcement Learning

Satisfying mobile users with disjoint Quality of Service (QoS) requirements (e.g., bandwidth, delay tolerance, user density) on the same network infrastructure has emerged as one of the most critical challenges in 5G networks. Taking advantage of Software Defined Networking (SDN), splitting the network into different slices has been proposed as a way to address the challenge of these disjoint requirements.

The concept of network slicing relies on grouping mobile users that have similar communication needs and serving them in the same slice. According to the ITU, three main slices can be defined: i) enhanced Mobile Broadband (eMBB), ii) massive Machine Type Communication (mMTC), and iii) Ultra Reliable Low Latency Communication (uRLLC). Allocating resources, in terms of bandwidth, between these slices in a highly dynamic 5G environment is a complex problem for traditional optimization methods, so employing Machine Learning (ML) techniques to manage the resources is an interesting alternative. Because resources must be allocated over consecutive periods, the task can be framed as a sequential decision-making problem, for which Reinforcement Learning (RL) is a suitable solution. The project will be divided into the following phases.

  • Phase 1: Acquiring the required background on 5G networks, slicing, machine learning, Markov decision processes (MDPs), etc. You will also have to explore RL algorithms and the logic behind them.
  • Phase 2: Writing a script that generates user data from various network parameters (and distributions), such as the number of users, required bandwidth, and latency requirements (a toy generator of this kind appears in the sketch after this list).
  • Phase 3: Getting familiar with the Deep Q-Learning (DQL) algorithm and feeding it the data from Phase 2 to solve the allocation problem. You will also have to analyze the data to make sure it makes sense.
  • Phase 4 (time permitting): The resource allocation problem can be expanded to consider additional resources such as power, CPU, etc. In that case, multi-agent RL algorithms can be used to solve the broader allocation problem. This involves learning multi-agent RL algorithms and writing a multi-agent RL script that optimally distributes the network resources, now described by multiple parameters (bandwidth, power, CPU, etc.), to the slices.
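
A minimal sketch of the loop Phases 2 and 3 build towards (every distribution, bandwidth figure, and the baseline policy are assumptions made for illustration): synthetic per-slice demand is drawn each epoch, a fixed bandwidth budget is split across the three ITU slices, and the allocation is scored with a reward that a DQL agent would later learn to maximize.

    import numpy as np

    rng = np.random.default_rng(0)
    SLICES = ("eMBB", "mMTC", "uRLLC")
    TOTAL_BW = 100.0                 # bandwidth budget per decision epoch (illustrative)

    def sample_demand():
        # Per-slice demand drawn from made-up distributions (Phase 2 stand-in).
        return np.array([rng.gamma(8, 6), rng.gamma(2, 3), rng.gamma(3, 4)])

    def reward(allocation, demand):
        # Average fraction of each slice's demand that the allocation satisfies.
        satisfied = np.minimum(allocation, demand) / np.maximum(demand, 1e-9)
        return satisfied.mean()

    # Baseline policy: split the budget in proportion to a moving demand estimate.
    # A DQL agent (Phase 3) would replace this rule with a learned policy.
    estimate = np.full(3, TOTAL_BW / 3)
    for t in range(5):
        demand = sample_demand()
        allocation = TOTAL_BW * estimate / estimate.sum()
        r = reward(allocation, demand)
        estimate = 0.9 * estimate + 0.1 * demand      # crude demand tracking
        print(f"t={t} demand={np.round(demand, 1)} alloc={np.round(allocation, 1)} reward={r:.2f}")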

To successfully complete this project, you will need the following skills:

  • General knowledge of 5G networks
  • General knowledge of reinforcement learning
  • Scripting
  • Python

Students are expected to take ownership of this project and contribute ideas to its development as the year progresses. This project calls for motivated, curious, and resourceful students.

Inter-Cloud Dynamic Smart Routing

To attract more clients, cloud providers offer services tailored to the needs of their clients' applications, ranging from basic computing power to sophisticated auto-scaling features. Beyond the services themselves, service cost is one of the most important factors influencing a user's choice of cloud provider. Optimizing the cost of managing user applications in a multi-cloud environment is therefore crucial for large enterprises such as Google, IBM, and Microsoft. Smart inter-cloud routing in real time can help reduce this cost while maintaining the highest throughput and lowest latency. In this project, a group of four students will work together to create an inter-cloud environment with physically dispersed (small-scale) datacenters and different pricing plans. Various smart routing algorithms should be created to fit a large spectrum of client demands and needs, and experiments should be conducted to evaluate the performance of these algorithms in terms of overall cost, link load balancing, throughput, latency, etc.
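
As a rough sketch of what "smart" routing can mean here (the datacenter names, latencies, and per-GB prices are invented placeholders), a Dijkstra search over a combined latency-plus-cost edge weight chooses different inter-cloud paths depending on how the two objectives are weighted:

    import heapq

    # Hypothetical inter-cloud topology: each entry is (neighbour, latency_ms, cost_per_gb).
    LINKS = {
        "dc_east":    [("dc_central", 12, 0.02), ("dc_europe", 80, 0.05)],
        "dc_central": [("dc_east", 12, 0.02), ("dc_west", 25, 0.03)],
        "dc_west":    [("dc_central", 25, 0.03), ("dc_asia", 95, 0.06)],
        "dc_europe":  [("dc_east", 80, 0.05), ("dc_asia", 140, 0.08)],
        "dc_asia":    [("dc_west", 95, 0.06), ("dc_europe", 140, 0.08)],
    }

    def best_path(src, dst, alpha=1.0, beta=100.0):
        """Dijkstra over the combined weight alpha*latency + beta*cost.
        Tuning alpha/beta trades latency against monetary cost per GB."""
        queue, settled = [(0.0, src, [src])], {}
        while queue:
            weight, node, path = heapq.heappop(queue)
            if node == dst:
                return weight, path
            if node in settled and settled[node] <= weight:
                continue
            settled[node] = weight
            for nxt, latency, cost in LINKS[node]:
                heapq.heappush(queue, (weight + alpha * latency + beta * cost, nxt, path + [nxt]))
        return None

    print(best_path("dc_east", "dc_asia"))                    # cost-aware route
    print(best_path("dc_east", "dc_asia", alpha=1, beta=0))   # latency-only route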

Adversarial Attacks on Machine Learning in Network Security

Machine learning models have made many decision support systems faster, more accurate, and more efficient. However, machine learning applications in network security face a disproportionate threat of active adversarial attacks compared to other domains, because applications such as malware detection, intrusion detection, and spam filtering are adversarial by nature. In what can be considered an arms race between attackers and defenders, adversaries constantly probe machine learning systems with inputs explicitly designed to bypass the system and induce a wrong prediction. In this project, students will use open-source libraries and publicly available datasets to evaluate different adversarial attacks, and defences against them, in network security, with an emphasis on IoT networks.
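
As a small, self-contained illustration of the kind of evaluation intended (a hand-rolled, FGSM-style attack against a toy flow classifier trained on synthetic data; the actual project would use open-source attack libraries and public IoT datasets instead):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic "flow features" (e.g., packet rate, mean size, duration): benign vs malicious.
    n, d = 2000, 6
    X_benign = rng.normal(0.0, 1.0, size=(n, d))
    X_malicious = rng.normal(1.5, 1.0, size=(n, d))
    X = np.vstack([X_benign, X_malicious])
    y = np.hstack([np.zeros(n), np.ones(n)])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]

    def fgsm(x, y_true, eps):
        # One-step FGSM: move each feature in the sign of the loss gradient w.r.t. the
        # input; for logistic regression that gradient is (p - y_true) * w in closed form.
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        return x + eps * np.sign((p - y_true) * w)

    # The adversary perturbs malicious flows so the detector labels them benign.
    X_adv = np.array([fgsm(x, 1.0, eps=0.5) for x in X_malicious])
    print("detection rate, clean flows:      ", clf.score(X_malicious, np.ones(n)))
    print("detection rate, adversarial flows:", clf.score(X_adv, np.ones(n)))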

Simulation of Routing and Load Balancing in Low-Earth Orbit (LEO) Satellite Networks

Satellite networks will become an integral part of future communication networks, likely in the era of 6G. Starlink, for example, has deployed more than 3000 Low-Earth Orbit (LEO) satellites and started providing Internet services on multiple continents. Compared to terrestrial networks, satellite networks are more dynamic due to the orbiting satellites. Therefore, data forwarding and delivery in satellite networks need customized designs of routing and load balancing schemes. 

Students working on this project will build on a LEO satellite simulation platform developed and refined by two capstone teams in previous years. The current platform can simulate and visualize a satellite network, user-chosen source/destination points on the ground, and the dynamic route that best connects the two endpoints, found using standard routing algorithms. To further develop the platform, additional features to be realized include, but are not limited to:

  • Enable ground traffic generation at major cities on every continent that reflects real-world traffic volume and demand, using either existing open-access data or derived models;
  • Enable simulation and visualization of optimal routes for multiple (tens of) source-destination pairs;
  • Test and compare the performance of different routing and load-balancing methods (a toy routing example on a small constellation graph is sketched after this list).
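
As a toy illustration of route computation on a constellation graph (the grid topology, link delays, and ground attachment points are invented placeholders; the existing platform already does this with proper orbital geometry):

    import networkx as nx

    # Toy +Grid constellation: P orbital planes with S satellites each. Every satellite
    # links to its in-plane neighbour and to the same slot in the adjacent plane.
    P, S = 6, 8
    G = nx.Graph()
    for p in range(P):
        for s in range(S):
            G.add_edge((p, s), (p, (s + 1) % S), weight=13.0)   # intra-plane ISL delay (ms)
            G.add_edge((p, s), ((p + 1) % P, s), weight=17.0)   # inter-plane ISL delay (ms)

    # Ground endpoints attach to a nearby satellite (chosen by hand in this sketch).
    G.add_edge("Ottawa", (0, 2), weight=3.0)
    G.add_edge("London", (3, 6), weight=3.0)

    route = nx.shortest_path(G, "Ottawa", "London", weight="weight")
    delay = nx.shortest_path_length(G, "Ottawa", "London", weight="weight")
    print(route)
    print(f"end-to-end delay = {delay:.1f} ms")

Load balancing enters once many source-destination pairs share links; the comparison in the last feature above amounts to recomputing such routes under different weighting and congestion rules.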

The project will consist of four main phases:

  1. Initial research: Review related literature to understand the unique characteristics and challenges of LEO satellite networks;
  2. Feature extension: Implement the new features (e.g., ground traffic generation and multi-pair routing) in code;
  3. Development: Propose customized routing and load-balancing methods/algorithms, based either on heuristics or on machine learning;
  4. Testing and performance analysis: Compare the performance (e.g., delay, complexity, congestion level) of different routing and load-balancing methods/algorithms by running them on the developed platform.

Deep Learning for Two-Tier Open Radio Access Network (O-RAN) in Simulated Environment

Open radio access network (O-RAN) is a new radio access network (RAN) architecture proposed by the O-RAN Alliance, a global community of mobile operators, vendors, and academic institutions. Aiming at an open, intelligent, virtualized, and interoperable RAN beyond 5G, the O-RAN Alliance defines the architecture, use cases, and interfaces of O-RAN. An important highlight of O-RAN is its software-defined, AI-enabled RAN Intelligent Controllers (RICs), which include near-real-time and non-real-time RICs; they support programmable functions and integrate embedded deep learning (DL) capabilities to optimize RAN performance and reduce operational complexity. Decisions that RICs can make via deep learning include power allocation, link scheduling, and traffic steering, to name a few.

There have been many studies investigating how DL empowers O-RAN. Meanwhile, simulation platforms for O-RAN that integrate the 4G/5G protocol stack have been established (e.g., ns-O-RAN, which builds on ns-3), and tools for integrating machine learning into ns-3 (such as ns3-gym and ns3-ai) have also been developed.

In this project, students are expected to gain knowledge and hands-on experience with O-RAN by simulating a two-tier O-RAN and applying deep learning to manage one or more aspects of its performance. The project will consist of four main phases:

  • Initial research: Investigate the fundamentals of O-RAN and approaches for applying DL to O-RAN. Note that this phase may require extensive reading.
  • Creating a two-tier RAN in a simulated environment: Build a RAN with a macro base station (tier 1), equipped with a non-real-time RIC, and multiple small-cell base stations (tier 2), each equipped with a near-real-time RIC. Implement the basic O-RAN functional architecture on this two-tier RAN.
  • Implementing DL for RAN management: Select one aspect of RAN management (power allocation, link scheduling, traffic steering, etc.) and integrate a DL method to optimize RAN performance; either simulation-generated data or open-access real-world data is needed for this step (a toy traffic-steering example is sketched after this list).
  • Testing: Examine the performance of the two-tier RAN and the DL algorithm, and make corrections and improvements as needed.
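
As a toy stand-in for the DL component (an epsilon-greedy bandit rather than a deep model; the cell count and throughput figures are invented), the sketch below shows the decision loop a near-real-time RIC would run when steering new user sessions between small cells:

    import numpy as np

    rng = np.random.default_rng(1)

    # Traffic steering among K small cells: each cell's achievable throughput is an
    # unknown, noisy quantity that the controller must learn from measurements.
    K = 4
    true_mean_mbps = np.array([55.0, 70.0, 40.0, 65.0])   # hidden per-cell quality

    def measured_throughput(cell):
        return max(0.0, rng.normal(true_mean_mbps[cell], 10.0))

    eps, counts, estimates, total = 0.1, np.zeros(K), np.zeros(K), 0.0
    for t in range(5000):
        if rng.random() < eps:
            cell = int(rng.integers(K))            # explore a random cell
        else:
            cell = int(np.argmax(estimates))       # exploit the current best estimate
        r = measured_throughput(cell)
        counts[cell] += 1
        estimates[cell] += (r - estimates[cell]) / counts[cell]   # running mean update
        total += r

    print("estimated per-cell throughput:", np.round(estimates, 1))
    print("average served throughput    :", round(total / 5000, 1), "Mbps")

In the actual project, this loop would be replaced by a DL policy trained on simulation-generated or open-access data, with the simulated two-tier RAN supplying the throughput feedback.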