Current Subjects for Master Theses
- Security using Artificial Software Diversity in Safety-Critical Real-Time Systems
- Analysis of Natural Source Code with Machine Learning
- Trust and Confidentiality of Goals in Multi-Agent Reinforcement Learning
Subjects for master theses are listed below. We can also offer subjects not listed here, depending on your interests. Feel free to contact us.
Contact: Joachim Fellmuth
Security using Artificial Software Diversity in Safety-Critical Real-Time Systems
Cyber-Physical Systems (CPS) have an ever-increasing impact on our lives as more systems are controlled by computers, which are highly interconnected and even connected to the internet. Among these systems are hard real-time systems such as airbag or ABS controllers, where missing a deadline is considered a system failure. If such functionality is safety-critical, e.g. the ignition of an airbag, the developer is required to provide guarantees on safety and timing properties.
When safety-critical systems are exposed to potential attackers, assuring safety implies also dealing with security issues. In particular, control-flow attacks are a threat to CPS. Existing countermeasures cannot be applied due to limited resources or limited operating system support.
The focus of our work is the development of methods that allow applying artificial software diversity, as a proven security measure, to safety-critical real-time systems. Our work involves analyses and manipulations of low-level code representations such as assembler, and different aspects of static worst-case execution time (WCET) analyses.
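As a rough illustration of the tension between diversity and timing guarantees, the toy Python sketch below shuffles block layout and pads blocks with NOP-like delays; a static WCET bound must then cover every diversified variant. The block names, cycle counts, and padding model are invented for illustration and do not reflect the actual toolchain.

```python
import random

# Toy model: a "program" is a list of basic blocks, each with a known
# worst-case cycle count. Artificial diversity shuffles the block layout
# and pads blocks with NOP-like delays; a static WCET bound must cover
# every possible variant. All names and numbers are illustrative.

def diversify(blocks, max_pad, rng):
    """Return one variant: shuffled block order with random padding."""
    variant = [(name, cycles + rng.randint(0, max_pad))
               for name, cycles in blocks]
    rng.shuffle(variant)
    return variant

def wcet_bound(blocks, max_pad):
    """Static bound: assume every block carries maximal padding."""
    return sum(cycles + max_pad for _, cycles in blocks)

rng = random.Random(42)
program = [("init", 10), ("read_sensor", 25), ("compute", 40), ("actuate", 15)]
variant = diversify(program, max_pad=3, rng=rng)
observed = sum(cycles for _, cycles in variant)
assert observed <= wcet_bound(program, max_pad=3)
```

The point of the sketch: diversification changes the concrete execution time of each variant, so timing guarantees must be established over the whole family of variants rather than a single binary.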
Analysis of Natural Source Code with Machine Learning
Important or labour-intensive tasks in Software Engineering involve
the analysis of existing source code. These tasks can benefit from
automation to reduce development costs and improve the quality of
resulting software. While some of the tasks can be well-defined and
solved or approximated with static analysis techniques, others defy a
formal description. Examples include determining traceability links
from code to the requirements, as well as detecting code fragments
dedicated to cross-cutting concerns such as logging, persistence or
transactions. Indeed, many important properties of code are best
reflected in their linguistic description, which codifies the intent
of the programmer and allows effective communication between humans.
Still, this information is rarely exploited by existing tools.
In order to exploit the aforementioned linguistic information, as well as the known syntactic structure and behavioural properties of source code, machine learning techniques can be used to learn from large amounts of publicly available source code. Appropriate methods, such as graph-based neural network architectures, are required, as well as the datasets necessary to train them.
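As a small illustration of the graph view of source code, the sketch below (plain Python, using only the standard `ast` module) turns a snippet into nodes and parent-child edges; a graph-based neural network would then learn over such a structure. The traversal and the trivial "feature" (node type names) are only a stand-in for a learned representation.

```python
import ast

# Toy illustration of representing source code as a graph: nodes are AST
# nodes, edges connect parents to children. Real architectures (e.g.
# graph neural networks) would learn on such graphs; here we only build
# the structure itself.

def code_to_graph(source):
    tree = ast.parse(source)
    nodes, edges = [], []
    def visit(node, parent_id=None):
        node_id = len(nodes)
        nodes.append(type(node).__name__)   # node type as a crude feature
        if parent_id is not None:
            edges.append((parent_id, node_id))
        for child in ast.iter_child_nodes(node):
            visit(child, node_id)
    visit(tree)
    return nodes, edges

nodes, edges = code_to_graph("def f(x):\n    return x + 1")
print(nodes[:2])  # ['Module', 'FunctionDef']
```

Since the AST is a tree, the resulting graph has exactly one edge per non-root node; richer representations add further edge types, e.g. data-flow or next-token edges.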
For some complex tasks, such as the detection of cross-cutting concerns, datasets must be created manually, which severely limits their size. In such cases, models may benefit from pre-training on a related task for which large datasets are available, a technique called transfer learning.
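The transfer-learning idea can be sketched as reusing a feature extractor obtained on a data-rich source task and fitting only a small output layer on the scarce target task. The NumPy sketch below is purely synthetic: the "pretrained" projection, both tasks, and all dimensions are invented stand-ins.

```python
import numpy as np

# Minimal illustration of transfer learning: a feature extractor from a
# large "source" task is frozen and reused on a small "target" task,
# where only a new output layer is fitted. Purely synthetic data.

rng = np.random.default_rng(0)

# Stand-in for pretraining: a fixed projection W plays the role of
# features learned on plentiful source-task data.
W = np.linalg.qr(rng.normal(size=(16, 4)))[0]

# Fine-tuning: only a small labelled target dataset is available.
X_tgt = rng.normal(size=(20, 16))
y_tgt = X_tgt @ W @ np.array([1.0, -2.0, 0.5, 3.0])  # hidden target concept

# Fit just the output layer on top of the frozen, pretrained features.
features = X_tgt @ W
head, *_ = np.linalg.lstsq(features, y_tgt, rcond=None)
assert np.allclose(features @ head, y_tgt)
```

With only 20 labelled examples, fitting the 4-parameter head is feasible, whereas learning the full 16-dimensional mapping from scratch would be far more data-hungry; that asymmetry is the motivation for transfer learning.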
The focus of our work is the development of machine learning methods and tools for the analysis of source code with its syntactic, behavioural and linguistic aspects. Our work involves the compilation of datasets for important analysis tasks and the development of neural network architectures to solve them. We also investigate the application of transfer learning in tasks for which the compilation of large datasets is not feasible.
- Developing graph-based neural network architectures to solve source code analysis tasks.
- Developing methods to construct and validate datasets for source code analysis tasks.
- Adapting transfer learning techniques to the domain of source code analysis.
Contact: Guilherme Azzi 
Trust and Confidentiality of Goals in Multi-Agent Reinforcement Learning
Recent advances in machine learning have led to its promising but critical application to complex cyber-physical systems such as autonomous automotive environments. In particular, advancements in deep neural networks enable the effective use of function approximation in reinforcement learning (RL). This integration, called deep reinforcement learning, allows an agent to learn an effective behavioral policy in (approximated) high-dimensional state and action spaces.
An essential element of reinforcement learning is the reward signal that rewards or penalizes an agent based on its actions. Through samples of state-action-reward trajectories, the agent learns to maximize its expected return (the sum of rewards). Thus, the reward implicitly specifies the desired behavior of the agent. Several methods focus on structuring this reward specification. An important field of research in this regard is hierarchical reinforcement learning with temporal abstractions such as options, concurrent activities and pseudo-reward functions.
In complex multi-agent applications, several entities operate in a shared environment to fulfill shared goals as well as private ones. Additionally, a single entity may be represented by a single RL agent or as a combination of multiple RL agents. Efficient coordination of agents to work together is therefore necessary and yields potential improvements. In order to achieve this, agents must establish trust even when some of their goals should remain confidential.
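The return mentioned above, computed from a sampled reward trajectory, can be sketched in a few lines of Python; the trajectory values and discount factor are invented for illustration.

```python
# Minimal sketch of the return an RL agent maximizes: the discounted
# sum of rewards along a sampled state-action-reward trajectory.

def discounted_return(rewards, gamma=0.99):
    """Sum of rewards, discounted by gamma per time step."""
    g = 0.0
    for r in reversed(rewards):  # fold from the end of the trajectory
        g = r + gamma * g
    return g

trajectory_rewards = [0.0, 0.0, 1.0, -0.5]
print(round(discounted_return(trajectory_rewards, gamma=0.9), 4))  # 0.4455
```

Because later rewards are discounted, how the designer distributes reward over time shapes which behavior the agent considers optimal, which is exactly why structuring the reward signal is a research topic in its own right.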
In our research we aim at integrating goal-based behavior into reinforcement learning agents. Our approach is to encode goals into the reward signal of reinforcement learning. To ensure secure cooperation of agents, we develop the concepts of trust in and confidentiality of goals and examine their opposing characteristics.
- Specification of goals with respect to hierarchical reinforcement learning
- Establishment of trust in goals in multi-agent environments
- Guarantees on the confidentiality of goals
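As a minimal sketch of encoding goals into rewards while keeping them confidential, the toy Python below derives each agent's reward function from its own goal; other agents only ever observe reward values, never the goal itself. The goals and coordinates are illustrative only, and real confidentiality guarantees would require far more than this closure-based hiding.

```python
# Toy sketch of goal-encoded rewards in a shared environment: each
# agent receives reward only from its own (possibly private) goal, so
# the reward function encodes the goal without revealing it to others.

def make_reward(goal):
    """Return a reward function that encodes `goal` without exposing it."""
    def reward(state):
        return 1.0 if state == goal else 0.0
    return reward

# Two agents share the environment but keep their goals confidential:
# each only ever sees its own scalar reward signal.
r_a = make_reward(goal=(3, 1))
r_b = make_reward(goal=(0, 2))
print(r_a((3, 1)), r_b((3, 1)))  # 1.0 0.0
```

The open question this sketch ignores is the one named in the topics above: how cooperating agents can establish trust in each other's goals when the goals themselves must stay hidden.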
Contact: Simon Schwan