The PL team develops programming languages and programming tools that help developers build reliable software. Our key topics are domain-specific languages, static analysis, and incremental computing.
Program analysis
Static analyses reason about a program without actually running it. This has the benefit that potential runtime errors (e.g., null pointer dereferences or out-of-bounds array accesses) can be caught early, already at compile time. In our research group, we study two main challenges of static analysis: performance and soundness.
Data-flow analyses can provide interesting insights about programs for software engineers; they can compute points-to information, track tainted values, or determine the range of values a variable can hold. Our goal is to employ these analyses in integrated development environments to provide high-quality feedback during the development process. However, data-flow analyses are complex because modern programming languages feature intricate control structures and data structures. This complexity can lead to poor analysis performance, which breaks the development flow. To improve performance, we study how to incrementalize data-flow analyses so that only the program parts affected by a change are re-analyzed. Beyond speeding up the analyses themselves, we also help analysis developers by providing a DSL that hides the technical details of incrementalization.
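To give a flavor of such an analysis, here is a minimal sign-analysis sketch in Haskell, written as a plain fixpoint computation over a toy statement language. It is illustrative only: the Expr/Stmt types and all function names are invented for this example and are not code from IncA or Sturdy.

```haskell
module SignAnalysis where

import qualified Data.Map as M

-- Abstract domain: the sign of an integer, plus Top for "unknown".
data Sign = Neg | Zero | Pos | Top deriving (Eq, Show)

-- Join (least upper bound) in the sign lattice.
join :: Sign -> Sign -> Sign
join a b | a == b    = a
         | otherwise = Top

-- A toy language: literals, variables, addition; assignment, sequencing, a loop.
data Expr = Lit Int | Var String | Add Expr Expr
data Stmt = Assign String Expr | Seq Stmt Stmt | While Expr Stmt

-- The analysis result: an abstract value for every variable.
type AbsEnv = M.Map String Sign

-- Abstract evaluation of expressions over the sign domain.
evalExpr :: AbsEnv -> Expr -> Sign
evalExpr _ (Lit n)
  | n < 0     = Neg
  | n == 0    = Zero
  | otherwise = Pos
evalExpr env (Var x)   = M.findWithDefault Top x env
evalExpr env (Add a b) = addSign (evalExpr env a) (evalExpr env b)
  where
    addSign Zero s  = s
    addSign s Zero  = s
    addSign Pos Pos = Pos
    addSign Neg Neg = Neg
    addSign _ _     = Top

-- Abstract execution of statements; a loop is analysed by iterating its
-- body until the abstract environment stabilises (a fixpoint).
evalStmt :: AbsEnv -> Stmt -> AbsEnv
evalStmt env (Assign x e)   = M.insert x (evalExpr env e) env
evalStmt env (Seq s1 s2)    = evalStmt (evalStmt env s1) s2
evalStmt env (While _ body) = fixpoint env
  where
    fixpoint cur =
      let next = M.unionWith join cur (evalStmt cur body)
      in if next == cur then cur else fixpoint next

-- Example: i := 1; while (i) { i := i + 1 }
-- The analysis infers i = Pos without ever running the loop.
example :: AbsEnv
example = evalStmt M.empty
  (Seq (Assign "i" (Lit 1))
       (While (Var "i") (Assign "i" (Add (Var "i") (Lit 1)))))
```

The loop case iterates the abstract transfer function until the environment stabilizes; an incremental version would reuse cached results for the program parts that did not change.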
A static analysis is sound if it correctly predicts the possible executions of a program. For example, a sound static type checker ensures that a type-correct program does not crash at runtime because of a type error. Soundness is especially important when someone acts on the analysis results, such as a compiler that optimizes a program based on static information. However, formally proving that a real-world static analysis is sound is difficult because of the high proof complexity and effort. In our research group, we explore ways to develop and structure static analyses such that the soundness proof becomes more approachable.
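The kind of statement to be established can be phrased, for a toy sign domain, as the property below. This is only a sketch under assumed names: alpha, gamma, addSign, and the property are invented for this example and are not Sturdy's API, and QuickCheck merely tests the property on random inputs, whereas our work is about proving such statements once and for all.

```haskell
module Soundness where

import Test.QuickCheck

-- Abstract domain of signs; Top means "could be any integer".
data Sign = Neg | Zero | Pos | Top deriving (Eq, Show)

-- Abstraction: map a concrete integer to its sign.
alpha :: Integer -> Sign
alpha n
  | n < 0     = Neg
  | n == 0    = Zero
  | otherwise = Pos

-- Concretisation check: does the abstract value describe the concrete one?
gamma :: Sign -> Integer -> Bool
gamma Neg  n = n < 0
gamma Zero n = n == 0
gamma Pos  n = n > 0
gamma Top  _ = True

-- Abstract addition over signs.
addSign :: Sign -> Sign -> Sign
addSign Zero s  = s
addSign s Zero  = s
addSign Pos Pos = Pos
addSign Neg Neg = Neg
addSign _ _     = Top

-- Soundness: the abstract addition of the abstractions must describe
-- whatever the concrete addition computes.
prop_soundAdd :: Integer -> Integer -> Bool
prop_soundAdd x y = gamma (addSign (alpha x) (alpha y)) (x + y)

main :: IO ()
main = quickCheck prop_soundAdd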
Projects:
- IncA: Incremental program analysis based on Datalog
- Sturdy: A library to develop sound static analyses in Haskell
Incremental computing
We develop building blocks for incremental algorithms that react to a change in their input with very high performance. Rather than repeating the entire computation over the changed input, an incremental algorithm only updates those parts of the previous result that are affected by the input change. This way, incremental algorithms provide asymptotic speedups in theory, and in practice we have observed speedups of multiple orders of magnitude.
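The following Haskell sketch illustrates the principle on a deliberately simple problem, counting words over a set of documents: the cache stores one result per input part, and an input change triggers work proportional to the changed part only. All names are made up for this illustration; it is not code from our frameworks.

```haskell
module Incremental where

import qualified Data.Map as M

type DocId = String

-- Cached state: one word count per document, plus their sum.
data Cache = Cache
  { perDoc :: M.Map DocId Int
  , total  :: Int
  } deriving Show

-- The (potentially expensive) per-part computation.
countWords :: String -> Int
countWords = length . words

-- Non-incremental baseline: touches every document.
fromScratch :: M.Map DocId String -> Cache
fromScratch docs =
  let counts = M.map countWords docs
  in Cache counts (sum (M.elems counts))

-- Incremental update: recompute only the changed document's count and
-- patch the aggregate, leaving every other cached result untouched.
update :: DocId -> String -> Cache -> Cache
update doc newText (Cache counts oldTotal) =
  let oldCount = M.findWithDefault 0 doc counts
      newCount = countWords newText
  in Cache (M.insert doc newCount counts) (oldTotal - oldCount + newCount)

-- Example: editing b.txt does not touch the cached count of a.txt.
example :: Cache
example =
  update "b.txt" "now three words"
    (fromScratch (M.fromList [("a.txt", "hello world"), ("b.txt", "one sentence")]))
```

Here update runs in time proportional to the edited document, while fromScratch touches every document; that asymptotic gap is what incremental analyses and builds exploit at much larger scale.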
Incremental algorithms are crucial for providing actionable feedback because the feedback needs to be updated after every code change the developer makes. Yet, existing incremental algorithms, such as the Java type checker in Eclipse JDT, are one-off solutions that required years of engineering effort and cannot easily be reproduced. We develop building blocks for incremental computing and collect them in frameworks that execute regular algorithms incrementally.
Projects:
- IncA: Incremental program analysis based on Datalog
- iTypes: Incremental type checking
- PIE: Incremental software builds, supersedes pluto
Domain-specific languages
Domain-specific languages abstract from technical details by providing tailored language features. Software developers who use domain-specific languages are more effective because they reason about the code at the domain level and are freed from accidental complexity. For the same reason, domain-specific languages also make it easier for tools to derive actionable feedback. However, the effective development, application, and use of domain-specific languages require a large upfront investment into compilers, editors, and developer training. We develop techniques and toolboxes that significantly reduce the upfront costs of domain-specific languages.
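As a small illustration of the idea, the following Haskell sketch embeds a toy state-machine DSL: one declarative, domain-level description from which both an interpreter and a diagram generator are derived. The DSL and its operations are invented for this example and do not correspond to any of our tools.

```haskell
module TrafficDsl where

import Data.List (nub)

-- The DSL: a state machine is an initial state plus a list of transitions.
type State = String
type Event = String

data Machine = Machine
  { initial     :: State
  , transitions :: [(State, Event, State)]
  }

-- A domain-level description: no queues, threads, or callbacks in sight.
trafficLight :: Machine
trafficLight = Machine "Red"
  [ ("Red",    "tick", "Green")
  , ("Green",  "tick", "Yellow")
  , ("Yellow", "tick", "Red")
  ]

-- One backend: run the machine on a sequence of events.
run :: Machine -> [Event] -> State
run m = foldl step (initial m)
  where
    step s e = case [t | (s', e', t) <- transitions m, s' == s, e' == e] of
                 (t:_) -> t
                 []    -> s   -- unknown event: stay in the current state

-- Another backend: derive a Graphviz/DOT diagram from the same description,
-- the kind of extra tooling a DSL makes cheap to provide.
toDot :: Machine -> String
toDot m = unlines $
     ["digraph machine {"]
  ++ ["  " ++ s ++ ";" | s <- nub [st | (st, _, _) <- transitions m]]
  ++ ["  " ++ a ++ " -> " ++ b ++ " [label=\"" ++ e ++ "\"];"
     | (a, e, b) <- transitions m]
  ++ ["}"]

-- Example: after two ticks the light is "Yellow".
example :: State
example = run trafficLight ["tick", "tick"]
```

Because the description is declarative data, further tools, such as checkers for unreachable states, can be derived from the same source, which is how DSLs also help tools provide actionable feedback.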
Projects: