Solution architecture is an evolving discipline: as more and larger software systems are built, new challenges emerge. Software runs on an ever larger array of devices and interacts with more other software components than ever before. The quality, robustness, security and adaptability of a software system are determined by its solution architecture and software engineering. We believe that a focus on both the engineering process and the architecture is required to build large software systems.
Our ambitions are research questions whose answers will give us the opportunity to shape the future. Are you a master's student and would you like to contribute significantly to our thought leadership position in the Software Architecture domain? Please reach out to Rinse van Hees, Head of Research Software Architecture, for more information.
Infrastructure as code is one of the foundations of the new wave of web-scale software systems. The infrastructure of such a system dynamically reconfigures itself to react to the demands on the system as a whole: it can decide to instantiate new instances of a subcomponent, reroute requests, or build a complete copy of itself in another location. These new capabilities can have unspecified interactions with the running software itself, with undesired and unexpected results. This is called emergent behavior: behavior not explicitly built into the system. As part of our ambition to build and maintain quality software systems, we feel that infrastructure as code should be part of our solution architecture and should carry the same quality guarantees as the rest of the system.
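As a minimal sketch of the kind of self-reconfiguration described above, consider a hypothetical autoscaling rule that decides how many replicas of a subcomponent to run. The function name, parameters and capacity model are our own illustration, not any particular platform's API:

```python
def desired_replicas(total_load: int, capacity_per_replica: int,
                     max_replicas: int = 10) -> int:
    """Toy autoscaling rule: run enough replicas to absorb the observed load.

    total_load: current demand, e.g. requests per second.
    capacity_per_replica: load one replica can handle.
    """
    # Ceiling division: round up so demand is always covered.
    desired = -(-total_load // capacity_per_replica)
    # Never drop below one replica, never exceed the configured maximum.
    return max(1, min(desired, max_replicas))
```

Even a rule this simple can interact in unplanned ways with the rest of the system: if a load balancer reroutes traffic while the autoscaler is still reacting to the old load, the two mechanisms together can oscillate — exactly the kind of emergent behavior this research topic is about.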
Multi-paradigm programming languages
Scala is a programming language at the intersection of object-oriented programming (OOP) and functional programming (FP). As Scala is used in larger and more complex projects, how can we ensure the same level of quality and maintainability we have for more traditional languages such as Java and C#? For both the OOP and FP paradigms we have metrics and quality guidelines, but for multi-paradigm programming languages these are not yet available. Can we apply the current OOP and FP metrics to multi-paradigm languages? How do the OOP and FP metrics interact and compare to each other? Are there metrics unique to multi-paradigm programming languages, and what can they tell us? How do metrics on multi-paradigm languages signal quality? These are questions we feel must be answered to maintain the same level of quality when building Scala software systems.
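One naive starting point for a metric unique to multi-paradigm languages is to measure how mixed a piece of code is between OOP and FP constructs. The construct categories and the score below are a hypothetical sketch of ours, not an established metric:

```python
from dataclasses import dataclass

@dataclass
class ConstructCounts:
    """Counts of paradigm-typical constructs in one compilation unit."""
    mutations: int        # OOP-leaning: var reassignments, mutable fields
    method_calls: int     # OOP-leaning: calls on objects
    lambdas: int          # FP-leaning: anonymous functions
    pattern_matches: int  # FP-leaning: match expressions

def paradigm_mix(c: ConstructCounts) -> float:
    """Return a score in [0, 1]: 0.0 is purely OOP-style, 1.0 purely FP-style."""
    oop = c.mutations + c.method_calls
    fp = c.lambdas + c.pattern_matches
    total = oop + fp
    # An empty unit gives no evidence either way; call it perfectly mixed.
    return fp / total if total else 0.5
```

Whether a score near 0.5 signals healthy pragmatism or a confusing style clash is precisely the open research question.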
Bachelor's and master's students can choose one of our Research Assignments as the topic for their thesis.
Blockchains with a connection to the natural world
Is it possible to use blockchain technology when there is a strong relation with a variable value in the natural world? Where does confidence lie? Which reality is leading?
Scala code quality metrics (multi-paradigm)
Scala is gaining more traction as a programming language for large software systems: it is used in companies such as Netflix and Twitter and is the basis for many great products. Libraries such as Finagle and Akka are used in many JVM-based products. The quality of such core components and large software systems needs to be guaranteed. For Java and other object-oriented and functional programming languages, code quality metrics and guidelines are available. Unfortunately for Scala, a multi-paradigm programming language, this is not yet the case. We would like to research code quality metrics and guidelines for Scala. Since Scala is a multi-paradigm language, we can draw on research for both object-oriented and functional programming languages.
Previously, we have shown that existing object-oriented and functional metrics can be used to predict bug density in Scala projects. This means there is a relation between these metrics and the likelihood of bugs, and we feel this research can be extended to give a quality score for Scala source code.
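The core of such a prediction can be illustrated with the simplest possible model: a least-squares line relating one metric to observed bug density per file. This is an illustration of the idea only; the actual research uses real project data and richer models:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Predicted bug density for a file with metric value x."""
    slope, intercept = model
    return slope * x + intercept
```

With a fitted model, the predicted bug density of a file could then be folded into an overall quality score.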
Unit of work in a distributed system
There is a large body of research on ordering events and transactions in large distributed software systems. In the past, these large distributed systems were highly specialized and took special care to deal with these issues. With the ever increasing size of software systems, we see that many new software systems are distributed. Web-scale architecture, event-driven, reactive, actor-based and eventually consistent are all architectural styles and concepts that imply distributed computing. Coupled with the ephemeral nature of runtimes (Docker and other virtualization and containerization technologies), this makes it hard to reason about transactions or units of work. How can we use what we learned in the past as guidelines for new software systems? Can we find patterns and solutions that help us implement the large-scale distributed systems of the future?
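A classic example of that past body of research is Lamport's logical clock, which orders events across nodes without synchronized wall clocks. A minimal sketch:

```python
class LamportClock:
    """Lamport logical clock: a per-node counter that orders causally
    related events across a distributed system."""

    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        # A local event: advance the counter.
        self.time += 1
        return self.time

    def send(self) -> int:
        # Sending counts as an event; the timestamp travels with the message.
        return self.tick()

    def receive(self, msg_time: int) -> int:
        # On receipt, jump past both our own time and the sender's.
        self.time = max(self.time, msg_time) + 1
        return self.time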
Dependency graph of a web-scale architecture (Software Architecture)
Large web-scale systems have many sub-components that interact and depend on each other. The relations between these sub-systems are not evident from just looking at them. For example, in an event driven system a publisher of an event might not know all subscribers. The different relations become even more clouded when we want to take the ephemeral nature of runtimes into account. Web-scale systems often scale up or down based on some metric, this means that a new dependent sub-system could be added or removed at any time. Is there a way to map the relations between sub-systems of a web-scale architecture that also take scaling up and down into account?
Scenario and edge case mining from configuration as code
Reverse engineer process definition based on large data flows
Info Support Real Estate Services develops and runs a data-roundabout. A SaaS platform that supports digital processes and communication between different organizations. The data-roundabout is based on standards and message definitions. For monitoring and alerting a process definition is required. Currently these definitions are made by hand.
Can we use modern techniques such machine learning or big data analytics to reverse engineer a process definition from an example set of messages and data flows? Sub questions are: how can we recognize which messages belong to the same process instance / definition? How do you determine a generic process definition from several process instances? How can we extract monitoring and alerting information for a process from the data flow?
We are proud to present the publications our researchers have been working on lately! Whether it concerns our employees or ambitious students; the people who comprise at Info Support Research share their research insights through publication in conventional scientific journals and through other ways, such as online publications and open source contributions. We truly believe in the power of sharing knowledge in advancing our field of expertise and developing great software. From that point of view we share our own publications without restrictions.
Stable and predictable Voronoi treemaps for software quality monitoring
Department of Information and Computing Sciences, Utrecht University. February 2016.
Rinse van Hees, Jurriaan Hage
Stable Voronoi-based visualizations for software quality monitoring
Dept. of Information and Computing Sciences, Utrecht University, The Netherlands
Rinse van Hees, Jurriaan Hage
In addition to our publications, Info Support Research also has a myriad of relevant theses.
Towards an architecture design for a future societal energy supply system
Master of Science Thesis. University of Twente. April 2017
Estimate the post-release Defect Density based on the Test Level Quality
Master Thesis Software Engineering