Research Ambitions
Our ambitions are research questions whose answers will help us shape the future. Are you a master’s student who would like to contribute significantly to our thought-leadership position in the AI domain? Reach out to Lucia Conde Moreno, Head of Artificial Intelligence Research, to learn how you can help.
Advancing Multi-Agent Systems and Generative AI for Autonomous, Long-Horizon Tasks
At Info Support Research Center AI, we want to advance the development of Generative AI and Multi-Agent Systems that can collaborate on complex, multi-step tasks in realistic business environments. Rather than focusing solely on a single general-purpose assistant, we are interested in orchestrated AI systems in which multiple specialized agents work together, divide responsibilities, exchange context and coordinate actions to solve long-horizon (complex, multi-step) problems. In addition, we consider security for agentic systems to be of critical importance and we want to investigate how such systems can be designed, governed and deployed securely.
This research ambition explores how agentic AI can be designed to operate reliably across workflows that unfold over hours or days, with attention to planning, memory, tool use, handoff strategies, human-in-the-loop supervision and guardrails. We are particularly interested in reinforcement learning and self-healing workflows, in which AI agents can detect failure, adapt their approach and continue operating in a controlled and transparent way. We also want to tackle the security challenges of agentic systems, such as safe tool use, access control, data protection and resilience against misuse or manipulation.
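As a concrete illustration, the self-healing pattern described above can be sketched in a few lines: an agent tries a primary tool, records why it failed, falls back to a degraded alternative and leaves an auditable trace for a human supervisor. This is a minimal sketch under toy assumptions; the tool names (`flaky_search`, `cached_search`) and the `run_with_fallback` helper are hypothetical, not part of any existing framework.

```python
# Hypothetical self-healing tool-use step; all names are illustrative.

def flaky_search(query: str) -> str:
    """Primary tool: fails on purpose to simulate an outage."""
    raise TimeoutError("search backend unavailable")

def cached_search(query: str) -> str:
    """Fallback tool: degraded but reliable answer."""
    return f"cached results for '{query}'"

def run_with_fallback(tools, query, trace):
    """Try tools in order; record every attempt so a supervisor
    (human or agent) can audit why a fallback was used."""
    for tool in tools:
        try:
            result = tool(query)
            trace.append((tool.__name__, "ok"))
            return result
        except Exception as exc:
            trace.append((tool.__name__, f"failed: {exc}"))
    raise RuntimeError("all tools failed; escalate to a human operator")

trace = []
answer = run_with_fallback([flaky_search, cached_search], "green AI", trace)
```

The trace makes the failure-and-recovery path transparent, which is exactly the kind of controlled, observable adaptation this ambition targets.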
Our goal is to understand how Generative AI and Multi-Agent Systems can evolve from conversational tools into dependable and secure digital collaborators that support knowledge work, automation and decision-making.
Towards Trustworthy, Explainable, Verifiable, Fair and Sovereign AI
We aim to strengthen trust in AI by advancing Explainable AI (XAI) alongside the broader principles of Trustworthy AI, Verifiable AI and Sovereign AI. As AI becomes increasingly embedded in critical business and societal processes, organizations require more than powerful models: they need AI systems that are transparent, auditable, compliant, aligned with human and legal expectations and designed to reduce harmful bias in AI-driven outcomes.
Our research ambition focuses on improving explainability across diverse data domains and models, including traditional machine learning over tabular data, time-series forecasting, computer vision and, most challengingly, Generative AI systems. We want to investigate methods for traceability, auditability, observability, data provenance, and bias detection and mitigation, to better understand how AI systems arrive at their outputs and how risks such as hallucinations, hidden failure modes, unfair outcomes and data-quality issues can be mitigated. In addition, we explore sovereign AI architectures and deployment models that support privacy, security, regulatory compliance and regional control over data and models; local (offline) Generative AI models are one example.
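One of the simplest explainability techniques in this space is perturbation-based attribution: replace a feature with a baseline value and measure how much the prediction moves. The sketch below is purely illustrative; the transparent linear `model` and the all-zeros baseline are toy assumptions chosen so the attributions can be checked by hand, not a method recommendation.

```python
# Toy perturbation-based feature attribution (occlusion-style sketch).

def model(x):
    # Transparent stand-in model: weighted sum of three features.
    return 3.0 * x[0] + 1.0 * x[1] - 2.0 * x[2]

def attribution(model, x, baseline):
    """Score each feature by how much the prediction changes when
    that feature is replaced by its baseline value."""
    full = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        scores.append(full - model(perturbed))
    return scores

scores = attribution(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For this linear model the scores recover the weighted inputs: [3.0, 1.0, -2.0]
```

For non-linear models the same idea still applies, but the scores become local approximations rather than exact decompositions, which is where the harder research questions begin.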
By combining explainability with verifiability and sovereignty, we ultimately aim to help organizations and practitioners adopt AI that is not only effective, but also transparent, justifiable, responsible and fair.
Green AI: Balancing Performance with Environmental Sustainability
There is growing concern over the carbon footprint of training and using complex AI models (particularly Generative AI models), which consume vast amounts of computational resources and energy in their quest for accuracy and sophistication. Our ambition is to contribute to the field of Green AI by researching how AI systems can be high-performing while minimizing computational cost, energy usage and hence environmental impact. As the field moves beyond the assumption that larger models are always better, efficiency has become a central indicator of innovation. We therefore see strong potential in approaches that make AI smaller, faster, more affordable and more sustainable in real-world use.
This research ambition focuses on topics like Small Language Models (SLMs), model compression, quantization, edge AI and energy-aware AI engineering. We want to investigate how to balance model quality, inference latency, operational cost and carbon footprint when designing and deploying complex AI solutions. We are also interested in how lightweight and specialized models can deliver domain-specific intelligence on constrained infrastructure, including on-device and edge environments.
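To make the quantization topic concrete, here is a minimal sketch of symmetric post-training int8 weight quantization, one of the compression techniques mentioned above. The toy weight list and the single per-tensor scale are simplifying assumptions for illustration; real deployments typically use per-channel scales and calibration data.

```python
# Minimal sketch of symmetric int8 weight quantization (toy example).

def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] using one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each restored weight differs from the original by at most scale / 2.
```

The trade-off this exposes, a 4x smaller representation in exchange for bounded rounding error, is the kind of quality-versus-footprint balance this ambition aims to study systematically.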
We look forward to an AI landscape in which sustainability and efficiency are treated as core design principles.
Are you interested in working in this area?
Don’t hesitate to reach out! Contact Lucia Conde Moreno, Head of Artificial Intelligence Research, or apply directly to one of our assignments.
More about Lucia Conde Moreno
Lucia Conde Moreno is Head of the AI Research Center at Info Support. Alongside this role, she works as a consultant software engineer specializing in data and AI applications. Her colleagues know her as a jack of all trades, as she has worked in varied roles ranging from .NET and Java developer to data scientist and machine learning engineer.
She has worked for various national and international clients in diverse fields such as finance, health care, energy and education. She is part of the AI Champions chapter, which promotes AI-augmented engineering tools, and she is responsible for supervising internal research in subfields of AI such as explainability and computer vision.
She speaks regularly at international conferences, particularly about trustworthy AI. She holds an MSc in Computer Science and a BSc in Telecommunications Engineering.
Artificial Intelligence Publications
- A Comprehensive Empirical Study on Fairness in GraphRAG
- Mesh up your Data Architecture
- Investigation into the Influence of Biological Depth Cues on Monocular Depth Estimation for the Improvement of an Automated Privacy-Preserving Video Pr…
- Public acceptance of AI-based detection of social security fraud
- Article ICT/Magazine “Onderzoek naar menselijke benadering van AI” (“Research into a human approach to AI”)
  In an innovative project funded by the NWO, the PersON consortium, led by Radboud University, aims to optimize cancer care. This initiative focuses on…
- Using generative modelling to perform diversifying data augmentation
- Recognizing Parkinson’s and Alzheimer’s through video footage and Artificial Intelligence
  Neurological movement disorders such as Parkinson’s, Alzheimer’s and Huntington’s disease are not always easy to distinguish for doctors. Giving the r…
- Thesis Talk Reinier Joosse – Deep learning models based on Z3
  Modern cars have cameras that recognize traffic signs at the side of the road. For example, your car may detect that there is a “Stop” sign in front o…
- Thesis Talk Jan Smits – Mutator, the open-source mutation testing framework
  Stryker Mutator, the open-source mutation testing framework developed with Info Support, would like to introduce mutation levels to their framework.
- Info Support Research Demo Day
  The Info Support Research center is organizing a new Demo Day on Thursday 22 October. The session takes place digitally and is for anyone interested i…
- Robotic Process Automation – An assessment of process discovery techniques with the purpose of finding RPA-eligible processes
  Robotic Process Automation is a process where simple tasks that are performed by humans are automated by employing ‘software robots’ to do the task. U…
- The Combination of Investment Strategies Using the Replicator Equation
- Automated Privacy-Preserving Video Processing through Anonymized 3D Scene Reconstruction
- Using Discretization and Resampling for Privacy Preserving Data Analysis: An experimental evaluation
- Method Call Argument Completion using Deep Neural Regression
- Unit test generation using machine learning
- Quantifying Chatbot Performance by using Data Analytics
- Code Completion with Recurrent Neural Networks
- Chatbot Personality and Customer Satisfaction
- Specifying and Testing Conversational User Interfaces
- Supporting Decision-making in Fraud Sensitive Environments
- Automated Taxonomy Expansion and Tag Recommendation in a Knowledge Management System
- Building a Data-Driven Search Engine Spelling Corrector