Artificial Intelligence

Artificial Intelligence (AI) is changing how we work and reshaping the relationship between people, software and decision-making. As AI matures from experimentation into business-critical infrastructure, Info Support Research Center AI aims to stay at the forefront of developments in this domain. Research in Artificial Intelligence therefore remains vital, and we continue to challenge and facilitate our people to explore the technologies, methods and architectures that will define the next generation of AI solutions.

Research Ambitions

Ambitions are research questions whose answers will give us the opportunity to shape the future. Are you a master’s student, and would you like to contribute significantly to our thought leadership position in the AI domain? Please reach out to Lucia Conde Moreno, Head of Artificial Intelligence Research, for more information about how you can help.

Advancing Multi-Agent Systems and Generative AI for Autonomous, Long-Horizon Tasks

At Info Support Research Center AI, we want to advance the development of Generative AI and Multi-Agent Systems that can collaborate on complex, multi-step tasks in realistic business environments. Rather than focusing solely on a single general-purpose assistant, we are interested in orchestrated AI systems in which multiple specialized agents work together, divide responsibilities, exchange context and coordinate actions to solve long-horizon (complex, multi-step) problems. In addition, we consider security for agentic systems to be of critical importance and we want to investigate how such systems can be designed, governed and deployed securely.

This research ambition explores how agentic AI can be designed to operate reliably across workflows that unfold over hours or days, with attention to planning, memory, tool use, handoff strategies, human-in-the-loop supervision and guardrails. We are particularly interested in reinforcement learning and self-healing workflows, in which AI agents can detect failure, adapt their approach and continue operating in a controlled and transparent way. We also want to tackle the security challenges of agentic systems, such as safe tool use, access control, data protection and resilience against misuse or manipulation.
Our goal is to understand how Generative AI and Multi-Agent Systems can evolve from conversational tools into dependable and secure digital collaborators that support knowledge work, automation and decision-making.
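The self-healing behaviour described above can be sketched in a few lines: an agent tries a strategy, detects failure, falls back to an alternative and records every attempt so the process stays transparent and auditable. This is a minimal illustration with hypothetical function names, not an existing agent framework:

```python
# Minimal sketch of a self-healing workflow step: detect failure,
# adapt the approach, and continue in a controlled, transparent way.
# All names (run_with_self_healing, call_tool, ...) are illustrative.

def run_with_self_healing(task, strategies, max_attempts=3):
    """Try each strategy in turn, logging every attempt for auditability."""
    log = []
    for attempt, strategy in enumerate(strategies[:max_attempts], start=1):
        try:
            result = strategy(task)
            log.append((attempt, strategy.__name__, "ok"))
            return result, log
        except Exception as exc:  # failure detection
            log.append((attempt, strategy.__name__, f"failed: {exc}"))
    raise RuntimeError(f"All strategies failed for task {task!r}: {log}")

# Illustrative strategies: the preferred tool fails, the fallback succeeds.
def call_tool(task):
    raise TimeoutError("tool unavailable")

def fallback_llm(task):
    return f"answer for {task}"

result, audit_trail = run_with_self_healing(
    "summarise report", [call_tool, fallback_llm]
)
```

In a real agentic system the strategies would be tool calls or model invocations, and the audit trail would feed human-in-the-loop supervision and guardrails; the control flow, however, is essentially this loop.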

Towards Trustworthy, Explainable, Verifiable, Fair and Sovereign AI

We aim to strengthen trust in AI by advancing Explainable AI (XAI) alongside the broader principles of Trustworthy AI, Verifiable AI and Sovereign AI. As AI becomes increasingly embedded in critical business and societal processes, organizations require more than powerful models: they need AI systems that are transparent, auditable, compliant, aligned with human and legal expectations and designed to reduce harmful bias in AI-driven outcomes.

Our research ambition focuses on improving explainability across diverse data domains and models, including traditional machine learning over tabular data, time-series forecasting, computer vision and, most challengingly, Generative AI systems. We want to investigate methods for traceability, auditability, observability, data provenance, and bias detection and mitigation, to better understand how AI systems arrive at their outputs and how risks such as hallucinations, hidden failure modes, unfair outcomes and data quality issues can be mitigated. In addition, we explore sovereign AI architectures and deployment models that support privacy, security, regulatory compliance and regional control over data and models; locally hosted (offline) Generative AI models are one example.
By combining explainability with verifiability and sovereignty, we ultimately aim to help organizations and practitioners adopt AI that is not only effective, but also transparent, justifiable, responsible and fair.
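One widely used, model-agnostic explainability technique in this space is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. The sketch below uses a toy rule-based model and hand-made data purely for illustration; real analyses would use a trained model and a proper evaluation set:

```python
import random

# Sketch of permutation importance, a model-agnostic explainability
# technique: a feature is important if shuffling its values degrades
# the model's accuracy. Toy model and data are illustrative only.

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Average accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        column = [x[feature] for x in X]
        rng.shuffle(column)                     # break the feature/target link
        X_perm = [list(x) for x in X]
        for row, value in zip(X_perm, column):
            row[feature] = value
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# Toy model that only looks at feature 0; feature 1 is irrelevant.
model = lambda x: int(x[0] > 0.5)
X = [[0.1, 9], [0.9, 1], [0.2, 7], [0.8, 3]]
y = [0, 1, 0, 1]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
```

Shuffling the decisive feature hurts accuracy while shuffling the irrelevant one does not, which is exactly the kind of evidence about "how a model arrives at its outputs" that this ambition is after.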

Green AI: Balancing Performance with Environmental Sustainability

There is growing concern over the carbon footprint associated with training and using complex AI models (particularly Generative AI models), which consume vast amounts of computational resources and energy in their quest for accuracy and sophistication. Our ambition is to contribute to the field of Green AI by researching how AI systems can be high-performing while minimizing computational cost, energy usage and hence environmental impact. As the field moves beyond the assumption that larger models are always better, efficiency has become a central indicator of innovation. We therefore see strong potential in approaches that make AI smaller, faster, more affordable and more sustainable in real-world use.

This research ambition focuses on topics such as Small Language Models (SLMs), model compression, quantization, edge AI and energy-aware AI engineering. We want to investigate how to balance model quality, inference latency, operational cost and carbon footprint when designing and deploying complex AI solutions. We are also interested in how lightweight and specialized models can deliver domain-specific intelligence on constrained infrastructure, including on-device and edge environments.
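To make the quantization idea concrete, here is a minimal sketch of symmetric int8 weight quantization: float weights are mapped to 8-bit integers plus a single scale factor, shrinking storage roughly fourfold at the cost of a small, bounded rounding error. This is a simplified illustration of the principle, not the scheme used by any particular framework:

```python
# Sketch of symmetric int8 quantization: store 8-bit integers plus one
# float scale instead of full-precision weights (illustrative only).

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero case
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.0, -0.07]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The reconstruction error is bounded by half the scale per weight.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Trading this small per-weight error for a fourfold reduction in memory and bandwidth is one of the levers for balancing model quality against operational cost and energy usage.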

We look forward to an AI landscape in which sustainability and efficiency are treated as core design principles.

Are you interested in working in this area?

Don’t hesitate to reach out! Contact Lucia Conde Moreno, Head of Artificial Intelligence Research. Or apply directly to one of our assignments.

More about Lucia Conde Moreno

Lucia Conde Moreno is Head of the AI Research Center at Info Support. Alongside this role, she works as a consultant software engineer specializing in data and AI applications. Her colleagues know her as a jack of all trades, as she has worked in roles ranging from .NET and Java developer to data scientist and machine learning engineer.

She has worked for various national and international clients in diverse fields such as finance, healthcare, energy and education. She is part of the AI Champions chapter, which promotes AI-augmented engineering tools, and she is responsible for supervising internal research in subfields of AI such as explainability and computer vision.

She speaks regularly at international conferences, particularly about trustworthy AI. She holds an MSc in Computer Science and a BSc in Telecommunications Engineering.

Artificial Intelligence Publications