👋 Welcome to the GitHub repository of the Explainable Artificial Intelligence (XAI) Research Group at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University. Our mission is to enhance the transparency and interpretability of AI and Evolutionary Computing (EC) systems, fostering trust and understanding in AI technologies.
Led by Dr. Niki van Stein, our team focuses on developing methods that elucidate AI decision-making processes. Our interdisciplinary research spans various scientific domains and industry applications, including:
- Explainable Predictive Maintenance: Creating explainable models to anticipate equipment failures and optimize maintenance schedules.
- Heuristic Optimization Analysis: Investigating the behavior of optimization algorithms to improve their performance and transparency.
- Novel XAI Methods: Developing innovative techniques to interpret complex AI systems.
- Large Language Model (LLM)-Driven EC Methods: Leveraging LLMs for the automatic construction of optimization algorithms.
We collaborate with experts in machine learning, optimization, and domain-specific fields to build effective and user-friendly explainable systems.
Our current projects include:
- AI for Oversight: Developing AI systems that assist in oversight functions across various sectors.
- XAIPre: Focusing on explainable AI for predictive maintenance applications.
- Complex Lens Design Optimization: Enhancing the design of complex lens systems through optimization techniques.
We welcome collaboration opportunities and inquiries. Please reach out to us through our website or connect with Dr. Niki van Stein directly.
Thank you for your interest in our work.