Trustworthy Artificial Intelligence and Machine Learning

Recent Submissions

  • Item
    Towards Trustworthy AI: Evaluating SHAP and LIME for Facial Emotion Recognition
    (2025-01-07) Lorch, Selina; Gebele, Jens; Brune, Philipp
    Explainable Artificial Intelligence (XAI) plays a crucial role in enhancing the transparency and interpretability of Machine Learning (ML) models, especially in sensitive domains like Facial Emotion Recognition (FER). This paper evaluates the effectiveness of the model-agnostic methods SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations) in explaining ML decision-making for FER. Leveraging two established facial emotion databases, FER 2013 and RAF-DB, our research identifies the key facial features driving ML model predictions. Our results indicate that SHAP offers more consistent and reliable visualizations than LIME, effectively emphasizing critical regions such as the mouth, eyes, and cheeks, which align with the facial Action Units defined by the Facial Action Coding System (FACS). This alignment enhances model interpretability, demonstrating how XAI can reconcile accuracy with transparency to foster the development of trustworthy AI systems for FER. Our study also shows that in complex domains like FER, XAI methods alone are insufficient: expert interpretation is crucial for applying insights from XAI visualizations, underscoring the need for interdisciplinary research in such domains. (See the SHAP/LIME sketch after this list.)
  • Item
    Automated Machine Learning in Research – A Literature Review
    (2025-01-07) Haberl, Armin; Thalmann, Stefan
    Machine learning (ML) has become increasingly popular among researchers and is used to analyze large and complex data sets to gain novel insights in various domains. This trend is further boosted by the introduction of automated machine learning (autoML), which empowers researchers without extensive data science or ML expertise to use ML methods in their research. Several studies focus on the use of traditional ML in research and have identified reproducibility and ethical issues as major challenges. Despite this significant uptake, however, how researchers actually use autoML remains largely unexamined. This literature review aims to close this gap by investigating 49 papers on the opportunities and challenges of autoML in research. As a result, we identify five challenges and three opportunities associated with autoML in research. Finally, we propose a research agenda with five major action points for future research. (See the autoML workflow sketch after this list.)
  • Item
    varMax: Uncertainty and Novelty Management in Deep Neural Networks
    (2025-01-07) Broggi, Alexandre; Baye, Gaspard; Silva, Priscila; Costagliola, Nicholas; Bastian, Nathaniel; Fiondella, Lance; Kul, Gokhan
    Traditional Deep Neural Networks often struggle with new or unfamiliar data patterns because they operate on a closed-set assumption. This challenge arises from inherent limitations in the model architecture, such as the softmax function commonly used for classification, which tends to be overconfident and inaccurate when faced with novel inputs. Prior studies have highlighted the need for open-set recognition (OSR) techniques to differentiate between known and unknown data points, but existing approaches often exhibit a bias toward flagging inputs as unknown. To address this issue, we introduce a novel OSR technique called VarMax, designed to maintain a balanced approach. VarMax leverages the variance in model predictions to discern between known and unknown inputs: by classifying ambiguous samples based on prediction variance, it detects out-of-distribution samples and thereby enhances classification accuracy and reliability. Our experiments demonstrate that VarMax matches and exceeds the performance of existing methods in identifying unknown data points while also improving the model's confidence and robustness in distinguishing between known and unknown inputs. (See the variance-score sketch after this list.)
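
The following minimal sketch illustrates how SHAP and LIME are typically applied to an image classifier, as in the first item above. It is not the authors' code: the tiny Keras CNN, the 48x48 input size, and the seven-class emotion setup are placeholder assumptions, since the abstract does not describe the actual models or preprocessing.

```python
# Illustrative only (not the paper's code). A placeholder CNN stands in
# for a trained FER model; inputs are random dummy images.
import numpy as np
import tensorflow as tf
import shap
from lime import lime_image

NUM_EMOTIONS = 7  # assumption: seven basic-emotion classes as in FER 2013

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

images = np.random.rand(10, 48, 48, 3).astype(np.float32)  # dummy faces

# SHAP: per-pixel attributions for each class, computed against a small
# background (reference) set of images.
shap_explainer = shap.GradientExplainer(model, images[:5])
shap_values = shap_explainer.shap_values(images[5:6])

# LIME: perturb superpixels of one image, fit a local surrogate model,
# then extract a mask over the most influential regions.
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    images[5].astype(np.float64),
    lambda x: model.predict(x.astype(np.float32), verbose=0),
    top_labels=1,
    num_samples=200,
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

In a FER study, the resulting attribution maps and masks would then be compared against FACS Action Unit regions such as the mouth and eyes, which is exactly the expert-interpretation step the abstract argues XAI methods alone cannot replace.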
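For the second item, this sketch shows the kind of low-code autoML workflow the review surveys. The open-source TPOT library, the digits dataset, and the search budget are illustrative choices of ours; the paper is a literature review and prescribes no particular tool.

```python
# Illustrative only: a typical autoML run with TPOT (our choice of tool;
# the review does not mandate any specific library).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# The search over preprocessing steps and models is fully automated;
# the researcher only sets a budget (generations, population size).
automl = TPOTClassifier(generations=5, population_size=20, random_state=42)
automl.fit(X_train, y_train)
print("held-out accuracy:", automl.score(X_test, y_test))

# Exporting the winning pipeline as plain scikit-learn code supports
# reproducibility, one of the challenges the review highlights.
automl.export("best_pipeline.py")
```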
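For the third item, here is a hedged sketch of a variance-based open-set decision rule in the spirit of the VarMax abstract. Taking the variance over a sample's logit vector and rejecting low-variance (flat, ambiguous) predictions is our reading of the abstract; the paper's exact scoring rule and thresholding may differ.

```python
# A sketch of a variance-based open-set rule inspired by the VarMax
# abstract; the exact formulation is assumed, not taken from the paper.
import numpy as np

def variance_score(logits: np.ndarray) -> np.ndarray:
    """Variance of each sample's logit vector. A nearly flat vector
    (low variance) means no strong preference for any known class."""
    return logits.var(axis=1)

def classify_open_set(logits: np.ndarray, threshold: float) -> np.ndarray:
    """Predict the argmax class, or -1 ('unknown') when the logit
    variance falls below the threshold."""
    preds = logits.argmax(axis=1)
    return np.where(variance_score(logits) < threshold, -1, preds)

# Toy usage: a peaked (confident) sample vs. a flat (ambiguous) one.
logits = np.array([
    [8.0, 0.5, 0.3, 0.2],   # peaked -> kept as known class 0
    [1.1, 1.0, 0.9, 1.0],   # nearly flat -> flagged as unknown
])
print(classify_open_set(logits, threshold=1.0))  # [ 0 -1]
```

This complements, rather than reproduces, the softmax-confidence scores that the abstract notes tend to be overconfident on novel inputs.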