techhub.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A hub primarily for passionate technologists, but everyone is welcome.


#interpretability


🧪 The Knowledge Graphs for Responsible AI Workshop is now underway at #ESWC2025!
📍 Room 7 – Nautilus Floor 0

The workshop explores how Knowledge Graphs (KGs) can promote the principles of Responsible AI (fairness, transparency, accountability, and inclusivity) by enhancing the interpretability, trustworthiness, and ethical grounding of AI systems. 📊🤖

Beyond the Black Box: Interpretability of LLMs in Finance

arxiv.org/abs/2505.24650

arXiv.org · Beyond the Black Box: Interpretability of LLMs in Finance

Large Language Models (LLMs) exhibit remarkable capabilities across a spectrum of tasks in financial services, including report generation, chatbots, sentiment analysis, regulatory compliance, investment advisory, financial knowledge retrieval, and summarization. However, their intrinsic complexity and lack of transparency pose significant challenges, especially in the highly regulated financial sector, where interpretability, fairness, and accountability are critical.

As far as we are aware, this paper presents the first application in the finance domain of understanding and utilizing the inner workings of LLMs through mechanistic interpretability, addressing the pressing need for transparency and control in AI systems. Mechanistic interpretability is the most intuitive and transparent way to understand LLM behavior by reverse-engineering their internal workings. By dissecting the activations and circuits within these models, it provides insights into how specific features or components influence predictions, making it possible not only to observe but also to modify model behavior.

In this paper, we explore the theoretical aspects of mechanistic interpretability and demonstrate its practical relevance through a range of financial use cases and experiments, including applications in trading strategies, sentiment analysis, bias, and hallucination detection. While not yet widely adopted, mechanistic interpretability is expected to become increasingly vital as adoption of LLMs increases. Advanced interpretability tools can ensure AI systems remain ethical, transparent, and aligned with evolving financial regulations. In this paper, we have put special emphasis on how these techniques can help unlock interpretability requirements for regulatory and compliance purposes, addressing both current needs and anticipating future expectations from financial regulators globally.
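To make the abstract's two moves concrete (reading internal activations, then intervening on them to change behavior), here is a minimal PyTorch sketch. The tiny stand-in model and the random steering direction are illustrative assumptions, not the paper's setup:

```python
# Sketch of (1) observing activations via a forward hook and
# (2) intervening on them to steer behavior. TinyLM is an
# illustrative stand-in for an LLM, not the paper's model.

import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyLM(nn.Module):
    """Embed -> two hidden blocks -> logits."""
    def __init__(self, vocab=100, d=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.block1 = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        self.head = nn.Linear(d, vocab)

    def forward(self, tokens):
        x = self.embed(tokens)
        x = self.block1(x)
        x = self.block2(x)
        return self.head(x)

model = TinyLM()
tokens = torch.randint(0, 100, (1, 8))  # a dummy "prompt"

# (1) Observe: capture block1's output with a forward hook.
cache = {}
def save_activation(module, inputs, output):
    cache["block1"] = output.detach()

handle = model.block1.register_forward_hook(save_activation)
baseline_logits = model(tokens)
handle.remove()
print("cached activation shape:", cache["block1"].shape)

# (2) Intervene: add a feature direction to block1's output
# (random here, purely for illustration) and watch the logits move.
direction = torch.randn(32)
def steer(module, inputs, output):
    return output + 3.0 * direction  # scale sets intervention strength

handle = model.block1.register_forward_hook(steer)
steered_logits = model(tokens)
handle.remove()

print("max logit shift:", (steered_logits - baseline_logits).abs().max().item())
```

In practice the direction would come from a probe, a dictionary feature, or a circuit analysis rather than torch.randn, but the hook mechanics for observing and modifying internal computation are the same.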

Are LMs more than their behavior? 🤔

Join our Conference on Language Modeling (COLM) workshop and explore the interplay between what LMs answer and what happens internally ✨

See you in Montréal 🍁

CfP: shorturl.at/sBomu
Page: shorturl.at/FT3fX
Reviewer Nomination: shorturl.at/Jg1BP

Unlock the Secrets of AI Learning!

Ever wondered how generative AI, the powerhouse behind stunning images and sophisticated text, truly learns? Park et al.'s study, "Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space," offers a new perspective. Forget black boxes: this research unveils a "concept space" where AI learning becomes a visible journey.

By casting concepts into a geometric space, the authors show how AI models learn step by step, laying bare the order and timing of what they acquire. See the crucial role played by the "concept signal" in predicting what a model will learn first, and note the fascinating "trajectory turns" that reveal the sudden "aha!" moments of emergent abilities.

This is not a theoretical abstraction; the framework has real-world implications:

- Supercharge AI training: optimise training data to speed up learning and improve efficiency.
- Demystify new behaviours: understand, and even manage, unforeseen strengths of state-of-the-art AI.
- Debug at scale: gain unprecedented insight into a model's knowledge state to identify and fix faults.
- Future-proof AI: the framework is model-agnostic, priming our understanding of learning in other AI systems.

This study is a must-read for anyone who cares about the future of AI, from scientists and engineers to tech enthusiasts and business executives. It's not only what AI can accomplish, but how it comes to do so.

Interested in immersing yourself in the captivating universe of AI learning? Click here to read the complete article and discover the secrets of the concept space!

#AI #MachineLearning #GenerativeAI #DeepLearning #Research #Innovation #ConceptSpace #EmergentCapabilities #AIDevelopment #Tech #ArtificialIntelligence #DataScience #FutureofAI #Interpretability
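The concept-space framing is easier to grasp with a toy. Below is a minimal sketch, an illustrative setup rather than the authors' experiments: two binary concepts carry different signal strength, and tracking per-concept accuracy during training traces a trajectory through a 2D concept space.

```python
# Toy illustration (not the paper's setup): two binary "concepts"
# with different signal strength. The sequence of per-concept
# accuracies over training is a path through a 2D "concept space".

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: one label per concept; feature scale plays the
# role of the "concept signal".
n = 2000
c0 = torch.randint(0, 2, (n,)).float()   # concept 0: strong signal
c1 = torch.randint(0, 2, (n,)).float()   # concept 1: weak signal
X = torch.stack([(2 * c0 - 1) * 2.0,     # scale 2.0 -> easy to pick up
                 (2 * c1 - 1) * 0.2],    # scale 0.2 -> hard to pick up
                dim=1) + 0.3 * torch.randn(n, 2)
targets = torch.stack([c0, c1], dim=1)

model = nn.Linear(2, 2)                  # one logit per concept
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

trajectory = []
for step in range(200):
    opt.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, targets)
    loss.backward()
    opt.step()
    with torch.no_grad():
        preds = (logits > 0).float()
        acc = (preds == targets).float().mean(dim=0)  # per-concept accuracy
        trajectory.append((acc[0].item(), acc[1].item()))

for step in (0, 50, 199):
    a0, a1 = trajectory[step]
    print(f"step {step:3d}: concept-0 acc {a0:.2f}, concept-1 acc {a1:.2f}")
```

In this toy, the trajectory of (concept-0 accuracy, concept-1 accuracy) points typically bends, rising along the strong-signal axis before the weak one: a crude analogue of the trajectory turns the post describes.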