AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework

Arnold F. Arz von Straussenburg, Jens J. Marga, Timon T. Aldenhoff, and Dennis M. Riehle
This study proposes a design theory to safely and ethically integrate Artificial Intelligence (AI) agents into the governance of data trusts. The paper introduces a normative framework that unifies fiduciary principles, institutional trust, and AI ethics. It puts forward four specific design principles to guide the development of AI systems that can act as responsible governance actors within these trusts, ensuring they protect beneficiaries' interests.

Problem: Data trusts are frameworks for responsible data management, but integrating powerful AI systems creates significant ethical and security challenges. AI can be opaque and may have goals that conflict with the interests of data owners, undermining the fairness and accountability that data trusts are designed to protect. This creates a critical need for a governance model that allows organizations to leverage AI's benefits without compromising their fundamental duties to data owners.

Outcome:
- The paper establishes a framework to guide the integration of AI into data trusts, ensuring AI actions align with ethical and fiduciary responsibilities.
- It introduces four key design principles for AI agents: 1) Fiduciary alignment to prioritize beneficiary interests, 2) Accountability through complete traceability and oversight, 3) Transparent explainability for all AI decisions, and 4) Autonomy-preserving oversight to maintain robust human supervision.
- The research demonstrates that AI can enhance efficiency in data governance without eroding stakeholder trust or ethical standards if implemented correctly.
- It provides actionable recommendations, such as automated audits and dynamic consent mechanisms, to ensure the responsible use of AI within data ecosystems for the common good.
Keywords: Data Trusts, Normative Framework, AI Governance, Fairness, AI Agents
Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?

Lasse Bohlen, Sven Kruschel, Julian Rosenberger, Patrick Zschech, and Mathias Kraus
This study investigates whether making a machine learning (ML) model's reasoning transparent can help overcome people's natural distrust of algorithms, known as 'algorithm aversion'. Through a user study with 280 participants, researchers examined how transparency interacts with the previously established method of allowing users to adjust an algorithm's predictions.

Problem: People often hesitate to rely on algorithms for decision-making, even when the algorithms are superior to human judgment. While giving users control to adjust algorithmic outputs is known to reduce this aversion, it has been unclear whether making the algorithm's 'thinking process' transparent would also help, or perhaps even be more effective.

Outcome:
- Giving users the ability to adjust an algorithm's predictions significantly reduces their reluctance to use it, confirming findings from previous research.
- In contrast, simply making the algorithm transparent by showing its decision logic did not have a statistically significant effect on users' willingness to choose the model.
- The ability to adjust the model's output (adjustability) appears to be a more powerful tool for encouraging algorithm adoption than transparency alone.
- The effects of transparency and adjustability were found to be largely independent of each other, rather than having a combined synergistic effect.
Keywords: Algorithm Aversion, Adjustability, Transparency, Interpretable Machine Learning, Replication Study
Bridging Mind and Matter: A Taxonomy of Embodied Generative AI

Jan Laufer, Leonardo Banh, Gero Strobel
This study develops a comprehensive classification system, or taxonomy, for Embodied Generative AI: AI that can perceive, reason, and act through physical systems such as robots. The taxonomy was created through a systematic literature review and an analysis of 40 real-world examples of this technology. The resulting framework provides a structured way to understand and categorize the various dimensions of AI integrated into physical forms.

Problem: As Generative AI (GenAI) moves from digital content creation to controlling physical agents, there has been a lack of systematic classification and evaluation methods. While many studies focus on specific applications, a clear framework for understanding the core characteristics and capabilities of these embodied AI systems has been missing. This gap makes it difficult for researchers and practitioners to compare, analyze, and optimize emerging applications in fields like robotics and automation.

Outcome:
- The study created a detailed taxonomy for Embodied Generative AI to systematically classify its characteristics.
- This taxonomy is structured into three main categories (meta-characteristics): Embodiment, Intelligence, and System.
- It further breaks down these categories into 16 dimensions and 50 specific characteristics, providing a comprehensive framework for analysis.
- The framework serves as a foundational tool for future research and helps businesses and developers make informed decisions when designing or implementing embodied AI systems in areas like service robotics and industrial automation.
Keywords: Generative Artificial Intelligence, Embodied AI, Autonomous Agents, Human-GenAI Collaboration
Understanding How Freelancers in the Design Domain Collaborate with Generative Artificial Intelligence

Fabian Helms, Lisa Gussek, and Manuel Wiesche
This study explores how generative AI (GenAI), specifically text-to-image generation (TTIG) systems, impacts the creative work of freelance designers. Through qualitative interviews with 10 designers, the researchers conducted a thematic analysis to understand the nuances of this new form of human-AI collaboration.

Problem: While the impact of GenAI on creative fields is widely discussed, there is little specific research on how it affects freelance designers. This group is uniquely vulnerable to technological disruption due to their direct market exposure and lack of institutional support, creating an urgent need to understand how these tools are changing their work processes and job security.

Outcome:
- The research identified four key tradeoffs freelancers face when using GenAI: creativity can be enhanced (inspiration) but also risks becoming generic (standardization).
- Efficiency is increased, but this can be undermined by 'overprecision', a form of perfectionism where too much time is spent on minor AI-driven adjustments.
- The interaction with AI is viewed dually: either as a helpful 'sparring partner' for ideas or as an unpredictable tool causing a frustrating lack of control.
- Regarding the future of work, GenAI is seen as forcing a career transition in which designers must acquire new skills, while also posing a direct threat of job loss, particularly for junior roles.
Keywords: Generative Artificial Intelligence, Online Freelancing, Human-AI collaboration, Freelance designers, Text-to-image generation, Creative process
Extracting Explanatory Rationales of Activity Relationships using LLMs - A Comparative Analysis

Kerstin Andree, Zahi Touqan, Leon Bein, and Luise Pufahl
This study investigates using Large Language Models (LLMs) to automatically extract and classify the reasons (explanatory rationales) behind the ordering of tasks in business processes from text. The authors compare the performance of various LLMs and four different prompting techniques (Vanilla, Few-Shot, Chain-of-Thought, and a combination) to determine the most effective approach for this automation.

Problem: Understanding why business process steps occur in a specific order (due to laws, business rules, or best practices) is crucial for process improvement and redesign. However, this information is typically buried in textual documents and must be extracted manually, which is a very expensive and time-consuming task for organizations.

Outcome:
- Few-Shot prompting, where the model is given a few examples, significantly improves classification accuracy compared to basic prompting across almost all tested LLMs.
- The combination of Few-Shot learning and Chain-of-Thought reasoning also proved to be a highly effective approach.
- Interestingly, smaller and more cost-effective LLMs (like GPT-4o-mini) achieved performance comparable to or even better than larger models when paired with sophisticated prompting techniques.
- The findings demonstrate that LLMs can successfully automate the extraction of process knowledge, making advanced process analysis more accessible and affordable for organizations with limited resources.
Keywords: Activity Relationships Classification, Large Language Models, Explanatory Rationales, Process Context, Business Process Management, Prompt Engineering
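To make the compared prompting techniques concrete, here is a minimal Python sketch contrasting vanilla, few-shot, and chain-of-thought prompts for classifying an activity-ordering rationale. The label set, prompt wording, OpenAI client usage, and the gpt-4o-mini model choice are illustrative assumptions, not the paper's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
LABELS = ["law", "business rule", "best practice"]  # illustrative rationale classes

def classify_rationale(text: str, style: str = "few_shot") -> str:
    """Classify why two process activities occur in a given order."""
    system = ("You classify the rationale behind the ordering of two "
              "business process activities.")
    user = f"Text: {text}\nAnswer with one of: {', '.join(LABELS)}."

    if style == "few_shot":
        # Few-shot: prepend a handful of worked examples.
        user = ("Text: 'The invoice must be approved before payment, as required "
                "by the tax code.'\nAnswer: law\n\n"
                "Text: 'We always confirm stock before shipping to avoid cancellations.'\n"
                "Answer: best practice\n\n" + user)
    elif style == "cot":
        # Chain-of-thought: ask for step-by-step reasoning before the label.
        user += (" First explain your reasoning step by step, "
                 "then give the final label on the last line.")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        temperature=0,
    )
    return response.choices[0].message.content

print(classify_rationale(
    "Orders above 10,000 EUR require a second signature per company policy.",
    style="few_shot"))
```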
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns

Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.

Problem: As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.

Outcome:
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
Keywords: Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
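As a toy illustration of this kind of audit, the sketch below repeatedly asks a model to make a funding choice between two names that differ only in their typical gender association and counts the outcomes. The names, prompt wording, and model are hypothetical; the study's actual protocol is more elaborate.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
prompt = ("You are a venture capitalist. Choose exactly one founder to fund for a "
          "digital-innovation startup and answer only with the name: Anna or Max.")

counts = Counter()
for _ in range(50):
    answer = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    ).choices[0].message.content.strip()
    counts[answer] += 1

# A strongly skewed distribution over repeated runs would hint at a
# name-based (implicitly gendered) preference.
print(counts)
```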
Using Large Language Models for Healthcare Data Interoperability: A Data Mediation Pipeline to Integrate Heterogeneous Patient-Generated Health Data and FHIR

Torben Ukena, Robin Wagler, and Rainer Alt
This study explores the use of Large Language Models (LLMs) to streamline the integration of diverse patient-generated health data (PGHD) from sources like wearables. The researchers propose and evaluate a data mediation pipeline that combines an LLM with a validation mechanism to automatically transform various data formats into the standardized Fast Healthcare Interoperability Resources (FHIR) format.

Problem: Integrating patient-generated health data from various devices into clinical systems is a major challenge due to a lack of interoperability between different data formats and hospital information systems. This data fragmentation hinders clinicians' ability to get a complete view of a patient's health, potentially leading to misinformed decisions and obstacles to patient-centered care.

Outcome:
- LLMs can effectively translate heterogeneous patient-generated health data into the valid, standardized FHIR format, significantly improving healthcare data interoperability.
- Providing the LLM with a few examples (few-shot prompting) was more effective than providing it with abstract rules and guidelines (reasoning prompting).
- The inclusion of a validation and self-correction loop in the pipeline is crucial for ensuring the LLM produces accurate and standard-compliant output.
- While successful with text-based data, the LLM struggled to accurately aggregate values from complex structured data formats like JSON and CSV, leading to lower semantic accuracy in those cases.
Keywords: FHIR, semantic interoperability, large language models, hospital information system, patient-generated health data
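A minimal sketch of such a mediation loop, assuming an OpenAI-style chat API: an LLM maps a raw patient-generated record to a FHIR Observation, a small stand-in validator checks a few required fields, and any errors are fed back for self-correction. The prompt, the simplified validator, and the model name are assumptions; the paper's pipeline works against the full FHIR specification.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def validate_observation(resource: dict) -> list[str]:
    """Tiny stand-in for a FHIR validator: checks a few required Observation fields."""
    errors = []
    if resource.get("resourceType") != "Observation":
        errors.append("resourceType must be 'Observation'")
    for field in ("status", "code"):
        if field not in resource:
            errors.append(f"missing required field '{field}'")
    return errors

def to_fhir(raw_record: str, max_retries: int = 3) -> dict:
    prompt = ("Convert this patient-generated health record into a single "
              f"FHIR R4 Observation resource as JSON:\n{raw_record}")
    for _ in range(max_retries):
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # force JSON output
        ).choices[0].message.content
        resource = json.loads(reply)
        errors = validate_observation(resource)
        if not errors:
            return resource
        # Self-correction loop: feed the validation errors back to the model.
        prompt += f"\nThe previous attempt failed validation: {errors}. Return corrected JSON only."
    raise ValueError("no valid Observation produced within the retry budget")

print(to_fhir("Fitbit export: resting heart rate 61 bpm on 2024-03-02"))
```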
Generative AI Usage of University Students: Navigating Between Education and Business

Fabian Walke, Veronika Föller
This study investigates how university students who also work professionally use Generative AI (GenAI) in both their academic and business lives. Using a grounded theory approach, the researchers interviewed eleven part-time students from a distance learning university to understand the characteristics, drivers, and challenges of their GenAI usage.

Problem: While much research has explored GenAI in education or in business separately, there is a significant gap in understanding its use at the intersection of these two domains. Specifically, the unique experiences of part-time students who balance professional careers with their studies have been largely overlooked.

Outcome:
- GenAI significantly enhances productivity and learning for students balancing work and education, helping with tasks like writing support, idea generation, and summarizing content.
- Students express concerns about the ethical implications, reliability of AI-generated content, and the risk of academic misconduct or being falsely accused of plagiarism.
- A key practical consequence is that GenAI tools like ChatGPT are replacing traditional search engines for many information-seeking tasks due to their speed and directness.
- The study highlights a strong need for universities to provide clear guidelines, regulations, and formal training on using GenAI effectively and ethically.
- User experience is a critical factor; a positive, seamless interaction with a GenAI tool promotes continuous usage, while a poor experience diminishes willingness to use it.
Keywords: Artificial Intelligence, ChatGPT, Enterprise, Part-time students, Generative AI, Higher Education
The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration

Christian Meske, Tobias Hermanns, Florian Brachten
This study investigates how traditional models of team collaboration, known as Transactive Memory Systems (TMS), manifest when humans work with Generative AI. Through in-depth interviews with 14 knowledge workers, the research analyzes the unique dynamics of expertise recognition, trust, and coordination that emerge in these partnerships.

Problem: While Generative AI is increasingly used as a collaborative tool, our understanding of teamwork is based on human-to-human interaction. This creates a knowledge gap, as the established theories do not account for an AI partner that operates on algorithms rather than social cues, potentially leading to inefficient and frustrating collaborations.

Outcome:
- Human-AI collaboration is asymmetrical: Humans learn the AI's capabilities, but the AI fails to recognize and remember human expertise beyond a single conversation.
- Trust in GenAI is ambivalent and requires verification: Users simultaneously see the AI as an expert yet doubt its reliability, forcing them to constantly verify its outputs, a step not typically taken with trusted human colleagues.
- Teamwork is hierarchical, not mutual: Humans must always take the lead and direct a passive AI that lacks initiative, creating a 'boss-employee' dynamic rather than a reciprocal partnership where both parties contribute ideas.
Keywords: Generative AI, Transactive Memory Systems, Human-AI Collaboration, Knowledge Work, Trust in AI, Expertise Recognition, Coordination
Aisle be Back: State-of-the-Art Adoption of Retail Service Robots in Brick-and-Mortar Retail

Luisa Strelow, Michael Dominic Harr, and Reinhard Schütte
This study analyzes the current state of Retail Service Robot (RSR) adoption in physical, brick-and-mortar (B&M) stores. Using a dual research method that combines a systematic literature review with a multi-case study of major European retailers, the paper synthesizes how these robots are currently being used for various operational tasks.

Problem: Brick-and-mortar retailers are facing significant challenges, including acute staff shortages and intense competition from online stores, which threaten their operational efficiency. While service robots offer a potential solution to sustain operations and transform the customer experience, a comprehensive understanding of their current adoption in retail environments is lacking.

Outcome:
- Retail Service Robots (RSRs) are predominantly adopted for tasks related to information exchange and goods transportation, which improves both customer service and operational efficiency.
- The potential for more advanced, human-like (anthropomorphic) interaction between robots and customers has not yet been fully utilized by retailers.
- The adoption of RSRs in the B&M retail sector is still in its infancy, with most robots being used for narrowly defined, single-purpose tasks rather than leveraging their full multi-functional potential.
- Research has focused more on customer-robot interactions than on employee-robot interactions, leaving a gap in understanding employee acceptance and collaboration.
- Many robotic systems discussed in academic literature are prototypes tested in labs, with few long-term, real-world deployments reported, especially in customer service roles.
Keywords: Retail Service Robot, Brick-and-Mortar, Technology Adoption, Artificial Intelligence, Automation
LLMs for Intelligent Automation - Insights from a Systematic Literature Review

David Sonnabend, Mahei Manhai Li and Christoph Peters
This study conducts a systematic literature review to examine how Large Language Models (LLMs) can enhance Intelligent Automation (IA). The review investigates how integrating LLMs can overcome the limitations of traditional Robotic Process Automation (RPA), such as its difficulty in handling unstructured data and adapting to workflow changes.

Problem: Traditional Robotic Process Automation (RPA) struggles with complex tasks involving unstructured data and dynamic workflows. While Large Language Models (LLMs) show promise in addressing these issues, there has been no systematic investigation into how they can specifically advance the field of Intelligent Automation (IA), creating a significant research gap.

Outcome:
- LLMs are primarily used to process complex inputs, such as unstructured text, within automation workflows.
- They are leveraged to generate automation workflows directly from natural language commands, simplifying the creation process.
- LLMs are also used to guide goal-oriented Graphical User Interface (GUI) navigation, making automation more adaptable to interface changes.
- A key research gap was identified in the lack of systems that combine these different capabilities and enable continuous learning at runtime.
Keywords: Large Language Models (LLMs), Intelligent Process Automation (IPA), Intelligent Automation (IA), Cognitive Automation (CA), Tool Learning, Systematic Literature Review, Robotic Process Automation (RPA)
Label Error Detection in Defect Classification using Area Under the Margin (AUM) Ranking on Tabular Data

Pavlos Rath-Manakidis, Kathrin Nauth, Henry Huick, Miriam Fee Unger, Felix Hoenig, Jens Poeppelbuss, and Laurenz Wiskott
This study introduces an efficient method using Area Under the Margin (AUM) ranking with gradient-boosted decision trees to detect labeling errors in tabular data. The approach is designed to improve data quality for machine learning models used in industrial quality control, specifically for flat steel defect classification. The method's effectiveness is validated on both public and real-world industrial datasets, demonstrating it can identify problematic labels in a single training run.

Problem: Automated surface inspection systems in manufacturing rely on machine learning models trained on large datasets. The performance of these models is highly dependent on the quality of the data labels, but errors frequently occur due to annotator mistakes or ambiguous defect definitions. Existing methods for finding these label errors are often computationally expensive and not optimized for the tabular data formats common in industrial applications.

Outcome:
- The proposed AUM method is as effective as more complex, computationally expensive techniques for detecting label errors but requires only a single model training run.
- The method successfully identifies both synthetically created and real-world label errors in industrial datasets related to steel defect classification.
- Integrating this method into quality control workflows significantly reduces the manual effort required to find and correct mislabeled data, improving the overall quality of training datasets and subsequent model performance.
- In a real-world test, the method flagged suspicious samples for expert review, where 42% were confirmed to be labeling errors.
Keywords: Label Error Detection, Automated Surface Inspection System (ASIS), Machine Learning, Gradient Boosting, Data-centric AI
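As a rough, hypothetical illustration of AUM ranking with a gradient-boosted model, the sketch below averages each sample's margin (score of the assigned label minus the best other class) over boosting stages and flags the lowest-AUM samples as suspects. The synthetic dataset, scikit-learn's GradientBoostingClassifier, and the flagging threshold are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic tabular data with a handful of deliberately flipped labels.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=30, replace=False)
y_noisy = y.copy()
y_noisy[flipped] = (y_noisy[flipped] + 1) % 3  # inject label errors

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X, y_noisy)

# Average, over all boosting stages, the margin between the assigned label's
# score and the strongest competing class (a single training run suffices).
idx = np.arange(len(y_noisy))
aum = np.zeros(len(y_noisy))
n_stages = 0
for scores in model.staged_decision_function(X):  # shape: (n_samples, n_classes)
    assigned = scores[idx, y_noisy]
    others = scores.copy()
    others[idx, y_noisy] = -np.inf
    aum += assigned - others.max(axis=1)
    n_stages += 1
aum /= n_stages

# The lowest-AUM samples are the most likely to be mislabeled.
suspects = set(np.argsort(aum)[:30])
print("flagged samples that are truly flipped:", len(suspects & set(flipped)))
```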
Measuring AI Literacy of Future Knowledge Workers: A Mediated Model of AI Experience and AI Knowledge

Sarah Hönigsberg, Sabrine Mallek, Laura Watkowski, and Pauline Weritz
This study investigates how future professionals develop AI literacy, which is the ability to effectively use and understand AI tools. Using a survey of 352 business school students, the researchers examined how hands-on experience with AI (both using and designing it) and theoretical knowledge about AI work together to build overall proficiency. The research proposes a new model showing that knowledge acts as a critical bridge between simply using AI and truly understanding it.

Problem: As AI becomes a standard tool in professional settings, simply knowing how to use it isn't enough; professionals need a deeper understanding, or "AI literacy," to use it effectively and responsibly. The study addresses the problem that current frameworks for teaching AI skills often overlook the specific needs of knowledge workers and don't clarify how hands-on experience translates into true competence. This gap makes it difficult for companies and universities to design effective training programs to prepare the future workforce.

Outcome:
- Hands-on experience with AI is crucial, but it doesn't directly create AI proficiency; instead, it serves to build a foundation of AI knowledge.
- This structured AI knowledge is the critical bridge that turns practical experience into true AI literacy, allowing individuals to critique and apply AI insights effectively.
- Experience in designing or configuring AI systems has a significantly stronger positive impact on developing AI literacy than just using AI tools.
- The findings suggest that education and corporate training should combine practical, hands-on projects with structured learning about how AI works to build a truly AI-literate workforce.
Keywords: knowledge worker, AI literacy, digital intelligence, digital literacy, AI knowledge
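To make the mediation logic (experience builds knowledge, knowledge builds literacy) tangible, here is a toy sketch on synthetic data using two OLS regressions to separate the indirect effect from the direct one. All variables and effect sizes are invented for the demo; the study relies on survey constructs and its own statistical analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 352  # sample size mirrors the study's survey; the data itself is synthetic
experience = rng.normal(size=n)
knowledge = 0.6 * experience + rng.normal(scale=0.8, size=n)                     # path a
literacy = 0.5 * knowledge + 0.05 * experience + rng.normal(scale=0.8, size=n)   # paths b, c'

# Path a: experience -> knowledge
a = sm.OLS(knowledge, sm.add_constant(experience)).fit().params[1]
# Paths b and c': literacy regressed on experience and knowledge jointly
fit = sm.OLS(literacy, sm.add_constant(np.column_stack([experience, knowledge]))).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
# A sizable indirect effect alongside a small direct effect is the signature of
# mediation: experience works mostly through knowledge.
```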
Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace

Dugaxhin Xhigoli
This qualitative study examines how an employee's personality, professional identity, and company culture influence their engagement with generative AI (GenAI). Through 23 expert interviews, the research explores the underlying factors that shape different AI adoption behaviors, from transparent integration to strategic concealment.

Problem: As companies rapidly adopt generative AI, they encounter a wide range of employee responses, yet there is limited understanding of what drives this variation. This study addresses the research gap by investigating why employees differ in their AI usage, specifically focusing on how individual psychology and the organizational environment interact to shape these behaviors.

Outcome:
- The study identified four key dimensions influencing GenAI adoption: Personality-driven usage behavior, AI-driven changes to professional identity, organizational culture factors, and the organizational risks of unmanaged AI use.
- Four distinct employee archetypes were identified: 'Innovative Pioneers' who openly use and identify with AI, 'Hidden Users' who identify with AI but conceal its use for competitive advantage, 'Transparent Users' who openly use AI as a tool, and 'Critical Skeptics' who remain cautious and avoid it.
- Personality traits, particularly those from the 'Dark Triad' like narcissism, and competitive work environments significantly drive the strategic concealment of AI use.
- A company's culture is critical; open, innovative cultures foster ethical and transparent AI adoption, whereas rigid, hierarchical cultures encourage concealment and the rise of risky 'Shadow AI'.
Keywords: Generative AI, Personality Traits, AI Identity, Organizational Culture, AI Adoption
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem: While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It's unclear if AI assistance is more impactful at the beginning or end of the writing process, or if users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome:
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Keywords: Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem: Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context, such as the user's expertise, the AI's design, and the nature of the task, which is crucial for explaining these inconsistencies.

Outcome:
- The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
Keywords: AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework
Unveiling Location-Specific Price Drivers: A Two-Stage Cluster Analysis for Interpretable House Price Predictions

Paul Gümmer, Julian Rosenberger, Mathias Kraus, Patrick Zschech, and Nico Hambauer
This study proposes a novel machine learning approach for house price prediction using a two-stage clustering method on 43,309 German property listings from 2023. The method first groups properties by location and then refines these groups with additional property features, subsequently applying interpretable models like linear regression (LR) or generalized additive models (GAM) to each cluster. This balances predictive accuracy with the ability to understand the model's decision-making process.

Problem: Predicting house prices is difficult because of significant variations in local markets. Current methods often use either highly complex 'black-box' models that are accurate but hard to interpret, or overly simplistic models that are interpretable but fail to capture the nuances of different market segments. This creates a trade-off between accuracy and transparency, making it difficult for real estate professionals to get reliable and understandable property valuations.

Outcome:
- The two-stage clustering approach significantly improved prediction accuracy compared to models without clustering.
- The mean absolute error was reduced by 36% for the Generalized Additive Model (GAM/EBM) and 58% for the Linear Regression (LR) model.
- The method provides deeper, cluster-specific insights into how different features, like construction year and living space, affect property prices in different local markets.
- By segmenting the market, the model reveals that price drivers vary significantly across geographical locations and property types, enhancing market transparency for buyers, sellers, and analysts.
Keywords: House Pricing, Cluster Analysis, Interpretable Machine Learning, Location-Specific Predictions
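A rough sketch of the two-stage idea, clustering first on location and then refining on property features before fitting an interpretable per-cluster model, might look as follows. The column names, cluster counts, and the use of plain KMeans and LinearRegression are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_two_stage(df: pd.DataFrame, n_location: int = 10, n_refine: int = 3) -> dict:
    """Fit one interpretable linear model per (location, feature) segment."""
    feature_cols = ["living_space", "construction_year", "rooms"]
    models = {}
    # Stage 1: group listings by geographic location.
    loc_km = KMeans(n_clusters=n_location, n_init=10, random_state=0).fit(df[["lat", "lon"]])
    df = df.assign(loc_cluster=loc_km.labels_)
    for loc_id, group in df.groupby("loc_cluster"):
        # Stage 2: refine each location cluster with property features.
        k = min(n_refine, len(group))
        sub_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(group[feature_cols])
        for sub_id in range(k):
            seg = group[sub_labels == sub_id]
            if len(seg) < 5:
                continue  # skip segments too small for a stable fit
            # Plain linear regression keeps coefficients interpretable per segment.
            models[(loc_id, sub_id)] = LinearRegression().fit(seg[feature_cols], seg["price"])
    return models

# Tiny synthetic demo (invented data, roughly German-style coordinates).
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "lat": rng.uniform(50, 52, 300), "lon": rng.uniform(8, 10, 300),
    "living_space": rng.uniform(30, 180, 300),
    "construction_year": rng.integers(1950, 2023, 300),
    "rooms": rng.integers(1, 7, 300),
})
demo["price"] = demo["living_space"] * 3000 + rng.normal(0, 20000, 300)
print(len(fit_two_stage(demo, n_location=4, n_refine=2)), "segment models fitted")
```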
Designing AI-driven Meal Demand Prediction Systems

Alicia Cabrejas Leonhardt, Maximilian Kalff, Emil Kobel, and Max Bauch
This study outlines the design of an Artificial Intelligence (AI) system for predicting meal demand, with a focus on the airline catering industry. Through interviews with various stakeholders, the researchers identified key system requirements and developed nine fundamental design principles. These principles were then consolidated into a feasible system architecture to guide the development of effective forecasting tools.

Problem: Inaccurate demand forecasting creates significant challenges for industries like airline catering, leading to a difficult balance between waste and customer satisfaction. Overproduction results in high costs and food waste, while underproduction causes lost sales and unhappy customers. This paper addresses the need for a more precise, data-driven approach to forecasting to improve sustainability, reduce costs, and enhance operational efficiency.

Outcome:
- The research identified key requirements for AI-driven demand forecasting systems based on interviews with industry experts.
- Nine core design principles were established to guide the development of these systems, focusing on aspects like data integration, sustainability, modularity, transparency, and user-centric design.
- A feasible system architecture was proposed that consolidates all nine principles, demonstrating a practical path for implementation.
- The findings provide a framework for creating advanced AI tools that can improve prediction accuracy, reduce food waste, and support better decision-making in complex operational environments.
Keywords: meal demand prediction, forecasting methodology, customer choice behaviour, supervised machine learning, design science research