Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Journal of the Association for Information Systems (2025)

Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.

Problem: With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.

Outcome:
- The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary.
- The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities.
- New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context.
- The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
Understanding the Ethics of Generative AI: Established and New Ethical Principles
Communications of the Association for Information Systems (2025)

Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.

Problem: The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.

Outcome:
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring.
Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer
Communications of the Association for Information Systems (2025)

Abhinav Shekhar, Rakesh Gupta, Sujeet Kumar Sharma
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.

Problem: Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.

Outcome:
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Digital Resilience in High-Tech SMEs: Exploring the Synergy of AI and IoT in Supply Chains
Communications of the Association for Information Systems (2025)

Adnan Khan, Syed Hussain Murtaza, Parisa Maroufkhani, Sultan Sikandar Mirza
This study investigates how digital resilience enhances the adoption of AI and Internet of Things (IoT) practices within the supply chains of high-tech small and medium-sized enterprises (SMEs). Using survey data from 293 Chinese high-tech SMEs, the research employs partial least squares structural equation modeling to analyze the impact of these technologies on sustainable supply chain performance.

Problem: In an era of increasing global uncertainty and supply chain disruptions, businesses, especially high-tech SMEs, struggle to maintain stability and performance. There is a need to understand how digital technologies can be leveraged not just for efficiency, but to build genuine resilience that allows firms to adapt to and recover from shocks while maintaining sustainability.

Outcome:
- Digital resilience is a crucial driver for the adoption of both IoT-oriented supply chain practices and AI-driven innovative practices.
- The implementation of IoT and AI practices, fostered by digital resilience, significantly improves sustainable supply chain performance.
- AI-driven practices were found to be particularly vital for resource optimization and predictive analytics, strongly influencing sustainability outcomes.
- The effectiveness of digital resilience in promoting IoT adoption is amplified in dynamic and unpredictable market environments.
Digital Resilience, Internet of Things-Oriented Supply Chain Management Practices, AI-Driven Innovative Practices, Supply Chain Dynamism, Sustainable Supply Chain Performance
Rethinking Healthcare Technology Adoption: The Critical Role of Visibility & Consumption Values
Communications of the Association for Information Systems (2025)

Sonali Dania, Yogesh Bhatt, Paula Danskin Englis
This study explores how the visibility of digital healthcare technologies influences a consumer's intention to adopt them, using the Theory of Consumption Value (TCV) as a framework. It investigates the roles of different values (e.g., functional, social, emotional) as mediators and examines how individual traits like openness-to-change and gender moderate this relationship. The research methodology involved collecting survey data from digital healthcare users and analyzing it with structural equation modeling.

Problem: Despite the rapid growth of the digital health market, user adoption rates vary significantly, and the factors driving these differences are not fully understood. Specifically, there is limited research on how consumption values and the visibility of a technology impact adoption, along with a poor understanding of how individual traits like openness to change or gender-specific behaviors influence these decisions.

Outcome:
- The visibility of digital healthcare applications significantly and positively influences a consumer's intention to adopt them.
- Visibility strongly shapes user perceptions, positively impacting the technology's functional, conditional, social, and emotional value; however, it did not significantly influence epistemic value (curiosity).
- The relationship between visibility and adoption is mediated by key factors: the technology's perceived usefulness, the user's perception of privacy, and their affinity for technology.
- A person's innate openness to change and their gender can moderate the effect of visibility; for instance, individuals who are already open to change are less influenced by a technology's visibility.
Adoption Intention, Healthcare Applications, Theory of Consumption Values, Values, Visibility
Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability
Communications of the Association for Information Systems (2025)

Claude Chammaa, Fatma Fourati-Jamoussi, Lucian Ceapraz, Valérie Leroux
This study investigates the behavioral, contextual, and economic factors that influence French farmers' adoption of innovative agricultural technologies. Using a mixed-methods approach that combines qualitative interviews and quantitative surveys, the research proposes and validates the French Farming Innovation Adoption (FFIA) model, an agricultural adaptation of the UTAUT2 model, to explain technology usage.

Problem: The agricultural sector is rapidly transforming with digital innovation, but the factors driving technology adoption among farmers, particularly in cost-sensitive and highly regulated environments like France, are not fully understood. Existing technology acceptance models often fail to capture the central role of economic viability, leaving a gap in explaining how sustainability goals and policy support translate into practical adoption.

Outcome:
- The most significant direct predictor of technology adoption is 'Price Value'; farmers prioritize innovations they perceive as economically beneficial and cost-effective.
- Traditional drivers like government subsidies (Facilitating Conditions), expected performance, and social influence do not directly impact technology use. Instead, their influence is indirect, mediated through the farmer's perception of the technology's price value.
- Perceived sustainability benefits alone do not significantly drive adoption. For farmers to invest, environmental advantages must be clearly linked to economic gains, such as reduced costs or increased yields.
- Economic appraisal is the critical filter through which farmers evaluate new technologies, making it the central consideration in their decision-making process.
Farmers 4.0, Technology Adoption, Sustainability, Agricultural Innovation, UTAUT2, Price Value, Artificial Intelligence
Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective
Communications of the Association for Information Systems (2025)

Pramod K. Patnaik, Kunal Rao, Gaurav Dixit
This study investigates the factors that enable the use of Generative AI (GenAI) tools in rural educational settings within developing countries. Using a mixed-method approach that combines in-depth interviews and the Grey DEMATEL decision-making method, the research identifies and analyzes these enablers through a socio-technical lens to understand their causal relationships.

Problem: Marginalized rural communities in developing countries face significant challenges in education, including a persistent digital divide that limits access to modern learning tools. This research addresses the gap in understanding how Generative AI can be practically leveraged to overcome these education-related challenges and improve learning quality in under-resourced regions.

Outcome:
- The study identified fifteen key enablers for using Generative AI in rural education, grouped into social and technical categories.
- 'Policy initiatives at the government level' was found to be the most critical enabler, directly influencing other key factors like GenAI training for teachers and students, community awareness, and school leadership commitment.
- Six novel enablers were uncovered through interviews, including affordable internet data, affordable telecommunication networks, and the provision of subsidized devices for lower-income groups.
- An empirical framework was developed to illustrate the causal relationships among the enablers, helping stakeholders prioritize interventions for effective GenAI adoption.
Generative AI, Rural, Education, Digital Divide, Interviews, Socio-technical Theory
Implementing AI into ERP Software
Communications of the Association for Information Systems (2025)

Siar Sarferaz
This study investigates how to systematically integrate Artificial Intelligence (AI) into complex Enterprise Resource Planning (ERP) systems. Through an analysis of real-world use cases, the author identifies key challenges and proposes a comprehensive DevOps (Development and Operations) framework to standardize and streamline the entire lifecycle of AI applications within an ERP environment.

Problem: While integrating AI into ERP software offers immense potential for automation and optimization, organizations lack a systematic approach to do so. This absence of a standardized framework leads to inconsistent, inefficient, and costly implementations, creating significant barriers to adopting AI capabilities at scale within enterprise systems.

Outcome:
- Identified 20 specific, recurring gaps in the development and operation of AI applications within ERP systems, including complex setup, heterogeneous development, and insufficient monitoring.
- Developed a comprehensive DevOps framework that standardizes the entire AI lifecycle into six stages: Create, Check, Configure, Train, Deploy, and Monitor.
- The proposed framework provides a systematic, self-service approach for business users to manage AI models, reducing the reliance on specialized technical teams and lowering the total cost of ownership.
- A quantitative evaluation across 10 real-world AI scenarios demonstrated that the framework reduced processing time by 27%, increased cost savings by 17%, and improved outcome quality by 15%.
Enterprise Resource Planning, Artificial Intelligence, DevOps, Software Integration, AI Development, AI Operations, Enterprise AI
Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law
International Conference on Wirtschaftsinformatik (2025)

Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.

Problem: While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.

Outcome:
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- A higher level of trust in the AI assistant directly leads to an increased intention among professionals to use the tool in their work.
Generative Artificial Intelligence, Human-GenAI Collaboration, Trust, GenAI Adoption
Towards the Acceptance of Virtual Reality Technology for Cyclists
International Conference on Wirtschaftsinformatik (2025)

Sophia Elsholz, Paul Neumeyer, and Rüdiger Zarnekow
This study investigates the factors that influence cyclists' willingness to adopt virtual reality (VR) for indoor training. Using a survey of 314 recreational and competitive cyclists, the research applies an extended Technology Acceptance Model (TAM) to determine what makes VR appealing for platforms like Zwift.

Problem: While digital indoor cycling platforms exist, they lack the full immersion that VR can offer. However, it is unclear whether cyclists would actually accept and use VR technology, as its potential in sports remains largely theoretical and the specific factors driving adoption in cycling are unknown.

Outcome:
- Perceived enjoyment is the single most important factor determining if a cyclist will adopt VR for training.
- Perceived usefulness, or the belief that VR will improve training performance, is also a strong predictor of acceptance.
- Surprisingly, the perceived ease of use of the VR technology did not significantly influence a cyclist's intention to use it.
- Social factors, such as the opinions of other athletes and trainers, along with a cyclist's general openness to new technology, positively contribute to their acceptance of VR.
- Both recreational and competitive cyclists showed similar levels of acceptance, indicating a broad potential market, but both groups are currently skeptical about VR's ability to improve performance.
Technology Acceptance, TAM, Cycling, Extended Reality, XR
Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective
International Conference on Wirtschaftsinformatik (2025)

Anna Gieß, Sofia Schöbel, and Frederik Möller
This study explores the complex challenges and advantages of integrating Generative Artificial Intelligence (GenAI) into knowledge-based work. Using socio-technical systems theory, the researchers conducted a systematic literature review and qualitative interviews with 18 knowledge workers to identify key points of conflict. The paper proposes solutions like human-in-the-loop models and robust AI governance policies to foster responsible and efficient GenAI usage.

Problem: As organizations rapidly adopt GenAI to boost productivity, they face significant tensions between efficiency, reliability, and data privacy. There is a need to understand these conflicting forces to develop strategies that maximize the benefits of GenAI while mitigating risks related to ethics, data protection, and over-reliance on the technology.

Outcome:
- Productivity-Reflection Tension: GenAI increases efficiency but can lead to blind reliance and reduced critical thinking on the content it generates.
- Availability-Reliability Contradiction: While GenAI offers constant access to information, its output is not always reliable, increasing the risk of misinformation.
- Efficiency-Traceability Dilemma: Content is produced quickly, but the lack of clear source references makes verification difficult in professional settings.
- Usefulness-Transparency Tension: The utility of GenAI is limited by a lack of transparency in how it generates outputs, which reduces user trust.
- Convenience-Data Protection Tension: GenAI simplifies tasks but creates significant concerns about the privacy and security of sensitive information.
Generative AI, Knowledge work, Tensions, Socio-technical systems theory
Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making
International Conference on Wirtschaftsinformatik (2025)

Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.

Problem: When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.

Outcome:
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance.
- This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all.
- Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology.
- The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making
Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways
International Conference on Wirtschaftsinformatik (2025)

Vincent Paffrath, Manuel Wlcek, and Felix Wortmann
This study investigates the adoption of Generative AI (GenAI) within industrial product companies by identifying key challenges and potential solutions. Based on expert interviews with industry leaders and technology providers, the research categorizes findings into technological, organizational, and environmental dimensions to bridge the gap between expectation and practical implementation.

Problem: While GenAI is transforming many industries, its adoption by industrial product companies is particularly difficult. Unlike software firms, these companies often lack deep digital expertise, are burdened by legacy systems, and must integrate new technologies into complex hardware and service environments, making it hard to realize GenAI's full potential.

Outcome:
- Technological challenges like AI model 'hallucinations' and inconsistent results are best managed through enterprise grounding (using company data to improve accuracy) and standardized testing procedures.
- Organizational hurdles include the difficulty of calculating ROI and managing unrealistic expectations. The study suggests focusing on simple, non-financial KPIs (like user adoption and time saved) and providing realistic employee training to demystify the technology.
- Environmental risks such as vendor lock-in and complex new regulations can be mitigated by creating model-agnostic systems that allow switching between providers and establishing standardized compliance frameworks for all AI use cases.
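The model-agnostic design mentioned above can be sketched as a thin provider interface that application code depends on, so a backend can be swapped without rewriting callers. This is an illustrative sketch, not the paper's implementation; the backend classes and the `summarize_incident` task are hypothetical.

```python
from typing import Protocol


class TextModel(Protocol):
    """Provider-agnostic interface: any backend that can complete
    a prompt satisfies it, so providers stay interchangeable."""

    def complete(self, prompt: str) -> str: ...


class HostedBackend:
    """Hypothetical adapter around a commercial provider's SDK."""

    def complete(self, prompt: str) -> str:
        # A real adapter would call the provider's API here.
        raise NotImplementedError


class LocalBackend:
    """Hypothetical adapter for a self-hosted model."""

    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt}]"


def summarize_incident(model: TextModel, report: str) -> str:
    # Application code depends only on the interface, so switching
    # providers is a configuration change, not a rewrite.
    return model.complete(f"Summarize this incident report: {report}")
```

Because every backend hides behind the same `complete` method, avoiding vendor lock-in reduces to shipping one extra adapter class per provider.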
GenAI, AI Adoption, Industrial Product Companies, AI in Manufacturing, Digital Transformation
AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams
International Conference on Wirtschaftsinformatik (2025)

Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.

Problem: While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.

Outcome:
- The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams.
- GenAI usage has a direct positive impact on overall team performance.
- The performance improvement is primarily driven by the team's improved ability to apply knowledge, rather than just the transfer of knowledge alone.
- The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management
Extracting Explanatory Rationales of Activity Relationships using LLMs - A Comparative Analysis
International Conference on Wirtschaftsinformatik (2025)

Kerstin Andree, Zahi Touqan, Leon Bein, and Luise Pufahl
This study investigates using Large Language Models (LLMs) to automatically extract and classify the reasons (explanatory rationales) behind the ordering of tasks in business processes from text. The authors compare the performance of various LLMs and four different prompting techniques (Vanilla, Few-Shot, Chain-of-Thought, and a combination) to determine the most effective approach for this automation.

Problem: Understanding why business process steps occur in a specific order (due to laws, business rules, or best practices) is crucial for process improvement and redesign. However, this information is typically buried in textual documents and must be extracted manually, which is a very expensive and time-consuming task for organizations.

Outcome:
- Few-Shot prompting, where the model is given a few examples, significantly improves classification accuracy compared to basic prompting across almost all tested LLMs.
- The combination of Few-Shot learning and Chain-of-Thought reasoning also proved to be a highly effective approach.
- Interestingly, smaller and more cost-effective LLMs (like GPT-4o-mini) achieved performance comparable to or even better than larger models when paired with sophisticated prompting techniques.
- The findings demonstrate that LLMs can successfully automate the extraction of process knowledge, making advanced process analysis more accessible and affordable for organizations with limited resources.
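The difference between the basic (vanilla) and few-shot styles compared here comes down to how the prompt is assembled. A minimal sketch, assuming a standard chat-message format; the category labels and example rationales are illustrative, not drawn from the paper's dataset.

```python
SYSTEM = (
    "Classify the rationale behind the ordering of two business process "
    "activities as one of: LAW, BUSINESS_RULE, BEST_PRACTICE."
)

# Hand-labeled illustrative examples, prepended for few-shot prompting.
EXAMPLES = [
    ("The invoice must be approved before payment, as required by the "
     "four-eyes principle in company policy.", "BUSINESS_RULE"),
    ("Customer identity is verified first because anti-money-laundering "
     "legislation mandates it.", "LAW"),
]


def vanilla_prompt(text: str) -> list[dict]:
    """Zero-shot: only the instruction and the input to classify."""
    return [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": text}]


def few_shot_prompt(text: str) -> list[dict]:
    """Few-shot: labeled examples appear as prior dialogue turns,
    showing the model the expected label format before the real input."""
    messages = [{"role": "system", "content": SYSTEM}]
    for example, label in EXAMPLES:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": text})
    return messages
```

Chain-of-Thought prompting would additionally ask the model to state its reasoning before the label; combining it with the examples above corresponds to the study's hybrid condition.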
Activity Relationships Classification, Large Language Models, Explanatory Rationales, Process Context, Business Process Management, Prompt Engineering
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns
International Conference on Wirtschaftsinformatik (2025)

Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.

Problem: As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.

Outcome:
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
Using Large Language Models for Healthcare Data Interoperability: A Data Mediation Pipeline to Integrate Heterogeneous Patient-Generated Health Data and FHIR
International Conference on Wirtschaftsinformatik (2025)

Torben Ukena, Robin Wagler, and Rainer Alt
This study explores the use of Large Language Models (LLMs) to streamline the integration of diverse patient-generated health data (PGHD) from sources like wearables. The researchers propose and evaluate a data mediation pipeline that combines an LLM with a validation mechanism to automatically transform various data formats into the standardized Fast Healthcare Interoperability Resources (FHIR) format.

Problem: Integrating patient-generated health data from various devices into clinical systems is a major challenge due to a lack of interoperability between different data formats and hospital information systems. This data fragmentation hinders clinicians' ability to get a complete view of a patient's health, potentially leading to misinformed decisions and obstacles to patient-centered care.

Outcome:
- LLMs can effectively translate heterogeneous patient-generated health data into the valid, standardized FHIR format, significantly improving healthcare data interoperability.
- Providing the LLM with a few examples (few-shot prompting) was more effective than providing it with abstract rules and guidelines (reasoning prompting).
- The inclusion of a validation and self-correction loop in the pipeline is crucial for ensuring the LLM produces accurate and standard-compliant output.
- While successful with text-based data, the LLM struggled to accurately aggregate values from complex structured data formats like JSON and CSV, leading to lower semantic accuracy in those cases.
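The validation and self-correction loop described above can be sketched as a retry cycle: transform, validate, and feed the validator's error message back into the next transformation attempt. This is a minimal sketch, not the paper's pipeline; the stub functions stand in for the LLM call and for a real FHIR conformance validator.

```python
import json
from typing import Callable, Optional


def mediate(raw: str,
            transform: Callable[[str, Optional[str]], str],
            validate: Callable[[str], Optional[str]],
            max_retries: int = 3) -> str:
    """Transform raw patient-generated data, re-prompting with the
    validator's error message until the output passes validation."""
    feedback = None
    for _ in range(max_retries):
        candidate = transform(raw, feedback)
        feedback = validate(candidate)
        if feedback is None:  # validator found no errors
            return candidate
    raise ValueError(f"no valid output after {max_retries} attempts: {feedback}")


def validate_stub(candidate: str) -> Optional[str]:
    """Stand-in validator: accept any JSON object with a resourceType."""
    try:
        doc = json.loads(candidate)
    except json.JSONDecodeError:
        return "output is not valid JSON"
    if "resourceType" not in doc:
        return "missing required field: resourceType"
    return None


def transform_stub(raw: str, feedback: Optional[str]) -> str:
    """Stand-in transform: fails on the first attempt, then
    'self-corrects' once validator feedback is available."""
    if feedback is None:
        return raw  # first attempt: pass the input through unchanged
    return json.dumps({"resourceType": "Observation", "valueString": raw})
```

In the evaluated pipeline the transform is an LLM prompted with the raw data (plus any validator feedback), and validation checks conformance against the FHIR specification rather than this toy rule.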
FHIR, semantic interoperability, large language models, hospital information system, patient-generated health data
Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry
International Conference on Wirtschaftsinformatik (2025)

First Author, Second Author, Third Author, and Fourth Author
This study investigates employee acceptance of metaverse technologies within the traditionally conservative paper and packaging industry. Using the Technology Acceptance Model 3, the research was conducted as a living lab experiment in a leading packaging company. The methodology combined qualitative content analysis with quantitative multiple regression modelling to assess the key factors influencing adoption.

Problem: While major technology companies are heavily investing in the metaverse for workplace applications, there is a significant research gap concerning employee acceptance of these immersive technologies. This is particularly relevant for traditionally non-digital industries, like paper and packaging, which are seeking to digitalize but face unique adoption barriers. This study addresses the lack of empirical data on how employees in such sectors perceive and accept metaverse tools for work and collaboration.

Outcome:
- Employees in the paper and packaging industry show a moderate but ambiguous acceptance of the metaverse, with an average score of 3.61 out of 5.
- The most significant factors driving acceptance are the perceived usefulness (PU) of the technology for their job and its perceived ease of use (PEU).
- Job relevance was found to be a key influencer of perceived usefulness, while an employee's confidence in their own computer skills (computer self-efficacy) was a key predictor for perceived ease of use.
- While employees recognized benefits like improved virtual collaboration, they also raised concerns about hardware limitations (e.g., headset weight, image clarity) and the technology's overall maturity compared to existing tools.
Metaverse, Technology Acceptance Model 3, Living lab, Paper and Packaging industry, Workplace