Journal of the Association for Information Systems (2025)
Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.
Problem
With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.
Outcome
- The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary.
- The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities.
- New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context (see the illustrative sketch below).
- The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
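The paper does not specify a formal decision rule for these delegation choices, but a minimal sketch can make the idea concrete. The task attributes, thresholds, and examples below are hypothetical illustrations of how task complexity, emotional context, and monitoring needs might steer a task toward the doctor, the AI, or both.

```python
# Purely illustrative: a toy rule for triadic delegation choices.
# Attribute names and thresholds are assumptions, not taken from the study.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: float          # 0 = routine .. 1 = highly complex
    emotional_context: float   # 0 = neutral .. 1 = highly sensitive
    continuous_monitoring: bool

def delegate(task: Task) -> str:
    """Suggest which agent(s) in the triad a task might be delegated to."""
    if task.emotional_context > 0.7:
        return "doctor"      # e.g., delivering a difficult diagnosis
    if task.continuous_monitoring and task.complexity < 0.4:
        return "AI"          # e.g., logging sensor data 24/7
    return "both"            # shared delegation, with the doctor reviewing AI output

for t in (Task("log bladder data", 0.1, 0.0, True),
          Task("deliver diagnosis", 0.6, 0.9, False),
          Task("adjust care plan", 0.7, 0.3, False)):
    print(f"{t.name} -> {delegate(t)}")
```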
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled, "Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare."
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study sounds quite specific, but it has broad implications. In a nutshell, what is it about?
Expert: It’s about how smart, autonomous AI systems are fundamentally changing the traditional two-way relationship between a professional and their client—in this case, a doctor and a patient—by turning it into a three-way relationship.
Host: A three-way relationship? You mean Patient, Doctor, and... AI?
Expert: Exactly. The AI is no longer just a passive tool; it’s an active participant, an agent, in the process. This study looks at the new dynamics, roles, and interactions that emerge from this triad.
Host: That brings us to the big problem this research is tackling. Why is this shift from a two-way to a three-way relationship such a big deal?
Expert: Well, the classic patient-doctor dynamic is built on direct communication and trust. But as AI becomes more capable, it starts taking on tasks, making suggestions, and even acting on its own.
Host: It's doing more than just showing data on a screen.
Expert: Precisely. It's becoming an agent. The problem is, our existing models for how we work and interact don't account for this third, non-human agent in the room. This creates a gap in understanding how roles are redefined and where new conflicts might arise.
Host: How did the researchers actually study this? What was their approach?
Expert: They conducted a very detailed, in-depth case study. They focused on a specific piece of technology: an AI-powered health companion designed to help patients manage a complex bladder condition.
Host: So, a real-world application.
Expert: Yes. It involved a wearable sensor and a smartphone app that monitors the patient's condition and provides real-time guidance. The researchers closely observed the interactions between patients, their doctors, and this new AI agent to see how the relationship changed over time.
Host: Let’s get into those changes. What were the key findings from the study?
Expert: The first major finding is that the AI almost always becomes a central intermediary. Communication that was once directly between the patient and doctor now often flows through the AI.
Host: So the AI is like a new go-between?
Expert: In many ways, yes. The second finding, which is really interesting, is something they call 'attribute interference'.
Host: That sounds a bit technical. What does it mean for us?
Expert: It just means that the responsibilities and even the knowledge start to overlap. For instance, both the doctor and the AI can analyze patient data to spot a potential infection. This creates confusion: Who is responsible? Who should the patient listen to?
Host: I can see how that would get complicated. What else did they find?
Expert: They found that new 'triadic delegation choices' emerge. Patients and doctors now have to decide which tasks to give to the human and which to the AI.
Host: Can you give an example?
Expert: Absolutely. A routine task, like logging data 24/7, is perfect for the AI. But delivering a difficult diagnosis—a task with a high emotional context—is still delegated to the doctor. The choice depends on the task's complexity and emotional weight.
Host: And I imagine this new setup isn't without its challenges. Did the study identify any new conflicts?
Expert: It did. The most common were 'autonomy conflicts'—basically, a fear from both patients and doctors of losing control to the AI. There were also new information imbalances and a blurring of the lines around traditional medical roles.
Host: This is the crucial part for our listeners, Alex. Why does this matter for business leaders, even those outside of healthcare?
Expert: Because this isn't just a healthcare phenomenon. Anywhere you introduce an advanced AI to mediate between your employees and your customers, or even between different teams, you are creating this same triadic relationship.
Host: So a customer service chatbot that works with both a customer and a human agent would be an example.
Expert: A perfect example. The key business takeaway is that you can't design these systems as simple tools. You have to design them as teammates. This means clearly defining the AI's role, its responsibilities, and its boundaries.
Host: It's about proactive management of that new relationship.
Expert: Exactly. Businesses need to anticipate 'attribute interference'. If an AI sales assistant can draft proposals, you need to clarify how that affects the role of your human sales team. Who has the final say? How do they collaborate?
Host: So clarity is key.
Expert: Clarity and trust. The study showed that conflicts arise from ambiguity. For businesses, this means being transparent about what the AI does and how it makes decisions. You have to build trust not just between the human and the AI, but between all three agents in the new triad.
Host: Fascinating stuff. So, to summarize, as AI becomes more autonomous, it’s not just a tool, but a third agent in professional relationships.
Expert: That's the big idea. It turns a simple line into a triangle, creating new pathways for communication and delegation, but also new potential points of conflict.
Host: And for businesses, the challenge is to manage that triangle by designing for collaboration, clarifying roles, and intentionally building trust between all parties—human and machine.
Host: Alex, thank you so much for breaking this down for us. This gives us a lot to think about.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the future of business and technology.
Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
Communications of the Association for Information Systems (2025)
Understanding the Ethics of Generative AI: Established and New Ethical Principles
Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.
Problem
The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.
Outcome
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring (see the illustrative sketch below).
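The meta-principles (ranking principles, mapping contradictions, continuous monitoring) are governance practices rather than algorithms, but a small sketch shows one way they could be encoded as data. The specific ranks and contradiction pairs below are illustrative assumptions, not findings from the study.

```python
# Illustrative only: encoding a principle ranking and a contradiction map.
# Ranks and pairs are hypothetical, not drawn from the paper.
PRINCIPLE_RANK = {              # lower number = higher, less negotiable priority
    "non-maleficence": 1,
    "privacy": 2,
    "truthfulness": 3,
    "transparency": 4,
    "respect for intellectual property": 5,
    "environmental sustainability": 6,
}

CONTRADICTIONS = [              # principle pairs that can pull in opposite directions
    ("transparency", "privacy"),
    ("truthfulness", "environmental sustainability"),
]

def prioritize(a: str, b: str) -> str:
    """When two principles conflict, defer to the higher-ranked one."""
    return a if PRINCIPLE_RANK[a] < PRINCIPLE_RANK[b] else b

for a, b in CONTRADICTIONS:
    print(f"{a} vs {b} -> prioritize {prioritize(a, b)}")
```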
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. Today, we're diving into the complex ethical world of Generative AI.
Host: We're looking at a fascinating new study titled "Understanding the Ethics of Generative AI: Established and New Ethical Principles."
Host: In short, this study explores how our established ideas about AI ethics apply to tools like ChatGPT, and what new ethical rules we need to consider because of what this powerful technology can do.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, Generative AI has exploded into our professional and personal lives. It feels like everyone is using it. What's the big problem that this rapid adoption creates, according to the study?
Expert: The big problem is that we’re moving faster than our rulebook. The study highlights that the rapid development of GenAI has created new ethical challenges that our existing AI ethics frameworks just weren't built for.
Host: What’s so different about Generative AI?
Expert: Well, older AI ethics guidelines were often designed for systems that make specific decisions, like approving a loan or analyzing a medical scan. GenAI is fundamentally different. It's creative, it generates completely new content, and its responses are open-ended.
Expert: This creates unique risks. The study notes that these models can reproduce societal biases, invent false information, or even be used to generate harmful and malicious content at an incredible scale. We're facing a critical gap between the technology's capabilities and our ethical understanding of it.
Host: So we have a gap in our ethical rulebook. How did the researchers in this study go about trying to fill it?
Expert: They conducted what's known as a scoping review. Essentially, they systematically analyzed a wide range of recent academic work on GenAI ethics. They identified the core principles being discussed and organized them into a clear framework. They compared this new landscape to a well-established set of AI ethics principles to see what's changed and what's entirely new.
Host: That sounds very thorough. So, what were the key findings? Are the old ethical rules of AI, like fairness and transparency, now obsolete?
Expert: Not at all. In fact, they're more important than ever. The study found that established principles like fairness, privacy, and responsibility are still completely relevant. However, their meaning and importance have shifted.
Host: How so?
Expert: Take privacy. GenAI models are trained on unimaginable amounts of data scraped from the internet. The study points out the significant risk that they could memorize and reproduce someone's private, personal information. So the stakes for privacy are much higher.
Expert: The same goes for sustainability. The massive energy consumption needed to train and run these large models has made environmental impact a much more prominent ethical concern than it was with older, smaller-scale AI.
Host: So the old rules apply, but with a new intensity. What about the completely new principles that emerged from the study?
Expert: This is where it gets really interesting. The researchers identified six new ethical principles that are specific to Generative AI. These are respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
Host: Let’s pick a couple of those. What do they mean by 'truthfulness' and 'respect for intellectual property'?
Expert: 'Truthfulness' tackles the problem of AI "hallucinations"—when a model generates plausible but completely false information. Since these systems are designed to create, not to verify, ensuring their outputs are factual is a brand-new ethical challenge.
Expert: 'Respect for intellectual property' addresses the massive debate around copyright. These models are trained on content created by humans—artists, writers, programmers. This raises huge questions about ownership, attribution, and fair compensation that we're only just beginning to grapple with.
Host: This is crucial information, Alex. Let's bring it home for our audience. What are the key business takeaways here? Why does this matter for a CEO or a team leader?
Expert: It matters immensely. The biggest takeaway is that having a generic "AI Ethics Policy" on a shelf is no longer enough. Businesses using GenAI must develop specific, actionable governance frameworks.
Host: Can you give us a practical example of a risk?
Expert: Certainly. If your customer service department uses a GenAI chatbot that hallucinates and gives a customer incorrect information about your product's safety or warranty, your company is responsible for that. That’s a truthfulness and accountability failure with real financial and legal consequences.
Host: And the study mentioned something called 'meta-principles' to help manage this complexity. What are those?
Expert: Meta-principles are guiding strategies for navigating the inevitable trade-offs. For example, being fully transparent about how your AI works might conflict with protecting proprietary data or user privacy.
Expert: The study suggests businesses should rank principles to know what’s non-negotiable, proactively map these contradictions, and, most importantly, continuously monitor their AI systems. The technology evolves so fast that your ethics framework has to be a living document, not a one-time project.
Host: Fantastic insights. So, to summarize: established AI ethics like fairness and privacy are still vital, but Generative AI has raised the stakes and introduced six new principles that businesses cannot afford to ignore.
Host: Leaders need to be proactive in updating their governance to address issues like truthfulness and intellectual property, and adopt a dynamic approach—ranking priorities, managing trade-offs, and continuously monitoring their impact.
Host: Alex Ian Sutherland, thank you for making this complex study so clear and actionable for us.
Expert: It was my pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time for more on the intersection of business and technology.
Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
Communications of the Association for Information Systems (2025)
IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.
Problem
Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.
Outcome
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer". It analyzes one of the most high-profile corporate AI ventures in recent memory.
Host: This analysis explores the strategic dilemma IBM faced with Watson Health, its ambitious initiative to use AI for cancer detection and treatment. The core question: should IBM double down on this specialized healthcare focus, or pivot to a more versatile, cross-industry AI platform?
Host: With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Glad to be here, Anna.
Host: So, Alex, IBM's Watson became famous for winning on the game show Jeopardy. The move into healthcare seemed like a noble and brilliant next step. What was the big problem they were trying to solve?
Expert: It was a massive problem. The amount of medical research and data is exploding. It's impossible for any single doctor to keep up with it all. IBM's vision was for Watson to ingest millions of research articles, clinical trial results, and patient records to help oncologists make better, more personalized treatment recommendations.
Host: A truly revolutionary idea. But the study suggests that despite billions of dollars in investment, the reality was quite different.
Expert: That's right. Watson Health struggled significantly with profitability and adoption. The AI's recommendations weren't as reliable or as useful as promised, which created a critical crossroads for IBM. They had to decide whether to keep pouring money into this very specific healthcare vertical or to change their entire strategy.
Host: How did the researchers in this study approach such a complex business case?
Expert: The study is a deep strategic analysis. It examines IBM's business model, its technology, and the market environment. The authors reviewed everything from internal strategy components and partnerships with major cancer centers to the specific technological hurdles Watson faced. It's essentially a case study on the immense challenges of monetizing a "moonshot" AI project.
Host: Let's get into those challenges. What were some of the key findings?
Expert: A major one was model accuracy and bias. The study highlights that Watson was primarily trained using patient data from one institution, Memorial Sloan Kettering Cancer Center in the US. This meant its recommendations didn't always translate well to different patient populations, especially internationally.
Host: So, an AI trained in New York might not be effective for a patient in Tokyo or Mumbai?
Expert: Precisely. This revealed a significant algorithmic bias. For example, one finding mentioned in the analysis showed a mismatch rate of over 27% between Watson's suggestions and the actual treatments given to cervical cancer patients in China. That's a critical failure when you're dealing with patient health.
Host: That naturally leads to the issue of trust. How did doctors react to this new tool?
Expert: That was the second major hurdle: a lack of explainability. Doctors called it the 'black box' problem. Watson would provide a ranked list of treatments, but it couldn't clearly articulate the reasoning behind its top choice. Clinicians need to understand the 'why' to trust a recommendation, and without that transparency, adoption stalled.
Host: And beyond trust, were there practical, on-the-ground problems?
Expert: Absolutely. The study points to massive integration and scaling challenges. Integrating Watson into a hospital's existing complex workflows and electronic health records was incredibly difficult and expensive. The partnership with MD Anderson Cancer Center, for instance, struggled because Watson couldn't properly interpret doctors' unstructured notes. It wasn't a simple plug-and-play solution.
Host: This is a powerful story. For our listeners—business leaders, strategists, tech professionals—what's the big takeaway? Why does the Watson Health story matter for them?
Expert: There are a few key lessons. First, it's a cautionary tale about managing hype. IBM positioned Watson as a revolution, but the technology wasn't there yet. This created a gap between promise and reality that damaged its credibility.
Host: So, under-promise and over-deliver, even with exciting new tech. What else?
Expert: The second lesson is that technology, no matter how powerful, is not a substitute for deep domain expertise. The nuances of medicine—patient preferences, local treatment availability, the context of a doctor's notes—were things Watson struggled with. You can't just apply an algorithm to a complex field and expect it to work without genuine, human-level understanding.
Host: And what about that core strategic dilemma the study focuses on—this idea of a vertical versus a horizontal strategy?
Expert: This is the most critical takeaway for any business investing in AI. IBM chose a vertical strategy—a deep, specialized solution for one industry. The study shows how incredibly high-risk and expensive that can be. The alternative is a horizontal strategy: building a general, flexible AI platform that other companies can adapt for their own needs. It's a less risky, more scalable approach, and it’s the path that competitors like Google and Amazon have largely taken.
Host: So, to wrap it up: IBM's Watson Health was a bold and ambitious vision to transform cancer care with AI.
Host: But this analysis shows its struggles were rooted in very real-world problems: data bias, the 'black box' issue of trust, and immense practical challenges with integration.
Host: For business leaders, the story is a masterclass in the risks of a highly-specialized vertical AI strategy and a reminder that the most advanced technology is only as good as its understanding of the people and processes it's meant to serve.
Host: Alex, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Communications of the Association for Information Systems (2025)
Digital Resilience in High-Tech SMEs: Exploring the Synergy of AI and IoT in Supply Chains
Adnan Khan, Syed Hussain Murtaza, Parisa Maroufkhani, Sultan Sikandar Mirza
This study investigates how digital resilience enhances the adoption of AI and Internet of Things (IoT) practices within the supply chains of high-tech small and medium-sized enterprises (SMEs). Using survey data from 293 Chinese high-tech SMEs, the research employs partial least squares structural equation modeling to analyze the impact of these technologies on sustainable supply chain performance.
Problem
In an era of increasing global uncertainty and supply chain disruptions, businesses, especially high-tech SMEs, struggle to maintain stability and performance. There is a need to understand how digital technologies can be leveraged not just for efficiency, but to build genuine resilience that allows firms to adapt to and recover from shocks while maintaining sustainability.
Outcome
- Digital resilience is a crucial driver for the adoption of both IoT-oriented supply chain practices and AI-driven innovative practices.
- The implementation of IoT and AI practices, fostered by digital resilience, significantly improves sustainable supply chain performance.
- AI-driven practices were found to be particularly vital for resource optimization and predictive analytics, strongly influencing sustainability outcomes.
- The effectiveness of digital resilience in promoting IoT adoption is amplified in dynamic and unpredictable market environments (see the illustrative sketch below).
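The study estimates these relationships with partial least squares structural equation modeling on survey data from 293 SMEs. As a rough, simplified stand-in, the sketch below simulates composite scores and fits ordinary least squares regressions (statsmodels) for the two adoption paths and the performance outcome, including a resilience × dynamism interaction for the moderation finding. All variable names, effect sizes, and data are hypothetical.

```python
# Simplified stand-in for the study's PLS-SEM analysis: simulated data plus OLS
# regressions. Variable names, effect sizes, and observations are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 293  # matches the survey's sample size, but the observations here are simulated
resilience = rng.normal(size=n)
dynamism = rng.normal(size=n)
iot = 0.5 * resilience + 0.3 * resilience * dynamism + rng.normal(scale=0.8, size=n)
ai = 0.6 * resilience + rng.normal(scale=0.8, size=n)
performance = 0.4 * iot + 0.5 * ai + rng.normal(scale=0.8, size=n)

df = pd.DataFrame(dict(resilience=resilience, dynamism=dynamism,
                       iot=iot, ai=ai, performance=performance))

m_iot = smf.ols("iot ~ resilience * dynamism", data=df).fit()   # moderated IoT-adoption path
m_ai = smf.ols("ai ~ resilience", data=df).fit()                # AI-adoption path
m_perf = smf.ols("performance ~ iot + ai", data=df).fit()       # sustainability outcome

print(m_iot.params, m_ai.params, m_perf.params, sep="\n\n")
```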
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "Digital Resilience in High-Tech SMEs: Exploring the Synergy of AI and IoT in Supply Chains."
Host: In simple terms, this study looks at how being digitally resilient helps smaller high-tech companies adopt AI and the Internet of Things, or IoT, in their supply chains, and what that means for their long-term sustainable performance. Here to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. We hear a lot about supply chain disruptions. What is the specific problem this study is trying to solve?
Expert: The core problem is that global uncertainty is the new normal. We’ve seen it with the pandemic, with geopolitical conflicts, and even cybersecurity threats. These events create massive shocks to supply chains.
Host: And this is especially tough on smaller companies, right?
Expert: Exactly. High-tech Small and Medium-sized Enterprises, or SMEs, often lack the resources of larger corporations. They struggle to maintain stability and performance when disruptions hit. The old "just-in-time" model, which prioritized efficiency above all, proved to be very fragile. So, the question is no longer just about being efficient; it’s about being resilient.
Host: The study uses the term "digital resilience." What does that mean in this context?
Expert: Digital resilience is a company's ability to use technology not just to operate, but to absorb shocks, adapt to disruptions, and recover quickly. It’s about building a digital foundation that is fundamentally flexible and strong.
Host: So how did the researchers go about studying this? What was their approach?
Expert: They conducted a survey with 293 high-tech SMEs in China that were already using AI and IoT technologies in their supply chains. This is important because it means they were analyzing real-world applications, not just theories. They then used advanced statistical analysis to map out the connections between digital resilience, the use of AI and IoT, and overall performance.
Host: A practical approach for a practical problem. Let's get to the results. What were the key findings?
Expert: There were a few really powerful takeaways. First, digital resilience is the critical starting point. The study found that companies with a strong foundation of digital resilience were far more successful at implementing both IoT-oriented practices, like real-time asset tracking, and innovative AI-driven practices.
Host: So, resilience comes first, then the technology adoption. And does that adoption actually make a difference?
Expert: It absolutely does. That’s the second key finding. When that resilience-driven adoption of AI and IoT happens, it significantly boosts what the study calls sustainable supply chain performance. This isn't just about profits; it means the supply chain becomes more reliable, efficient, and environmentally responsible.
Host: Was there a difference in the impact between AI and IoT?
Expert: Yes, and this was particularly interesting. While both were important, the study found that AI-driven practices were especially vital for achieving those sustainability outcomes. This is because AI excels at things like resource optimization and predictive analytics—it can help a company see a problem coming and adjust before it hits.
Host: And what about the business environment? Does that play a role?
Expert: A huge role. The final key insight was that in highly dynamic and unpredictable markets, the value of digital resilience is amplified. Specifically, it becomes even more crucial for driving the adoption of IoT. When things are chaotic, the ability to get real-time data from IoT sensors and devices becomes a massive strategic advantage.
Host: This is where it gets really crucial for our listeners. If I'm a business leader, what is the main lesson I should take from this study?
Expert: The single most important takeaway is to shift your mindset. Stop viewing digital tools as just a way to cut costs or improve efficiency. Start viewing them as the core of your company's resilience strategy. It’s not about buying software; it's about building the strategic capability to anticipate, respond, and recover from shocks.
Host: So it's about moving from a defensive posture to an offensive one?
Expert: Precisely. IoT gives you unprecedented, real-time visibility across your entire supply chain. You know where your materials are, you can monitor production, you can track shipments. Then, AI takes that firehose of data and turns it into intelligent action. It helps you make smarter, predictive decisions. The combination creates a supply chain that isn't just tough—it's intelligent.
Host: So, in today's unpredictable world, this isn't just a nice-to-have, it's a competitive necessity.
Expert: It is. In a volatile market, the ability to adapt faster than your competitors is what separates the leaders from the laggards. For an SME, leveraging AI and IoT this way can level the playing field, allowing them to be just as agile, if not more so, than much larger rivals.
Host: Fantastic insights. To summarize for our audience: Building a foundation of digital resilience is the key first step. This resilience enables the powerful adoption of AI and IoT, which in turn drives a stronger, smarter, and more sustainable supply chain. And in our fast-changing world, that capability is what truly defines success.
Host: Alex Ian Sutherland, thank you so much for your time and for making this research so accessible.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Digital Resilience, Internet of Things-Oriented Supply Chain Management Practices, AI-Driven Innovative Practices, Supply Chain Dynamism, Sustainable Supply Chain Performance
Communications of the Association for Information Systems (2025)
Rethinking Healthcare Technology Adoption: The Critical Role of Visibility & Consumption Values
Sonali Dania, Yogesh Bhatt, Paula Danskin Englis
This study explores how the visibility of digital healthcare technologies influences a consumer's intention to adopt them, using the Theory of Consumption Value (TCV) as a framework. It investigates the roles of different values (e.g., functional, social, emotional) as mediators and examines how individual traits like openness-to-change and gender moderate this relationship. The research methodology involved collecting survey data from digital healthcare users and analyzing it with structural equation modeling.
Problem
Despite the rapid growth of the digital health market, user adoption rates vary significantly, and the factors driving these differences are not fully understood. Specifically, there is limited research on how consumption values and the visibility of a technology impact adoption, along with a poor understanding of how individual traits like openness to change or gender-specific behaviors influence these decisions.
Outcome
- The visibility of digital healthcare applications significantly and positively influences a consumer's intention to adopt them.
- Visibility strongly shapes user perceptions, positively impacting the technology's functional, conditional, social, and emotional value; however, it did not significantly influence epistemic value (curiosity).
- The relationship between visibility and adoption is mediated by key factors: the technology's perceived usefulness, the user's perception of privacy, and their affinity for technology.
- A person's innate openness to change and their gender can moderate the effect of visibility; for instance, individuals who are already open to change are less influenced by a technology's visibility (see the illustrative sketch below).
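The study tests these paths with structural equation modeling on survey responses. As a rough stand-in, the sketch below simulates data of roughly the study's sample size and fits a single OLS regression (statsmodels) with one mediator (perceived usefulness) and a visibility × openness interaction. Variable names, effect sizes, and data are hypothetical.

```python
# Simplified stand-in for the study's structural equation model: simulated data and
# one OLS regression with a mediator and a moderating interaction. All values hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300  # roughly the survey's number of digital-health users; data are simulated
visibility = rng.normal(size=n)
openness = rng.normal(size=n)
usefulness = 0.5 * visibility + rng.normal(scale=0.8, size=n)   # visibility shapes perceived usefulness
adoption = (0.3 * visibility + 0.4 * usefulness
            - 0.2 * visibility * openness                       # visibility matters less when openness is high
            + rng.normal(scale=0.8, size=n))

df = pd.DataFrame(dict(visibility=visibility, openness=openness,
                       usefulness=usefulness, adoption=adoption))
print(smf.ols("adoption ~ visibility * openness + usefulness", data=df).fit().params)
```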
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In a world buzzing with new health apps and wearable devices, why do some technologies take off while others flop? Today, we’re diving into a fascinating new study that offers some answers.
Host: It’s titled "Rethinking Healthcare Technology Adoption: The Critical Role of Visibility & Consumption Values", and it explores how simply seeing a technology in use can dramatically influence our decision to adopt it. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. The digital health market is enormous and growing fast, yet getting users to actually adopt these new tools is a real challenge for businesses. What’s the core problem this study wanted to solve?
Expert: You've hit on the key issue. We have a multi-billion-dollar market, but user adoption is inconsistent. Companies are pouring money into developing incredible technology, but they're struggling to understand the final step: what makes a consumer say "yes, I'll use that"? This study argues that we've been missing a few key pieces of the puzzle.
Expert: Specifically, how much does the simple "visibility" of a product—seeing friends or influencers use it—actually matter? And beyond its basic function, what other values, like social status or emotional comfort, are people looking for in their health tech?
Host: So, it's about more than just having the best features. How did the researchers go about measuring something as complex as value and visibility?
Expert: They took a very practical approach. The research team conducted a detailed survey with over 300 active users of digital healthcare technology in India. They asked them not just about the tools they used, but about their personal values, their perceptions of privacy, their affinity for technology, and how much they saw these products being used around them.
Expert: They then used a powerful statistical method called structural equation modeling to map out the connections and find out which factors were the true drivers of adoption. It’s like creating a blueprint of the consumer’s decision-making process.
Host: A blueprint of the decision. I love that. So what did this blueprint reveal? What were the key findings?
Expert: The first and most striking finding was just how critical visibility is. The study found that seeing a health technology in the wild—on social media, used by friends, or in advertisements—had a significant and direct positive impact on a person's intention to adopt it.
Host: That’s the power of social proof, right? If everyone else is doing it, it must be good.
Expert: Exactly. But it goes deeper. Visibility didn’t just create a general sense of popularity; it actively shaped how people valued the technology. It made the tech seem more useful, more socially desirable, and even created a stronger emotional connection, or what the study calls 'technology affinity'.
Host: So, seeing it makes it seem more practical and even cooler to use. Was there anything visibility didn't affect?
Expert: Yes, and this was very interesting. It didn't significantly spark curiosity, or what the researchers call 'epistemic value'. People weren't adopting these apps just to explore them for fun. They needed to see a clear purpose, whether that was functional, social, or emotional. Novelty for its own sake wasn't enough.
Host: And what about individual differences? Does visibility work on everyone the same way?
Expert: Not at all. The study found that personality traits play a big role. For individuals who are naturally very open to change—your classic early adopters—visibility was far less important. They are intrinsically motivated to try new things, so they don't need the same external validation. The buzz is for the mainstream audience, not the trendsetters.
Host: Alex, this is where it gets really crucial for our audience. What are the practical, bottom-line business takeaways from this study?
Expert: I see four main takeaways for any leader in the tech or healthcare space. First, your most powerful marketing tool is making the benefits of your product visible. Go beyond ads. Focus on authentic user testimonials, case studies, and partnerships with trusted professionals who can demonstrate the product's value in a real-world context.
Host: So it’s about showing, not just telling. What's the second takeaway?
Expert: Second, understand that you are selling more than a function; you're selling a set of values. Is your product about the functional value of efficiency? The social value of being seen as health-conscious? Or the emotional value of feeling secure? Your marketing messages must connect with these deeper motivations.
Host: That makes a lot of sense. And the third?
Expert: The third is about trust. The study showed that as visibility increases, so do concerns about privacy. This was a huge factor. To succeed, companies must make their privacy and security features just as visible as their product benefits. Be transparent, be proactive, and build that trust from day one.
Host: An excellent point. And the final takeaway?
Expert: Finally, segment your audience. A one-size-fits-all message will fail. As we saw, early adopters don't need the same social proof as the mainstream. The study also suggests that men and women may respond differently, with marketing to women perhaps needing to focus more on reliability and security, while messages to men might emphasize innovation and ease of use.
Host: Fantastic. So, to summarize: Make the benefits visible, understand the values you're selling, build trust through transparency on privacy, and tailor your message to your audience.
Host: Alex, this has been incredibly insightful. Thank you for breaking down this complex research into such clear, actionable advice.
Expert: My pleasure, Anna. It’s a valuable piece of work that offers a much-needed new perspective.
Host: And thank you to our listeners for joining us on A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Adoption Intention, Healthcare Applications, Theory of Consumption Values, Values, Visibility
Communications of the Association for Information Systems (2025)
Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability
Claude Chammaa, Fatma Fourati-Jamoussi, Lucian Ceapraz, Valérie Leroux
This study investigates the behavioral, contextual, and economic factors that influence French farmers' adoption of innovative agricultural technologies. Using a mixed-methods approach that combines qualitative interviews and quantitative surveys, the research proposes and validates the French Farming Innovation Adoption (FFIA) model, an agricultural adaptation of the UTAUT2 model, to explain technology usage.
Problem
The agricultural sector is rapidly transforming with digital innovation, but the factors driving technology adoption among farmers, particularly in cost-sensitive and highly regulated environments like France, are not fully understood. Existing technology acceptance models often fail to capture the central role of economic viability, leaving a gap in explaining how sustainability goals and policy supports translate into practical adoption.
Outcome
- The most significant direct predictor of technology adoption is 'Price Value'; farmers prioritize innovations they perceive as economically beneficial and cost-effective.
- Traditional drivers like government subsidies (Facilitating Conditions), expected performance, and social influence do not directly impact technology use. Instead, their influence is indirect, mediated through the farmer's perception of the technology's price value (see the illustrative sketch below).
- Perceived sustainability benefits alone do not significantly drive adoption. For farmers to invest, environmental advantages must be clearly linked to economic gains, such as reduced costs or increased yields.
- Economic appraisal is the critical filter through which farmers evaluate new technologies, making it the central consideration in their decision-making process.
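The FFIA model is estimated as a structural model in the paper; the sketch below is a much simpler illustration of the headline claim that subsidies work only through perceived price value. It simulates data of the survey's sample size and bootstraps the indirect effect (path a × path b). Variable names, effect sizes, and data are hypothetical, and the estimation method is not the one used in the study.

```python
# Rough illustration of full mediation through price value: bootstrap of the indirect
# effect a*b on simulated data. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 171  # the survey's sample size, but the observations are simulated
subsidy = rng.normal(size=n)
price_value = 0.6 * subsidy + rng.normal(scale=0.8, size=n)               # a-path
use = 0.7 * price_value + 0.0 * subsidy + rng.normal(scale=0.8, size=n)   # b-path; no direct effect

def slope(x, y):
    """Simple least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

def partial_slope(m, x, y):
    """Slope of y on m, controlling for x."""
    X = np.column_stack([m, x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    a = slope(subsidy[idx], price_value[idx])
    b = partial_slope(price_value[idx], subsidy[idx], use[idx])
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect of subsidy via price value, 95% CI: [{lo:.2f}, {hi:.2f}]")
```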
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business strategy. Today, we're digging into the world of smart farming.
Host: We're looking at a fascinating study called "Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability." It investigates what really makes farmers adopt new technologies. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, we hear a lot about Agriculture 4.0—drones, sensors, A.I. on the farm. But this study suggests it's not as simple as just building new tech. What's the real-world problem they're tackling?
Expert: Exactly. The big problem is that while technology offers huge potential, the factors driving adoption aren't well understood, especially in a place like France. French farmers are under immense pressure from complex regulations like the EU's Common Agricultural Policy and global trade deals.
Expert: They face a constant balancing act between sustainability goals, high production costs, and international competition. Previous models for technology adoption often missed the most critical piece of the puzzle for farmers: economic viability.
Host: So how did the researchers get to the heart of what farmers are actually thinking? What was their approach?
Expert: They used a really smart mixed-methods approach. First, they went out and conducted in-depth interviews with a dozen farmers to understand their real-world challenges and resistance to new tech. These conversations revealed frustrations with cost, complexity, and even digital anxiety.
Expert: Then, using those real-world insights, they designed a quantitative survey for 171 farmers who were already using innovative technologies. This allowed them to build and test a model that reflects the actual decision-making process on the ground.
Host: That sounds incredibly thorough. So, after talking to farmers and analyzing the data, what were the key findings? What really drives a farmer to invest in a new piece of technology?
Expert: The results were crystal clear on one thing: Price Value is king. The single most significant factor predicting whether a farmer will use a new technology is their perception of its economic benefit. Will it save or make them money? That's the first and most important question.
Host: That makes intuitive sense. But what about other factors, like government subsidies designed to encourage this, or seeing your neighbor use a new tool?
Expert: This is where it gets really interesting. Factors like government support, the technology’s expected performance, and even social influence from other farmers do not directly lead to adoption.
Host: Not at all? That's surprising.
Expert: Not directly. Their influence is indirect, and it's all filtered through that lens of Price Value. A government subsidy is only persuasive if it makes the technology profitable. A neighbor’s success only matters if it proves the economic case. If the numbers don't add up, these other factors have almost no impact.
Host: And the sustainability angle? Surely, promoting a greener way of farming is a major driver?
Expert: You'd think so, but the study found that perceived sustainability benefits alone do not significantly drive adoption. For a farmer to invest, environmental advantages must be clearly linked to an economic gain, like reducing fertilizer costs or increasing crop yields. Sustainability has to pay the bills.
Host: This is such a critical insight. Let's shift to the "so what" for our listeners. What are the key business takeaways from this?
Expert: For any business in the Agri-tech space, the message is simple: lead with the Return on Investment. Don't just sell fancy features or sustainability buzzwords. Your marketing, your sales pitch—it all has to clearly demonstrate the economic value. Frame environmental benefits as a happy consequence of a smart financial decision.
Host: And what about for policymakers?
Expert: Policymakers need to realize that subsidies aren't a magic bullet. To be effective, financial incentives must be paired with tools that prove the tech's value—things like cost-benefit calculators, technical support, and farmer-to-farmer demonstration programs. They have to connect the policy to the farmer's bottom line.
Expert: For everyone else, it’s a powerful lesson in understanding your customer's core motivation. You have to identify their critical decision filter. For French farmers, every innovation is judged by its economic impact. The question is, what’s the non-negotiable filter for your customers?
Host: A fantastic summary. So, to recap: for technology to truly take root in agriculture, it’s not enough to be innovative, popular, or even sustainable. It must first and foremost prove its economic worth. The bottom line truly is the bottom line.
Host: Alex, thank you so much for bringing these insights to life for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more research that’s shaping the future of business.
Communications of the Association for Information Systems (2025)
Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective
Pramod K. Patnaik, Kunal Rao, Gaurav Dixit
This study investigates the factors that enable the use of Generative AI (GenAI) tools in rural educational settings within developing countries. Using a mixed-method approach that combines in-depth interviews and the Grey DEMATEL decision-making method, the research identifies and analyzes these enablers through a socio-technical lens to understand their causal relationships.
Problem
Marginalized rural communities in developing countries face significant challenges in education, including a persistent digital divide that limits access to modern learning tools. This research addresses the gap in understanding how Generative AI can be practically leveraged to overcome these education-related challenges and improve learning quality in under-resourced regions.
Outcome
- The study identified fifteen key enablers for using Generative AI in rural education, grouped into social and technical categories.
- 'Policy initiatives at the government level' was found to be the most critical enabler, directly influencing other key factors like GenAI training for teachers and students, community awareness, and school leadership commitment.
- Six novel enablers were uncovered through interviews, including affordable internet data, affordable telecommunication networks, and the provision of subsidized devices for lower-income groups.
- An empirical framework was developed to illustrate the causal relationships among the enablers, helping stakeholders prioritize interventions for effective GenAI adoption (see the illustrative sketch below).
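The paper derives its causal map with Grey DEMATEL over the fifteen enablers. The sketch below shows only the core DEMATEL mechanics, using ordinary (crisp, non-grey) scores on four hypothetical enablers: normalize the direct-relation matrix, compute the total-relation matrix, and classify enablers as causes or effects. The influence scores are invented for illustration and are not the study's data.

```python
# Crisp DEMATEL mechanics on a toy matrix (the study uses Grey DEMATEL on 15 enablers).
# The direct-influence scores below are hypothetical.
import numpy as np

enablers = ["govt policy", "teacher training", "affordable data", "school leadership"]
A = np.array([          # row i influences column j, scored 0 (none) .. 4 (very high)
    [0, 4, 3, 4],
    [1, 0, 1, 2],
    [2, 2, 0, 1],
    [1, 3, 1, 0],
], dtype=float)

N = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalize direct-relation matrix
T = N @ np.linalg.inv(np.eye(len(A)) - N)               # total-relation matrix T = N (I - N)^-1
D, R = T.sum(axis=1), T.sum(axis=0)                     # influence dispatched vs. received

for name, prominence, net in zip(enablers, D + R, D - R):
    role = "cause" if net > 0 else "effect"
    print(f"{name:18s} prominence={prominence:.2f} net={net:+.2f} -> {role}")
```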
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're looking at how Generative AI can transform education, not in Silicon Valley, but in some of the most under-resourced corners of the world.
Host: We're diving into a fascinating new study titled "Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective". It investigates the key factors that can help bring powerful AI tools to classrooms in developing countries. With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's a critical topic.
Host: Let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The core problem is the digital divide. In many marginalized rural communities, especially in developing nations, students and teachers face huge educational challenges. We're talking about a lack of resources, infrastructure, and access to modern learning tools. While we see Generative AI changing industries in developed countries, there's a real risk these rural communities get left even further behind.
Host: So the question is, can GenAI be a bridge across that divide, instead of making it wider?
Expert: Exactly. The study specifically looks at how we can practically leverage these AI tools to overcome those long-standing challenges and actually improve the quality of education where it's needed most.
Host: So how did the researchers approach such a complex issue? It must be hard to study on the ground.
Expert: It is, and they used a really smart mixed-method approach. First, they went directly to the source, conducting in-depth interviews with teachers, government officials, and community members in rural India. This gave them rich, qualitative data—the real stories and challenges. Then, they took all the factors they identified and used a quantitative analysis to find the causal relationships between them.
Host: So it’s not just a list of problems, but a map of how one factor influences another?
Expert: Precisely. It allows them to say, 'If you want to achieve X, you first need to solve for Y'. It creates a clear roadmap for intervention.
Host: That sounds powerful. What were the key findings? What are the biggest levers we can pull?
Expert: The study identified fifteen key 'enablers', which are the critical ingredients for success. But the single most important finding, the one that drives almost everything else, is 'Policy initiatives at the government level'.
Host: That's surprising. I would have guessed something more technical, like internet access.
Expert: And that's crucial, but the study shows that strong government policy is the 'cause' factor. It directly enables other key things like funding, GenAI training for teachers and students, creating community awareness, and getting school leadership on board. Without that top-down strategic support, everything else struggles.
Host: What other enablers stood out?
Expert: The interviews uncovered some really practical, foundational needs that go beyond just theory. Things we might take for granted, like affordable internet data plans, reliable telecommunication networks, and providing subsidized devices like laptops or tablets for lower-income families. It highlights that access isn't just about availability; it’s about affordability.
Host: This is the most important question for our listeners, Alex. This research is clearly vital for educators and policymakers, but why should business professionals pay attention? What are the takeaways for them?
Expert: I see three major opportunities here. First, this study is essentially a market-entry roadmap for a massive, untapped audience. For EdTech companies, telecoms, and hardware manufacturers, it lays out exactly what is needed to succeed in these emerging markets. It points directly to opportunities for public-private partnerships to provide those subsidized devices and affordable data plans we just talked about.
Host: So it’s a blueprint for doing business in these regions.
Expert: Absolutely. Second, it's a guide for product development. The study found that 'ease of use' and 'localized language support' are critical enablers. This tells tech companies that you can't just parachute in a complex, English-only product. Your user interface needs to be simple, intuitive, and available in local languages to gain any traction. That’s a direct mandate for product and design teams.
Host: That makes perfect sense. What’s the third opportunity?
Expert: It redefines effective Corporate Social Responsibility, or CSR. Instead of just one-off donations, a company can use this framework to make strategic investments. They could fund teacher training programs or develop technical support hubs in rural areas. This creates sustainable, long-term impact, builds immense brand loyalty, and helps develop the very ecosystem their business will depend on in the future.
Host: So to sum it up: Generative AI holds incredible promise for bridging the educational divide in rural communities, but technology alone isn't the answer.
Expert: That's right. Success hinges on a foundation of supportive government policy, which then enables crucial factors like training, awareness, and true affordability.
Host: And for businesses, this isn't just a social issue—it’s a clear roadmap for market opportunity, product design, and creating strategic, high-impact investments. Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business, technology, and groundbreaking research.
Generative AI, Rural, Education, Digital Divide, Interviews, Socio-technical Theory
Communications of the Association for Information Systems (2025)
Implementing AI into ERP Software
Siar Sarferaz
This study investigates how to systematically integrate Artificial Intelligence (AI) into complex Enterprise Resource Planning (ERP) systems. Through an analysis of real-world use cases, the author identifies key challenges and proposes a comprehensive DevOps (Development and Operations) framework to standardize and streamline the entire lifecycle of AI applications within an ERP environment.
Problem
While integrating AI into ERP software offers immense potential for automation and optimization, organizations lack a systematic approach to do so. This absence of a standardized framework leads to inconsistent, inefficient, and costly implementations, creating significant barriers to adopting AI capabilities at scale within enterprise systems.
Outcome
- Identified 20 specific, recurring gaps in the development and operation of AI applications within ERP systems, including complex setup, heterogeneous development, and insufficient monitoring.
- Developed a comprehensive DevOps framework that standardizes the entire AI lifecycle into six stages: Create, Check, Configure, Train, Deploy, and Monitor (see the illustrative sketch below).
- The proposed framework provides a systematic, self-service approach for business users to manage AI models, reducing the reliance on specialized technical teams and lowering the total cost of ownership.
- A quantitative evaluation across 10 real-world AI scenarios demonstrated that the framework reduced processing time by 27%, increased cost savings by 17%, and improved outcome quality by 15%.
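The framework is described at the level of stages and self-service operations; the sketch below is a minimal, hypothetical rendering of that six-stage lifecycle as code, with a data-quality gate standing in for the Check stage and self-service deploy/rollback for business users. Class names, methods, and behavior are illustrative assumptions, not the actual product implementation.

```python
# Minimal, hypothetical sketch of the six-stage AI lifecycle (Create, Check, Configure,
# Train, Deploy, Monitor). Names and behavior are illustrative, not a real product API.
from enum import Enum

class Stage(Enum):
    CREATE = "create"        # define the AI scenario and model
    CHECK = "check"          # verify data quality and volume before training
    CONFIGURE = "configure"  # business user sets parameters and schedules
    TRAIN = "train"          # fit the model on ERP data
    DEPLOY = "deploy"        # activate a model version for the business process
    MONITOR = "monitor"      # track accuracy and drift, trigger retraining

class AIScenario:
    """A self-service wrapper a business user might drive through the six stages."""
    def __init__(self, name):
        self.name = name
        self.versions = []
        self.active = None

    def train(self, version, data_ok):
        if not data_ok:                      # the Check-stage gate
            raise ValueError("insufficient or low-quality training data")
        self.versions.append(version)

    def deploy(self, version):
        self.active = version                # self-service activation

    def rollback(self):
        if len(self.versions) >= 2:
            self.active = self.versions[-2]  # self-service rollback to the prior version

forecast = AIScenario("demand_forecasting")
forecast.train("v1", data_ok=True)
forecast.deploy("v1")
forecast.train("v2", data_ok=True)
forecast.deploy("v2")
forecast.rollback()
print(forecast.active)  # -> v1
```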
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating study titled "Implementing AI into ERP Software," which looks at how businesses can systematically integrate Artificial Intelligence into their core operational systems.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: Thanks for having me, Anna.
Host: Let's start with the big picture. ERP systems are the digital backbone of so many companies, managing everything from finance to supply chains. And everyone is talking about AI. It seems like a perfect match, but this study suggests it's not that simple. What's the real-world problem here?
Expert: Exactly. The potential is massive, but the execution is often chaotic. The core problem is that most organizations lack a standardized playbook for embedding AI into these incredibly complex ERP systems. This leads to implementations that are inconsistent, inefficient, and very costly.
Host: Can you give us a concrete example of that chaos?
Expert: Absolutely. The study identified 20 recurring problems, or 'gaps'. For instance, one gap is what they call 'Heterogeneous Development'. They found cases where a company's supply chain team would build a demand forecasting model using one set of AI tools, while the sales team built a similar model for price optimization using a completely different, incompatible set of tools.
Host: So, they're essentially reinventing the wheel in different departments, driving up costs and effort.
Expert: Precisely. Another major issue is the 'Need for AI Expertise'. Business users are told a model is, say, 85% accurate, but they have no way to know if that's good enough for their specific inventory decisions. They become completely dependent on expensive technical teams for every step.
Host: So how did the research approach solving such a complex and widespread problem?
Expert: Instead of just theorizing, the author analyzed numerous real-world AI use cases within a major ERP environment. They systematically documented what was going wrong in practice—all those gaps we mentioned—and used that direct evidence to design and build a practical framework to fix them.
Host: A solution born from real-world challenges. I like that. So what were the key findings? What did this new framework look like?
Expert: The main outcome is a comprehensive DevOps framework that standardizes the entire lifecycle of an AI model into six clear stages.
Host: Okay, what are those stages?
Expert: They are: Create, Check, Configure, Train, Deploy, and Monitor. Think of it as a universal assembly line for AI applications. The 'Create' stage is for development, but the 'Check' stage is crucial—it automatically verifies if you even have the right quality and amount of data before you start.
Host: That sounds like it would prevent a lot of failed projects right from the beginning.
Expert: It does. And the later stages, like 'Train' and 'Deploy', are designed as self-service tools. This empowers a business user, not just a data scientist, to retrain a model or roll it back to a previous version with a few clicks. It dramatically reduces the reliance on specialized teams.
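To make the six-stage lifecycle Alex describes more concrete, here is a minimal sketch in Python. It is an illustration only, not the framework's actual implementation; the class name, stage methods, and data-quality thresholds are assumptions made for this example.

# Illustrative sketch of the six-stage AI lifecycle described in the study:
# Create -> Check -> Configure -> Train -> Deploy -> Monitor.
# All names and thresholds here are assumptions for demonstration purposes.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ErpAiLifecycle:
    """Walks one AI scenario (e.g. demand forecasting) through six stages."""
    scenario: str
    log: List[str] = field(default_factory=list)

    def create(self) -> None:
        self.log.append(f"Create: defined model and features for '{self.scenario}'")

    def check(self, rows: int, completeness: float) -> bool:
        # The 'Check' stage verifies data quality *before* any training starts.
        ok = rows >= 1_000 and completeness >= 0.95  # assumed thresholds
        self.log.append(f"Check: rows={rows}, completeness={completeness:.0%}, ok={ok}")
        return ok

    def configure(self) -> None:
        self.log.append("Configure: set hyperparameters and business mappings")

    def train(self) -> None:
        self.log.append("Train: self-service retraining triggered by a business user")

    def deploy(self, version: str) -> None:
        self.log.append(f"Deploy: activated model version {version} (rollback possible)")

    def monitor(self) -> None:
        self.log.append("Monitor: tracking accuracy drift and usage over time")


if __name__ == "__main__":
    pipeline = ErpAiLifecycle("supply-chain demand forecast")
    pipeline.create()
    if pipeline.check(rows=25_000, completeness=0.97):
        pipeline.configure()
        pipeline.train()
        pipeline.deploy(version="v2")
        pipeline.monitor()
    print("\n".join(pipeline.log))

The point of the sketch is the ordering: the Check stage gates everything downstream, which is where the study argues many failed projects could be stopped early.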
Host: This is the part our listeners are waiting for, Alex. Why does this framework matter for business? What are the tangible benefits of adopting this kind of systematic approach?
Expert: This is where it gets really compelling. The study evaluated the framework's performance across 10 real-world AI scenarios and the results were significant. They saw a 27% reduction in processing time.
Host: So you get your AI-powered insights almost a third faster.
Expert: Exactly. They also measured a 17% increase in cost savings. By eliminating that duplicated effort and streamlining the process, the total cost of ownership for these AI features drops.
Host: A direct impact on the bottom line. And what about the quality of the results?
Expert: That improved as well. They found a 15% improvement in outcome quality. This means the AI is making better predictions and smarter recommendations, which leads to better business decisions—whether that's optimizing inventory, predicting delivery delays, or detecting fraud.
Host: So it's faster, cheaper, and better. It sounds like this framework is what turns AI from a series of complex science experiments into a scalable, reliable business capability.
Expert: That's the perfect way to put it. It provides the governance and standardization needed to move from a few one-off AI projects to an enterprise-wide strategy where AI is truly integrated into the core of the business.
Host: Fantastic insights, Alex. So, to summarize for our listeners: integrating AI into ERP systems has been challenging and chaotic. This study identified the key gaps and proposed a six-stage framework—Create, Check, Configure, Train, Deploy, and Monitor—to standardize the process. The business impact is clear: significant gains in speed, cost savings, and the quality of outcomes.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge.
Enterprise Resource Planning, Artificial Intelligence, DevOps, Software Integration, AI Development, AI Operations, Enterprise AI
International Conference on Wirtschaftsinformatik (2025)
Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law
Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.
Problem
While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.
Outcome
- Transparency, such as providing clear source citations, was a key factor in building user trust. - Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust. - Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness. - A higher level of trust in the AI assistant directly leads to an increased intention among professionals to use the tool in their work.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a fascinating new study called “Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law.” Host: It explores a huge question: In a specialized, high-stakes field like tax law, what makes a professional actually trust an AI assistant? And how can we design AI that people will actually use? With me is our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: So, let's start with the big picture. We hear a lot about AI's potential, but this study highlights a major roadblock, especially in professional fields. What's the core problem they're addressing? Expert: The core problem is trust. Generative AI can be incredibly powerful for tasks like legal research, which requires sifting through constantly changing laws and rulings. But these tools can also make mistakes, invent sources—what we call 'hallucinations'—and their reasoning can be a total 'black box.' Host: And in tax law, a mistake isn't just a typo. Expert: Exactly. As the study points out, a misplaced trust in an AI’s output can lead to severe financial penalties for a client, or even malpractice litigation for the attorney. When the stakes are that high, you're not going to use a tool you don't fundamentally trust. That lack of trust is the biggest barrier to adoption. Host: So how did the researchers measure something as subjective as trust? What was their approach? Expert: They used a really clever mixed-methods approach. They built two different prototypes of a Generative AI tax assistant. The first was a basic, no-frills tool. The second prototype was designed specifically to build trust. Host: How so? What was different about it? Expert: It had features we'll talk about in a moment. They then had a group of legal experts perform real-world tax research tasks using both prototypes. Afterwards, the researchers gathered feedback through detailed questionnaires and in-depth interviews to see which version the experts trusted more, and why. Host: A direct head-to-head comparison. I love that. So, what were the key findings? What are the secret ingredients for building a trustworthy AI? Expert: The results were incredibly clear, and they came down to three main factors. First, transparency was paramount. The prototype that clearly cited its sources for every piece of information was trusted far more. Host: So users could check the AI's work, essentially. Expert: Precisely. One expert in the study was quoted as saying the system was "definitely more trustworthy, precisely because the sources have been specified." It gives the user a sense of control and verification. Host: That makes perfect sense. What was the second factor? Expert: The second was what the study calls 'anthropomorphism'—basically, making the AI feel more human-like. The more trusted prototype had a conversational greeting and a familiar chat layout. Experts said it made them feel "more familiar and better supported." Host: It’s interesting that a simple design choice can have such a big impact on trust. Expert: It is. And the third factor was just as fascinating: the AI’s honesty about its own limitations. Host: You mean the AI admitting what it *can't* do? Expert: Yes. The trusted prototype included an introduction that mentioned its capabilities and its limits. 
The experts saw this not as a weakness, but as a sign of reliability. Being upfront about its boundaries actually made the AI seem more trustworthy. Host: Transparency, a human touch, and a bit of humility. It sounds like a recipe for a good human colleague, not just an AI. Alex, let's get to the bottom line. What does this all mean for business leaders listening right now? Expert: This is the most important part. For any business implementing AI, especially for expert users, this study provides a clear roadmap. The biggest takeaway is that you have to design for trust, not just for function. Host: What does that look like in practice? Expert: It means for any AI that provides information—whether to your legal team, your financial analysts, or your engineers—it must be able to show its work. Building in transparent, clickable source citations isn't an optional feature; it's essential for adoption. Host: Okay, so transparency is job one. What else? Expert: Don't underestimate the user interface. A sterile, purely functional tool might be technically perfect, but a more conversational and intuitive design can significantly lower the barrier to entry and make users more comfortable. User experience directly impacts trust. Host: And that third point about limitations seems critical for managing expectations. Expert: Absolutely. Be upfront with your teams about what your new AI tool is good at and where it might struggle. Marketing might want to sell it as a magic bullet, but for actual adoption, managing expectations and being honest about limitations builds the long-term trust you need for the tool to succeed. Host: So, to recap for our listeners: if you're rolling out AI tools, the key to getting your teams to actually use them is building trust. And you do that through transparency, like citing sources; a thoughtful, human-centric design; and being honest about the AI’s limitations. Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us. Expert: My pleasure, Anna. Host: And thank you for tuning in to A.I.S. Insights. We’ll see you next time.
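As a rough illustration of designing for trust, and assuming a hypothetical answer format rather than anything taken from the study's prototypes, an assistant response could carry its sources and a limitations notice alongside the answer text:

# Hypothetical response structure illustrating the three trust factors from the
# study: transparency (sources), a conversational tone, and explicit limitations.
from dataclasses import dataclass
from typing import List


@dataclass
class AssistantAnswer:
    text: str
    sources: List[str]   # transparency: every claim should be traceable
    limitations: str     # honesty about what the assistant cannot do

    def render(self) -> str:
        cited = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(self.sources))
        return (
            f"Hello! Here is what I found:\n{self.text}\n"
            f"Sources:\n{cited}\n"
            f"Note: {self.limitations}"
        )


if __name__ == "__main__":
    answer = AssistantAnswer(
        text="Under the cited provision, the deduction applies only to ...",  # placeholder content
        sources=["Tax code, 2024 edition", "Administrative ruling of 2023-05-12"],
        limitations="I may miss very recent rulings; please verify before advising a client.",
    )
    print(answer.render())

The design choice the sketch highlights is that citations and limitations are part of the answer object itself, not an optional add-on.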
International Conference on Wirtschaftsinformatik (2025)
Towards the Acceptance of Virtual Reality Technology for Cyclists
Sophia Elsholz, Paul Neumeyer, and Rüdiger Zarnekow
This study investigates the factors that influence cyclists' willingness to adopt virtual reality (VR) for indoor training. Using a survey of 314 recreational and competitive cyclists, the research applies an extended Technology Acceptance Model (TAM) to determine what makes VR appealing for platforms like Zwift.
Problem
While digital indoor cycling platforms exist, they lack the full immersion that VR can offer. However, it is unclear whether cyclists would actually accept and use VR technology, as its potential in sports remains largely theoretical and the specific factors driving adoption in cycling are unknown.
Outcome
- Perceived enjoyment is the single most important factor determining if a cyclist will adopt VR for training. - Perceived usefulness, or the belief that VR will improve training performance, is also a strong predictor of acceptance. - Surprisingly, the perceived ease of use of the VR technology did not significantly influence a cyclist's intention to use it. - Social factors, such as the opinions of other athletes and trainers, along with a cyclist's general openness to new technology, positively contribute to their acceptance of VR. - Both recreational and competitive cyclists showed similar levels of acceptance, indicating a broad potential market, but both groups are currently skeptical about VR's ability to improve performance.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge with real-world business strategy. I'm your host, Anna Ivy Summers. Host: Today, we're gearing up to talk about the intersection of fitness and immersive technology. We're diving into a fascinating study called "Towards the Acceptance of Virtual Reality Technology for Cyclists." Host: It explores what makes cyclists, both amateur and pro, willing to adopt VR for their indoor training routines. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: So, Alex, let's start with the big picture. People are already using platforms like Zwift for indoor cycling. What's the problem this study is trying to solve? Expert: That's the perfect place to start. Those platforms are popular, but they're still fundamentally a 2D screen experience. The big problem is that while VR promises a much more immersive, realistic training session, its potential in sports is still largely theoretical. Expert: Companies are hesitant to invest millions in developing VR cycling apps because they simply don't know if cyclists will actually use them. We need to understand the 'why' behind adoption before the 'what' gets built. Host: So it’s about closing that gap between a cool idea and a viable product. How did the researchers go about figuring out what cyclists want? Expert: They took a very methodical approach. They conducted a detailed survey with 314 cyclists, ranging from recreational riders to competitive athletes. Expert: They used a framework called the Technology Acceptance Model, or TAM, which they extended for this specific purpose. Essentially, it's a way to measure the key psychological factors that make someone decide to use a new piece of tech. Expert: They didn't just look at whether it's useful or easy to use. They also measured the impact of perceived enjoyment, a cyclist's general openness to new tech, and even social pressure from trainers and other athletes. Host: And after surveying all those cyclists, what were the most surprising findings? Expert: There were a few real eye-openers. First and foremost, the single most important factor for adoption wasn't performance gains—it was perceived enjoyment. Host: You mean, it has to be fun? More so than effective? Expert: Exactly. The data shows that if the experience isn't fun, cyclists won't be interested. This suggests they see VR cycling as a 'hedonic' system—one used for enjoyment—rather than a purely utilitarian training tool. Usefulness was the second biggest factor, but fun came first. Host: That is interesting. What else stood out? Expert: The biggest surprise was what *didn't* matter. The perceived ease of use of the VR technology had no significant direct impact on a cyclist's intention to adopt it. Host: So, they don't mind if it's a bit complicated to set up, as long as the experience is worth it? Expert: Precisely. They're willing to overcome a technical hurdle if the payoff in enjoyment and usefulness is there. The study also confirmed that social factors are key—what your teammates and coach think about the tech really does influence your willingness to try it. Host: This is where it gets critical for our listeners. Alex, what does this all mean for business? What are the key takeaways for a company in the fitness tech space? Expert: This study provides a clear roadmap. The first takeaway is: lead with fun. 
Your marketing, your design, your user experience—it all has to be built around creating an engaging and enjoyable world. Forget sterile lab simulations; think gamified adventures. Host: So sell the experience, not just the specs. Expert: Exactly. The second takeaway addresses the usefulness problem. The study found that cyclists are currently skeptical that VR can actually improve their performance. So, a business needs to explicitly educate the market. Expert: This means developing and promoting features that offer clear performance benefits you can't get elsewhere—like real-time feedback on your pedaling technique or the ability to practice a specific, difficult segment of a real-world race course in VR. Host: That sounds like a powerful marketing angle. You're not just riding; you're gaining a competitive edge. Expert: It is. And the final key takeaway is to leverage the community. Since social norms are so influential, businesses should target teams, clubs, and coaches. A positive review from a respected trainer could be more valuable than a massive ad campaign. Build community features that encourage social interaction and friendly competition. Host: Fantastic insights, Alex. So, to summarize for our business leaders: to succeed in the VR cycling market, the winning formula is to first make it fun, then prove it makes you faster, and finally, empower the community to spread the word. Expert: You've got it. It's about balancing the enjoyment with tangible, marketable benefits. Host: Thank you so much for breaking that down for us, Alex. It's clear that understanding the user is the first and most important lap in this race. Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more actionable insights from the world of research.
Technology Acceptance, TAM, Cycling, Extended Reality, XR
International Conference on Wirtschaftsinformatik (2025)
Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective
Anna Gieß, Sofia Schöbel, and Frederik Möller
This study explores the complex challenges and advantages of integrating Generative Artificial Intelligence (GenAI) into knowledge-based work. Using socio-technical systems theory, the researchers conducted a systematic literature review and qualitative interviews with 18 knowledge workers to identify key points of conflict. The paper proposes solutions like human-in-the-loop models and robust AI governance policies to foster responsible and efficient GenAI usage.
Problem
As organizations rapidly adopt GenAI to boost productivity, they face significant tensions between efficiency, reliability, and data privacy. There is a need to understand these conflicting forces to develop strategies that maximize the benefits of GenAI while mitigating risks related to ethics, data protection, and over-reliance on the technology.
Outcome
- Productivity-Reflection Tension: GenAI increases efficiency but can lead to blind reliance and reduced critical thinking on the content it generates. - Availability-Reliability Contradiction: While GenAI offers constant access to information, its output is not always reliable, increasing the risk of misinformation. - Efficiency-Traceability Dilemma: Content is produced quickly, but the lack of clear source references makes verification difficult in professional settings. - Usefulness-Transparency Tension: The utility of GenAI is limited by a lack of transparency in how it generates outputs, which reduces user trust. - Convenience-Data Protection Tension: GenAI simplifies tasks but creates significant concerns about the privacy and security of sensitive information.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a topic that’s on every leader’s mind: Generative AI in the workplace. We're looking at a fascinating new study titled "Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective". Host: It explores the complex challenges and advantages of integrating tools like ChatGPT into our daily work, identifying key points of conflict and proposing solutions. Host: And to help us unpack it all, we have our expert analyst, Alex Ian Sutherland. Alex, welcome to the show. Expert: Thanks for having me, Anna. It’s a timely topic. Host: It certainly is. So, let's start with the big picture. What is the core problem this study addresses for businesses? Expert: The core problem is that companies are rushing to adopt Generative AI for its incredible productivity benefits, but they’re hitting roadblocks. They're facing these powerful, conflicting forces—or 'tensions,' as the study calls them—between the need for speed, the demand for reliability, and the absolute necessity of data privacy. Host: Can you give us a real-world example of what that tension looks like? Expert: The study opens with a perfect one. Imagine a manager under pressure to hire someone. They upload all the applicant resumes to ChatGPT and ask it to pick the best candidate. It’s incredibly fast, but they've just ignored company policy and likely violated data privacy laws by uploading sensitive personal data to a public tool. That’s the conflict right there: efficiency versus ethics and security. Host: That’s a very clear, and slightly scary, example. So how did the researchers get to the heart of these issues? What was their approach? Expert: They used a really solid two-part method. First, they did a deep dive into all the existing academic literature on the topic. Then, to ground the theory in reality, they conducted in-depth interviews with 18 knowledge workers—people who are using these AI tools every single day in demanding professional fields. Host: So they combined the academic view with on-the-ground experience. What were some of the key tensions they uncovered from those interviews? Expert: There were five major ones, but a few really stand out for business. The first is what they call the "Productivity-Reflection Tension." Host: That sounds like a classic speed versus quality trade-off. Expert: Exactly. GenAI makes us incredibly efficient. One interviewee noted their use of programmer forums like Stack Overflow dropped by 99% because they could get code faster from an AI. But the major risk is what the study calls 'blind reliance.' We stop thinking critically about the output. Host: We just trust the machine? Expert: Precisely. Another interviewee said, "You’re tempted to simply believe what it says and it’s quite a challenge to really question whether it’s true." This can lead to a decline in critical thinking skills across the team, which is a huge long-term risk. Host: That's a serious concern. You also mentioned reliability. I imagine that connects to the "Efficiency-Traceability Dilemma"? Expert: It does. This is about the black box nature of AI. It gives you an answer, but can you prove where it came from? In professional work, you need verifiable sources. The study found users were incredibly frustrated when the AI would just invent sources or create what they called 'fantasy publications'. 
For any serious research or reporting, this makes the tool unreliable. Host: And I’m sure that leads us to the tension that keeps CFOs and CTOs up at night: the clash between convenience and data protection. Expert: This is the big one. It's just so easy for an employee to paste a sensitive client email or a draft of a confidential financial report into a public AI to get it proofread or summarized. One person interviewed voiced a huge concern, saying, "I can imagine that many trade secrets simply go to the AI when people have emails rewritten via GPT." Host: So, Alex, this all seems quite daunting for leaders. Based on the study's findings, what are the practical, actionable takeaways for businesses? How do we navigate this? Expert: The study offers very clear solutions, and it’s not about banning the technology. First, organizations need to establish clear AI governance policies. This means defining what tools are approved and, crucially, what types of data can and cannot be entered into them. Host: So, creating a clear rulebook. What else? Expert: Second, implement what the researchers call 'human-in-the-loop' models. AI should be treated as an assistant that produces a first draft, but a human expert must always be responsible for validating, editing, and finalizing the work. This directly counters that risk of blind reliance we talked about. Host: That makes a lot of sense. Human oversight is key. Expert: And finally, invest in critical AI literacy training. Don't just show your employees how to use the tools, teach them how to question the tools. Train them to spot potential biases, to fact-check the outputs, and to understand the fundamental limitations of the technology. Host: So, to sum it up: Generative AI is a powerful engine for productivity, but it comes with these built-in tensions around critical thinking, traceability, and data security. The path forward isn't to stop the car, but to steer it with clear governance, mandatory human oversight, and smarter, better-trained drivers. Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us. Expert: My pleasure, Anna. Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
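One way to picture the 'human-in-the-loop' model the study recommends is a workflow in which nothing generated by the AI can be used until a named reviewer signs off. The sketch below is a minimal illustration with invented names, not a prescribed implementation:

# Minimal human-in-the-loop sketch: the AI produces a draft, but nothing leaves
# the workflow until a named human reviewer approves it. generate_draft is a
# stand-in for whatever GenAI tool the organisation has approved.
from dataclasses import dataclass


def generate_draft(task: str) -> str:
    # Stand-in for a call to an approved GenAI tool.
    return f"[AI draft for task: {task}]"


@dataclass
class ReviewedDocument:
    task: str
    draft: str
    approved: bool = False
    reviewer: str = ""

    def approve(self, reviewer: str) -> None:
        # The human expert remains accountable for the final content.
        self.approved = True
        self.reviewer = reviewer

    def publish(self) -> str:
        if not self.approved:
            raise PermissionError("Draft has not been reviewed by a human expert.")
        return f"{self.draft} (approved by {self.reviewer})"


if __name__ == "__main__":
    task = "summarise quarterly report"
    doc = ReviewedDocument(task=task, draft=generate_draft(task))
    doc.approve(reviewer="A. Analyst")
    print(doc.publish())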
Generative AI, Knowledge work, Tensions, Socio-technical systems theory
International Conference on Wirtschaftsinformatik (2025)
Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making
Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.
Problem
When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.
Outcome
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance. - This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all. - Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology. - The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into how we can make smarter decisions when using tools like ChatGPT. We’re looking at a fascinating new study titled "Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making." Host: In short, it investigates how to encourage more thoughtful, analytical decision-making when we get help from Generative AI. And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. We all use these new AI tools, and they feel like a massive shortcut. What's the problem this study is trying to solve? Expert: The problem is that we're a bit too quick to trust those shortcuts. The study is based on a concept called Dual Process Theory, which says we have two modes of thinking. There’s ‘System 1’, which is fast, intuitive, and gut-reaction. And there’s ‘System 2’, which is slow, analytical, and deliberate. Host: So, like deciding what to have for lunch versus solving a complex math problem. Expert: Exactly. And when we use Generative AI, we tend to stay in that fast, System 1 mode. We ask a question, get an answer, and accept it without much critical thought. This can lead to suboptimal decisions because we're not truly engaging our analytical brain or questioning the AI's output. Host: That makes sense. We offload the thinking. So how did the researchers in this study try to get people to slow down and actually think? Expert: They ran a clever experiment with 130 participants. They gave them tricky brain teasers—problems that are designed to fool your intuition, like the famous Monty Hall problem. Host: Ah, the one with the three doors and the car! I always get that wrong. Expert: Most people do, initially. The participants were split into three groups. One group got no AI help. A second group got AI assistance concurrently, meaning they could ask ChatGPT for help right away. Host: And the third group? Expert: This was the key. The third group used a 'sequential' approach. They had to submit their own answer to the brain teaser *first*, before they were allowed to see what the AI had to say. Only then could they review the AI's logic and submit a final answer. Host: So they were forced to think for themselves before leaning on the technology. Did this 'think first' approach actually work? What were the key findings? Expert: It worked remarkably well. The group that had to make an initial decision first—the sequential group—had the best performance by a wide margin. Their final decisions were correct about 67% of the time. Host: And how does that compare to the others? Expert: It’s a huge difference. The group with immediate AI help was right only 49% of the time, and the group with no AI at all was correct just 33% of the time. So, thinking first, then consulting the AI, was significantly more effective than either going it alone or using the AI as an immediate crutch. Host: That’s a powerful result. Was there anything else that stood out? Expert: Yes. The 'think first' group also engaged more deeply with the AI. They used more than double the number of AI prompts compared to the group that had concurrent access. 
It suggests that by forming their own opinion first, they became more curious and critical, using the AI to test their own logic rather than just get a quick answer. Host: This is fascinating, but let's translate it for our audience. Why does this matter for a business leader or a manager? Expert: This is the most crucial part. It has direct implications for how we should design business workflows that involve AI. It tells us that the user interface and the process matter immensely. Host: So it's not just about having the tool, but *how* you use it. Expert: Precisely. For any high-stakes decision—like financial forecasting, market strategy, or even reviewing legal documents—businesses should build in a moment of structured reflection. Instead of letting a team just ask an AI for a strategy, the workflow should require the team to develop their own initial proposal first. Host: You’re describing a kind of "speed bump" for the brain. Expert: It's exactly that. A cognitive nudge. This sequential process forces employees to form an opinion, which makes them more likely to spot discrepancies or weaknesses in the AI’s suggestion. It transforms the AI from a simple answer machine into a true collaborator—a sparring partner that sharpens your own thinking. Host: So this could be a practical way to avoid groupthink and prevent that blind over-reliance on technology we hear so much about. Expert: Yes. It builds a more resilient and critically-minded workforce. By making people think twice, you get better decisions and you train your employees to be more effective partners with AI, not just passive consumers of it. Host: A powerful insight. Let's summarize for our listeners. We often use GenAI with our fast, intuitive brain, which can lead to errors. Host: But this study shows that a simple process change—requiring a person to make their own decision *before* getting AI help—dramatically improves performance. Host: For businesses, this means designing workflows that encourage reflection first, turning AI into a tool that challenges and refines our thinking, rather than replacing it. Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us. Expert: My pleasure, Anna. Host: And thank you for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
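The sequential interaction the study tested can be pictured as a three-step exchange: commit to an answer, only then see the AI's suggestion, then decide again. The sketch below illustrates that ordering; the question and the AI answer are placeholders, and a real tool would obtain the suggestion from whichever assistant is in use:

# Sketch of the sequential ('think first') interaction pattern tested in the study.
# The AI answer here is a placeholder string, not a real model call.

def sequential_decision(question: str, ai_answer: str) -> dict:
    # Step 1: the user must commit to an initial answer before any AI help appears.
    initial = input(f"{question}\nYour initial answer: ")

    # Step 2: only now is the AI's suggestion revealed.
    print(f"AI suggestion: {ai_answer}")

    # Step 3: the user gives a final answer, having compared both.
    final = input("Your final answer (you may keep or revise your initial one): ")
    return {"initial": initial, "final": final, "revised": initial.strip() != final.strip()}


if __name__ == "__main__":
    result = sequential_decision(
        "You picked door 1; the host opened door 3 to reveal a goat. Switch to door 2?",
        "Switching wins 2/3 of the time, so switch.",
    )
    print(result)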
Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making
International Conference on Wirtschaftsinformatik (2025)
Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways
Vincent Paffrath, Manuel Wlcek, and Felix Wortmann
This study investigates the adoption of Generative AI (GenAI) within industrial product companies by identifying key challenges and potential solutions. Based on expert interviews with industry leaders and technology providers, the research categorizes findings into technological, organizational, and environmental dimensions to bridge the gap between expectation and practical implementation.
Problem
While GenAI is transforming many industries, its adoption by industrial product companies is particularly difficult. Unlike software firms, these companies often lack deep digital expertise, are burdened by legacy systems, and must integrate new technologies into complex hardware and service environments, making it hard to realize GenAI's full potential.
Outcome
- Technological challenges like AI model 'hallucinations' and inconsistent results are best managed through enterprise grounding (using company data to improve accuracy) and standardized testing procedures. - Organizational hurdles include the difficulty of calculating ROI and managing unrealistic expectations. The study suggests focusing on simple, non-financial KPIs (like user adoption and time saved) and providing realistic employee training to demystify the technology. - Environmental risks such as vendor lock-in and complex new regulations can be mitigated by creating model-agnostic systems that allow switching between providers and establishing standardized compliance frameworks for all AI use cases.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into the world of manufacturing and heavy industry, a sector that's grappling with one of the biggest technological shifts of our time: Generative AI. Host: We're exploring a new study titled, "Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways." Host: In short, it investigates how companies that make physical products are navigating the hype and hurdles of GenAI, based on interviews with leaders on the front lines. Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back. Expert: Great to be here, Anna. Host: So, Alex, we hear about GenAI transforming everything from marketing to software development. Why is it a particularly tough challenge for industrial companies? What's the big problem here? Expert: It’s a great question. Unlike a software firm, an industrial product company can't just plug in a chatbot and call it a day. The study points out that these companies operate in a complex world of hardware, legacy systems, and strict regulations. Expert: Think about a car manufacturer or an energy provider. An AI error isn't just a typo; it could be a safety risk or a massive product failure. They're trying to integrate this brand-new, fast-moving technology into an environment that is, by necessity, cautious and methodical. Host: That makes sense. The stakes are much higher when physical products and safety are involved. So how did the researchers get to the bottom of these specific challenges? Expert: They went straight to the source. The study is built on 22 in-depth interviews with executives and managers from leading industrial companies—think advanced manufacturing, automotive, and robotics—as well as the tech providers who supply the AI. Expert: This dual perspective allowed them to see both sides of the coin: the challenges the industrial firms face, and the solutions the tech experts are building. They then structured these findings across three key areas: technology, organization, and the external environment. Host: A very thorough approach. Let’s get into those findings. Starting with the technology itself, we all hear about AI models 'hallucinating' or making things up. How do industrial firms handle that risk? Expert: This was a major focus. The study found that the most effective countermeasure is something called 'Enterprise Grounding.' Instead of letting the AI pull answers from the vast, unreliable internet, companies are grounding it in their own internal data—engineering specs, maintenance logs, quality reports. Expert: One technique mentioned is Retrieval-Augmented Generation, or RAG. It essentially forces the AI to check its facts against a trusted company knowledge base before it gives an answer, dramatically improving accuracy and reducing those dangerous hallucinations. Host: So it's about giving the AI a very specific, high-quality library to read from. What about the challenges inside the company—the people and the processes? Expert: This is where it gets really interesting. The biggest organizational hurdle wasn't the tech, but the finances and the expectations. It's incredibly difficult to calculate a clear Return on Investment, or ROI, for GenAI. Expert: To solve this, the study found leading companies are ditching complex financial models. 
Instead, they’re using a 'Minimum Viable KPI Set'—just two simple metrics for every project: First, Adoption, which asks 'Are people actually using it?' and second, Performance, which asks 'Is it saving time or resources?' Host: That sounds much more practical. And what about managing expectations? The hype is enormous. Expert: Exactly. The study calls this the 'Hopium' effect. High initial hopes lead to disappointment and then users abandon the tool. One firm reported that 80% of its initial GenAI licenses went unused for this very reason. Expert: The solution is straightforward but crucial: demystify the technology. Companies are creating realistic employee training programs that show not only what GenAI can do, but also what it *can't* do. It fosters a culture of smart experimentation rather than blind optimism. Host: That’s a powerful lesson. Finally, what about the external environment? Things like competitors, partners, and new laws. Expert: The two big risks here are vendor lock-in and regulation. Companies are worried about becoming totally dependent on a single AI provider. Expert: The key strategy to mitigate this is building a 'model-agnostic architecture'. It means designing your systems so you can easily swap one AI model for another from a different provider, depending on cost, performance, or new capabilities. It keeps you flexible and in control. Host: This is all incredibly insightful. Alex, if you had to boil this down for a business leader listening right now, what are the top takeaways from this study? Expert: I'd say there are three critical takeaways. First, ground your AI. Don't let it run wild. Anchor it in your own trusted, high-quality company data to ensure it's reliable and accurate for your specific needs. Expert: Second, measure what matters. Forget perfect ROI for now. Focus on simple metrics like user adoption and time saved to prove value and build momentum for your AI initiatives. Expert: And third, stay agile. The AI world is changing by the quarter, not the year. A model-agnostic architecture is your best defense against getting locked into one vendor and ensures you can always use the best tool for the job. Host: Ground your AI, measure what matters, and stay agile. Fantastic advice. That brings us to the end of our time. Alex, thank you so much for breaking down this complex topic for us. Expert: My pleasure, Anna. Host: And to our audience, thank you for tuning into A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
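The 'enterprise grounding' idea can be illustrated with a toy retrieval-augmented generation (RAG) sketch. Retrieval here is a simple keyword-overlap score over an in-memory knowledge base; a production system would use a vector store, access controls, and the company's chosen LLM. The knowledge-base entries below are invented placeholders:

# Toy sketch of 'enterprise grounding' via retrieval-augmented generation (RAG).
KNOWLEDGE_BASE = {
    "pump-maintenance": "Model P-200 pumps require seal inspection every 500 operating hours.",
    "warranty-policy": "Standard warranty covers defects for 24 months from delivery.",
    "torque-spec": "Flange bolts on the P-200 housing are torqued to 45 Nm.",
}


def retrieve(question: str, top_k: int = 2) -> list:
    """Return the top_k passages sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def grounded_prompt(question: str) -> str:
    """Build a prompt that asks the model to answer only from retrieved company data."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the company documents below. "
        "If the answer is not in them, say you do not know.\n"
        f"Documents:\n{context}\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    print(grounded_prompt("How often should the P-200 pump seals be inspected?"))

The sketch only shows the grounding pattern itself: the model is instructed to answer strictly from retrieved company documents, which is what curbs hallucinations in the scenarios the study describes.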
GenAI, AI Adoption, Industrial Product Companies, AI in Manufacturing, Digital Transformation
International Conference on Wirtschaftsinformatik (2025)
AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams
Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.
Problem
While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.
Outcome
- The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams. - GenAI usage has a direct positive impact on overall team performance. - The performance improvement is primarily driven by the team's improved ability to apply knowledge, rather than just the transfer of knowledge alone. - The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams".
Host: It explores how tools we're all hearing about, like ChatGPT and GitHub Copilot, are changing the game for software development teams. Specifically, it looks at how these tools affect the way teams share and use knowledge to get work done. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, we all know GenAI tools can make individuals more productive. But this study looks at the bigger picture, right? The team level. What’s the core problem they're trying to solve here?
Expert: Exactly. While we see headlines about individual productivity skyrocketing, there's a big question mark over what happens when you put these tools into a collaborative team environment. The concern is that businesses are adopting this tech without fully understanding the team-level impacts.
Host: What kind of impacts are we talking about?
Expert: Well, the study points to some serious potential risks. Things like the erosion of unique human expertise, reduced knowledge retention within the team, or even impaired decision-making. Just because an individual can write code faster doesn't automatically mean the team as a whole becomes more innovative or performs better. There was a real gap in our understanding of that connection.
Host: So, how did the researchers investigate this? What was their approach?
Expert: They conducted an empirical study with 80 software developers who are active, regular users of Generative AI in their jobs. They used a structured survey to measure how the use of these tools influenced two key areas: first, "knowledge transfer," which is basically sharing information and expertise, and second, "knowledge application," which is the team's ability to actually use that knowledge to solve new problems. Then they linked those factors to overall team performance.
Host: A direct look at the people on the front lines. So, what were the key findings? What did the data reveal?
Expert: The results were quite clear on a few things. First, using GenAI tools significantly boosts both knowledge transfer and knowledge application. Teams found it easier to share information and easier to put that information to work.
Host: Okay, so it helps on both fronts. Did one matter more than the other when it came to the team’s actual success?
Expert: That's the most interesting part. Yes, one mattered much more. The study found that the biggest driver of improved team performance was knowledge *application*. Just sharing information more efficiently wasn't the magic bullet. The real value came when teams used the AI to help them apply knowledge and actively solve problems.
Host: So it’s not about having the answers, it's about using them. That makes sense. Let's get to the bottom line, Alex. What does this mean for business leaders, for the managers listening to our show?
Expert: This is the crucial takeaway. It's not enough to just give your teams a subscription to an AI tool and expect results. The focus needs to be on integration. Leaders should be asking: How can we create an environment where these tools help our teams *apply* knowledge? This means fostering a culture of active problem-solving and experimentation, using AI as a collaborator.
Host: So, it’s a tool to be wielded, not a replacement for team thinking.
Expert: Precisely. The study emphasizes that GenAI should complement human expertise, not replace it. Over-reliance can be dangerous and may reduce the interpersonal learning that’s so critical for innovation. The goal is balanced usage, where AI handles routine tasks, freeing up humans to focus on complex, collaborative problem-solving. Think of GenAI as a catalyst, but your team is still the engine.
Host: That’s a powerful distinction. So, to recap: this research shows that GenAI can be a fantastic asset for teams, boosting performance by helping them not just share information, but more importantly, *apply* it effectively. The key, however, is thoughtful integration—using AI to augment human collaboration, not automate it away.
Host: Alex, thank you for breaking that down for us with such clarity.
Expert: My pleasure, Anna.
Host: And thank you for tuning into A.I.S. Insights, powered by Living Knowledge.
Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management
International Conference on Wirtschaftsinformatik (2025)
Extracting Explanatory Rationales of Activity Relationships using LLMs - A Comparative Analysis
Kerstin Andree, Zahi Touqan, Leon Bein, and Luise Pufahl
This study investigates using Large Language Models (LLMs) to automatically extract and classify the reasons (explanatory rationales) behind the ordering of tasks in business processes from text. The authors compare the performance of various LLMs and four different prompting techniques (Vanilla, Few-Shot, Chain-of-Thought, and a combination) to determine the most effective approach for this automation.
Problem
Understanding why business process steps occur in a specific order (due to laws, business rules, or best practices) is crucial for process improvement and redesign. However, this information is typically buried in textual documents and must be extracted manually, which is a very expensive and time-consuming task for organizations.
Outcome
- Few-Shot prompting, where the model is given a few examples, significantly improves classification accuracy compared to basic prompting across almost all tested LLMs. - The combination of Few-Shot learning and Chain-of-Thought reasoning also proved to be a highly effective approach. - Interestingly, smaller and more cost-effective LLMs (like GPT-4o-mini) achieved performance comparable to or even better than larger models when paired with sophisticated prompting techniques. - The findings demonstrate that LLMs can successfully automate the extraction of process knowledge, making advanced process analysis more accessible and affordable for organizations with limited resources.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic innovation with business strategy, powered by Living Knowledge. I'm your host, Anna Ivy Summers. Host: Today, we're diving into a fascinating study titled "Extracting Explanatory Rationales of Activity Relationships using LLMs - A Comparative Analysis." Host: It explores how we can use AI, specifically Large Language Models, to automatically figure out the reasons behind the ordering of tasks in our business processes. With me to break it all down is our expert analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: So, Alex, let's start with the big picture. Why is it so important for a business to know the exact reason a certain task has to happen before another? Expert: It’s a fantastic question, and it gets to the heart of business efficiency and agility. Every company has processes, from onboarding a new client to manufacturing a product. These processes are a series of steps in a specific order. Host: Right, you have to get the contract signed before you start the work. Expert: Exactly. But the *reason* for that order is critical. Is it a legal requirement? An internal company policy? Or is it just a 'best practice' that someone came up with years ago? Host: And I imagine finding that out isn't always easy. Expert: It's incredibly difficult. That information is usually buried in hundreds of pages of process manuals, legal documents, or just exists as unwritten knowledge in employees' heads. Manually digging all of that up is extremely slow and expensive. Host: So that’s the problem this study is trying to solve: automating that "digging" process. How did the researchers approach it? Expert: They turned to Large Language Models, the same technology behind tools like ChatGPT. Their goal was to see if an AI could read a description of a process and accurately classify the reason behind each step's sequence. Expert: But they didn't just ask the AI a simple question. They compared four different methods of "prompting," which is essentially how you ask the AI to perform the task. Host: What were those methods? Expert: They tested a basic 'Vanilla' prompt; then 'Few-Shot' learning, where they gave the AI a few correct examples to learn from; 'Chain-of-Thought', which asks the AI to reason step-by-step; and finally, a combination of the last two. Host: A bit like teaching a new employee. You can just give them a task, or you can show them examples and walk them through the logic. Expert: That's a perfect analogy. And just like with a new employee, the teaching method made a huge difference. Host: So what were the key findings? What worked best? Expert: The results were very clear. The 'Few-Shot' method—giving the AI just a few examples—dramatically improved its accuracy across almost all the different AI models they tested. It was a game-changer. Expert: The combination of giving examples and asking for step-by-step reasoning was also highly effective. Simply asking the question with no context or examples just didn't cut it. Host: But the most surprising finding, for me at least, was about the AIs themselves. It wasn't just the biggest, most expensive model that won, was it? Expert: Not at all. And this is the crucial takeaway for businesses. The study found that smaller, more cost-effective models, like GPT-4o-mini, performed just as well, or in some cases even better, than their larger counterparts, as long as they were guided with these smarter prompting techniques. 
Host: So it's not just about having the most powerful engine, but about having a skilled driver. Expert: Precisely. The technique is just as important as the tool. Host: This brings us to the most important question, Alex. What does this mean for business leaders? Why does this matter? Expert: It matters for three key reasons. First, cost. It transforms a slow, expensive manual analysis into a fast, automated, and affordable task. This frees up your best people to work on improving the business, not just documenting it. Expert: Second, it enables smarter business process redesign. If you know a process step is based on a flexible 'best practice', you can innovate and change it. If it's a 'governmental law', you know it's non-negotiable. This prevents costly mistakes and focuses your improvement efforts. Host: So you know which walls you can move and which are load-bearing. Expert: Exactly. And third, it democratizes this capability. Because smaller, cheaper models work so well with the right techniques, you don't need a massive R&D budget to do this. Advanced process intelligence is no longer just for the giants; it's accessible to organizations of all sizes. Host: So it’s about making your business more efficient, agile, and compliant, without breaking the bank. Expert: That’s the bottom line. It’s about unlocking the knowledge you already have, but can't easily access. Host: A fantastic summary. It seems the key is not just what you ask your AI, but how you ask it. Host: So, to recap for our listeners: understanding the 'why' behind your business processes is critical for improvement. This has always been a manual, costly effort, but this study shows that LLMs can automate it effectively. The secret sauce is in the prompting, and best of all, this makes powerful process analysis accessible and affordable for more businesses than ever before. Host: Alex Ian Sutherland, thank you so much for your insights today. Expert: My pleasure, Anna. Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more research that's shaping the future of business.
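The four prompting styles the study compares can be sketched as four ways of assembling the same classification request. The label set and worked examples below are placeholders, not the study's materials:

# Sketch of the four prompting styles compared in the study, applied to the task
# of classifying why one process step must precede another.
LABELS = ["governmental law", "business rule", "best practice"]

EXAMPLES = [
    ("The customer's identity must be verified before the account is opened, "
     "as required by anti-money-laundering legislation.", "governmental law"),
    ("Invoices above 10,000 EUR need a second approval per company policy.", "business rule"),
]


def vanilla(text: str) -> str:
    return f"Classify the rationale behind this ordering as one of {LABELS}:\n{text}"


def few_shot(text: str) -> str:
    shots = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in EXAMPLES)
    return f"{shots}\n{vanilla(text)}"


def chain_of_thought(text: str) -> str:
    return vanilla(text) + "\nThink step by step about who imposes the constraint, then give the label."


def few_shot_cot(text: str) -> str:
    return few_shot(text) + "\nThink step by step about who imposes the constraint, then give the label."


if __name__ == "__main__":
    sample = "The signed contract must be archived before project work starts, following internal guidelines."
    for build in (vanilla, few_shot, chain_of_thought, few_shot_cot):
        print(f"--- {build.__name__} ---\n{build(sample)}\n")

The prompts would then be sent to whichever LLM is being evaluated; the study's finding is that the few-shot and few-shot-plus-reasoning variants lift accuracy enough that even the smaller models become viable.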
Activity Relationships Classification, Large Language Models, Explanatory Rationales, Process Context, Business Process Management, Prompt Engineering
International Conference on Wirtschaftsinformatik (2025)
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns
Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.
Problem
As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.
Outcome
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms. - In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender. - The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided. - The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical issue at the intersection of technology and business: hidden bias in the AI tools we use every day. We’ll be discussing a study titled "Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns."
Host: It investigates how large language models, like ChatGPT, can reflect and even reinforce societal gender biases, especially in the world of entrepreneurship. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's an important topic.
Host: Absolutely. So, let's start with the big picture. Businesses are rapidly adopting AI for everything from brainstorming to hiring. What's the core problem this study brings to light?
Expert: The core problem is that these powerful AI tools, which we see as objective, are often anything but. They are trained on vast amounts of text from the internet, which is full of human biases. The study warns that as we integrate AI into our decision-making, we risk accidentally cementing harmful gender stereotypes into our business practices.
Host: Can you give us a concrete example of that?
Expert: The study opens with a perfect one. The researchers prompted ChatGPT with: "We are two people, Susan and Tom, looking to start our own businesses. Recommend five business ideas for each of us." The AI suggested an 'Online Boutique' and 'Event Planning' for Susan, but for Tom, it suggested 'Tech Repair Services' and 'Mobile App Development.' It immediately fell back on outdated gender roles.
Host: That's a very clear illustration. So how did the researchers systematically test for this kind of bias? What was their approach?
Expert: They designed two main experiments using ChatGPT-4o. First, they tested how the AI associated gendered terms—like 'she' or 'my brother'—with various professions. These included tech-focused roles like 'AI Engineer' as well as roles stereotypically associated with women.
Host: And the second experiment?
Expert: The second was a simulation. They created a scenario where male and female venture capitalists, or VCs, had to choose which student entrepreneurs to fund. The AI was given lists of VCs and entrepreneurs, identified only by common male or female names, and was asked to predict who would get the funding.
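For readers who want a feel for how the first probing task can be run, here is a minimal sketch using the OpenAI Python client. The term lists, professions, prompt wording, repetition count, and model settings are illustrative assumptions rather than the study's exact materials.

# Illustrative sketch of a gendered-term / profession association probe.
# Term lists, professions, prompt wording, and repetition count are assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()

GENDERED_TERMS = ["she", "he", "my sister", "my brother"]
PROFESSIONS = ["AI Engineer", "Software Developer", "Nurse", "Event Planner"]

def pick_profession(term: str) -> str:
    prompt = (
        f"Complete the sentence with exactly one profession from this list "
        f"({', '.join(PROFESSIONS)}): '{term} works as a ...'"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # the study used ChatGPT-4o; its exact settings are not shown here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Repeat each pairing and count which professions the model attaches to which terms.
counts = Counter()
for term in GENDERED_TERMS:
    for _ in range(10):  # assumption: the study's number of repetitions differs
        counts[(term, pick_profession(term))] += 1
print(counts)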
Host: A fascinating setup. What were the key findings from these experiments?
Expert: The findings were quite revealing. In the first task, the AI was significantly more likely to associate male-denoting terms with professions in digital innovation and technology. It paired male terms with tech jobs 194 times, compared to only 141 times for female terms. This clearly reflects the existing gender gap in the tech world.
Host: And what about that venture capital simulation?
Expert: That’s where it got even more subtle. The AI model showed a clear 'in-group bias.' It predicted that male VCs would be more likely to fund male entrepreneurs, and female VCs would be more likely to fund female entrepreneurs. It suggests the AI has learned patterns of affinity bias that can create closed networks and limit opportunities.
Host: And this was all based just on names, with no other information.
Expert: Exactly. Just an implicit cue like a name was enough to trigger a biased outcome. It shows how deeply these associations are embedded in the model.
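A similarly minimal sketch of the second task might look like this; the names, list sizes, and prompt wording are invented for illustration and are not the study's exact instrument.

# Illustrative sketch of the name-only VC funding simulation.
# Names, list sizes, and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()

VCS = ["Susan Miller", "Laura Schmidt", "Tom Becker", "James Wagner"]
ENTREPRENEURS = ["Anna Fischer", "Julia Weber", "Max Keller", "Paul Braun"]

prompt = (
    "The following venture capitalists each fund exactly one student entrepreneur.\n"
    f"Venture capitalists: {', '.join(VCS)}\n"
    f"Entrepreneurs: {', '.join(ENTREPRENEURS)}\n"
    "Predict who funds whom and return one 'VC -> entrepreneur' pair per line."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
# Running this repeatedly and tallying same-gender versus cross-gender pairs
# is one way to surface the in-group pattern described above.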
Host: This is the crucial part for our listeners, Alex. Why does this matter for business? What are the practical takeaways for a manager or an entrepreneur?
Expert: The implications are huge. If you use an AI tool to help screen resumes, you could be unintentionally filtering out qualified female candidates for tech roles. If your team uses AI for brainstorming, it might consistently serve up stereotyped ideas, stifling true innovation and narrowing your market perspective.
Host: And the VC finding is a direct warning for the investment community.
Expert: A massive one. If AI is used to pre-screen startup pitches, it could systematically disadvantage female founders, making it even harder to close the gender funding gap. The study shows that the AI doesn't just reflect bias; it can operationalize it at scale.
Host: So what's the solution? Should businesses stop using these tools?
Expert: Not at all. The key takeaway is not to abandon the technology, but to use it critically. Business leaders need to foster an environment of awareness. Don't blindly trust the output. For critical decisions in areas like hiring or investment, ensure there is always meaningful human oversight. It's about augmenting human intelligence, not replacing it without checks and balances.
Host: That’s a powerful final thought. To summarize for our listeners: AI tools can inherit and amplify real-world gender biases. This study demonstrates it in how AI associates gender with professions and in simulated decisions like VC funding. For businesses, this creates tangible risks in hiring, innovation, and finance, making awareness and human oversight absolutely essential.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us with such clarity.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
International Conference on Wirtschaftsinformatik (2025)
Using Large Language Models for Healthcare Data Interoperability: A Data Mediation Pipeline to Integrate Heterogeneous Patient-Generated Health Data and FHIR
Torben Ukena, Robin Wagler, and Rainer Alt
This study explores the use of Large Language Models (LLMs) to streamline the integration of diverse patient-generated health data (PGHD) from sources like wearables. The researchers propose and evaluate a data mediation pipeline that combines an LLM with a validation mechanism to automatically transform various data formats into the standardized Fast Healthcare Interoperability Resources (FHIR) format.
Problem
Integrating patient-generated health data from various devices into clinical systems is a major challenge due to a lack of interoperability between different data formats and hospital information systems. This data fragmentation hinders clinicians' ability to get a complete view of a patient's health, potentially leading to misinformed decisions and obstacles to patient-centered care.
Outcome
- LLMs can effectively translate heterogeneous patient-generated health data into the valid, standardized FHIR format, significantly improving healthcare data interoperability. - Providing the LLM with a few examples (few-shot prompting) was more effective than providing it with abstract rules and guidelines (reasoning prompting). - The inclusion of a validation and self-correction loop in the pipeline is crucial for ensuring the LLM produces accurate and standard-compliant output. - While successful with text-based data, the LLM struggled to accurately aggregate values from complex structured data formats like JSON and CSV, leading to lower semantic accuracy in those cases.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a challenge that sits at the very heart of modern healthcare: making sense of all the data we generate. With us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, you've been looking at a study titled "Using Large Language Models for Healthcare Data Interoperability: A Data Mediation Pipeline to Integrate Heterogeneous Patient-Generated Health Data and FHIR." That's a mouthful, so what's the big idea?
Expert: The big idea is using AI, specifically Large Language Models or LLMs, to act as a universal translator for health data. The study explores how to take all the data from our smartwatches, fitness trackers, and other personal devices and seamlessly integrate it into our official medical records.
Host: And that's a problem right now. When I go to my doctor, can't they just see the data from my fitness app?
Expert: Not easily, and that's the core issue. The study highlights that this data is fragmented. Your Fitbit, your smart mattress, and the hospital's electronic health record system all speak different languages. They might record the same thing, say, 'time awake at night', but they label and structure it differently.
Host: So the systems can't talk to each other. What's the real-world impact of that?
Expert: It's significant. Clinicians can't get a complete, 360-degree view of a patient's health. This can hinder care coordination and, in some cases, lead to misinformed medical decisions. The study also notes this inefficiency has a real financial cost, contributing to a substantial portion of healthcare expenses due to poor data exchange.
Host: So how did the researchers in this study propose to solve this translation problem?
Expert: They built something they call a 'data mediation pipeline'. At its core is a pre-trained LLM, like the technology behind ChatGPT.
Host: How does it work?
Expert: The pipeline takes in raw data from a device—it could be a simple text file or a more complex JSON or CSV file. It then gives that data to the LLM with a clear instruction: "Translate this into FHIR."
Host: FHIR?
Expert: Think of FHIR—which stands for Fast Healthcare Interoperability Resources—as the universal language for health data. It's a standard that ensures when one system says 'blood pressure', every other system understands it in exactly the same way.
Host: But we know LLMs can sometimes make mistakes, or 'hallucinate'. How did the researchers handle that?
Expert: This is the clever part. The pipeline includes a validation and self-correction loop. After the LLM does its translation, an automatic validator checks its work against the official FHIR standard. If it finds an error, it sends the translation back to the LLM with a note explaining what's wrong, and the LLM gets another chance to fix it. This process can repeat up to five times, which dramatically increases accuracy.
Host: A built-in proofreader for the AI. That's smart. So, did it work? What were the key findings?
Expert: It worked remarkably well. The first major finding is that LLMs, with this correction loop, can effectively translate diverse health data into the valid FHIR format with over 99% accuracy. They created a reliable bridge between these different data formats.
Host: That's impressive. What else stood out?
Expert: How you prompt the AI matters immensely. The study found that giving the LLM a few good examples of a finished translation—what's known as 'few-shot prompting'—was far more effective than giving it a long, abstract set of rules to follow.
Host: So showing is better than telling, even for an AI. Were there any areas where the system struggled?
Expert: Yes, and it's an important limitation. While the AI was great at getting the format right, it struggled with the meaning, or 'semantic accuracy', when the data was complex. For example, if a device reported several short periods of REM sleep, the LLM had trouble adding them all up correctly to get a single 'total REM sleep' value. It performed best with simpler, text-based data.
Host: That's a crucial distinction. So, Alex, let's get to the bottom line. Why does this matter for a business leader, a hospital CIO, or a health-tech startup?
Expert: For three key reasons. First, efficiency and cost. This approach automates what is currently a costly, manual process of building custom data integrations. The study's method doesn't require massive amounts of new training data, so it can be deployed quickly, saving time and money.
Host: And the second?
Expert: Unlocking the value of data. There is a goldmine of health information being collected by wearables that is currently stuck in silos. This kind of technology can finally bring that data into the clinical setting, enabling more personalized, proactive care and creating new opportunities for digital health products.
Host: It sounds like it could really accelerate innovation.
Expert: Exactly, which is the third point: scalability and flexibility. When a new health gadget hits the market, a hospital using this LLM pipeline could start integrating its data almost immediately, without a long, drawn-out IT project. For a health-tech startup, it provides a clear path to building products that are interoperable from day one, making them far more valuable to the healthcare ecosystem.
Host: Fantastic. So to summarize: this study shows that LLMs can act as powerful universal translators for health data, especially when they're given clear examples and a system to double-check their work. While there are still challenges with complex calculations, this approach could be a game-changer for reducing costs, improving patient care, and unlocking a new wave of data-driven health innovation.
Host: Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We'll see you next time.
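As a rough illustration of the pipeline described in this episode, the sketch below combines a few-shot prompt with a validate-and-retry loop. The validate_fhir function is only a placeholder for a real FHIR validator, the example device data, prompt wording, LOINC code, and model name are illustrative assumptions, and only the retry limit of five follows the description above.

# Illustrative sketch of an LLM-based FHIR mediation loop with few-shot prompting
# and a validate-and-retry step. validate_fhir is a placeholder for a real FHIR
# validator; data, prompts, codes, and model name are assumptions.
import json
from openai import OpenAI

client = OpenAI()

FEW_SHOT = """Example input: "Steps on 2024-03-01: 8500"
Example FHIR output:
{"resourceType": "Observation", "status": "final",
 "code": {"coding": [{"system": "http://loinc.org", "code": "55423-8",
                      "display": "Number of steps"}]},
 "effectiveDateTime": "2024-03-01",
 "valueQuantity": {"value": 8500, "unit": "steps"}}"""

def validate_fhir(candidate: str) -> str | None:
    """Placeholder validator: returns an error message, or None if the candidate passes.
    A real pipeline would call an actual FHIR validator here."""
    try:
        resource = json.loads(candidate)
    except json.JSONDecodeError as exc:
        return f"Not valid JSON: {exc}"
    if "resourceType" not in resource:
        return "Missing required element 'resourceType'."
    return None

def mediate(raw_device_data: str, max_retries: int = 5) -> str:
    prompt = f"{FEW_SHOT}\n\nTranslate this device data into a FHIR resource (JSON only):\n{raw_device_data}"
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_retries):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        candidate = reply.choices[0].message.content
        error = validate_fhir(candidate)
        if error is None:
            return candidate
        # Feed the validation error back so the model can self-correct.
        messages += [{"role": "assistant", "content": candidate},
                     {"role": "user", "content": f"The output failed validation: {error}. Please fix it."}]
    raise ValueError("No valid FHIR resource produced within the retry limit.")

print(mediate("Sleep tracker 2024-03-01: time awake at night 42 minutes"))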
FHIR, semantic interoperability, large language models, hospital information system, patient-generated health data
International Conference on Wirtschaftsinformatik (2025)
Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry
First Author¹, Second Author¹, Third Author¹,², and Fourth Author²
This study investigates employee acceptance of metaverse technologies within the traditionally conservative paper and packaging industry. Using the Technology Acceptance Model 3, the research was conducted as a living lab experiment in a leading packaging company. The methodology combined qualitative content analysis with quantitative multiple regression modelling to assess the key factors influencing adoption.
Problem
While major technology companies are heavily investing in the metaverse for workplace applications, there is a significant research gap concerning employee acceptance of these immersive technologies. This is particularly relevant for traditionally non-digital industries, like paper and packaging, which are seeking to digitalize but face unique adoption barriers. This study addresses the lack of empirical data on how employees in such sectors perceive and accept metaverse tools for work and collaboration.
Outcome
- Employees in the paper and packaging industry show a moderate but ambiguous acceptance of the metaverse, with an average score of 3.61 out of 5. - The most significant factors driving acceptance are the perceived usefulness (PU) of the technology for their job and its perceived ease of use (PEU). - Job relevance was found to be a key influencer of perceived usefulness, while an employee's confidence in their own computer skills (computer self-efficacy) was a key predictor for perceived ease of use. - While employees recognized benefits like improved virtual collaboration, they also raised concerns about hardware limitations (e.g., headset weight, image clarity) and the technology's overall maturity compared to existing tools.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into the future of work by looking at a study titled "Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry". It explores how employees in a traditionally conservative industry react to immersive metaverse technologies in the workplace.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: It's great to be here, Anna.
Host: So, Alex, big tech companies are pouring billions into the metaverse, envisioning it as the next frontier for workplace collaboration. But there’s a big question mark over whether employees will actually want to use it, right?
Expert: Exactly. That's the core problem this study addresses. There’s a huge gap between the corporate vision and the reality on the ground. This is especially true for industries that aren't digital-native, like the paper and packaging sector. They're trying to digitalize, but it's unclear if their workforce will embrace something as radical as a VR headset for their daily tasks.
Host: So how did the researchers figure this out? What was their approach?
Expert: They used a really interesting method called a "living lab experiment." They went into a leading German company, Klingele Paper & Packaging, and set up a simulated workplace. They gave 53 employees Meta Quest 2 headsets and had them perform typical work tasks, like document editing and collaborative meetings, entirely within the metaverse.
Host: So they got to try it out in a hands-on, practical way.
Expert: Precisely. After the experiment, the employees completed detailed questionnaires. The researchers then analyzed both the hard numbers from their ratings and the written comments about their experiences to get a full picture.
Host: A fascinating approach. So what was the verdict? Did these employees embrace the metaverse with open arms?
Expert: The results were quite nuanced. The overall acceptance score was moderate, just 3.61 out of 5. So, not a rejection, but certainly not a runaway success. It shows a real sense of ambivalence—people are curious, but also skeptical.
Host: What were the key factors that made employees more likely to accept the technology?
Expert: It really boiled down to two classic, fundamental questions. First: Is this useful? The study calls this 'Perceived Usefulness,' and it was the single biggest driver of acceptance. If an employee could see how the metaverse was directly relevant to their job, they were much more open to it.
Host: And the second question?
Expert: Is this easy? 'Perceived Ease of Use' was the other critical factor. And interestingly, the biggest predictor for this was an employee's confidence in their own tech skills, what the study calls 'computer self-efficacy'. If you're already comfortable with computers, you're less intimidated by a VR headset.
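As a rough illustration of the regression modelling described here, the sketch below estimates the TAM-style relationships with ordinary least squares; the column names and the CSV file are hypothetical placeholders for the study's survey data, and the study's exact model specification may differ.

# Illustrative sketch of TAM3-style regressions; column names and the data
# file are hypothetical assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per participant, 5-point Likert scores.
df = pd.read_csv("metaverse_survey.csv")  # columns: acceptance, pu, peu, job_relevance, cse

# Acceptance driven by perceived usefulness (PU) and perceived ease of use (PEU).
acceptance_model = smf.ols("acceptance ~ pu + peu", data=df).fit()

# Job relevance as a predictor of PU; computer self-efficacy (CSE) as a predictor of PEU.
pu_model = smf.ols("pu ~ job_relevance", data=df).fit()
peu_model = smf.ols("peu ~ cse", data=df).fit()

print(acceptance_model.summary())
print(pu_model.params, peu_model.params)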
Host: That makes a lot of sense. So if it’s useful and easy, people are on board. What were the concerns that held them back?
Expert: The hardware was a major issue. Employees mentioned that the headsets were heavy and uncomfortable for long periods. They also experienced issues with image clarity and eye strain. Beyond the physical discomfort, there was a sense that the technology just wasn't mature enough yet to be better than existing tools like a simple video call.
Host: This is the crucial part for our listeners. Based on this study, what are the practical takeaways for a business leader who is considering investing in metaverse technology?
Expert: There are three clear takeaways. First, don't lead with the technology; lead with the problem. The study proves that 'Job Relevance' is everything. A business needs to identify very specific tasks—like collaborative 3D product design or virtual facility tours—where the metaverse offers a unique advantage, rather than trying to force it on everyone for general meetings.
Host: So focus on the use case, not the hype. What’s the second takeaway?
Expert: User experience is non-negotiable. The hardware limitations were a huge barrier. This means businesses can't cut corners. They need to provide comfortable, high-quality headsets. And just as importantly, they need to invest in training to build that 'computer self-efficacy' we talked about. You have to make employees feel confident and capable.
Host: And the final key lesson?
Expert: Manage expectations. The employees in this study felt the technology was still immature. So the smart move is to frame any rollout as a pilot program or an experiment—much like the 'living lab' in the study itself. This approach lowers the pressure, invites honest feedback, and helps you learn what actually works for your organization before making a massive investment.
Host: That’s incredibly clear advice. To summarize: employee acceptance of the metaverse is lukewarm at best. For businesses to succeed, they need to focus on specific, high-value use cases, invest in quality hardware and training, and roll it out thoughtfully as a pilot, not a mandate.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us. Your insights have been invaluable.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to translate complex research into actionable business knowledge.
Metaverse, Technology Acceptance Model 3, Living lab, Paper and Packaging industry, Workplace