Generative AI Usage of University Students: Navigating Between Education and Business
International Conference on Wirtschaftsinformatik (2025)

Fabian Walke, Veronika Föller
This study investigates how university students who also work professionally use Generative AI (GenAI) in both their academic and business lives. Using a grounded theory approach, the researchers interviewed eleven part-time students from a distance learning university to understand the characteristics, drivers, and challenges of their GenAI usage.

Problem: While much research has explored GenAI in education or in business separately, there is a significant gap in understanding its use at the intersection of these two domains. Specifically, the unique experiences of part-time students who balance professional careers with their studies have been largely overlooked.

Outcome:
- GenAI significantly enhances productivity and learning for students balancing work and education, helping with tasks like writing support, idea generation, and summarizing content.
- Students express concerns about the ethical implications, reliability of AI-generated content, and the risk of academic misconduct or being falsely accused of plagiarism.
- A key practical consequence is that GenAI tools like ChatGPT are replacing traditional search engines for many information-seeking tasks due to their speed and directness.
- The study highlights a strong need for universities to provide clear guidelines, regulations, and formal training on using GenAI effectively and ethically.
- User experience is a critical factor; a positive, seamless interaction with a GenAI tool promotes continuous usage, while a poor experience diminishes willingness to use it.
Keywords: Artificial Intelligence, ChatGPT, Enterprise, Part-time students, Generative AI, Higher Education
The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration
International Conference on Wirtschaftsinformatik (2025)

Christian Meske, Tobias Hermanns, Florian Brachten
This study investigates how traditional models of team collaboration, known as Transactive Memory Systems (TMS), manifest when humans work with Generative AI. Through in-depth interviews with 14 knowledge workers, the research analyzes the unique dynamics of expertise recognition, trust, and coordination that emerge in these partnerships.

Problem: While Generative AI is increasingly used as a collaborative tool, our understanding of teamwork is based on human-to-human interaction. This creates a knowledge gap, as the established theories do not account for an AI partner that operates on algorithms rather than social cues, potentially leading to inefficient and frustrating collaborations.

Outcome:
- Human-AI collaboration is asymmetrical: humans learn the AI's capabilities, but the AI fails to recognize and remember human expertise beyond a single conversation.
- Trust in GenAI is ambivalent and requires verification: Users simultaneously see the AI as an expert yet doubt its reliability, forcing them to constantly verify its outputs, a step not typically taken with trusted human colleagues.
- Teamwork is hierarchical, not mutual: Humans must always take the lead and direct a passive AI that lacks initiative, creating a 'boss-employee' dynamic rather than a reciprocal partnership where both parties contribute ideas.
Keywords: Generative AI, Transactive Memory Systems, Human-AI Collaboration, Knowledge Work, Trust in AI, Expertise Recognition, Coordination
LLMs for Intelligent Automation - Insights from a Systematic Literature Review
International Conference on Wirtschaftsinformatik (2025)

David Sonnabend, Mahei Manhai Li, Christoph Peters
This study conducts a systematic literature review to examine how Large Language Models (LLMs) can enhance Intelligent Automation (IA). The research aims to overcome the limitations of traditional Robotic Process Automation (RPA), such as its difficulty handling unstructured data and workflow changes, by systematically investigating how LLMs can be integrated.

Problem: Traditional Robotic Process Automation (RPA) struggles with complex tasks involving unstructured data and dynamic workflows. While Large Language Models (LLMs) show promise in addressing these issues, there has been no systematic investigation into how they can specifically advance the field of Intelligent Automation (IA), creating a significant research gap.

Outcome:
- LLMs are primarily used to process complex inputs, such as unstructured text, within automation workflows.
- They are leveraged to generate automation workflows directly from natural language commands, simplifying the creation process (a minimal sketch follows this entry).
- LLMs are also used to guide goal-oriented Graphical User Interface (GUI) navigation, making automation more adaptable to interface changes.
- A key research gap was identified in the lack of systems that combine these different capabilities and enable continuous learning at runtime.
Keywords: Large Language Models (LLMs), Intelligent Process Automation (IPA), Intelligent Automation (IA), Cognitive Automation (CA), Tool Learning, Systematic Literature Review, Robotic Process Automation (RPA)
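To make the second outcome concrete: below is a minimal, purely illustrative sketch of turning a natural-language command into a structured automation workflow with an LLM. It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment; the model name, prompt, and step schema are our own assumptions, not drawn from the reviewed literature.

```python
# Illustrative sketch only: derive an ordered automation workflow from a
# natural-language command via an LLM. Assumes the `openai` Python SDK
# (v1+) and OPENAI_API_KEY in the environment; the model name and the
# step schema are hypothetical choices, not from the reviewed papers.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def command_to_workflow(command: str) -> list[dict]:
    """Ask the model to translate a command into ordered workflow steps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        response_format={"type": "json_object"},  # force parseable output
        messages=[
            {
                "role": "system",
                "content": (
                    "Convert the user's request into JSON of the form "
                    '{"steps": [{"action": "...", "target": "..."}]}.'
                ),
            },
            {"role": "user", "content": command},
        ],
    )
    return json.loads(response.choices[0].message.content)["steps"]


if __name__ == "__main__":
    steps = command_to_workflow(
        "Download the monthly sales report and email it to finance"
    )
    for i, step in enumerate(steps, start=1):
        print(f"{i}. {step['action']} -> {step['target']}")
```

In a full intelligent-automation pipeline, each generated step would then be bound to an executor such as an RPA action or a GUI operation, which is where the goal-oriented GUI navigation described above comes in.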
Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace
International Conference on Wirtschaftsinformatik (2025)

Dugaxhin Xhigoli
This qualitative study examines how an employee's personality, professional identity, and company culture influence their engagement with generative AI (GenAI). Through 23 expert interviews, the research explores the underlying factors that shape different AI adoption behaviors, from transparent integration to strategic concealment.

Problem: As companies rapidly adopt generative AI, they encounter a wide range of employee responses, yet there is limited understanding of what drives this variation. This study addresses the research gap by investigating why employees differ in their AI usage, specifically focusing on how individual psychology and the organizational environment interact to shape these behaviors.

Outcome:
- The study identified four key dimensions influencing GenAI adoption: personality-driven usage behavior, AI-driven changes to professional identity, organizational culture factors, and the organizational risks of unmanaged AI use.
- Four distinct employee archetypes were identified: 'Innovative Pioneers' who openly use and identify with AI, 'Hidden Users' who identify with AI but conceal its use for competitive advantage, 'Transparent Users' who openly use AI as a tool, and 'Critical Skeptics' who remain cautious and avoid it.
- Personality traits, particularly those from the 'Dark Triad' like narcissism, and competitive work environments significantly drive the strategic concealment of AI use.
- A company's culture is critical; open, innovative cultures foster ethical and transparent AI adoption, whereas rigid, hierarchical cultures encourage concealment and the rise of risky 'Shadow AI'.
Keywords: Generative AI, Personality Traits, AI Identity, Organizational Culture, AI Adoption
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes
International Conference on Wirtschaftsinformatik (2025)

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem: While Generative AI offers a powerful way to help property hosts create compelling listings, platforms do not yet know the most effective way to implement these tools. It is unclear whether AI assistance is more impactful at the beginning or the end of the writing process, or whether users prefer to actively ask for help rather than receive it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome:
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to slightly higher reliance than receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Keywords: Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments
International Conference on Wirtschaftsinformatik (2025)

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem: Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context, such as the user's expertise, the AI's design, and the nature of the task, which is crucial for explaining these inconsistencies.

Outcome:
- The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity); a structural sketch follows this entry.
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
Keywords: AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework
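As a structural illustration of the framework's three factor groups, here is a small sketch in Python; the class fields and the weighting rule are hypothetical simplifications for illustration, not the paper's operationalization of trust or reliance.

```python
# Illustrative sketch: the framework's three groups of contextual factors
# (human-, AI-, and decision-related) combined into a rough reliance
# signal. Field names and weights are hypothetical, not from the paper.
from dataclasses import dataclass


@dataclass
class HumanFactors:
    expertise: float            # 0..1, domain expertise of the user
    bias_susceptibility: float  # 0..1, proneness to cognitive biases


@dataclass
class AIFactors:
    performance: float      # 0..1, observed accuracy of the AI
    explainability: float   # 0..1, how well outputs are explained


@dataclass
class DecisionFactors:
    risk: float         # 0..1, stakes of the decision
    complexity: float   # 0..1, difficulty of the task


def expected_reliance(h: HumanFactors, a: AIFactors, d: DecisionFactors) -> float:
    """Toy aggregation: reliance rises with AI performance and
    explainability, falls with user expertise and decision risk.
    The weights are arbitrary placeholders."""
    score = (0.4 * a.performance + 0.2 * a.explainability
             + 0.2 * (1 - h.expertise) + 0.1 * h.bias_susceptibility
             + 0.1 * (1 - d.risk) * (1 - d.complexity))
    return max(0.0, min(1.0, score))


print(expected_reliance(
    HumanFactors(expertise=0.8, bias_susceptibility=0.3),
    AIFactors(performance=0.9, explainability=0.6),
    DecisionFactors(risk=0.7, complexity=0.5)))
```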
Generative AI in Business Process Optimization: A Maturity Analysis of Business Applications
International Conference on Wirtschaftsinformatik (2025)

Ralf Mengele
This study analyzes the current state of Generative AI (GAI) in the business world by systematically reviewing scientific literature. It identifies where GAI applications have been explored or implemented across the value chain and evaluates the maturity of these use cases. The goal is to provide managers and researchers with a clear overview of which business areas can already benefit from GAI and which require further development.

Problem: While Generative AI holds enormous potential for companies, its recent emergence means it is often unclear where the technology can be most effectively applied. Businesses lack a comprehensive, systematic overview that evaluates the maturity of GAI use cases across different business processes, making it difficult to prioritize investment and adoption.

Outcome:
- The most mature and well-researched applications of Generative AI are in product development and in maintenance and repair within the manufacturing sector.
- The manufacturing segment as a whole exhibits the most mature GAI use cases compared to other parts of the business value chain.
- Technical domains show a higher level of GAI maturity and successful implementation than process areas dominated by interpersonal interactions, such as marketing and sales.
- GAI models like Generative Adversarial Networks (GANs) are particularly mature, proving highly effective for tasks like generating synthetic data for early damage detection in machinery.
- Research into GAI is still in its early stages for many business areas, with fields like marketing, sales, and human resources showing low implementation and maturity.
Keywords: Generative AI, Business Processes, Optimization, Maturity Analysis, Literature Review, Manufacturing
How Siemens Democratized Artificial Intelligence
MIS Quarterly Executive (2023)

Benjamin van Giffen, Helmuth Ludwig
This paper presents an in-depth case study on how the global technology company Siemens successfully moved artificial intelligence (AI) projects from pilot stages to full-scale, value-generating applications. The study analyzes Siemens' journey through three evolutionary stages, focusing on the concept of 'AI democratization', which involves integrating the unique skills of domain experts, data scientists, and IT professionals. The findings provide a framework for how other organizations can build the necessary capabilities to adopt and scale AI technologies effectively.

Problem: Many companies invest in artificial intelligence but struggle to progress beyond small-scale prototypes and pilot projects. This failure to scale prevents them from realizing the full business value of AI. The core problem is the difficulty in making modern AI technologies broadly accessible to employees, which is necessary to identify, develop, and implement valuable applications across the organization.

Outcome:
- Siemens successfully scaled AI by evolving through three stages: 1) tactical AI pilots, 2) strategic AI enablement, and 3) AI democratization for business transformation.
- Democratizing AI, defined as the collaborative integration of domain experts, data scientists, and IT professionals, is crucial for overcoming key adoption challenges such as defining AI tasks, managing data, accepting probabilistic outcomes, and addressing 'black-box' fears.
- Key initiatives that enabled this transformation included establishing a central AI Lab to foster co-creation, an AI Academy for upskilling employees, and developing a global AI platform to support scaling.
- This approach allowed Siemens to transform manufacturing processes with predictive quality control and create innovative healthcare products like the AI-Rad Companion.
- The study concludes that democratizing AI creates value by rooting AI exploration in deep domain knowledge and reduces costs by creating scalable infrastructures and processes.
Keywords: Artificial Intelligence, AI Democratization, Digital Transformation, Organizational Capability, Case Study, AI Adoption, Siemens
Promises and Perils of Generative AI in Cybersecurity
MIS Quarterly Executive (2025)

Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. It explores the dual nature of GenAI as a tool for both attackers and defenders, presenting a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.

Problem: With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.

Outcome:
- GenAI is a double-edged sword, capable of both triggering and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture.
- Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education.
- Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly.
- A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset.
- Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification (a filtering sketch follows this entry).
- Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
Keywords: Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
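As one concrete reading of the 'advanced technical filtering' layer mentioned above, here is a minimal sketch that asks an LLM to score an inbound email for social-engineering cues. It assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment; the model name, prompt, and quarantine threshold are illustrative assumptions. In a defense-in-depth stack, such a filter would complement, not replace, employee training and verification.

```python
# Illustrative sketch: LLM-assisted screening of an inbound email for
# social-engineering cues, one possible technical-filtering layer in a
# defense-in-depth stack. Assumes the `openai` SDK and OPENAI_API_KEY;
# model name, prompt, and threshold are hypothetical.
import json

from openai import OpenAI

client = OpenAI()


def phishing_score(subject: str, body: str) -> float:
    """Return a 0..1 risk score for social-engineering indicators."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Rate the email for phishing and social-engineering "
                    'cues. Reply as JSON: {"score": 0.0-1.0, "reasons": []}.'
                ),
            },
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
    )
    return float(json.loads(response.choices[0].message.content)["score"])


if __name__ == "__main__":
    score = phishing_score(
        "Urgent: invoice overdue",
        "Please wire payment today via the link below to avoid penalties.",
    )
    if score > 0.7:  # hypothetical quarantine threshold
        print(f"Quarantine for human review (score={score:.2f})")
```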
Successfully Mitigating AI Management Risks to Scale AI Globally
MIS Quarterly Executive (2025)

Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.

Problem: Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.

Outcome:
The study identifies five critical technology management risks:
- Overlooking or misjudging potential AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
Keywords: AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
The Promise and Perils of Low-Code AI Platforms
MIS Quarterly Executive (2024)

Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.

Problem: As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.

Outcome:
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Keywords: Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant
MIS Quarterly Executive (2024)

Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.

Problem: Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.

Outcome:
- Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources.
- Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns.
- Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation.
- Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success.
- The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
Keywords: low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME