Discovering the Impact of Regulation Changes on Processes: Findings from a Process Science Study in Finance

Antonia Wurzer, Sophie Hartl, Sandro Franzoi, Jan vom Brocke
This study investigates how regulatory changes, once embedded in a company's information systems, affect the dynamics of business processes. Using digital trace data from a European financial institution's trade order process combined with qualitative interviews, the researchers identified patterns between the implementation of new regulations and changes in process performance indicators.

Problem: In highly regulated industries like finance, organizations must constantly adapt their operations to evolving external regulations. However, little is understood about the dynamic, real-world effects that implementing these regulatory changes in IT systems has on the execution and performance of business processes over time.

Outcome:
- Implementing regulatory changes in IT systems dynamically affects business processes, causing performance indicators to shift immediately or with a time delay.
- Contextual factors, such as employee experience and the quality of training, significantly shape how processes adapt; insufficient training after a change can lead to more errors, process loops, and violations.
- Different types of regulations (e.g., content-based vs. function-based) produce distinct impacts, with some streamlining processes and others increasing rework and complexity for employees.
- The study highlights the need for businesses to move beyond a static view of compliance and proactively manage the dynamic interplay between regulation, system design, and user behavior.
Keywords: Process Science, Regulation, Change, Business Processes, Digital Trace Data, Dynamics

Implementing AI into ERP Software

Siar Sarferaz
This study investigates how to systematically integrate Artificial Intelligence (AI) into complex Enterprise Resource Planning (ERP) systems. Through an analysis of real-world use cases, the author identifies key challenges and proposes a comprehensive DevOps (Development and Operations) framework to standardize and streamline the entire lifecycle of AI applications within an ERP environment.

Problem: While integrating AI into ERP software offers immense potential for automation and optimization, organizations lack a systematic approach to do so. This absence of a standardized framework leads to inconsistent, inefficient, and costly implementations, creating significant barriers to adopting AI capabilities at scale within enterprise systems.

Outcome:
- Identified 20 specific, recurring gaps in the development and operation of AI applications within ERP systems, including complex setup, heterogeneous development, and insufficient monitoring.
- Developed a comprehensive DevOps framework that standardizes the entire AI lifecycle into six stages: Create, Check, Configure, Train, Deploy, and Monitor.
- The proposed framework provides a systematic, self-service approach for business users to manage AI models, reducing the reliance on specialized technical teams and lowering the total cost of ownership.
- A quantitative evaluation across 10 real-world AI scenarios demonstrated that the framework reduced processing time by 27%, increased cost savings by 17%, and improved outcome quality by 15%.
Keywords: Enterprise Resource Planning, Artificial Intelligence, DevOps, Software Integration, AI Development, AI Operations, Enterprise AI

Process science: the interdisciplinary study of socio-technical change

Jan vom Brocke, Wil M. P. van der Aalst, Nicholas Berente, Boudewijn van Dongen, Thomas Grisold, Waldemar Kremser, Jan Mendling, Brian T. Pentland, Maximilian Roeglinger, Michael Rosemann and Barbara Weber
This paper introduces and defines "Process science" as a new interdisciplinary field for studying socio-technical processes, which are the interactions between humans and digital technologies over time. It proposes a framework based on four key principles, leveraging digital trace data and advanced analytics to describe, explain, and ultimately intervene in how these processes unfold.

Problem: Many contemporary phenomena, from business operations to societal movements, are complex, dynamic processes rather than static entities. Traditional scientific approaches often fail to capture this continuous change, creating a gap in our ability to understand and influence the evolving world, especially in an era rich with digital data.

Outcome:
- Defines Process Science as the interdisciplinary study of socio-technical processes, focusing on how coherent series of changes involving humans and technology occur over time.
- Proposes four core principles for the field: (1) centering on socio-technical processes, (2) using scientific investigation, (3) embracing multiple disciplines, and (4) aiming to create real-world impact.
- Emphasizes the use of digital trace data and advanced computational techniques, like process mining, to gain unprecedented insights into process dynamics.
- Argues that the goal of Process Science is not only to observe and explain change but also to actively shape and intervene in processes to solve real-world problems.
Keywords: Process science, Socio-technical processes, Digital trace data, Interdisciplinary research, Process mining, Change management, Computational social science

Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law

Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.

Problem: While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.

Outcome:
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- A higher level of trust in the AI assistant directly leads to an increased intention among professionals to use the tool in their work.
Keywords: Generative Artificial Intelligence, Human-GenAI Collaboration, Trust, GenAI Adoption

The Double-Edged Sword: Empowerment and Risks of Platform-Based Work for Women

Tatjana Hödl and Irina Boboschko
This conceptual paper explores how platform-based work, which offers flexible arrangements, can empower women, particularly those with caregiving responsibilities. Using case examples like mum bloggers, OnlyFans creators, and crowd workers, the study examines both the benefits and the inherent risks of this type of employment, highlighting its dual nature.

Problem: Traditional employment structures are often too rigid for women, who disproportionately handle unpaid caregiving and domestic tasks, creating significant barriers to career advancement and financial independence. While platform-based work presents a flexible alternative, it is crucial to understand whether this model truly empowers women or introduces new forms of precariousness that reinforce existing gender inequalities.

Outcome:
- Platform-based work empowers women by offering financial independence, skill development, and the flexibility to manage caregiving responsibilities.
- This form of work is a 'double-edged sword,' as the benefits are accompanied by significant risks, including job insecurity, lack of social protections, and unpredictable income.
- Women in platform-based work face substantial mental health risks from online harassment and financial instability due to reliance on opaque platform algorithms and online reputations.
- Rather than dismantling unequal power structures, platform-based work can reinforce traditional gender roles, confine women to the domestic sphere, and perpetuate financial dependency.
Keywords: Women, platform-based work, empowerment, risks, gig economy, digital labor, gender inequality

Education and Migration of Entrepreneurial and Technical Skill Profiles of German University Graduates

David Blomeyer and Sebastian Köffer
This study examines the supply of entrepreneurial and technical talent from German universities and analyzes their migration patterns after graduation. Using LinkedIn alumni data for 43 universities, the research identifies key locations for talent production and evaluates how effectively different cities and federal states retain or attract these skilled workers.

Problem: Amidst a growing demand for skilled workers, particularly for startups, companies and policymakers lack clear data on talent distribution and mobility in Germany. This information gap makes it difficult to devise effective recruitment strategies, choose business locations, and create policies that foster regional talent retention and economic growth.

Outcome:
- Universities in major cities, especially TU München and LMU München, produce the highest number of graduates with entrepreneurial and technical skills.
- Talent retention varies significantly by location; universities in major metropolitan areas like Berlin, Munich, and Hamburg are most successful at keeping their graduates locally, with FU Berlin retaining 68.8% of its entrepreneurial alumni.
- The tech hotspots of North Rhine-Westphalia (NRW), Bavaria, and Berlin retain an above-average number of their own graduates while also attracting a large share of talent from other regions.
- Bavaria is strong in both educating and attracting talent, whereas NRW, the largest producer of talent, also loses a significant number of graduates to other hotspots.
- The analysis reveals that hotspot regions are generally better at retaining entrepreneurial profiles than technical profiles, highlighting the influence of local startup ecosystems on talent mobility.
Keywords: Entrepreneurship, Location factors, Skills, STEM, Universities

Corporate Governance for Digital Responsibility: A Company Study

Anna-Sophia Christ
This study examines how ten German companies translate the principles of Corporate Digital Responsibility (CDR) into actionable practices. Using qualitative content analysis of public data, the paper analyzes these companies' approaches from a corporate governance perspective to understand their accountability structures, risk regulation measures, and overall implementation strategies.

Problem: As companies rapidly adopt digital technologies for productivity gains, they also face new and complex ethical and societal responsibilities. A significant gap exists between the high-level principles of Corporate Digital Responsibility (CDR) and their concrete operationalization, leaving businesses without clear guidance on how to manage digital risks and impacts effectively.

Outcome:
- The study identified seventeen key learnings for implementing Corporate Digital Responsibility (CDR) through corporate governance.
- Companies are actively bridging the gap from principles to practice, often adapting existing governance structures rather than creating entirely new ones.
- Key implementation strategies include assigning central points of contact for CDR, ensuring C-level accountability, and developing specific guidelines and risk management processes.
- The findings provide a benchmark and actionable examples for practitioners seeking to integrate digital responsibility into their business operations.
Keywords: Corporate Digital Responsibility, Corporate Governance, Digital Transformation, Principles-to-Practice, Company Study

Design of PharmAssistant: A Digital Assistant For Medication Reviews

Laura Melissa Virginia Both, Laura Maria Fuhr, Fatima Zahra Marok, Simeon Rüdesheim, Thorsten Lehr, and Stefan Morana
This study presents the design and initial evaluation of PharmAssistant, a digital assistant created to support pharmacists by gathering patient data before a medication review. Using a Design Science Research approach, the researchers developed a prototype based on interviews with pharmacists and then tested it with pharmacy students in focus groups to identify areas for improvement. The goal is to make the time-intensive process of medication reviews more efficient.

Problem: Many patients, particularly older adults, take multiple medications, which can lead to adverse drug-related problems. While pharmacists can conduct medication reviews to mitigate these risks, the process is very time-consuming, which limits its widespread use in practice. This study addresses the lack of efficient tools to streamline the data collection phase of these crucial reviews.

Outcome:
- The study successfully designed and developed a prototype digital assistant, PharmAssistant, to streamline the collection of patient data for medication reviews.
- Pharmacists interviewed had mixed opinions; some saw the potential to reduce workload, while others were concerned about usability for older patients and the loss of direct patient contact.
- Evaluation by pharmacy students confirmed the tool's potential to save time, highlighting strengths like scannable medication numbers and predefined answers.
- Key weaknesses and threats identified included potential accessibility issues for older users, data privacy concerns, and patients' inability to ask clarifying questions during the automated process.
- The research identified essential design principles for such assistants, including the need for user-friendly interfaces, empathetic communication, and support for various data entry methods.
Keywords: Pharmacy, Medication Reviews, Digital Assistants, Design Science, Polypharmacy, Digital Health

There is AI in SustAInability – A Taxonomy Structuring AI For Environmental Sustainability

Feline Schnaak, Katharina Breiter, Henner Gimpel
This study develops a structured framework to organize the growing field of artificial intelligence for environmental sustainability (AIfES). Through an iterative process involving literature reviews and real-world examples, the researchers created a multi-layer taxonomy. This framework is designed to help analyze and categorize AI systems based on their context, technical setup, and usage.

Problem: Artificial intelligence is recognized as a powerful tool for promoting environmental sustainability, but the existing research and applications are fragmented and lack a cohesive structure. This disorganization makes it difficult for researchers and businesses to holistically understand, compare, and develop effective AI solutions. There is a clear need for a systematic framework to guide the analysis and deployment of AI in this critical domain.

Outcome:
- The study introduces a comprehensive, multi-layer taxonomy for AI systems for environmental sustainability (AIfES).
- This taxonomy is structured into three layers: context (the sustainability challenge), AI setup (the technology and data), and usage (risks and end-users).
- It provides a systematic tool for researchers, developers, and policymakers to analyze, classify, and benchmark AI applications, enhancing transparency and understanding.
- The framework supports the responsible design and development of impactful AI solutions by highlighting key dimensions and characteristics for evaluation.
Keywords: Artificial Intelligence, AI for Sustainability, Environmental Sustainability, Green IS, Taxonomy

Agile design options for IT organizations and resulting performance effects: A systematic literature review

Oliver Hohenreuther
This study provides a comprehensive framework for making IT organizations more adaptable by systematically reviewing 57 academic papers. It identifies and categorizes 20 specific 'design options' that companies can implement to increase agility. The research consolidates fragmented literature to offer a structured overview of these options and their resulting performance benefits.

Problem: In the fast-paced digital age, traditional IT departments often struggle to keep up with market changes and drive business innovation. While the need for agility is widely recognized, business leaders lack a clear, consolidated guide on the practical options available to restructure their IT organizations and a clear understanding of the specific performance outcomes of each choice.

Outcome:
- Identified and structured 20 distinct agile design options (DOs) for IT organizations.
- Clustered these options into four key dimensions: Processes, Structure, People & Culture, and Governance.
- Mapped the specific performance effects for each design option, such as increased delivery speed, improved business-IT alignment, greater innovativeness, and higher team autonomy.
- Created a foundational framework to help managers make informed, cost-benefit decisions when transforming their IT organizations.
Keywords: Agile IT organization design, agile design options, agility benefits

Overcoming Legal Complexity for Commercializing Digital Technologies: The Digital Health Regulatory Navigator as a Regulatory Support Tool

Sascha Noel Weimar, Rahel Sophie Martjan, and Orestis Terzidis
This study introduces a new type of tool called a regulatory support tool, designed to assist digital health startups in navigating complex European Union regulations. Using a Design Science Research methodology, the authors developed and evaluated the 'Digital Health Regulatory Navigator (EU)', a practical tool that helps startups understand medical device rules and strategically plan for market entry.

Problem: Digital health startups face a major challenge from increasing regulatory complexity, particularly within the European Union's medical device market. These young companies often have limited resources and legal expertise, making it difficult to navigate the intricate legal requirements, which can create significant barriers to commercializing innovative technologies.

Outcome:
- The study successfully developed the 'Digital Health Regulatory Navigator (EU)', a practical tool that helps digital health startups navigate the complexities of EU medical device regulations.
- The tool was evaluated by experts and entrepreneurs and confirmed to be a valuable and effective resource for simplifying early-stage decision-making and developing a regulatory strategy.
- It particularly benefits resource-constrained startups by helping them understand requirements and strategically leverage regulatory opportunities for smoother market entry.
- The research contributes generalizable design principles for creating similar regulatory support tools in other highly regulated domains, emphasizing their potential to enhance entrepreneurial activity.
Keywords: digital health technology, regulatory requirements, design science research, medical device regulations, regulatory support tools

Towards the Acceptance of Virtual Reality Technology for Cyclists

Sophia Elsholz, Paul Neumeyer, and Rüdiger Zarnekow
This study investigates the factors that influence cyclists' willingness to adopt virtual reality (VR) for indoor training. Using a survey of 314 recreational and competitive cyclists, the research applies an extended Technology Acceptance Model (TAM) to determine what makes VR appealing for platforms like Zwift.

Problem: While digital indoor cycling platforms exist, they lack the full immersion that VR can offer. However, it is unclear whether cyclists would actually accept and use VR technology, as its potential in sports remains largely theoretical and the specific factors driving adoption in cycling are unknown.

Outcome:
- Perceived enjoyment is the single most important factor determining if a cyclist will adopt VR for training.
- Perceived usefulness, or the belief that VR will improve training performance, is also a strong predictor of acceptance.
- Surprisingly, the perceived ease of use of the VR technology did not significantly influence a cyclist's intention to use it.
- Social factors, such as the opinions of other athletes and trainers, along with a cyclist's general openness to new technology, positively contribute to their acceptance of VR.
- Both recreational and competitive cyclists showed similar levels of acceptance, indicating a broad potential market, but both groups are currently skeptical about VR's ability to improve performance.
Keywords: Technology Acceptance, TAM, Cycling, Extended Reality, XR

Designing Change Project Monitoring Systems: Insights from the German Manufacturing Industry

Bastian Brechtelsbauer
This study details the design of a system to monitor organizational change projects, using insights from an action design research project with two large German manufacturing companies. The methodology involved developing and evaluating a prototype system, which includes a questionnaire-based survey and an interactive dashboard for data visualization and analysis.

Problem: Effectively managing organizational change is crucial for company survival, yet it is notoriously difficult to track and oversee. There is a significant research gap and lack of practical guidance on how to design information technology systems that can successfully monitor change projects to improve transparency and support decision-making for managers.

Outcome:
- Developed a prototype change project monitoring system consisting of surveys and an interactive dashboard to track key indicators like change readiness, acceptance, and implementation.
- Identified four key design challenges: balancing user effort vs. insight depth, managing standardization vs. adaptability, creating a realistic understanding of data quantification, and establishing a shared vision for the tool.
- Proposed three generalized requirements for change monitoring systems: they must provide information tailored to different user groups, be usable for various types of change projects, and conserve scarce resources during organizational change.
- Outlined eight design principles to guide development, focusing on both the system's features (e.g., modularity, intuitive visualizations) and the design process (e.g., involving stakeholders, communicating a clear vision).
Keywords: Change Management, Monitoring, Action Design Research, Design Science, Industry

Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective

Anna Gieß, Sofia Schöbel, and Frederik Möller
This study explores the complex challenges and advantages of integrating Generative Artificial Intelligence (GenAI) into knowledge-based work. Using socio-technical systems theory, the researchers conducted a systematic literature review and qualitative interviews with 18 knowledge workers to identify key points of conflict. The paper proposes solutions like human-in-the-loop models and robust AI governance policies to foster responsible and efficient GenAI usage.

Problem: As organizations rapidly adopt GenAI to boost productivity, they face significant tensions between efficiency, reliability, and data privacy. There is a need to understand these conflicting forces to develop strategies that maximize the benefits of GenAI while mitigating risks related to ethics, data protection, and over-reliance on the technology.

Outcome:
- Productivity-Reflection Tension: GenAI increases efficiency but can lead to blind reliance and reduced critical thinking on the content it generates.
- Availability-Reliability Contradiction: While GenAI offers constant access to information, its output is not always reliable, increasing the risk of misinformation.
- Efficiency-Traceability Dilemma: Content is produced quickly, but the lack of clear source references makes verification difficult in professional settings.
- Usefulness-Transparency Tension: The utility of GenAI is limited by a lack of transparency in how it generates outputs, which reduces user trust.
- Convenience-Data Protection Tension: GenAI simplifies tasks but creates significant concerns about the privacy and security of sensitive information.
Keywords: Generative AI, Knowledge work, Tensions, Socio-technical systems theory

Discerning Truth: A Qualitative Comparative Analysis of Reliance on AI Advice in Deepfake Detection

Christiane Ernst
This study investigates how individuals rely on AI advice when trying to detect deepfake videos. Using a judge-advisor system, participants first made their own judgment about a video's authenticity and then were shown an AI tool's evaluation, after which they could revise their decision. The research used Qualitative Comparative Analysis to explore how factors like AI literacy, trust, and algorithm aversion influence the decision to rely on the AI's advice.

Problem: Recent advancements in AI have led to the creation of hyper-realistic deepfakes, making it increasingly difficult for people to distinguish between real and manipulated media. This poses serious threats, including the rapid spread of misinformation, reputational damage, and the potential destabilization of political systems. There is a need to understand how humans interact with AI detection tools to build more effective countermeasures.

Outcome:
- A key finding is that participants only changed their initial decision when the AI tool indicated that a video was genuine, not when it flagged a deepfake.
- This suggests users are more likely to use AI tools to confirm authenticity rather than to reliably detect manipulation, raising concerns about unreflective acceptance of AI advice.
- Reliance on the AI's advice that a video was genuine was driven by specific combinations of factors, occurring when individuals had either high aversion to algorithms, low trust, or high AI literacy.
Keywords: Deepfake, Reliance on AI Advice, Qualitative Comparative Analysis (QCA), Human-AI Collaboration

Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making

Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.

Problem: When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.

Outcome:
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance.
- This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all.
- Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology.
- The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
Keywords: Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making

Bias Measurement in Chat-optimized LLM Models for Spanish and English

Ligia Amparo Vergara Brunal, Diana Hristova, and Markus Schaal
This study develops and applies a method to evaluate social biases in large language models (LLMs) for both English and Spanish. Researchers tested three state-of-the-art models on two datasets designed to expose stereotypical thinking, comparing performance across languages and contexts.

Problem: As AI language models are increasingly used for critical decisions in areas like healthcare and human resources, there is a risk that they could spread harmful social biases. While bias in English AI has been extensively studied, there is a significant lack of research on how these biases manifest in other widely spoken languages, such as Spanish.

Outcome:
- Models were generally worse at identifying and refusing to answer biased questions in Spanish compared to English.
- However, when the models did provide an answer to a biased prompt, their responses were often fairer (less stereotypical) in Spanish.
- Models provided fairer answers when the questions were direct and unambiguous, as opposed to indirect or vague.
Keywords: LLM, bias, multilingual, Spanish, AI ethics, fairness

Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways

Vincent Paffrath, Manuel Wlcek, and Felix Wortmann
This study investigates the adoption of Generative AI (GenAI) within industrial product companies by identifying key challenges and potential solutions. Based on expert interviews with industry leaders and technology providers, the research categorizes findings into technological, organizational, and environmental dimensions to bridge the gap between expectation and practical implementation.

Problem: While GenAI is transforming many industries, its adoption by industrial product companies is particularly difficult. Unlike software firms, these companies often lack deep digital expertise, are burdened by legacy systems, and must integrate new technologies into complex hardware and service environments, making it hard to realize GenAI's full potential.

Outcome:
- Technological challenges like AI model 'hallucinations' and inconsistent results are best managed through enterprise grounding (using company data to improve accuracy) and standardized testing procedures.
- Organizational hurdles include the difficulty of calculating ROI and managing unrealistic expectations. The study suggests focusing on simple, non-financial KPIs (like user adoption and time saved) and providing realistic employee training to demystify the technology.
- Environmental risks such as vendor lock-in and complex new regulations can be mitigated by creating model-agnostic systems that allow switching between providers and establishing standardized compliance frameworks for all AI use cases.
GenAI, AI Adoption, Industrial Product Companies, AI in Manufacturing, Digital Transformation
AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams

AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams

Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.

Problem While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.

Outcome - The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams.
- GenAI usage has a direct positive impact on overall team performance.
- The performance improvement is driven primarily by the team's improved ability to apply knowledge, rather than by knowledge transfer alone.
- The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management
Metrics for Digital Group Workspaces: A Replication Study

Metrics for Digital Group Workspaces: A Replication Study

Petra Schubert and Martin Just
This study replicates a 2014 paper by Jeners and Prinz to test if their metrics for analyzing user activity in digital workspaces are still valid and generalizable. Using data from a modern academic collaboration system, the researchers re-applied metrics like activity, productivity, and cooperativity, and developed an analytical dashboard to visualize the findings.

Problem With the rise of remote and hybrid work, digital collaboration tools are more important than ever. However, these tools generate vast amounts of user activity data ('digital traces') but offer little support for analyzing it, leaving managers without a clear understanding of how teams are collaborating and using these digital spaces.

Outcome - The original metrics for measuring activity, productivity, and cooperativity in digital workspaces were confirmed to be effective and applicable to modern collaboration software.
- The study confirmed that a small percentage of users (around 20%) typically account for the majority of activity (around 80%) in project and organizational workspaces, following a Pareto distribution.
- The researchers extended the original method by incorporating Collaborative Work Codes (CWC), which provide a more detailed and nuanced way to identify different types of work happening in a space (e.g., retrieving information vs. discussion).
- Combining time-based activity profiles with these new work codes proved to be a robust method for accurately identifying and profiling different types of workspaces, such as projects, organizational units, and teaching courses.
Collaboration Analytics, Enterprise Collaboration Systems, Group Workspaces, Digital Traces, Replication Study
Configurations of Digital Choice Environments: Shaping Awareness of the Impact of Context on Choices

Configurations of Digital Choice Environments: Shaping Awareness of the Impact of Context on Choices

Phillip Oliver Gottschewski-Meyer, Fabian Lang, Paul-Ferdinand Steuck, Marco DiMaria, Thorsten Schoormann, and Ralf Knackstedt
This study investigates how the layout and components of digital environments, like e-commerce websites, influence consumer choices. Through an online experiment in a fictional store with 421 participants, researchers tested how the presence and placement of website elements, such as a chatbot, interact with marketing nudges like 'bestseller' tags.

Problem Businesses often use 'nudges' like bestseller tags to steer customer choices, but little is known about how the overall website design affects the success of these nudges. It's unclear if other website components, such as chatbots, can interfere with or enhance these marketing interventions, leading to unpredictable consumer behavior and potentially ineffective strategies.

Outcome - The mere presence of a website component, like a chatbot, significantly alters user product choices. In the study, adding a chatbot doubled the odds of participants selecting a specific product.
- The position of a component matters. Placing a chatbot on the right side of the screen led to different product choices compared to placing it on the left.
- The chatbot's presence did not weaken the effect of a 'bestseller' nudge. Instead, the layout component (chatbot) and the nudge (bestseller tag) influenced user choice independently of each other.
- Website design directly influences user decisions. Even simple factors like the presence and placement of elements can bias user selections, separate from intentional marketing interventions.
Digital choice environments, digital interventions, configuration, nudging, e-commerce, user interface design, consumer behavior
Digital Detox: Understanding Knowledge Workers' Motivators and Requirements for Technostress Relief

Digital Detox: Understanding Knowledge Workers' Motivators and Requirements for Technostress Relief

Marie Langer, Milad Mirbabaie, Chiara Renna
This study investigates how knowledge workers use "digital detox" to manage technology-related stress, known as technostress. Through 16 semi-structured interviews, the research explores the motivations for and requirements of practicing digital detox in a professional environment, understanding it as a coping behavior that enables psychological detachment from work.

Problem In the modern digital workplace, constant connectivity through information and communication technologies (ICT) frequently causes technostress, which negatively affects employee well-being and productivity. While the concept of digital detox is becoming more popular, there is a significant research gap regarding why knowledge workers adopt it and what individual or organizational support they need to do so effectively.

Outcome - The primary motivators for knowledge workers to engage in digital detox are the desires to improve work performance by minimizing distractions and to enhance personal well-being by mentally disconnecting from work.
- Key drivers of technostress that a digital detox addresses are 'techno-overload' (the increased pace and volume of work) and 'techno-invasion' (the blurring of boundaries between work and private life).
- Effective implementation of digital detox requires both individual responsibility (e.g., self-control, transparent communication about availability) and organizational support (e.g., creating clear policies, fostering a supportive culture).
- Digital detox serves as both a reactive and proactive coping strategy for technostress, but its success is highly dependent on supportive social norms and organizational adjustments.
Digital Detox, Technostress, Knowledge Worker, ICT, Psychological Detachment, Work-Life Balance
Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective

Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective

Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.

Problem As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges. It becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.

Outcome - Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Artificial Intelligence (AI), Responsibility Gap, Responsibility in Human-AI collaboration, Decision-Making, Sociomateriality, Affective Agency
To Leave or Not to Leave: A Configurational Approach to Understanding Digital Service Users' Responses to Privacy Violations Through Secondary Use

To Leave or Not to Leave: A Configurational Approach to Understanding Digital Service Users' Responses to Privacy Violations Through Secondary Use

Christina Wagner, Manuel Trenz, Chee-Wee Tan, and Daniel Veit
This study investigates how users respond when their personal information, collected by a digital service, is used for a secondary purpose by an external party—a practice known as External Secondary Use (ESU). Using a qualitative comparative analysis (QCA), the research identifies specific combinations of user perceptions and emotions that lead to different protective behaviors, such as restricting data collection or ceasing to use the service.

Problem Digital services frequently reuse user data in ways that consumers don't expect, leading to perceptions of privacy violations. It is unclear what specific factors and emotional responses drive a user to either limit their engagement with a service or abandon it completely. This study addresses this gap by examining the complex interplay of factors that determine a user's reaction to such privacy breaches.

Outcome - Users are likely to restrict their information sharing but continue using a service when they feel anxiety, believe the data sharing is an ongoing issue, and the violation is related to web ads.
- Users are more likely to stop using a service entirely when they feel angry about the privacy violation.
- The decision to leave a service is often triggered by more severe incidents, such as receiving unsolicited contact, combined with a strong sense of personal ability to act (self-efficacy) or having their privacy expectations disconfirmed.
- The study provides distinct 'recipes' of conditions that lead to specific user actions, helping businesses understand the nuanced triggers behind user responses to their data practices.
Privacy Violation, Secondary Use, Qualitative Comparative Analysis, QCA, User Behavior, Digital Services, Data Privacy
Actor-Value Constellations in Circular Ecosystems

Actor-Value Constellations in Circular Ecosystems

Linda Sagnier Eckert, Marcel Fassnacht, Daniel Heinz, Sebastian Alamo Alonso and Gerhard Satzger
This study analyzes 48 real-world circular ecosystems to understand how different companies and organizations collaborate to create sustainable value. Using e³-value modeling, the researchers identified common patterns of interaction, creating a framework of eight distinct business constellations. This research provides a practical guide for organizations aiming to transition to a circular economy.

Problem While the circular economy offers a promising alternative to traditional 'take-make-dispose' models, there is a lack of clear understanding of how the various actors within these systems (like producers, consumers, and recyclers) should interact and exchange value. This ambiguity makes it difficult for businesses to effectively design and implement circular strategies, leading to missed opportunities and inefficiencies.

Outcome - The study identified eight recurring patterns, or 'constellations,' of collaboration in circular ecosystems, providing clear models for how businesses can work together.
- These constellations are grouped into three main dimensions: 1) driving innovation through producers, services, or regulations; 2) optimizing resource efficiency through sharing or redistribution; and 3) recovering and processing end-of-life products and materials.
- The research reveals distinct roles that different organizations play (e.g., scavengers, decomposers, producers) and provides strategic blueprints for companies to select partners and define value exchanges to successfully implement circular principles.
circular economy, circular ecosystems, actor-value constellations, e³-value modeling, sustainability
To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education

To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education

Nadine Bisswang, Georg Herzwurm, Sebastian Richter
This study proposes a taxonomy to help educators in higher education systematically assess whether virtual reality (VR) is suitable for specific learning content. The taxonomy is grounded in established theoretical frameworks and was developed through a multi-stage process involving literature reviews and expert interviews. Its utility is demonstrated through an illustrative scenario where an educator uses the framework to evaluate a specific course module.

Problem Despite the increasing enthusiasm for using virtual reality (VR) in education, its suitability for specific topics remains unclear. University lecturers, particularly those without prior VR experience, lack a structured approach to decide when and why VR would be an effective teaching tool. This gap leads to uncertainty about its educational benefits and hinders its effective adoption.

Outcome - Developed a taxonomy that structures the reasons for and against using VR in higher education across five dimensions: learning objective, learning activities, learning assessment, social influence, and hedonic motivation.
- The taxonomy provides a balanced overview by organizing 24 distinct characteristics into factors that favor VR use ('+') and factors that argue against it ('-').
- This framework serves as a practical decision-support tool for lecturers to make an informed initial assessment of VR's suitability for their specific learning content without needing prior technical experience.
- The study demonstrates the taxonomy's utility through an application to a 'warehouse logistics management' learning scenario, showing how it can guide educators' decisions.
Virtual Reality Suitability, Learning Content, Taxonomy, Higher Education, Educational Technology, Decision Support Framework
An Automated Identification of Forward Looking Statements on Financial Metrics in Annual Reports

An Automated Identification of Forward Looking Statements on Financial Metrics in Annual Reports

Khanh Le Nguyen, Diana Hristova
This study presents a three-phase automated Decision Support System (DSS) designed to extract and analyze forward-looking statements on financial metrics from corporate 10-K annual reports. The system uses Natural Language Processing (NLP) to identify relevant text, machine learning models to predict future metric growth, and Generative AI to summarize the findings for users. The goal is to transform unstructured narrative disclosures into actionable, metric-level insights for investors and analysts.

Problem Manually extracting useful information from lengthy and increasingly complex 10-K reports is a significant challenge for investors seeking to predict a company's future performance. This difficulty creates a need for an automated system that can reliably identify, interpret, and forecast financial metrics based on the narrative sections of these reports, thereby improving the efficiency and accuracy of financial decision-making.

Outcome - The system extracted forward-looking statements related to financial metrics with 94% accuracy, demonstrating high reliability.
- A Random Forest model outperformed a more complex FinBERT model in predicting future financial growth, indicating that simpler, interpretable models can be more effective for this task.
- AI-generated summaries of the company's outlook achieved a high average rating of 3.69 out of 4 for factual consistency and readability, enhancing transparency for decision-makers.
- The overall system successfully provides an automated pipeline to convert dense corporate text into actionable financial predictions, empowering investors with transparent, data-driven insights.
forward-looking statements, 10-K, financial performance prediction, XAI, GenAI
Algorithmic Management: An MCDA-Based Comparison of Key Approaches

Algorithmic Management: An MCDA-Based Comparison of Key Approaches

Arne Jeppe, Tim Brée, and Erik Karger
This study employs Multi-Criteria Decision Analysis (MCDA) to evaluate and compare four distinct approaches for governing algorithmic management systems: principle-based, rule-based, risk-based, and auditing-based. The research gathered preferences from 27 experts regarding each approach's effectiveness, feasibility, adaptability, and stakeholder acceptability to determine the most preferred strategy.

Problem As organizations increasingly use algorithms to manage workers, they face the challenge of governing these systems to ensure fairness, transparency, and accountability. While several governance models have been proposed conceptually, there is a significant research gap regarding which approach is empirically preferred by experts and most practical for balancing innovation with responsible implementation.

Outcome - Experts consistently and strongly preferred a hybrid, risk-based approach for governing algorithmic management systems.
- This approach was perceived as the most effective in mitigating risks (like bias and privacy violations) while also demonstrating good adaptability to new technologies and high stakeholder acceptability.
- The findings suggest that a 'one-size-fits-all' strategy is ineffective; instead, a pragmatic approach that tailors the intensity of governance to the level of potential harm is most suitable.
- Purely rule-based approaches were seen as too rigid and slow to adapt, while purely principle-based approaches were considered difficult to enforce.
Algorithmic Management, Multi-Criteria Decision Analysis (MCDA), Risk Management, Organizational Control, Governance, AI Ethics
Service Innovation through Data Ecosystems – Designing a Recombinant Method

Service Innovation through Data Ecosystems – Designing a Recombinant Method

Philipp Hansmeier, Philipp zur Heiden, and Daniel Beverungen
This study designs a new method, RE-SIDE (recombinant service innovation through data ecosystems), to guide service innovation within complex, multi-actor data environments. Using a design science research approach, the paper develops and applies a framework that accounts for the broader repercussions of service system changes at an ecosystem level, demonstrated through an innovative service enabled by a cultural data space.

Problem Traditional methods for service innovation are designed for simple systems, typically involving just a provider and a customer. These methods are inadequate for today's complex 'service ecosystems,' which are driven by shared data spaces and involve numerous interconnected actors. There is a lack of clear, actionable methods for companies to navigate this complexity and design new services effectively at an ecosystem level.

Outcome - The study develops the RE-SIDE method, a new framework specifically for designing services within complex data ecosystems.
- The method extends existing service engineering standards by adding two critical phases: an 'ecosystem analysis phase' for identifying partners and opportunities, and an 'ecosystem transformation phase' for adapting to ongoing changes.
- It provides businesses with a structured process to analyze the broader ecosystem, understand their own role, and systematically co-create value with other actors.
- The paper demonstrates the method's real-world applicability by designing a 'Culture Wallet' service, which uses shared data from cultural institutions to offer personalized recommendations and rewards to users.
Service Ecosystem, Data Ecosystem, Data Space, Service Engineering, Design Science Research
The App, the Habit, and the Change: Digital Tools for Multidomain Behavior Change

The App, the Habit, and the Change: Digital Tools for Multidomain Behavior Change

Felix Reinsch, Maren Kählig, Maria Neubauer, Jeannette Stark, Hannes Schlieter
This study analyzed 36 popular habit-forming mobile apps to understand how they encourage positive lifestyle changes across multiple domains. Researchers examined 585 different behavior recommendations within these apps, classifying them into 20 distinct categories to see which habits are most common and how they are interconnected.

Problem It is known that developing a positive habit in one area of life can create a ripple effect, leading to improvements in other areas. However, there was little research on whether digital habit-tracking apps are designed to leverage this interconnectedness to help users achieve comprehensive and lasting lifestyle changes.

Outcome - Physical Exercise is the most dominant and central habit recommended by apps, often linked with Nutrition and Leisure Activities.
- On average, habit apps suggest behaviors across nearly 13 different lifestyle domains, indicating a move towards a holistic approach to well-being.
- Apps that offer recommendations in more lifestyle domains also tend to provide more advanced features to support habit formation.
- Simply offering a wide variety of habits and features does not guarantee high user satisfaction, suggesting that other factors like user experience are critical for an app's success.
Digital Behavior Change Application, Habit Formation, Behavior Change Support System, Mobile Application, Lifestyle Improvement, Multidomain Behavior Change
AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework

AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework

Arnold F. Arz von Straussenburg, Jens J. Marga, Timon T. Aldenhoff, and Dennis M. Riehle
This study proposes a design theory to safely and ethically integrate Artificial Intelligence (AI) agents into the governance of data trusts. The paper introduces a normative framework that unifies fiduciary principles, institutional trust, and AI ethics. It puts forward four specific design principles to guide the development of AI systems that can act as responsible governance actors within these trusts, ensuring they protect beneficiaries' interests.

Problem Data trusts are frameworks for responsible data management, but integrating powerful AI systems creates significant ethical and security challenges. AI can be opaque and may have goals that conflict with the interests of data owners, undermining the fairness and accountability that data trusts are designed to protect. This creates a critical need for a governance model that allows organizations to leverage AI's benefits without compromising their fundamental duties to data owners.

Outcome - The paper establishes a framework to guide the integration of AI into data trusts, ensuring AI actions align with ethical and fiduciary responsibilities.
- It introduces four key design principles for AI agents: 1) Fiduciary alignment to prioritize beneficiary interests, 2) Accountability through complete traceability and oversight, 3) Transparent explainability for all AI decisions, and 4) Autonomy-preserving oversight to maintain robust human supervision.
- The research demonstrates that AI can enhance efficiency in data governance without eroding stakeholder trust or ethical standards if implemented correctly.
- It provides actionable recommendations, such as automated audits and dynamic consent mechanisms, to ensure the responsible use of AI within data ecosystems for the common good.
Data Trusts, Normative Framework, AI Governance, Fairness, AI Agents
Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective

Generative AI Value Creation in Business-IT Collaboration: A Social IS Alignment Perspective

Lukas Grützner, Moritz Goldmann, Michael H. Breitner
This study empirically assesses the impact of Generative AI (GenAI) on the social aspects of business-IT collaboration. Using a literature review, an expert survey, and statistical modeling, the research explores how GenAI influences communication, mutual understanding, and knowledge sharing between business and technology departments.

Problem While aligning IT with business strategy is crucial for organizational success, the social dimension of this alignment—how people communicate and collaborate—is often underexplored. With the rapid integration of GenAI into workplaces, there is a significant research gap concerning how these new tools reshape the critical human interactions between business and IT teams.

Outcome - GenAI significantly improves formal business-IT collaboration by enhancing structured knowledge sharing, promoting the use of a common language, and increasing formal interactions.
- The technology helps bridge knowledge gaps by making technical information more accessible to business leaders and business context clearer to IT leaders.
- GenAI has no significant impact on informal social interactions, such as networking and trust-building, which remain dependent on human-driven leadership and engagement.
- Management must strategically integrate GenAI to leverage its benefits for formal communication while actively fostering an environment that supports crucial interpersonal collaboration.
Information systems alignment, social, GenAI, PLS-SEM
Value Propositions of Personal Digital Assistants for Process Knowledge Transfer

Value Propositions of Personal Digital Assistants for Process Knowledge Transfer

Paula Elsensohn, Mara Burger, Marleen Voß, and Jan vom Brocke
This study investigates the value propositions of Personal Digital Assistants (PDAs), a type of AI tool, for improving how knowledge about business processes is transferred within organizations. Using qualitative interviews with professionals across diverse sectors, the research identifies nine specific benefits of using PDAs in the context of Business Process Management (BPM). The findings are structured into three key dimensions: accessibility, understandability, and guidance.

Problem In modern businesses, critical knowledge about how work gets done is often buried in large amounts of data, making it difficult for employees to access and use effectively. This inefficient transfer of 'process knowledge' leads to errors, inconsistent outcomes, and missed opportunities for improvement. The study addresses the challenge of making this vital information readily available and understandable to the right people at the right time.

Outcome - The study identified nine key value propositions for using PDAs to transfer process knowledge, grouped into three main categories: accessibility, understandability, and guidance.
- PDAs improve accessibility by automating tasks and enabling employees to find knowledge and documentation much faster than through manual searching.
- They enhance understandability by facilitating user education, simplifying the onboarding of new employees, and performing context-aware analysis of processes.
- PDAs provide active guidance by offering real-time process advice, helping to optimize and standardize workflows, and supporting better decision-making with relevant data.
Personal Digital Assistant, Value Proposition, Process Knowledge, Business Process Management, Guidance
Exploring the Design of Augmented Reality for Fostering Flow in Running: A Design Science Study

Exploring the Design of Augmented Reality for Fostering Flow in Running: A Design Science Study

Julia Pham, Sandra Birnstiel, Benedikt Morschheuser
This study explores how to design Augmented Reality (AR) interfaces for sport glasses to help runners achieve a state of 'flow,' or peak performance. Using a Design Science Research approach, the researchers developed and evaluated an AR prototype over two iterative design cycles, gathering feedback from nine runners through field tests and interviews to derive design recommendations.

Problem Runners often struggle to achieve and maintain a state of flow due to the difficulty of monitoring performance without disrupting their rhythm, especially in dynamic outdoor environments. While AR glasses offer a potential solution by providing hands-free feedback, there is a significant research gap on how to design effective, non-intrusive interfaces that support, rather than hinder, this immersive state.

Outcome - AR interfaces can help runners achieve flow by providing continuous, non-intrusive feedback directly in their field of view, fulfilling the need for clear goals and unambiguous feedback.
- Non-numeric visual cues, such as expanding circles or color-coded warnings, are more effective than raw numbers for conveying performance data without causing cognitive overload.
- Effective AR design for running must be adaptive and customizable, allowing users to choose the metrics they see and control when the display is active to match personal goals and minimize distractions.
- The study produced four key design recommendations: provide easily interpretable feedback beyond numbers, ensure a seamless and embodied interaction, allow user customization, and use a curiosity-inducing design to maintain engagement.
Flow, AR, Sports, Endurance Running, Design Recommendations
Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?

Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?

Lasse Bohlen, Sven Kruschel, Julian Rosenberger, Patrick Zschech, and Mathias Kraus
This study investigates whether making a machine learning (ML) model's reasoning transparent can help overcome people's natural distrust of algorithms, known as 'algorithm aversion'. Through a user study with 280 participants, researchers examined how transparency interacts with the previously established method of allowing users to adjust an algorithm's predictions.

Problem People often hesitate to rely on algorithms for decision-making, even when the algorithms are superior to human judgment. While giving users control to adjust algorithmic outputs is known to reduce this aversion, it has been unclear whether making the algorithm's 'thinking process' transparent would also help, or perhaps even be more effective.

Outcome - Giving users the ability to adjust an algorithm's predictions significantly reduces their reluctance to use it, confirming findings from previous research.
- In contrast, simply making the algorithm transparent by showing its decision logic did not have a statistically significant effect on users' willingness to choose the model.
- The ability to adjust the model's output (adjustability) appears to be a more powerful tool for encouraging algorithm adoption than transparency alone.
- The effects of transparency and adjustability were found to be largely independent of each other, rather than having a combined synergistic effect.
Algorithm Aversion, Adjustability, Transparency, Interpretable Machine Learning, Replication Study
Bridging Mind and Matter: A Taxonomy of Embodied Generative AI

Jan Laufer, Leonardo Banh, Gero Strobel
This study develops a comprehensive classification system, or taxonomy, for Embodied Generative AI—AI that can perceive, reason, and act in physical systems like robots. The taxonomy was created through a systematic literature review and an analysis of 40 real-world examples of this technology. The resulting framework provides a structured way to understand and categorize the various dimensions of AI integrated into physical forms.

Problem As Generative AI (GenAI) moves from digital content creation to controlling physical agents, there has been a lack of systematic classification and evaluation methods. While many studies focus on specific applications, a clear framework for understanding the core characteristics and capabilities of these embodied AI systems has been missing. This gap makes it difficult for researchers and practitioners to compare, analyze, and optimize emerging applications in fields like robotics and automation.

Outcome - The study created a detailed taxonomy for Embodied Generative AI to systematically classify its characteristics.
- This taxonomy is structured into three main categories (meta-characteristics): Embodiment, Intelligence, and System.
- It further breaks down these categories into 16 dimensions and 50 specific characteristics, providing a comprehensive framework for analysis.
- The framework serves as a foundational tool for future research and helps businesses and developers make informed decisions when designing or implementing embodied AI systems in areas like service robotics and industrial automation.
Generative Artificial Intelligence, Embodied AI, Autonomous Agents, Human-GenAI Collaboration
Synthesising Catalysts of Digital Innovation: Stimuli, Tensions, and Interrelationships

Julian Beer, Tobias Moritz Guggenberger, Boris Otto
This study provides a comprehensive framework for understanding the forces that drive or impede digital innovation. Through a structured literature review, the authors identify five key socio-technical catalysts and analyze how each one simultaneously stimulates progress and introduces countervailing tensions. The research synthesizes these complex interdependencies to offer a consolidated analytical lens for both scholars and managers.

Problem Digital innovation is critical for business competitiveness, yet there is a significant research gap in understanding the integrated forces that shape its success. Previous studies have often examined catalysts like platform ecosystems or product design in isolation, providing a fragmented view that hinders managers' ability to effectively navigate the associated opportunities and risks.

Outcome - The study identifies five primary catalysts for digital innovation: Data Objects, Layered Modular Architecture, Product Design, IT and Organisational Alignment, and Platform Ecosystems.
- Each catalyst presents a duality of stimuli (drivers) and tensions (barriers); for example, data monetization (stimulus) raises privacy concerns (tension).
- Layered modular architecture accelerates product evolution but can lead to market fragmentation if proprietary standards are imposed.
- Effective product design can redefine a product's meaning and value, but risks user confusion and complexity if not aligned with user needs.
- The framework maps the interrelationships between these catalysts, showing how they collectively influence the digital innovation process and guiding managers in balancing these trade-offs.
Digital Innovation, Data Objects, Layered Modular Architecture, Product Design, Platform Ecosystems
Understanding Affordances in Health Apps for Cardiovascular Care through Topic Modeling of User Reviews

Aleksandra Flok
This study analyzed over 37,000 user reviews from 22 health apps designed for cardiovascular care and heart failure. Using a technique called topic modeling, the author identified common themes and patterns in user experiences. The goal was to understand which app features users find most valuable and how they interact with them to manage their health.

Problem Cardiovascular disease is a leading cause of death, and mobile health apps offer a promising way for patients to monitor their condition and share data with doctors. However, for these apps to be effective, they must be designed to meet patient needs. There is a lack of understanding regarding what features and functionalities users actually perceive as helpful, which hinders the development of truly effective digital health solutions.

Outcome - The study identified six key patterns in user experiences: Data Management and Documentation, Measurement and Monitoring, Vital Data Analysis and Evaluation, Sensor-Based Functions & Usability, Interaction and System Optimization, and Business Model and Monetization.
- Users value apps that allow them to easily track, store, and share their health data (e.g., heart rate, blood pressure) with their doctors.
- Key functionalities that users focus on include accurate measurement, real-time monitoring, data visualization (graphs), and user-friendly interfaces.
- The findings provide a roadmap for developers to create more patient-centric health apps, focusing on the features that matter most for managing cardiovascular conditions effectively.
topic modeling, heart failure, affordance theory, health apps, cardiovascular care, user reviews, mobile health
Towards an AI-Based Therapeutic Assistant to Enhance Well-Being: Preliminary Results from a Design Science Research Project

Katharina-Maria Illgen, Enrico Kochon, Sergey Krutikov, and Oliver Thomas
This study introduces ELI, an AI-based therapeutic assistant designed to complement traditional therapy and enhance well-being by providing accessible, evidence-based psychological strategies. Using a Design Science Research (DSR) approach, the authors conducted a literature review and expert evaluations to derive six core design objectives and develop a simulated prototype of the assistant.

Problem Many individuals lack timely access to professional psychological support, which has increased the demand for digital interventions. However, the growing reliance on general AI tools for psychological advice presents risks of misinformation and lacks a therapeutic foundation, highlighting the need for scientifically validated, evidence-based AI solutions.

Outcome - The study established six core design objectives for AI-based therapeutic assistants, focusing on empathy, adaptability, ethical standards, integration, evidence-based algorithms, and dependable support.
- A simulated prototype, named ELI (Empathic Listening Intelligence), was developed to demonstrate the implementation of these design principles.
- Expert evaluations rated ELI positively for its accessibility, usability, and empathic support, viewing it as a beneficial tool for addressing less severe psychological issues and complementing traditional therapy.
- Key areas for improvement were identified, primarily concerning data privacy, crisis response capabilities, and the need for more comprehensive therapeutic approaches.
AI Therapeutics, Well-Being, Conversational Assistant, Design Objectives, Design Science Research
Trapped by Success – A Path Dependence Perspective on the Digital Transformation of Mittelstand Enterprises

Linus Lischke
This study investigates why German Mittelstand enterprises (MEs), or mid-sized companies, often implement incremental rather than radical digital transformation. Using path dependence theory and a multiple-case study methodology, the research explores how historical success anchors strategic decisions in established business models, limiting the pursuit of new digital opportunities.

Problem Successful mid-sized companies are often cautious when it comes to digital transformation, preferring minor upgrades over fundamental changes. This creates a research gap in understanding why these firms remain on a slow, incremental path, even when faced with significant digital opportunities that could drive growth.

Outcome - Successful business models create a 'functional lock-in,' where companies become trapped by their own success, reinforcing existing strategies and discouraging radical digital change.
- This lock-in manifests in three ways: ingrained routines (normative), deeply held assumptions about the business (cognitive), and investment priorities that favor existing operations (resource-based).
- MEs tend to adopt digital technologies primarily to optimize current processes and enhance existing products, rather than to create new digital business models.
- As a result, even promising digital innovations are often rejected if they do not seamlessly align with the company's traditional operations and core products.
Digital Transformation, Path Dependence, Mittelstand Enterprises
Workarounds—A Domain-Specific Modeling Language

Carolin Krabbe, Agnes Aßbrock, Malte Reineke, and Daniel Beverungen
This study introduces a new visual modeling language called Workaround Modeling Notation (WAMN) designed to help organizations identify, analyze, and manage employee workarounds. Using a design science approach, the researchers developed this notation and demonstrated its practical application using a real-world case from a manufacturing company. The goal is to provide a structured method for understanding the complex effects of these informal process deviations.

Problem Employees often create 'workarounds' to bypass inefficient or problematic standard procedures, but companies lack a systematic way to assess their impact. This makes it difficult to understand the complex chain reactions these workarounds can cause, leading to missed opportunities for innovation and unresolved underlying issues. Without a clear framework, organizations struggle to make consistent decisions about whether to adopt, modify, or prevent these employee-driven solutions.

Outcome - The primary outcome is the Workaround Modeling Notation (WAMN), a domain-specific modeling language designed to map the causes, actions, and consequences of workarounds.
- WAMN enables managers to visualize the entire 'workaround-to-innovation' lifecycle, treating workarounds not just as deviations but as potential bottom-up process improvements.
- The notation uses clear visual cues, such as color-coding for positive and negative effects, to help decision-makers quickly assess the risks and benefits of a workaround.
- By applying WAMN to a manufacturing case, the study demonstrates its ability to untangle complex interconnections between multiple workarounds and their cascading effects on different organizational levels.
Workaround, Business Process Management, Domain-Specific Modeling Language, Design Science Research, Process Innovation, Organizational Decision-Making
Systematizing Different Types of Interfaces to Interact with Data Trusts

David Acev, Florian Rieder, Dennis M. Riehle, and Maria A. Wimmer
This study conducts a systematic literature review to analyze the various types of interfaces used for interaction with Data Trusts, which are organizations that manage data on behalf of others. The research categorizes these interfaces into human-system (e.g., user dashboards) and system-system (e.g., APIs) interactions. The goal is to provide a clear classification and highlight existing gaps in research to support the future implementation of trustworthy Data Trusts.

Problem As the volume of data grows, there is an increasing need for trustworthy data sharing mechanisms like Data Trusts. However, for these trusts to function effectively, the interactions between data providers, users, and the trust itself must be seamless and standardized. The problem is a lack of clear understanding and systematization of the different interfaces required, which creates ambiguity and hinders the development of reliable and interoperable Data Trust ecosystems.

Outcome - The study categorizes interfaces for Data Trusts into two primary groups: Human-System Interfaces (user interfaces like GUIs, CLIs) and System-System Interfaces (technical interfaces like APIs).
- A significant gap exists in the current literature, which often lacks specific details and clear definitions for how these interfaces are implemented within Data Trusts.
- The research highlights a scarcity of standardized and interoperable technical interfaces, which is crucial for ensuring trustworthy and efficient data sharing.
- The paper concludes that developing robust, well-defined interfaces is a vital and foundational step for building functional and widely adopted Data Trusts.
Data Trust, user interface, API, interoperability, data sharing
Understanding How Freelancers in the Design Domain Collaborate with Generative Artificial Intelligence

Fabian Helms, Lisa Gussek, and Manuel Wiesche
This study explores how generative AI (GenAI), specifically text-to-image generation (TTIG) systems, impacts the creative work of freelance designers. Through qualitative interviews with 10 designers, the researchers conducted a thematic analysis to understand the nuances of this new form of human-AI collaboration.

Problem While the impact of GenAI on creative fields is widely discussed, there is little specific research on how it affects freelance designers. This group is uniquely vulnerable to technological disruption due to their direct market exposure and lack of institutional support, creating an urgent need to understand how these tools are changing their work processes and job security.

Outcome - The research identified four key tradeoffs freelancers face when using GenAI: creativity can be enhanced (inspiration) but also risks becoming generic (standardization).
- Efficiency is increased, but this can be undermined by 'overprecision', a form of perfectionism where too much time is spent on minor AI-driven adjustments.
- The interaction with AI is viewed dually: either as a helpful 'sparring partner' for ideas or as an unpredictable tool causing a frustrating lack of control.
- For the future of work, GenAI is seen as forcing a job transition where designers must adapt new skills, while also posing a direct threat of job loss, particularly for junior roles.
Generative Artificial Intelligence, Online Freelancing, Human-AI collaboration, Freelance designers, Text-to-image generation, Creative process
Extracting Explanatory Rationales of Activity Relationships using LLMs - A Comparative Analysis

Kerstin Andree, Zahi Touqan, Leon Bein, and Luise Pufahl
This study investigates using Large Language Models (LLMs) to automatically extract and classify the reasons (explanatory rationales) behind the ordering of tasks in business processes from text. The authors compare the performance of various LLMs and four different prompting techniques (Vanilla, Few-Shot, Chain-of-Thought, and a combination) to determine the most effective approach for this automation.

Problem Understanding why business process steps occur in a specific order (due to laws, business rules, or best practices) is crucial for process improvement and redesign. However, this information is typically buried in textual documents and must be extracted manually, which is a very expensive and time-consuming task for organizations.

Outcome - Few-Shot prompting, where the model is given a few examples, significantly improves classification accuracy compared to basic prompting across almost all tested LLMs.
- The combination of Few-Shot learning and Chain-of-Thought reasoning also proved to be a highly effective approach.
- Interestingly, smaller and more cost-effective LLMs (like GPT-4o-mini) achieved performance comparable to or even better than larger models when paired with sophisticated prompting techniques.
- The findings demonstrate that LLMs can successfully automate the extraction of process knowledge, making advanced process analysis more accessible and affordable for organizations with limited resources.
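The few-shot idea behind these findings can be illustrated with a schematic prompt builder. The rationale labels mirror the relationship types named above (laws, business rules, best practices), but the worked examples and wording below are hypothetical placeholders, not drawn from the paper's dataset or prompts.

```python
# Sketch of a few-shot prompt for classifying why two process
# activities must occur in a given order. Labels and examples
# are illustrative, not the study's actual material.
RATIONALE_LABELS = ["law", "business rule", "best practice"]

FEW_SHOT_EXAMPLES = [
    ("Invoices must be approved before payment, as required by the "
     "company's four-eyes policy.", "business rule"),
    ("Customer identity is verified before account opening, as "
     "mandated by anti-money-laundering legislation.", "law"),
]

def build_few_shot_prompt(sentence: str) -> str:
    """Assemble a few-shot classification prompt for an LLM."""
    lines = [
        "Classify the reason behind the activity ordering as one of: "
        + ", ".join(RATIONALE_LABELS) + "."
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Text: {text}\nRationale: {label}")
    # The unlabeled query goes last; the model completes the label.
    lines.append(f"Text: {sentence}\nRationale:")
    return "\n\n".join(lines)

print(build_few_shot_prompt(
    "Goods are inspected before storage, following industry guidelines."
))
```

A Chain-of-Thought variant would additionally ask the model to explain its reasoning before emitting the label, which is the combination the study found effective.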
Activity Relationships Classification, Large Language Models, Explanatory Rationales, Process Context, Business Process Management, Prompt Engineering
Building Digital Transformation Competence: Insights from a Media and Technology Company

Mathias Bohrer and Thomas Hess
This study investigates how a large media and technology company successfully built the necessary skills and capabilities for its digital transformation. Through a qualitative case study, the research identifies a clear sequence and specific tools that organizations can use to develop competencies for managing digital innovations.

Problem Many organizations struggle with digital transformation because they lack the right internal skills, or 'competencies', to manage new digital technologies and innovations effectively. Existing research on this topic is often too abstract, offering little practical guidance on how companies can actually build these crucial competencies from the ground up.

Outcome - Organizations build digital transformation competence in a three-stage sequence: 1) Expanding foundational IT skills, 2) Developing 'meta' competencies like agility and a digital mindset, and 3) Fostering 'transformation' competencies focused on innovation and business model development.
- Effective competence building moves beyond traditional classroom training to include a diverse set of instruments like hackathons, coding camps, product development events, and experimental learning.
- The study proposes a model categorizing competence-building tools into three types: technology-specific (for IT skills), agility-nurturing (for organizational flexibility), and technology-agnostic (for innovation and strategy).
Competencies, Competence Building, Organizational Learning, Digital Transformation, Digital Innovation
Dynamic Equilibrium Strategies in Two-Sided Markets

Janik Bürgermeister, Martin Bichler, and Maximilian Schiffer
This study investigates when predatory pricing is a rational strategy for platforms competing in two-sided markets. The researchers develop a multi-stage Bayesian game model, which accounts for real-world factors like uncertainty about competitors' costs and risk aversion. Using deep reinforcement learning, they simulate competitive interactions to identify equilibrium strategies and market outcomes.

Problem Traditional economic models of platform competition often assume that companies have complete information about each other's costs, which is rarely true in reality. This simplification makes it difficult to explain why aggressive strategies like predatory pricing occur and under what conditions they lead to monopolies. This study addresses this gap by creating a more realistic model that incorporates uncertainty to better understand competitive platform dynamics.

Outcome - Uncertainty is a key driver of monopolization; when platforms are unsure of their rivals' costs, monopolies form in roughly 60% of scenarios, even if the platforms are otherwise symmetric.
- In contrast, under conditions of complete information (where costs are known), monopolies only emerge when one platform has a clear cost advantage over the other.
- Cost advantages (asymmetries) further increase the likelihood of a single platform dominating the market.
- When platform decision-makers are risk-averse, they are less likely to engage in aggressive pricing, which reduces the tendency for monopolies to form.
Two-sided markets, Predatory Pricing, Bayesian multi-stage games, Learning in games, Platform competition, Equilibrium strategies
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns

Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.

Problem As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.

Outcome - ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
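The association-counting idea behind the first task can be sketched as a small measurement harness. The term lists and the sample completions below are illustrative assumptions; the study's actual prompts, model outputs, and coding scheme are not reproduced here.

```python
# Sketch: count male- vs female-denoting tokens across model
# completions for profession-related prompts. Term lists and
# completions are illustrative placeholders.
MALE_TERMS = {"he", "him", "his", "man", "men"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women"}

def gender_counts(completions):
    """Count gendered tokens across a batch of model outputs."""
    counts = {"male": 0, "female": 0}
    for text in completions:
        for token in text.lower().split():
            word = token.strip(".,!?")
            if word in MALE_TERMS:
                counts["male"] += 1
            elif word in FEMALE_TERMS:
                counts["female"] += 1
    return counts

completions = [
    "He founded the startup and he leads product development.",
    "She advises the team on regulatory questions.",
]
print(gender_counts(completions))  # → {'male': 2, 'female': 1}
```

Comparing these counts across professions (e.g., tech founder vs. nurse) is one simple way to surface the kind of skew the study reports.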
Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
The Impact of Digital Platform Acquisition on Firm Value: Does Buying Really Help?

Yongli Huang, Maximilian Schreieck, Alexander Kupfer
This study examines investor reactions to corporate announcements of digital platform acquisitions to understand their impact on firm value. Using an event study methodology on a global sample of 157 firms, the research analyzes how the stock market responds based on the acquisition's motivation (innovation-focused vs. efficiency-focused) and the target platform's maturity.

Problem While acquiring digital platforms is an increasingly popular corporate growth strategy, little is known about its actual effectiveness and financial impact. Companies and investors lack clear guidance on which types of platform acquisitions are most likely to create value, leading to uncertainty and potentially poor strategic decisions.

Outcome - Generally, the announcement of a digital platform acquisition leads to a negative stock market return, indicating investor concerns about integration risks and high costs.
- Acquisitions motivated by 'exploration' (innovation and new opportunities) face a less negative market reaction than those motivated by 'exploitation' (efficiency and optimization).
- Acquiring mature platforms with established user bases mitigates negative stock returns more effectively than acquiring nascent (new) platforms.
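The event-study logic behind these findings, abnormal return as the firm's actual return minus a market-model prediction, can be sketched with made-up numbers; the coefficients and returns below are purely illustrative, not the paper's data.

```python
# Market-model event study sketch: the abnormal return on day t is
# the firm's actual return minus alpha + beta * market_return.
# All figures are fabricated for demonstration.

def abnormal_return(firm_ret: float, market_ret: float,
                    alpha: float, beta: float) -> float:
    return firm_ret - (alpha + beta * market_ret)

def cumulative_abnormal_return(firm_rets, market_rets, alpha, beta):
    """Sum abnormal returns over the event window."""
    return sum(abnormal_return(f, m, alpha, beta)
               for f, m in zip(firm_rets, market_rets))

# Hypothetical three-day window around an acquisition announcement.
firm = [-0.012, -0.025, 0.004]
market = [0.001, -0.003, 0.002]
car = cumulative_abnormal_return(firm, market, alpha=0.0002, beta=1.1)
print(f"CAR over [-1, +1]: {car:.4f}")  # negative CAR signals value destruction
```

In the actual methodology, alpha and beta are estimated per firm over a pre-event window before computing abnormal returns around the announcement.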
Digital Platform Acquisition, Event Study, Exploration vs. Exploitation, Mature vs. Nascent, Chicken-and-Egg Problem
Using Large Language Models for Healthcare Data Interoperability: A Data Mediation Pipeline to Integrate Heterogeneous Patient-Generated Health Data and FHIR

Torben Ukena, Robin Wagler, and Rainer Alt
This study explores the use of Large Language Models (LLMs) to streamline the integration of diverse patient-generated health data (PGHD) from sources like wearables. The researchers propose and evaluate a data mediation pipeline that combines an LLM with a validation mechanism to automatically transform various data formats into the standardized Fast Healthcare Interoperability Resources (FHIR) format.

Problem Integrating patient-generated health data from various devices into clinical systems is a major challenge due to a lack of interoperability between different data formats and hospital information systems. This data fragmentation hinders clinicians' ability to get a complete view of a patient's health, potentially leading to misinformed decisions and obstacles to patient-centered care.

Outcome - LLMs can effectively translate heterogeneous patient-generated health data into the valid, standardized FHIR format, significantly improving healthcare data interoperability.
- Providing the LLM with a few examples (few-shot prompting) was more effective than providing it with abstract rules and guidelines (reasoning prompting).
- The inclusion of a validation and self-correction loop in the pipeline is crucial for ensuring the LLM produces accurate and standard-compliant output.
- While successful with text-based data, the LLM struggled to accurately aggregate values from complex structured data formats like JSON and CSV, leading to lower semantic accuracy in those cases.
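The validation-and-self-correction loop highlighted above can be sketched as follows. The `call_llm` parameter, the required-field set, and the error messages are simplified assumptions (real FHIR Observation validation is far stricter, and the study's prompts are not shown here).

```python
import json

# Deliberately minimal stand-in for full FHIR Observation validation.
REQUIRED_FHIR_KEYS = {"resourceType", "status", "code", "valueQuantity"}

def validate_observation(text: str) -> list[str]:
    """Return validation errors for a candidate FHIR Observation."""
    try:
        resource = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = [f"missing field: {key}"
              for key in sorted(REQUIRED_FHIR_KEYS - resource.keys())]
    if resource.get("resourceType") != "Observation":
        errors.append("resourceType must be 'Observation'")
    return errors

def mediate(raw_pghd: str, call_llm, max_retries: int = 3) -> dict:
    """Translate raw patient-generated data to FHIR, retrying on errors."""
    prompt = f"Convert to a FHIR Observation:\n{raw_pghd}"
    for _ in range(max_retries):
        candidate = call_llm(prompt)
        errors = validate_observation(candidate)
        if not errors:
            return json.loads(candidate)
        # Self-correction: feed the validation errors back to the model.
        prompt = f"Fix these errors: {errors}\nPrevious output:\n{candidate}"
    raise ValueError("could not produce valid FHIR output")
```

A real deployment would replace `call_llm` with an actual model call and the minimal check with a full FHIR validator; the loop structure is what the study found crucial for standard-compliant output.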
FHIR, semantic interoperability, large language models, hospital information system, patient-generated health data
Acceptance Analysis of the Metaverse: An Investigation in the Paper- and Packaging Industry

First Author, Second Author, Third Author, and Fourth Author
This study investigates employee acceptance of metaverse technologies within the traditionally conservative paper and packaging industry. Using the Technology Acceptance Model 3, the research was conducted as a living lab experiment in a leading packaging company. The methodology combined qualitative content analysis with quantitative multiple regression modelling to assess the key factors influencing adoption.

Problem While major technology companies are heavily investing in the metaverse for workplace applications, there is a significant research gap concerning employee acceptance of these immersive technologies. This is particularly relevant for traditionally non-digital industries, like paper and packaging, which are seeking to digitalize but face unique adoption barriers. This study addresses the lack of empirical data on how employees in such sectors perceive and accept metaverse tools for work and collaboration.

Outcome - Employees in the paper and packaging industry show a moderate but ambiguous acceptance of the metaverse, with an average score of 3.61 out of 5.
- The most significant factors driving acceptance are the perceived usefulness (PU) of the technology for their job and its perceived ease of use (PEU).
- Job relevance was found to be a key influencer of perceived usefulness, while an employee's confidence in their own computer skills (computer self-efficacy) was a key predictor for perceived ease of use.
- While employees recognized benefits like improved virtual collaboration, they also raised concerns about hardware limitations (e.g., headset weight, image clarity) and the technology's overall maturity compared to existing tools.
Metaverse, Technology Acceptance Model 3, Living lab, Paper and Packaging industry, Workplace
Generative AI Usage of University Students: Navigating Between Education and Business

Fabian Walke, Veronika Föller
This study investigates how university students who also work professionally use Generative AI (GenAI) in both their academic and business lives. Using a grounded theory approach, the researchers interviewed eleven part-time students from a distance learning university to understand the characteristics, drivers, and challenges of their GenAI usage.

Problem While much research has explored GenAI in education or in business separately, there is a significant gap in understanding its use at the intersection of these two domains. Specifically, the unique experiences of part-time students who balance professional careers with their studies have been largely overlooked.

Outcome - GenAI significantly enhances productivity and learning for students balancing work and education, helping with tasks like writing support, idea generation, and summarizing content.
- Students express concerns about the ethical implications, reliability of AI-generated content, and the risk of academic misconduct or being falsely accused of plagiarism.
- A key practical consequence is that GenAI tools like ChatGPT are replacing traditional search engines for many information-seeking tasks due to their speed and directness.
- The study highlights a strong need for universities to provide clear guidelines, regulations, and formal training on using GenAI effectively and ethically.
- User experience is a critical factor; a positive, seamless interaction with a GenAI tool promotes continuous usage, while a poor experience diminishes willingness to use it.
Artificial Intelligence, ChatGPT, Enterprise, Part-time students, Generative AI, Higher Education
Exploring Algorithmic Management Practices in Healthcare – Use Cases along the Hospital Value Chain

Maximilian Kempf, Filip Simić, Maria Doerr, and Alexander Benlian
This study explores how algorithmic management (AM), the use of algorithms for tasks typically done by human managers, is being applied in hospitals. Through nine semi-structured interviews with doctors and software providers, the research identifies and analyzes specific use cases for AM across the hospital's operational value chain, from patient admission to administration.

Problem While AM is well-studied in low-skill, platform-based work like ride-hailing, its application in traditional, high-skill industries such as healthcare is not well understood. This research addresses the gap by investigating how these algorithmic systems are embedded in complex hospital environments to manage skilled professionals and critical patient care processes.

Outcome - The study identified five key use cases of algorithmic management in hospitals: patient intake management, bed management, doctor-to-patient assignment, workforce management, and performance monitoring.
- In admissions, algorithms help prioritize patients by urgency and automate bed assignments, significantly improving efficiency and reducing staff's administrative workload.
- For treatment and administration, AM systems assign doctors to patients based on expertise and availability, manage staff schedules to ensure fairer workloads, and track performance through key metrics (KPIs).
- While AM can increase efficiency, reduce stress through fairer task distribution, and optimize resource use, it also introduces pressures like rigid schedules and raises concerns about the transparency of performance evaluations for medical staff.
Algorithmic Management, Healthcare, Hospital Value Chain, Qualitative Interview Study, Hospital Management, Workflow Automation
Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens

Designing for Digital Inclusion: Iterative Enhancement of a Process Guidance User Interface for Senior Citizens

Michael Stadler, Markus Noeltner, Julia Kroenung
This study developed and tested a user interface designed to help senior citizens use online services more easily. Using a travel booking website as a case study, the researchers combined established design principles with a step-by-step visual guide and refined the design over three rounds of testing with senior participants.

Problem As more essential services like banking, shopping, and booking appointments move online, many senior citizens face significant barriers to participation due to complex and poorly designed interfaces. This digital divide can lead to both technological and social disadvantages for the growing elderly population, a problem many businesses fail to address.

Outcome - A structured, visual process guide significantly helps senior citizens navigate and complete online tasks.
- Iteratively refining the user interface based on direct feedback from seniors led to measurable improvements in performance, with users completing tasks faster in each subsequent round.
- Simple design adaptations, such as reducing complexity, using clear instructions, and ensuring high-contrast text, effectively reduce the cognitive load on older users.
- The findings confirm that designing digital services with seniors in mind is crucial for creating a more inclusive digital world and can help businesses reach a larger customer base.
Usability for Seniors, Process Guidance, Digital Accessibility, Digital Inclusion, Senior Citizens, Heuristic Evaluation, User Interface Design
Designing Digital Service Innovation Hubs: An Ecosystem Perspective on the Challenges and Requirements of SMEs and the Public Sector

Designing Digital Service Innovation Hubs: An Ecosystem Perspective on the Challenges and Requirements of SMEs and the Public Sector

Jannika Marie Schäfer, Jonas Liebschner, Polina Rajko, Henrik Cohnen, Nina Lugmair, and Daniel Heinz
This study investigates the design of a Digital Service Innovation Hub (DSIH) to facilitate and orchestrate service innovation for small and medium-sized enterprises (SMEs) and public organizations. Using a design science research approach, the authors conducted 17 expert interviews and focus group validations to analyze challenges and derive specific design requirements. The research aims to create a blueprint for a hub that moves beyond simple networking to actively manage innovation ecosystems.

Problem Small and medium-sized enterprises (SMEs) and public organizations often struggle to innovate within service ecosystems due to resource constraints, knowledge gaps, and difficulties finding the right partners. Existing Digital Innovation Hubs (DIHs) typically focus on specific technological solutions and matchmaking but fail to provide the comprehensive orchestration needed for sustained service innovation. This gap leaves many organizations unable to leverage the full potential of collaborative innovation.

Outcome - The study identifies four key challenge areas for SMEs and public organizations: exogenous factors (e.g., market speed, regulations), intraorganizational factors (e.g., resistant culture, outdated systems), knowledge and skill gaps, and partnership difficulties.
- It proposes a set of design requirements for Digital Service Innovation Hubs (DSIHs) centered on three core functions: (1) orchestrating actors by facilitating matchmaking, collaboration, and funding opportunities.
- (2) Facilitating structured knowledge transfer by sharing best practices, providing tailored content, and creating interorganizational learning formats.
- (3) Ensuring effective implementation and provision of the hub itself through user-friendly design, clear operational frameworks, and tangible benefits for participants.
service innovation, ecosystem, innovation hubs, SMEs, public sector
The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration

The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration

Christian Meske, Tobias Hermanns, Florian Brachten
This study investigates how traditional models of team collaboration, known as Transactive Memory Systems (TMS), manifest when humans work with Generative AI. Through in-depth interviews with 14 knowledge workers, the research analyzes the unique dynamics of expertise recognition, trust, and coordination that emerge in these partnerships.

Problem While Generative AI is increasingly used as a collaborative tool, our understanding of teamwork is based on human-to-human interaction. This creates a knowledge gap, as the established theories do not account for an AI partner that operates on algorithms rather than social cues, potentially leading to inefficient and frustrating collaborations.

Outcome - Human-AI collaboration is asymmetrical: Humans learn the AI's capabilities, but the AI fails to recognize and remember human expertise beyond a single conversation.
- Trust in GenAI is ambivalent and requires verification: Users simultaneously see the AI as an expert yet doubt its reliability, forcing them to constantly verify its outputs, a step not typically taken with trusted human colleagues.
- Teamwork is hierarchical, not mutual: Humans must always take the lead and direct a passive AI that lacks initiative, creating a 'boss-employee' dynamic rather than a reciprocal partnership where both parties contribute ideas.
Generative AI, Transactive Memory Systems, Human-AI Collaboration, Knowledge Work, Trust in AI, Expertise Recognition, Coordination
A Survey on Citizens' Perceptions of Social Risks in Smart Cities

A Survey on Citizens' Perceptions of Social Risks in Smart Cities

Elena Fantino, Sebastian Lins, and Ali Sunyaev
This study identifies 15 key social risks associated with the development of smart cities, such as privacy violations and increased surveillance. It then examines public perception of these risks through a quantitative survey of 310 participants in Germany and Italy. The research aims to understand how citizens view the balance between the benefits and potential harms of smart city technologies.

Problem While the digital transformation of cities promises benefits like enhanced efficiency and quality of life, it often overlooks significant social risks. Issues like data privacy, cybersecurity threats, and growing social divides can undermine human security and well-being, yet citizens' perspectives on these dangers are frequently ignored in the planning and implementation process.

Outcome - Citizens rate both the probability and severity of social risks in smart cities as relatively high.
- Despite recognizing these significant risks, participants generally maintain a positive attitude towards the concept of smart cities, highlighting a duality in public perception.
- The risk perceived as most probable by citizens is 'profiling', while 'cybersecurity threats' are seen as having the most severe impact.
- Risk perception differs based on demographic factors like age and nationality; for instance, older participants and Italian citizens reported higher risk perceptions than their younger and German counterparts.
- The findings underscore the necessity of a participatory and ethical approach to smart city development that actively involves citizens to mitigate risks and ensure equitable benefits.
smart cities, social risks, citizens' perception, AI ethics, social impact
Aisle be Back: State-of-the-Art Adoption of Retail Service Robots in Brick-and-Mortar Retail

Aisle be Back: State-of-the-Art Adoption of Retail Service Robots in Brick-and-Mortar Retail

Luisa Strelow, Michael Dominic Harr, and Reinhard Schütte
This study analyzes the current state of Retail Service Robot (RSR) adoption in physical, brick-and-mortar (B&M) stores. Using a dual research method that combines a systematic literature review with a multi-case study of major European retailers, the paper synthesizes how these robots are currently being used for various operational tasks.

Problem Brick-and-mortar retailers are facing significant challenges, including acute staff shortages and intense competition from online stores, which threaten their operational efficiency. While service robots offer a potential solution to sustain operations and transform the customer experience, a comprehensive understanding of their current adoption in retail environments is lacking.

Outcome - Retail Service Robots (RSRs) are predominantly adopted for tasks related to information exchange and goods transportation, which improves both customer service and operational efficiency.
- The potential for more advanced, human-like (anthropomorphic) interaction between robots and customers has not yet been fully utilized by retailers.
- The adoption of RSRs in the B&M retail sector is still in its infancy, with most robots being used for narrowly defined, single-purpose tasks rather than leveraging their full multi-functional potential.
- Research has focused more on customer-robot interactions than on employee-robot interactions, leaving a gap in understanding employee acceptance and collaboration.
- Many robotic systems discussed in academic literature are prototypes tested in labs, with few long-term, real-world deployments reported, especially in customer service roles.
Retail Service Robot, Brick-and-Mortar, Technology Adoption, Artificial Intelligence, Automation
Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback

Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback

Maximilian May, Konstantin Hopf, Felix Haag, Thorsten Staake, and Felix Wortmann
This study examines the effectiveness of social normative feedback in improving student engagement within a flipped classroom setting. Through a randomized controlled trial with 140 undergraduate students, researchers provided one group with emails comparing their assignment progress to their peers, while a control group received no such feedback during the main study period.

Problem The flipped classroom model requires students to be self-regulated, but many struggle with procrastination, leading to late submissions of graded assignments and underuse of voluntary learning materials. This behavior negatively affects academic performance, creating a need for scalable digital interventions that can encourage more timely and active student participation.

Outcome - The social normative feedback intervention significantly reduced late submissions of graded assignments by 8.4 percentage points (an 18.5% decrease) compared to the control group.
- Submitting assignments earlier was strongly correlated with higher correctness rates and better academic performance.
- The feedback intervention helped mitigate the decline in assignment quality that was observed in later course modules for the control group.
- The intervention did not have a significant effect on students' engagement with optional, voluntary assignments during the semester.
Flipped Classroom, Social Normative Feedback, Self Regulated Learning, Digital Interventions, Student Engagement, Higher Education
A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation

A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation

Luca Deck, Max-Paul Förster, Raimund Weidlich, and Niklas Kühl
This study reviews existing methods for marking, detecting, and labeling deepfakes to assess their effectiveness under new EU regulations. Based on a multivocal literature review, the paper finds that individual methods are insufficient. Consequently, it proposes a novel multi-level strategy that combines the strengths of existing approaches for more scalable and practical content moderation on online platforms.

Problem The increasing availability of deepfake technology poses a significant risk to democratic societies by enabling the spread of political disinformation. While the European Union has enacted regulations to enforce transparency, there is a lack of effective industry standards for implementation. This makes it challenging for online platforms to moderate deepfake content at scale, as current individual methods fail to meet regulatory and practical requirements.

Outcome - Individual methods for marking, detecting, and labeling deepfakes are insufficient to meet EU regulatory and practical requirements alone.
- The study proposes a multi-level strategy that combines the strengths of various methods (e.g., technical detection, trusted sources) to create a more robust and effective moderation process.
- A simple scoring mechanism is introduced to ensure the strategy is scalable and practical for online platforms managing massive amounts of content.
- The proposed framework is designed to be adaptable to new types of deepfake technology and allows for context-specific risk assessment, such as for political communication.
Deepfakes, EU Regulation, Online Platforms, Content Moderation, Political Communication
Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions

Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions

Christopher Diebel, Akylzhan Kassymova, Mari-Klara Stein, Martin Adam, and Alexander Benlian
This study investigates how employees perceive the fairness of decisions that involve artificial intelligence (AI). Using an online experiment with 79 participants, researchers compared scenarios where a performance evaluation was conducted by a manager alone, fully delegated to an AI, or made by a manager and an AI working together as an 'ensemble'.

Problem As companies increasingly use AI for important workplace decisions like hiring and performance reviews, it's crucial to understand how employees react. Prior research suggests that AI-driven decisions can be perceived as unfair, but it was unclear how different methods of AI integration—specifically, fully handing over a decision to AI versus a collaborative human-AI approach—affect employee perceptions of fairness and their trust in management.

Outcome - Decisions fully delegated to an AI are perceived as significantly less fair than decisions made solely by a human manager.
- This perceived unfairness in AI-delegated decisions leads to a lower level of trust in the manager who made the delegation.
- Importantly, these negative effects on fairness and trust do not occur when a human-AI 'ensemble' method is used, where both the manager and the AI are equally involved in the decision-making process.
Decision-Making, AI Systems, Procedural Fairness, Ensemble, Delegation
The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions

The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions

Lyuba Stafyeyeva
This study investigates how blockchain verification and the type of credential-issuing institution (university vs. learning academy) influence employer perceptions of a job applicant's trustworthiness, expertise, and salary expectations. Using an experimental design with 200 participants, the research evaluated how different credential formats affected hiring assessments.

Problem Verifying academic credentials is often slow, expensive, and prone to fraud, undermining trust in the system. While new micro-credentials (MCs) offer an alternative, their credibility is often unclear to employers, and it is unknown if technologies like blockchain can effectively solve this trust issue in real-world hiring scenarios.

Outcome - Blockchain verification did not significantly increase employers' perceptions of an applicant's trustworthiness or expertise.
- Employers showed no significant preference for credentials issued by traditional universities over those from alternative learning academies, suggesting a shift toward competency-based hiring.
- Applicants with blockchain-verified credentials were offered lower minimum starting salaries, indicating that while verification may reduce hiring risk for employers, it does not increase the candidate's perceived value.
- The results suggest that institutional prestige is becoming less important than verifiable skills in the hiring process.
micro-credentials, blockchain, trust, verification, employer decision-making
Design Principles for SME-focused Maturity Models in Information Systems

Design Principles for SME-focused Maturity Models in Information Systems

Stefan Rösl, Daniel Schallmo, and Christian Schieder
This study addresses the limited practical application of maturity models (MMs) among small and medium-sized enterprises (SMEs). Through a structured analysis of 28 relevant academic articles, the researchers developed ten actionable design principles (DPs) to improve the usability and strategic impact of MMs for SMEs. These principles were subsequently validated by 18 recognized experts to ensure their practical relevance.

Problem Maturity models are valuable tools for assessing organizational capabilities, but existing frameworks are often too complex, resource-intensive, and not tailored to the specific constraints of SMEs. This misalignment leads to low adoption rates, preventing smaller businesses from effectively using these models to guide their transformation and innovation efforts.

Outcome - The study developed and validated ten actionable design principles (DPs) for creating maturity models specifically tailored for Small and Medium-sized Enterprises (SMEs).
- These principles, confirmed by experts as highly useful, provide a structured foundation for researchers and designers to build MMs that are more accessible, relevant, and usable for SMEs.
- The research bridges the gap between MM theory and real-world applicability, enabling the development of tools that better support SMEs in strategic planning and capability improvement.
Design Principles, Maturity Model, Capability Assessment, SME, Information Systems, SME-specific MMs
Evaluating Consumer Decision-Making Trade-Offs in Smart Service Systems in the Smart Home Domain

Evaluating Consumer Decision-Making Trade-Offs in Smart Service Systems in the Smart Home Domain

Björn Konopka and Manuel Wiesche
This study investigates the trade-offs consumers make when purchasing smart home devices. Using a choice-based conjoint analysis, the research evaluates the relative importance of eight attributes related to performance (e.g., reliability), privacy (e.g., data storage), and market factors (e.g., price and provider).

Problem While smart home technology is increasingly popular, there is limited understanding of how consumers weigh different factors, particularly how they balance privacy concerns against product performance and cost. This study addresses this gap by quantifying which features consumers prioritize when making purchasing decisions for smart home systems.

Outcome - Reliability and the device provider are the most influential factors in consumer decision-making, significantly outweighing other attributes.
- Price and privacy-related attributes (such as data collection scope, purpose, and user controls) play a comparatively minor role.
- Consumers strongly prefer products that are reliable and made by a trusted (in this case, domestic) provider.
- The findings indicate that consumers are willing to trade off privacy concerns for tangible benefits in performance and trust in the manufacturer.
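The choice-based conjoint logic behind these findings can be sketched in a few lines: respondents repeatedly pick between two product profiles, and a binary logit on the attribute differences recovers the part-worth utilities, whose relative magnitudes give attribute importances. This is a minimal illustration under standard logit assumptions, not the paper's actual design — the attribute names, true part-worths, and task counts below are invented for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical attributes (not the study's exact levels): reliability,
# domestic provider, price level, data-collection scope.
true_partworths = np.array([2.0, 1.5, -0.8, -0.4])
n_tasks = 2000

# Each choice task shows two random profiles; respondents pick the one
# with higher utility plus Gumbel noise (the standard logit assumption,
# since the difference of two Gumbels is logistic).
A = rng.normal(size=(n_tasks, 4))
B = rng.normal(size=(n_tasks, 4))
u_a = A @ true_partworths + rng.gumbel(size=n_tasks)
u_b = B @ true_partworths + rng.gumbel(size=n_tasks)
chose_a = (u_a > u_b).astype(int)

# A binary logit on the attribute differences recovers the part-worths.
model = LogisticRegression(fit_intercept=False).fit(A - B, chose_a)
est = model.coef_.ravel()

# Relative importance: each attribute's share of total absolute part-worth.
importance = np.abs(est) / np.abs(est).sum()
print(dict(zip(["reliability", "provider", "price", "data scope"],
               importance.round(2))))
```

With enough tasks, the estimated importances reproduce the pattern the study reports: attributes with larger true part-worths (here, reliability and provider) dominate the importance shares.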
Smart Service Systems, Smart Home, Conjoint, Consumer Preferences, Privacy
LLMs for Intelligent Automation - Insights from a Systematic Literature Review

LLMs for Intelligent Automation - Insights from a Systematic Literature Review

David Sonnabend, Mahei Manhai Li and Christoph Peters
This study conducts a systematic literature review to examine how Large Language Models (LLMs) can enhance Intelligent Automation (IA). The research aims to overcome the limitations of traditional Robotic Process Automation (RPA), such as handling unstructured data and workflow changes, by systematically investigating the integration of LLMs.

Problem Traditional Robotic Process Automation (RPA) struggles with complex tasks involving unstructured data and dynamic workflows. While Large Language Models (LLMs) show promise in addressing these issues, there has been no systematic investigation into how they can specifically advance the field of Intelligent Automation (IA), creating a significant research gap.

Outcome - LLMs are primarily used to process complex inputs, such as unstructured text, within automation workflows.
- They are leveraged to generate automation workflows directly from natural language commands, simplifying the creation process.
- LLMs are also used to guide goal-oriented Graphical User Interface (GUI) navigation, making automation more adaptable to interface changes.
- A key research gap was identified in the lack of systems that combine these different capabilities and enable continuous learning at runtime.
Large Language Models (LLMs), Intelligent Process Automation (IPA), Intelligent Automation (IA), Cognitive Automation (CA), Tool Learning, Systematic Literature Review, Robotic Process Automation (RPA)
Label Error Detection in Defect Classification using Area Under the Margin (AUM) Ranking on Tabular Data

Label Error Detection in Defect Classification using Area Under the Margin (AUM) Ranking on Tabular Data

Pavlos Rath-Manakidis, Kathrin Nauth, Henry Huick, Miriam Fee Unger, Felix Hoenig, Jens Poeppelbuss, and Laurenz Wiskott
This study introduces an efficient method using Area Under the Margin (AUM) ranking with gradient-boosted decision trees to detect labeling errors in tabular data. The approach is designed to improve data quality for machine learning models used in industrial quality control, specifically for flat steel defect classification. The method's effectiveness is validated on both public and real-world industrial datasets, demonstrating it can identify problematic labels in a single training run.

Problem Automated surface inspection systems in manufacturing rely on machine learning models trained on large datasets. The performance of these models is highly dependent on the quality of the data labels, but errors frequently occur due to annotator mistakes or ambiguous defect definitions. Existing methods for finding these label errors are often computationally expensive and not optimized for the tabular data formats common in industrial applications.

Outcome - The proposed AUM method is as effective as more complex, computationally expensive techniques for detecting label errors but requires only a single model training run.
- The method successfully identifies both synthetically created and real-world label errors in industrial datasets related to steel defect classification.
- Integrating this method into quality control workflows significantly reduces the manual effort required to find and correct mislabeled data, improving the overall quality of training datasets and subsequent model performance.
- In a real-world test, the method flagged suspicious samples for expert review, where 42% were confirmed to be labeling errors.
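The AUM idea can be sketched with scikit-learn's gradient boosting: for each training sample, take the margin between the score of its assigned label and the strongest competing class, averaged over boosting stages from a single fit; samples with persistently low margins are label-error candidates. This is a hedged illustration, not the authors' pipeline — the synthetic dataset, flip count, and review budget are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy tabular data with a few labels flipped on purpose.
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
y_noisy = y.copy()
flipped = np.arange(20)                        # flip the first 20 labels
y_noisy[flipped] = (y_noisy[flipped] + 1) % 3

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y_noisy)

# AUM: margin of the assigned label over the strongest other class,
# averaged across boosting stages (one model fit, no retraining).
margins = []
for scores in clf.staged_decision_function(X):  # (n_samples, n_classes) per stage
    idx = np.arange(len(y_noisy))
    assigned = scores[idx, y_noisy]
    others = scores.copy()
    others[idx, y_noisy] = -np.inf              # mask the assigned class
    margins.append(assigned - others.max(axis=1))
aum = np.mean(margins, axis=0)

# Low AUM -> candidate label errors to send for expert review.
suspects = np.argsort(aum)[:30]
print("flipped samples among top-30 suspects:",
      len(set(suspects) & set(flipped)))
```

Because the ranking falls out of the staged scores of a single training run, the approach avoids the repeated retraining that makes cross-validation-style error detectors expensive, which matches the single-run efficiency claim above.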
Label Error Detection, Automated Surface Inspection System (ASIS), Machine Learning, Gradient Boosting, Data-centric AI
Taking a Sociotechnical Perspective on Self-Sovereign Identity – A Systematic Literature Review

Taking a Sociotechnical Perspective on Self-Sovereign Identity – A Systematic Literature Review

Lukas Florian Bossler, Teresa Huber, and Julia Kroenung
This study provides a comprehensive analysis of academic literature on Self-Sovereign Identity (SSI), a system that aims to give individuals control over their digital data. Through a systematic literature review, the paper identifies and categorizes the key sociotechnical challenges—both technical and social—that affect the implementation and widespread adoption of SSI. The goal is to map the current research landscape and highlight underexplored areas.

Problem As individuals use more internet services, they lose control over their personal data, which is often managed and monetized by large tech companies. While Self-Sovereign Identity (SSI) is a promising solution to restore user control, academic research has disproportionately focused on technical aspects like security. This has created a significant knowledge gap regarding the crucial social challenges, such as user acceptance, trust, and usability, which are vital for SSI's real-world success.

Outcome - Security and privacy are the most frequently discussed challenges in SSI literature, often linked to the use of blockchain technology.
- Social factors essential for adoption, including user acceptance, trust, usability, and control, are significantly overlooked in current academic research.
- Over half of the analyzed papers discuss SSI in a general sense, with a lack of focus on specific application domains like e-government, healthcare, or finance.
- A potential mismatch exists between SSI's privacy needs and the inherent properties of blockchain, suggesting that alternative technologies should be explored.
- The paper concludes there is a strong need for more domain-specific and design-oriented research to address the social hurdles of SSI adoption.
self-sovereign identity, decentralized identity, blockchain, sociotechnical challenges, digital identity, systematic literature review
Measuring AI Literacy of Future Knowledge Workers: A Mediated Model of AI Experience and AI Knowledge

Measuring AI Literacy of Future Knowledge Workers: A Mediated Model of AI Experience and AI Knowledge

Sarah Hönigsberg, Sabrine Mallek, Laura Watkowski, and Pauline Weritz
This study investigates how future professionals develop AI literacy, which is the ability to effectively use and understand AI tools. Using a survey of 352 business school students, the researchers examined how hands-on experience with AI (both using and designing it) and theoretical knowledge about AI work together to build overall proficiency. The research proposes a new model showing that knowledge acts as a critical bridge between simply using AI and truly understanding it.

Problem As AI becomes a standard tool in professional settings, simply knowing how to use it isn't enough; professionals need a deeper understanding, or "AI literacy," to use it effectively and responsibly. The study addresses the problem that current frameworks for teaching AI skills often overlook the specific needs of knowledge workers and don't clarify how hands-on experience translates into true competence. This gap makes it difficult for companies and universities to design effective training programs to prepare the future workforce.

Outcome - Hands-on experience with AI is crucial, but it doesn't directly create AI proficiency; instead, it serves to build a foundation of AI knowledge.
- This structured AI knowledge is the critical bridge that turns practical experience into true AI literacy, allowing individuals to critique and apply AI insights effectively.
- Experience in designing or configuring AI systems has a significantly stronger positive impact on developing AI literacy than just using AI tools.
- The findings suggest that education and corporate training should combine practical, hands-on projects with structured learning about how AI works to build a truly AI-literate workforce.
knowledge worker, AI literacy, digital intelligence, digital literacy, AI knowledge
Mapping Digitalization in the Crafts Industry: A Systematic Literature Review

Mapping Digitalization in the Crafts Industry: A Systematic Literature Review

Pauline Désirée Gantzer, Audris Pulanco Umel, and Christoph Lattemann
This study challenges the perception that the craft industry lags in digital transformation by conducting a systematic literature review of 141 scientific and practitioner papers. It aims to map the application and influence of specific digital technologies across various craft sectors. The findings are used to identify patterns of adoption, highlight gaps, and recommend future research directions.

Problem The craft and skilled trades industry, despite its significant economic and cultural role, is often perceived as traditional and slow to adopt digital technologies. This view suggests the sector is missing out on crucial business opportunities and innovations, creating a knowledge gap about the actual extent and nature of digitalization within these businesses.

Outcome - The degree and type of digital technology adoption vary significantly across different craft sectors.
- Contrary to the perception of being laggards, craft businesses are actively applying a wide range of digital technologies to improve efficiency, competitiveness, and customer engagement.
- Many businesses (47.7% of cases analyzed) use digital tools primarily for value creation, such as optimizing production processes and operational efficiency.
- Sectors like construction and textiles integrate sophisticated technologies (e.g., AI, IoT, BIM), while more traditional crafts prioritize simpler tools like social media and e-commerce for marketing.
- Digital transformation in the craft industry is not a one-size-fits-all process but is shaped by sector-specific needs, resource constraints, and cultural values.
crafts, digital transformation, digitalization, skilled trades, systematic literature review
Typing Less, Saying More? – The Effects of Using Generative AI in Online Consumer Review Writing

Typing Less, Saying More? – The Effects of Using Generative AI in Online Consumer Review Writing

Maximilian Habla
This study investigates how using Generative AI (GenAI) impacts the quality and informativeness of online consumer reviews. Through a scenario-based online experiment, the research compares reviews written with and without GenAI assistance, analyzing factors like the writer's cognitive load and the resulting review's detail, complexity, and sentiment.

Problem Writing detailed, informative online reviews is a mentally demanding task for consumers, which often results in less helpful content for others making purchasing decisions. While platforms use templates to help, these still require significant effort from the reviewer. This study addresses the gap in understanding whether new GenAI tools can make it easier for people to write better, more useful reviews.

Outcome - Using GenAI significantly reduces the perceived cognitive load (mental effort) for people writing reviews.
- Reviews written with the help of GenAI are more informative, covering a greater number and a wider diversity of product aspects and topics.
- GenAI-assisted reviews tend to exhibit higher linguistic complexity and express a more positive sentiment, even when the star rating given by the user is the same.
- Contrary to the initial hypothesis, the reduction in cognitive load did not directly account for the increase in review informativeness, suggesting other mechanisms are at play.
Online Reviews, Informativeness, GenAI, Cognitive Load Theory, Linguistic Complexity, Sentiment Analysis
Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace

Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace

Dugaxhin Xhigoli
This qualitative study examines how an employee's personality, professional identity, and company culture influence their engagement with generative AI (GenAI). Through 23 expert interviews, the research explores the underlying factors that shape different AI adoption behaviors, from transparent integration to strategic concealment.

Problem As companies rapidly adopt generative AI, they encounter a wide range of employee responses, yet there is limited understanding of what drives this variation. This study addresses the research gap by investigating why employees differ in their AI usage, specifically focusing on how individual psychology and the organizational environment interact to shape these behaviors.

Outcome - The study identified four key dimensions influencing GenAI adoption: Personality-driven usage behavior, AI-driven changes to professional identity, organizational culture factors, and the organizational risks of unmanaged AI use.
- Four distinct employee archetypes were identified: 'Innovative Pioneers' who openly use and identify with AI, 'Hidden Users' who identify with AI but conceal its use for competitive advantage, 'Transparent Users' who openly use AI as a tool, and 'Critical Skeptics' who remain cautious and avoid it.
- Personality traits, particularly those from the 'Dark Triad' like narcissism, and competitive work environments significantly drive the strategic concealment of AI use.
- A company's culture is critical; open, innovative cultures foster ethical and transparent AI adoption, whereas rigid, hierarchical cultures encourage concealment and the rise of risky 'Shadow AI'.
Generative AI, Personality Traits, AI Identity, Organizational Culture, AI Adoption
Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport

Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport

Markus Ewert and Martin Bichler
This study proposes a new method for analyzing auction data to understand bidders' private valuations. It extends an existing framework by reformulating the estimation challenge as an optimal transport problem, which avoids the statistical limitations of traditional techniques. The approach uses a proxy equilibrium model to evaluate bid distributions analytically, yielding more accurate and robust estimates.

Problem Designing profitable auctions, such as setting an optimal reserve price, requires knowing how much bidders are truly willing to pay, but this information is hidden. Existing methods to estimate these valuations from observed bids often suffer from statistical biases and inaccuracies, especially with limited data, leading to poor auction design and lost revenue for sellers.

Outcome - The proposed optimal transport-based estimator consistently outperforms established kernel-based techniques, showing significantly lower error in estimating true bidder valuations.
- The new method is more robust, providing accurate estimates even in scenarios with high variance in bidding behavior where traditional methods fail.
- In practical tests, reserve prices set using the new method's estimates led to significant revenue gains for the auctioneer, while prices derived from older methods resulted in zero revenue.
Structural Estimation, Auctions, Equilibrium Learning, Optimal Transport, Econometrics
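For intuition, the one-dimensional optimal-transport (Wasserstein) distance that underlies such estimators can be computed directly from samples. The sketch below is illustrative only, assuming a textbook symmetric first-price auction with uniform valuations rather than the authors' proxy-equilibrium estimator:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Textbook toy (not the paper's estimator): n symmetric, risk-neutral
# bidders with i.i.d. Uniform(0, 1) valuations in a first-price auction.
# The equilibrium bid is b(v) = (n - 1) / n * v, so valuations can be
# recovered from observed bids by inverting that mapping.
rng = np.random.default_rng(0)
n_bidders = 4
true_values = rng.uniform(0.0, 1.0, size=5000)
observed_bids = (n_bidders - 1) / n_bidders * true_values
recovered_values = n_bidders / (n_bidders - 1) * observed_bids

# In one dimension, the optimal-transport (Wasserstein-1) distance
# between two samples reduces to comparing their quantile functions,
# which is why 1-D OT-based estimation is statistically tractable.
error = wasserstein_distance(recovered_values, true_values)
print(f"W1 distance between recovered and true valuations: {error:.2e}")
```

In real data the inverse bidding function is unknown and must itself be estimated, which is where the paper's proxy equilibrium model comes in.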
A Case Study on Large Vehicles Scheduling for Railway Infrastructure Maintenance: Modelling and Sensitivity Analysis

A Case Study on Large Vehicles Scheduling for Railway Infrastructure Maintenance: Modelling and Sensitivity Analysis

Jannes Glaubitz, Thomas Wolff, Henry Gräser, Philipp Sommerfeldt, Julian Reisch, David Rößler-von Saß, and Natalia Kliewer
This study presents an optimization-driven approach to scheduling large vehicles for preventive railway infrastructure maintenance, using real-world data from Deutsche Bahn. It employs a greedy heuristic and a Mixed Integer Programming (MIP) model to evaluate key factors influencing scheduling efficiency. The goal is to provide actionable insights for strategic decision-making and improve operational management.

Problem Railway infrastructure maintenance is a critical operational task that often causes significant disruptions, delays, and capacity restrictions for both passenger and freight services. These disruptions reduce the overall efficiency and attractiveness of the railway system. The study addresses the challenge of optimizing maintenance schedules to maximize completed work while minimizing interference with regular train operations.

Outcome - The primary bottleneck in maintenance scheduling is the limited availability and reusability of pre-defined work windows ('containers'), not the number of maintenance vehicles.
- Increasing scheduling flexibility by allowing work containers to be booked multiple times dramatically improves maintenance completion rates, from 84.7% to 98.2%.
- Simply adding more vehicles to the fleet provides only marginal improvements, as scheduling efficiency is the limiting factor.
- Increasing the operational radius for vehicles from depots and moderately extending shift lengths can further improve maintenance coverage.
- The analysis suggests that large, predefined maintenance containers are often inefficient and should be split into smaller sections to improve flexibility and resource utilization.
Railway Track Maintenance Planning, Maintenance Track Possession Problem, Operations Research, Mixed Integer Programming, Vehicle Scheduling, Sensitivity Analysis, Optimization
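The trade-off the MIP model captures can be illustrated with a deliberately tiny stand-in. The sketch below, a knapsack-style selection of jobs into one work window, uses invented numbers and is not the authors' model (which covers many containers, vehicles, depots, and shifts):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Deliberately tiny stand-in for the scheduling MIP (invented numbers):
# choose which maintenance jobs to place into one work window so that
# total work value is maximized while durations fit the vehicle shift.
work_value = np.array([8.0, 5.0, 7.0, 3.0])  # value of completing each job
duration_h = np.array([4.0, 3.0, 5.0, 2.0])  # hours each job occupies
shift_h = 8.0                                # available vehicle-shift hours

# scipy's milp minimizes, so negate the value vector to maximize.
res = milp(
    c=-work_value,
    constraints=LinearConstraint(duration_h[np.newaxis, :], -np.inf, shift_h),
    integrality=np.ones(4),  # integer variables; 0/1 bounds make them binary
    bounds=Bounds(0, 1),
)
chosen = res.x.round().astype(int)
print("chosen jobs:", chosen, "| total value:", -res.fun)
```

The full model adds the coupling between containers, vehicle routing radii, and shift lengths whose sensitivity the paper analyzes.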
Boundary Resources – A Review

Boundary Resources – A Review

David Rochholz
This study conducts a systematic literature review to analyze the current state of research on 'boundary resources,' which are the tools like APIs and SDKs that connect digital platforms with third-party developers. By examining 89 publications, the paper identifies major themes and significant gaps in the academic literature. The goal is to consolidate existing knowledge and propose a clear research agenda for the future.

Problem Digital platforms rely on third-party developers to create value, but the tools (boundary resources) that enable this collaboration are not well understood. Research is fragmented and often overlooks critical business aspects, such as the financial reasons for opening a platform and how to monetize these resources. Furthermore, most studies focus on consumer apps, ignoring the unique challenges of business-to-business (B2B) platforms and the rise of AI-driven developers.

Outcome - Identifies four key gaps in current research: the financial impact of opening platforms, the overemphasis on consumer (B2C) versus business (B2B) contexts, the lack of a clear definition for what constitutes a platform, and the limited understanding of modern developers, including AI agents.
- Proposes a research agenda focused on monetization strategies, platform valuation, and the distinct dynamics of B2B ecosystems.
- Emphasizes the need to understand how the role of developers is changing with the advent of generative AI.
- Concludes that future research must create better frameworks to help businesses manage and profit from their platform ecosystems in a more strategic way.
Boundary Resource, Platform, Complementor, Research Agenda, Literature Review
You Only Lose Once: Blockchain Gambling Platforms

You Only Lose Once: Blockchain Gambling Platforms

Lorenz Baum, Arda Güler, and Björn Hanneke
This study investigates user behavior on emerging blockchain-based gambling platforms to provide insights for regulators and user protection. The researchers analyzed over 22,800 gambling rounds from YOLO, a smart contract-based platform, involving 3,306 unique users. A generalized linear mixed model was used to identify the effects of users' cognitive biases on their on-chain gambling activities.

Problem Online gambling revenues are rising, exacerbating societal problems, and much of the activity evades regulatory oversight. The rise of decentralized, blockchain-based gambling platforms aggravates these issues: they promise transparency while lacking user protection measures, making it easier to exploit users' cognitive biases and harder for authorities to enforce regulations.

Outcome - Cognitive biases like the 'anchoring effect' (repeatedly betting the same amount) and the 'gambler's fallacy' (believing a losing streak makes a win more likely) significantly increase the probability that a user will continue gambling.
- The study confirms that blockchain platforms can exploit these psychological biases, leading to sustained gambling and substantial financial losses for users, with a sample of 3,306 users losing a total of $5.1 million.
- Due to the decentralized and permissionless nature of these platforms, traditional regulatory measures like deposit limits, age verification, and self-exclusion are nearly impossible to enforce.
- The findings highlight the urgent need for new regulatory approaches and user protection mechanisms tailored to the unique challenges of decentralized gambling environments, such as on-chain monitoring for risky behavior.
gambling platform, smart contract, gambling behavior, cognitive bias, user behavior
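The modeling idea behind such a regression can be sketched, in heavily simplified form, with a plain logistic regression on synthetic rounds. This omits the paper's per-user random effects, and both the bias proxies and coefficients below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic rounds (not the YOLO data set): predict whether a user plays
# another round from two cognitive-bias proxies, anchoring (repeating
# last round's stake) and the length of the current losing streak.
rng = np.random.default_rng(1)
n = 4000
same_bet = rng.integers(0, 2, n)       # 1 = repeated last round's stake
losing_streak = rng.integers(0, 6, n)  # consecutive losses so far

# Generate outcomes from an assumed logistic model with positive bias effects.
logit = -1.0 + 1.2 * same_bet + 0.5 * losing_streak
continued = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([same_bet, losing_streak])
model = LogisticRegression().fit(X, continued)
print("effect of anchoring:     ", round(model.coef_[0][0], 2))
print("effect of losing streak: ", round(model.coef_[0][1], 2))
```

Positive fitted coefficients correspond to the paper's finding that both biases raise the probability of continued gambling; a mixed model additionally absorbs stable per-user differences.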
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It is unclear whether AI assistance is more impactful at the beginning or the end of the writing process, or whether users prefer actively asking for help over receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome - Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context—such as the user's expertise, the AI's design, and the nature of the task—which is crucial for explaining these inconsistencies.

Outcome - The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework
“We don't need it” - Insights into Blockchain Adoption in the German Pig Value Chain

“We don't need it” - Insights into Blockchain Adoption in the German Pig Value Chain

Hauke Precht, Marlen Jirschitzka, and Jorge Marx Gómez
This study investigates why blockchain technology, despite its acclaimed benefits for transparency and traceability, has not been adopted in the German pig value chain. Researchers conducted eight semi-structured interviews with industry experts, analyzing the findings through the technology-organization-environment (TOE) framework to identify specific barriers to implementation.

Problem There is a significant disconnect between the theoretical advantages of blockchain for food supply chains and its actual implementation in the real world. This study addresses the specific research gap of why the German pig industry, a major agricultural sector, is not utilizing blockchain technology, aiming to understand the practical factors that prevent its adoption.

Outcome - Stakeholders perceive their existing technology solutions as sufficient, meeting current demands for data exchange and traceability without needing blockchain.
- Trust, a key benefit of blockchain, is already well-established within the industry through long-standing business relationships, interlocking company ownership, and neutral non-profit organizations.
- The vast majority of industry experts do not believe blockchain offers any significant additional benefit or value over their current systems and processes.
- There is a lack of market demand for the features blockchain provides; neither industry actors nor end consumers are asking for the level of transparency or immutability it offers.
- Significant practical barriers include the high investment costs required, a general lack of financial slack for new IT projects, and insufficient digital infrastructure across the value chain.
blockchain adoption, TOE, food supply chain, German pig value chain, qualitative research, supply chain management, technology adoption barriers
Algorithmic Control in Non-Platform Organizations – Workers' Legitimacy Judgments and the Impact of Individual Character Traits

Algorithmic Control in Non-Platform Organizations – Workers' Legitimacy Judgments and the Impact of Individual Character Traits

Felix Hirsch
This study investigates how employees in traditional, non-platform companies perceive algorithmic control (AC) systems that manage their work. Using fuzzy-set Qualitative Comparative Analysis (fsQCA), it specifically examines how a worker's individual competitiveness influences whether they judge these systems as legitimate in terms of fairness, autonomy, and professional development.

Problem While the use of algorithms to manage workers is expanding from the platform economy to traditional organizations, little is known about why employees react so differently to it. Existing research has focused on organizational factors, largely neglecting how individual personality traits impact workers' acceptance and judgment of these new management systems.

Outcome - A worker's personality, specifically their competitiveness, is a major factor in how they perceive algorithmic management.
- Competitive workers generally judge algorithmic control positively, particularly in relation to fairness, autonomy, and competence development.
- Non-competitive workers tend to have negative judgments towards algorithmic systems, often rejecting them as unhelpful for their professional growth.
- The findings show a clear distinction: competitive workers see AC, especially rating systems, as fair, while non-competitive workers view it as unfair.
Algorithmic Control, Legitimacy Judgments, Non-Platform Organizations, fsQCA, Worker Perception, Character Traits
Design Guidelines for Effective Digital Business Simulation Games: Insights from a Systematic Literature Review on Training Outcomes

Design Guidelines for Effective Digital Business Simulation Games: Insights from a Systematic Literature Review on Training Outcomes

Manuel Thomas Pflumm, Timo Phillip Böttcher, and Helmut Krcmar
This study analyzes 64 empirical papers to understand the effectiveness of Digital Business Simulation Games (DBSGs) as training tools. It systematically reviews existing research to identify key training outcomes and uses these findings to develop a practical framework of design guidelines. The goal is to provide evidence-based recommendations for creating and implementing more impactful business simulation games.

Problem Businesses and universities increasingly use digital simulation games to teach complex decision-making, but their actual effectiveness varies. Research on what makes these games successful is scattered, and there is a lack of clear, comprehensive guidelines for developers and instructors. This makes it difficult to consistently design games and training programs that maximize learning and skill development.

Outcome - The study identified four key training outcomes from DBSGs: attitudinal (how users feel about the training), motivational (engagement and drive), behavioral (teamwork and actions), and cognitive (critical thinking and skill development).
- Positive attitudes, motivation, and engagement were found to directly reinforce and enhance cognitive learning outcomes, showing that a user's experience is crucial for effective learning.
- The research provides a practical framework with specific guidelines for both the development of the game itself and the implementation of the training program.
- Key development guidelines include using realistic business scenarios, providing high-quality information, and incorporating motivating elements like compelling stories and leaderboards.
- Key implementation guidelines for instructors include proper preparation, pre-training briefings, guided debriefing sessions, and connecting the simulation experience to real-world business cases.
Digital business simulation games, training effectiveness, design guidelines, literature review, corporate learning, experiential learning
Designing Speech-Based Assistance Systems: The Automation of Minute-Taking in Meetings

Designing Speech-Based Assistance Systems: The Automation of Minute-Taking in Meetings

Anton Koslow, Benedikt Berger
This study investigates how to design speech-based assistance systems (SBAS) to automate meeting minute-taking. The researchers developed and evaluated a prototype with varying levels of automation in an online study to understand how to balance the economic benefits of automation with potential drawbacks for employees.

Problem While AI-powered speech assistants promise to make tasks like taking meeting minutes more efficient, high levels of automation can negatively impact employees by reducing their satisfaction and sense of professional identity. This research addresses the challenge of designing these systems to reap the benefits of automation while mitigating its adverse effects on human workers.

Outcome - A higher level of automation improves the objective quality of meeting minutes, such as the completeness of information and accuracy of speaker assignments.
- However, high automation can have adverse effects on the minute-taker's satisfaction and their identification with the work they produce.
- Users reported higher satisfaction and identification with the results under partial automation compared to high automation, suggesting they value their own contribution to the final product.
- Automation effectively reduces the perceived cognitive effort required for the task.
- The study concludes that assistance systems should be designed to enhance human work, not just replace it, by balancing automation with meaningful user integration and control.
Automation, speech, digital assistants, design science
Unveiling Location-Specific Price Drivers: A Two-Stage Cluster Analysis for Interpretable House Price Predictions

Unveiling Location-Specific Price Drivers: A Two-Stage Cluster Analysis for Interpretable House Price Predictions

Paul Gümmer, Julian Rosenberger, Mathias Kraus, Patrick Zschech, and Nico Hambauer
This study proposes a novel machine learning approach for house price prediction using a two-stage clustering method on 43,309 German property listings from 2023. The method first groups properties by location and then refines these groups with additional property features, subsequently applying interpretable models like linear regression (LR) or generalized additive models (GAM) to each cluster. This balances predictive accuracy with the ability to understand the model's decision-making process.

Problem Predicting house prices is difficult because of significant variations in local markets. Current methods often use either highly complex 'black-box' models that are accurate but hard to interpret, or overly simplistic models that are interpretable but fail to capture the nuances of different market segments. This creates a trade-off between accuracy and transparency, making it difficult for real estate professionals to get reliable and understandable property valuations.

Outcome - The two-stage clustering approach significantly improved prediction accuracy compared to models without clustering.
- The mean absolute error was reduced by 36% for the Generalized Additive Model (GAM/EBM) and 58% for the Linear Regression (LR) model.
- The method provides deeper, cluster-specific insights into how different features, like construction year and living space, affect property prices in different local markets.
- By segmenting the market, the model reveals that price drivers vary significantly across geographical locations and property types, enhancing market transparency for buyers, sellers, and analysts.
House Pricing, Cluster Analysis, Interpretable Machine Learning, Location-Specific Predictions
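The two-stage idea, cluster by location first and then fit one interpretable model per cluster, can be sketched on synthetic listings. All regions, coordinates, and prices per square meter below are invented for illustration and are not the paper's data or pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

# Synthetic two-region market: region A is priced near 9000 EUR/sqm,
# region B near 4500 EUR/sqm (invented numbers).
rng = np.random.default_rng(42)
n = 200
lat = np.concatenate([rng.normal(48.1, 0.05, n), rng.normal(52.5, 0.05, n)])
lon = np.concatenate([rng.normal(11.6, 0.05, n), rng.normal(13.4, 0.05, n)])
sqm = rng.uniform(40, 160, 2 * n)
rate = np.where(np.arange(2 * n) < n, 9000.0, 4500.0)
price = rate * sqm + rng.normal(0, 20000, 2 * n)

# Stage 1: cluster listings purely on location.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([lat, lon])
)

# Stage 2: one interpretable linear model per location cluster, whose
# coefficients expose the cluster-specific price driver directly.
coefs = []
for k in (0, 1):
    mask = labels == k
    m = LinearRegression().fit(sqm[mask].reshape(-1, 1), price[mask])
    coefs.append(m.coef_[0])
    print(f"cluster {k}: about {m.coef_[0]:.0f} EUR per additional sqm")
```

Recovering two clearly different per-square-meter rates is the kind of cluster-specific, interpretable insight the paper reports; the actual study refines the location clusters with further property features.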
IT-Based Self-Monitoring for Women's Physical Activity: A Self-Determination Theory Perspective

IT-Based Self-Monitoring for Women's Physical Activity: A Self-Determination Theory Perspective

Asma Aborobb, Falk Uebernickel, and Danielly de Paula
This study analyzes what drives women's engagement with digital fitness applications. Researchers used computational topic modeling on over 34,000 user reviews, mapping the findings to Self-Determination Theory's core psychological needs: autonomy, competence, and relatedness. The goal was to create a structured framework to understand how app features can better support user motivation and long-term use.

Problem Many digital health and fitness apps struggle with low long-term user engagement because they often lack a strong theoretical foundation and adopt a "one-size-fits-all" approach. This issue is particularly pressing as there is a persistent global disparity in physical activity, with women being less active than men, suggesting that existing apps may not adequately address their specific psychological and motivational needs.

Outcome - Autonomy is the most dominant factor for women users, who value control, flexibility, and customization in their fitness apps.
- Competence is the second most important need, highlighting the desire for features that support skill development, progress tracking, and provide structured feedback.
- Relatedness, though less prominent, is also crucial, with users seeking social support, community connection, and representation through supportive coaches and digital influencers, especially around topics like maternal health.
- The findings suggest that to improve long-term engagement, fitness apps targeting women should prioritize features that give users a sense of control, help them feel effective, and foster a sense of community.
ITSM, Self-Determination Theory, Physical Activity, User Engagement
The PV Solution Guide: A Prototype for a Decision Support System for Photovoltaic Systems

The PV Solution Guide: A Prototype for a Decision Support System for Photovoltaic Systems

Chantale Lauer, Maximilian Lenner, Jan Piontek, and Christian Murlowski
This study presents the conceptual design of the 'PV Solution Guide,' a user-centric prototype for a decision support system for homeowners considering photovoltaic (PV) systems. The prototype uses a conversational agent and 3D modeling to adapt guidance to specific house types and the user's level of expertise. An initial evaluation compared the prototype's usability and trustworthiness against an established tool.

Problem Current online tools and guides for homeowners interested in PV systems are often too rigid, failing to accommodate unique home designs or varying levels of user knowledge. Information is frequently scattered, incomplete, or biased, leading to consumer frustration, distrust, and decision paralysis, which ultimately hinders the adoption of renewable energy.

Outcome - The study developed the 'PV Solution Guide,' a prototype decision support system designed to be more adaptive and user-friendly than existing tools.
- In a comparative evaluation, the prototype significantly outperformed the established 'Solarkataster Rheinland-Pfalz' tool in usability, with a System Usability Scale (SUS) score of 80.21 versus 56.04.
- The prototype also achieved a higher perceived trust score (82.59% vs. 76.48%), excelling in perceived benevolence and competence.
- Key features contributing to user trust and usability included transparent cost structures, personalization based on user knowledge and housing, and an interactive 3D model of the user's home.
Decision Support Systems, Photovoltaic Systems, Human-Centered Design, Qualitative Research
Designing AI-driven Meal Demand Prediction Systems

Designing AI-driven Meal Demand Prediction Systems

Alicia Cabrejas Leonhardt, Maximilian Kalff, Emil Kobel, and Max Bauch
This study outlines the design of an Artificial Intelligence (AI) system for predicting meal demand, with a focus on the airline catering industry. Through interviews with various stakeholders, the researchers identified key system requirements and developed nine fundamental design principles. These principles were then consolidated into a feasible system architecture to guide the development of effective forecasting tools.

Problem Inaccurate demand forecasting creates significant challenges for industries like airline catering, leading to a difficult balance between waste and customer satisfaction. Overproduction results in high costs and food waste, while underproduction causes lost sales and unhappy customers. This paper addresses the need for a more precise, data-driven approach to forecasting to improve sustainability, reduce costs, and enhance operational efficiency.

Outcome - The research identified key requirements for AI-driven demand forecasting systems based on interviews with industry experts.
- Nine core design principles were established to guide the development of these systems, focusing on aspects like data integration, sustainability, modularity, transparency, and user-centric design.
- A feasible system architecture was proposed that consolidates all nine principles, demonstrating a practical path for implementation.
- The findings provide a framework for creating advanced AI tools that can improve prediction accuracy, reduce food waste, and support better decision-making in complex operational environments.
meal demand prediction, forecasting methodology, customer choice behaviour, supervised machine learning, design science research
Analyzing German Parliamentary Speeches: A Machine Learning Approach for Topic and Sentiment Classification

Analyzing German Parliamentary Speeches: A Machine Learning Approach for Topic and Sentiment Classification

Lukas Pätz, Moritz Beyer, Jannik Späth, Lasse Bohlen, Patrick Zschech, Mathias Kraus, and Julian Rosenberger
This study investigates political discourse in the German parliament (the Bundestag) by applying machine learning to analyze approximately 28,000 speeches from the last five years. The researchers developed and trained two separate models to classify the topic and the sentiment (positive or negative tone) of each speech. These models were then used to identify trends in topics and sentiment across different political parties and over time.

Problem In recent years, Germany has experienced growing public distrust in political institutions and a perceived divide between politicians and the general population. While analyses of political discussion often draw on social media, understanding the formal, unfiltered debates within parliament is crucial for transparency and for assessing the dynamics of political communication. This study addresses the need for tools to systematically analyze this large volume of political speech and uncover patterns in parties' priorities and rhetorical strategies.

Outcome - Debates are dominated by three key policy areas: Economy and Finance, Social Affairs and Education, and Foreign and Security Policy, which together account for about 70% of discussions.
- A party's role as either government or opposition strongly influences its tone; parties in opposition use significantly more negative language than those in government, and this tone shifts when their role changes after an election.
- Parties on the political extremes (AfD and Die Linke) consistently use a much higher percentage of negative language compared to centrist parties.
- Parties tend to be most critical (i.e., use more negative sentiment) when discussing their own core policy areas, likely as a strategy to emphasize their priorities and the need for action.
- The developed machine learning models proved highly effective, demonstrating that this computational approach is a feasible and valuable method for large-scale analysis of political discourse.
Natural Language Processing, German Parliamentary, Discourse Analysis, Bundestag, Machine Learning, Sentiment Analysis, Topic Classification
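The two-classifier setup can be sketched with a toy text-classification pipeline. The speeches, labels, and model choice below are invented for illustration and are not the authors' corpus or models:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: six invented "speeches" with topic and sentiment labels.
speeches = [
    "The budget deficit and tax reform demand immediate action",
    "Interest rates and inflation burden our economy",
    "Our schools and universities need better funding",
    "Teachers deserve support and education must improve",
    "NATO commitments and defence spending secure our borders",
    "Foreign policy and our alliances protect national security",
]
topics = ["economy", "economy", "education", "education", "security", "security"]
sentiments = ["negative", "negative", "negative", "positive", "positive", "positive"]

# One pipeline per classification task, mirroring the two-model setup:
# TF-IDF features feeding a logistic-regression classifier.
topic_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(speeches, topics)
sentiment_clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(speeches, sentiments)

print(topic_clf.predict(["The budget and tax burden on our economy"]))
print(sentiment_clf.predict(["Teachers deserve support"]))
```

Applied to tens of thousands of real speeches, the same two-model pattern yields the per-party topic shares and sentiment trends the study analyzes.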
Challenges and Mitigation Strategies for AI Startups: Leveraging Effectuation Theory in a Dynamic Environment

Challenges and Mitigation Strategies for AI Startups: Leveraging Effectuation Theory in a Dynamic Environment

Marleen Umminger, Alina Hafner
This study investigates the unique benefits and obstacles encountered by Artificial Intelligence (AI) startups. Through ten semi-structured interviews with founders in the DACH region, the research identifies key challenges and applies effectuation theory to explore effective strategies for navigating the uncertain and dynamic high-tech field.

Problem While investment in AI startups is surging, founders face unique challenges related to data acquisition, talent recruitment, regulatory hurdles, and intense competition. Existing literature often groups AI startups with general digital ventures, overlooking the specific difficulties stemming from AI's complexity and data dependency, which creates a need for tailored mitigation strategies.

Outcome - AI startups face core resource challenges in securing high-quality data, accessing affordable AI models, and hiring skilled technical staff like CTOs.
- To manage costs, founders often use publicly available data, form partnerships with customers for data access, and start with open-source or low-cost MVP models.
- Founders navigate competition by tailoring solutions to specific customer needs and leveraging personal networks, while regulatory uncertainty is managed by either seeking legal support or framing compliance as a competitive advantage to attract enterprise customers.
- Effectuation theory proves to be a relevant framework, as successful founders tend to leverage existing resources and networks (bird-in-hand), form strategic partnerships (crazy quilt), and adapt flexibly to unforeseen events (lemonade) rather than relying on long-term prediction.
Artificial intelligence, Entrepreneurial challenge, Effectuation theory, Qualitative research, AI startups, Mitigation strategies
BPMN4CAI: A BPMN Extension for Modeling Dynamic Conversational AI

BPMN4CAI: A BPMN Extension for Modeling Dynamic Conversational AI

Björn-Lennart Eger, Daniel Rose, and Barbara Dinter
This study develops and evaluates a standard-compliant extension for Business Process Model and Notation (BPMN) called BPMN4CAI. Using a Design Science Research methodology, the paper creates a framework that systematically extends existing BPMN elements to better model the dynamic and context-sensitive interactions of Conversational AI systems. The applicability of the BPMN4CAI framework is demonstrated through a case study in the insurance industry.

Problem Conversational AI systems like chatbots are increasingly integrated into business processes, but the standard modeling language, BPMN, is designed for predictable, deterministic processes. This creates a gap, as traditional BPMN cannot adequately represent the dynamic, context-aware dialogues and flexible decision-making inherent to modern AI. Businesses lack a standardized method to formally and accurately model processes involving these advanced AI agents.

Outcome - The study successfully developed BPMN4CAI, an extension to the standard BPMN, which allows for the formal modeling of Conversational AI in business processes.
- The new extension elements (e.g., Conversational Task, AI Decision Gateway, Human Escalation Event) facilitate the representation of adaptive decision-making, context management, and transparent interactions.
- A proof-of-concept demonstrated that BPMN4CAI improves model clarity and provides a semantic bridge for technical implementation compared to standard BPMN.
- The evaluation also identified limitations, noting that modeling highly dynamic, non-deterministic process paths and visualizing complex context transfers remains a challenge.
Conversational AI, BPMN, Business Process Modeling, Chatbots, Conversational Agent
Generative AI in Business Process Optimization: A Maturity Analysis of Business Applications

Generative AI in Business Process Optimization: A Maturity Analysis of Business Applications

Ralf Mengele
This study analyzes the current state of Generative AI (GAI) in the business world by systematically reviewing scientific literature. It identifies where GAI applications have been explored or implemented across the value chain and evaluates the maturity of these use cases. The goal is to provide managers and researchers with a clear overview of which business areas can already benefit from GAI and which require further development.

Problem While Generative AI holds enormous potential for companies, its recent emergence means it is often unclear where the technology can be most effectively applied. Businesses lack a comprehensive, systematic overview that evaluates the maturity of GAI use cases across different business processes, making it difficult to prioritize investment and adoption.

Outcome - The most mature and well-researched applications of Generative AI are in product development and in maintenance and repair within the manufacturing sector.
- The manufacturing segment as a whole exhibits the most mature GAI use cases compared to other parts of the business value chain.
- Technical domains show a higher level of GAI maturity and successful implementation than process areas dominated by interpersonal interactions, such as marketing and sales.
- GAI models like Generative Adversarial Networks (GANs) are particularly mature, proving highly effective for tasks like generating synthetic data for early damage detection in machinery.
- Research into GAI is still in its early stages for many business areas, with fields like marketing, sales, and human resources showing low implementation and maturity.
Generative AI, Business Processes, Optimization, Maturity Analysis, Literature Review, Manufacturing
AI at Work: Intelligent Personal Assistants in Work Practices for Process Innovation

AI at Work: Intelligent Personal Assistants in Work Practices for Process Innovation

Zeynep Kockar, Mara Burger
This paper explores how AI-based Intelligent Personal Assistants (IPAs) can be integrated into professional workflows to foster process innovation and improve adaptability. Utilizing the Task-Technology Fit (TTF) theory as a foundation, the research analyzes data from an interview study with twelve participants to create a framework explaining IPA adoption, their benefits, and their limitations in a work context.

Problem While businesses are increasingly adopting AI technologies, there is a significant research gap in understanding how Intelligent Personal Assistants specifically influence and innovate work processes in real-world professional settings. Prior studies have focused on adoption challenges or automation benefits, but have not thoroughly examined how these tools integrate with existing workflows and contribute to process adaptability.

Outcome - IPAs enhance workflow integration in four key areas: providing guidance and problem-solving, offering decision support and brainstorming, enabling workflow automation for efficiency, and facilitating language and communication tasks.
- The adoption of IPAs is primarily driven by social influence (word-of-mouth), the need for problem-solving and efficiency, curiosity, and prior academic or professional background with the technology.
- Significant barriers to wider adoption include data privacy and security concerns, challenges integrating IPAs with existing enterprise systems, and limitations in the AI's memory, reasoning, and creativity.
- The study developed a framework that illustrates how factors like work context, existing tools, and workflow challenges influence the adoption and impact of IPAs.
- Regular users tend to integrate IPAs for strategic and creative tasks, whereas occasional users leverage them for more straightforward or repetitive tasks like documentation.
Intelligent Personal Assistants, Process Innovation, Workflow, Task-Technology Fit Theory
Designing Scalable Enterprise Systems: Learning From Digital Startups

Designing Scalable Enterprise Systems: Learning From Digital Startups

Richard J. Weber, Max Blaschke, Maximilian Kalff, Noah Khalil, Emil Kobel, Oscar A. Ulbricht, Tobias Wuttke, Thomas Haskamp, and Jan vom Brocke
This study investigates how to design enterprise systems (ES) suitable for the rapidly changing needs of digital startups. Using a design science research approach involving 11 startups, the researchers identified key system requirements and developed nine design principles to create ES that are flexible, adaptable, and scalable.

Problem Traditional enterprise systems are often rigid, assuming business processes are stable and standardized. This design philosophy clashes with the needs of dynamic digital startups, which require highly adaptable systems to support continuous process evolution and rapid growth.

Outcome - The study identified core requirements for enterprise systems in startups, highlighting the need for agility, speed, and minimal overhead to support early-stage growth.
- Nine key design principles for scalable ES were developed, focusing on automation, integration, data-driven decision-making, flexibility, and user-centered design.
- A proposed ES architecture emphasizes a modular approach with a central workflow engine, enabling systems to adapt and scale with the startup.
- The research concludes that for startups, ES design must prioritize process adaptability and transparency over the rigid reliability typical of traditional systems.
Enterprise systems, Business process management, Digital entrepreneurship
Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign

Improving the Customer Onboarding Business Process at PT SEVIMA Using Heuristic Redesign

Ribka Devina Margaretha, Mahendrawathi ER, Sugianto Halim
This study addresses challenges in PT SEVIMA's customer onboarding process, where Account Managers (AMs) were not always aligned with client needs. Using a Business Process Management (BPM) Lifecycle approach combined with heuristic principles (Resequencing, Specialize, Control Addition, and Empower), the research redesigns the existing workflow. The goal is to improve the matching of AMs to clients, thereby increasing onboarding efficiency and customer satisfaction.

Problem PT SEVIMA, an IT startup for the education sector, struggled with an inefficient customer onboarding process. The primary issue was the frequent mismatch between the assigned Account Manager's skills and the specific, technical needs of the new client, leading to implementation delays and decreased satisfaction.

Outcome - Recommends grouping Account Managers (AMs) based on specialization profiles built from post-project evaluations.
- Suggests moving the initial client needs survey to occur before an AM is assigned to ensure a better match.
- Proposes involving the technical migration team earlier in the process to align strategies from the start.
- These improvements aim to enhance onboarding efficiency, reduce rework, and ultimately increase client satisfaction.
Business Process Redesign, Customer Onboarding, Knowledge-Intensive Process, Heuristics Method, Startup, BPM Lifecycle
Dealing Effectively with Shadow IT by Managing Both Cybersecurity and User Needs

Dealing Effectively with Shadow IT by Managing Both Cybersecurity and User Needs

Steffi Haag, Andreas Eckhardt
This study analyzes how companies can manage the use of unauthorized technology, known as Shadow IT. Through interviews with 44 employees across 34 companies, the research identifies four common approaches organizations take and provides 10 recommendations for IT leaders to effectively balance security risks with the needs of their employees.

Problem Employees often use unapproved apps and services (Shadow IT) to be more productive, but this creates significant cybersecurity risks like data leaks and malware infections. Companies struggle to eliminate this practice without hindering employee efficiency. The challenge lies in finding a balance between enforcing security policies and meeting the legitimate technology needs of users.

Outcome - Four distinct organizational archetypes for managing Shadow IT were identified, each resulting in different levels of unauthorized technology use (from very little to very frequent).
- Shadow IT users are categorized into two types: tech-savvy 'Goal-Oriented Actors' (GOAs) who carefully manage risks, and less aware 'Followers' who pose a greater threat.
- Effective management of Shadow IT is possible by aligning cybersecurity policies with user needs through transparent communication and responsive IT support.
- The study offers 10 practical recommendations, including accepting the existence of Shadow IT, creating dedicated user experience teams, and managing different user types differently to harness benefits while minimizing risks.
Shadow IT, Cybersecurity, IT Governance, User Needs, Risk Management, Organizational Culture, IT Policy
The Importance of Board Member Actions for Cybersecurity Governance and Risk Management

The Importance of Board Member Actions for Cybersecurity Governance and Risk Management

Jeffrey G. Proudfoot, W. Alec Cram, Stuart Madnick, Michael Coden
This study investigates the challenges boards of directors face in providing effective cybersecurity oversight. Drawing on in-depth interviews with 35 board members and cybersecurity experts, the paper identifies four core challenges and proposes ten specific actions boards can take to improve their governance and risk management capabilities.

Problem Corporate boards are increasingly held responsible for cybersecurity governance, yet they are often ill-equipped to handle this complex and rapidly evolving area. This gap between responsibility and expertise creates significant risk for organizations, as boards may struggle to ask the right questions, properly assess risk, and provide meaningful oversight.

Outcome - The study identified four primary challenges for boards: 1) inconsistent attitudes and governance approaches, 2) ineffective interaction dynamics with executives like the CISO, 3) a lack of sufficient cybersecurity expertise, and 4) navigating expanding and complex regulations.
- Boards must acknowledge that cybersecurity is an enterprise-wide operational risk, not just an IT issue, and gauge their organization's cybersecurity maturity against industry peers.
- Board members should focus on the business implications of cyber threats rather than technical details and must demand clear, jargon-free communication from executives.
- To address expertise gaps, boards should determine their need for expert advisors and actively seek training, such as tabletop cyberattack simulations.
- Boards must understand that regulatory compliance does not guarantee sufficient security and should guide the organization to balance compliance with proactive risk mitigation.
cybersecurity governance, board of directors, risk management, corporate governance, CISO, cyber risk, board expertise
Successfully Organizing AI Innovation Through Collaboration with Startups

Successfully Organizing AI Innovation Through Collaboration with Startups

Jana Oehmichen, Alexander Schult, John Qi Dong
This study examines how established firms can successfully partner with Artificial Intelligence (AI) startups to foster innovation. Based on an in-depth analysis of six real-world AI implementation projects across two startups, the research identifies five key challenges and provides corresponding recommendations for navigating these collaborations effectively.

Problem Established companies often lack the specialized expertise needed to leverage AI technologies, leading them to partner with startups. However, these collaborations introduce unique difficulties, such as assessing a startup's true capabilities, identifying high-impact AI applications, aligning commercial interests, and managing organizational change, which can derail innovation efforts.

Outcome - Challenge 1: Finding the right AI startup. Firms should overcome the inscrutability of AI startups by assessing credible quality signals, such as investor backing, academic achievements of staff, and success in prior contests, rather than relying solely on product demos.
- Challenge 2: Identifying the right AI use case. Instead of focusing on data availability, companies should collaborate with startups in workshops to identify use cases with the highest potential for value creation and business impact.
- Challenge 3: Agreeing on commercial terms. To align incentives and reduce information asymmetry, contracts should include performance-based or usage-based compensation, linking the startup's payment to the value generated by the AI solution.
- Challenge 4: Considering the impact on people. Firms must manage user acceptance by carefully selecting the degree of AI autonomy, involving employees in the design process, and clarifying the startup's role to mitigate fears of job displacement.
- Challenge 5: Overcoming implementation roadblocks. Depending on the company's organizational maturity, it should either facilitate deep collaboration between the startup and all internal stakeholders or use the startup to build new systems that bypass internal roadblocks entirely.
Artificial Intelligence, AI Innovation, Corporate-startup collaboration, Open Innovation, Digital Transformation, AI Startups
Managing Where Employees Work in a Post-Pandemic World

Managing Where Employees Work in a Post-Pandemic World

Molly Wasko, Alissa Dickey
This study examines how a large manufacturing company navigated the challenges of remote and hybrid work following the COVID-19 pandemic. Through an 18-month case study, the research explores the impacts on different employee groups (virtual, hybrid, and on-site) and provides recommendations for managing a blended workforce. The goal is to help organizations, particularly those with significant physical operations, balance new employee expectations with business needs.

Problem The widespread shift to remote work during the pandemic created a major challenge for businesses deciding on their long-term workplace strategy. Companies are grappling with whether to mandate a full return to the office, go fully remote, or adopt a hybrid model. This problem is especially complex for industries like manufacturing that rely on physical operations and cannot fully digitize their entire workforce.

Outcome - Employees successfully adapted information and communication technology (ICT) to perform many tasks remotely, effectively separating their work from a physical location.
- Contrary to expectations, on-site workers who remained at the physical workplace throughout the pandemic reported feeling the most isolated, the least valued, and the most dissatisfied.
- Despite demonstrated high productivity and employee desire for flexibility, business leaders still strongly prefer having employees co-located in the office, believing it is crucial for building and maintaining the company's core values.
- A 'Digital-Physical Intensity' framework was developed to help organizations classify jobs and make objective decisions about which roles are best suited for on-site, hybrid, or virtual work.
remote work, hybrid work, post-pandemic workplace, blended workforce, employee experience, digital transformation, organizational culture
Managing IT Challenges When Scaling Digital Innovations

Managing IT Challenges When Scaling Digital Innovations

Sara Schiffer, Martin Mocker, Alexander Teubner
This paper presents a case study on 'freeyou,' the digital innovation spinoff of a major German insurance company. It examines how the company successfully transitioned its online-only car insurance product from an initial 'exploring' phase to a profitable 'scaling' phase. The study highlights the necessary shifts in IT approaches, organizational structure, and data analytics required to manage this transition.

Problem Many digital innovations fail when they move from the idea validation stage to the scaling stage, where they need to become profitable and handle large volumes of users. This study addresses the common IT-related challenges that cause these failures and provides practical guidance for managers on how to navigate this critical transition successfully.

Outcome - Prepare for a significant cultural shift: Management must explicitly communicate the change in focus from creative exploration and prototyping to efficient and profitable operations to align the team and manage expectations.
- Rearchitect IT systems for scalability: Systems built for speed and flexibility in the exploration phase must be redesigned or replaced with robust, efficient, and reliable platforms capable of handling a large user base.
- Adjust team composition and skills: The transition to scaling requires different expertise, shifting from IT generalists who explore new technologies to specialists focused on process automation, data analytics, and stable operations. Companies must be prepared to bring in new talent and restructure teams accordingly.
digital innovation, scaling, IT management, organizational change, case study, insurtech, innovation lifecycle
Identifying and Filling Gaps in Operational Technology Cybersecurity

Identifying and Filling Gaps in Operational Technology Cybersecurity

Nico Abbatemarco, Hans Brechbühl
This study identifies critical gaps in Operational Technology (OT) cybersecurity by drawing on insights from 36 leaders across 14 global corporations. It analyzes the organizational challenges that hinder the successful implementation of OT cybersecurity, going beyond purely technical issues. The research provides practical recommendations for managers to bridge these security gaps effectively.

Problem As industrial companies embrace 'Industry 4.0', their operational technology (OT) systems, which control physical processes, are becoming increasingly connected to digital networks. This connectivity introduces significant cybersecurity risks that can halt production and cause substantial financial loss, yet many organizations struggle to implement robust security due to organizational, rather than technical, obstacles.

Outcome - Cybersecurity in OT projects is often treated as an afterthought, bolted on at the end rather than integrated from the start.
- Cybersecurity teams typically lack the authority, budget, and top management support needed to enforce security measures in OT environments.
- There is a severe shortage of personnel with expertise in both OT and cybersecurity, and a cultural disconnect exists between IT and OT teams.
- Priorities are often misaligned, with OT personnel focusing on uptime and productivity, viewing security measures as hindrances.
- The tangible benefits of cybersecurity are difficult to recognize and quantify, making it hard to justify investments until a failure occurs.
Operational Technology, OT Cybersecurity, Industry 4.0, Cybersecurity Gaps, Risk Management, Industrial Control Systems, Technochange
Identifying and Addressing Senior Executives' Different Perceptions of the Value of IT Investments

Identifying and Addressing Senior Executives' Different Perceptions of the Value of IT Investments

Alastair Tipple, Hameed Chughtai, Jonathan H. Klein
This study explores how Chief Information Officers (CIOs) can uncover and manage differing opinions among senior executives regarding the value of IT investments. Using a case study at a U.K. firm, the researchers applied a method based on Repertory (Rep) Grid analysis and heat maps to make these perception gaps visible and actionable.

Problem The full benefits of IT investments are often not realized because senior leaders lack a shared understanding of their value and effectiveness. This misalignment can undermine project support and success, yet CIOs typically lack practical tools to objectively identify and resolve these hidden differences in perception within the management team.

Outcome - Repertory (Rep) Grids combined with heat maps are a practical and effective technique for making executives' differing perceptions of IT value explicit and visible.
- The method provides a structured, data-driven foundation for CIOs to have tailored, objective conversations with individual leaders to build consensus.
- By creating a common set of criteria for evaluation, the process helps align the senior management team and fosters a shared understanding of IT's strategic contribution.
- The visual nature of heat maps helps focus discussions on specific points of disagreement, reducing emotional conflict and accelerating the path to a common ground.
- The approach allows CIOs to develop targeted action plans to address specific gaps in understanding, ultimately improving support for and the realization of value from IT investments.
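The core of the technique above is simple to sketch: collect each executive's ratings of an IT investment against a shared set of evaluation constructs, then surface the constructs where ratings diverge most. The following minimal Python sketch is illustrative only; the executives, constructs, and ratings are hypothetical and not drawn from the study, and a real Rep Grid analysis would elicit the constructs from the executives themselves.

```python
# Hypothetical Rep-Grid-style data: one row of 1-5 ratings per executive,
# one column per evaluation construct for a single IT investment.
executives = ["CEO", "CFO", "CIO", "COO"]
constructs = ["Strategic value", "Cost control", "Risk reduction", "User satisfaction"]

grid = [
    [5, 2, 4, 3],  # CEO
    [1, 5, 3, 2],  # CFO
    [5, 3, 5, 4],  # CIO
    [3, 4, 2, 3],  # COO
]

# Per-construct spread (max minus min) is a simple "heat" score: the larger
# the spread, the stronger the disagreement a heat map would highlight.
spread = [max(col) - min(col) for col in zip(*grid)]
hotspot = constructs[spread.index(max(spread))]

print(dict(zip(constructs, spread)))
print("Largest perception gap:", hotspot)
```

In practice the spread scores would be rendered as a color-coded heat map rather than printed, so that the CIO can walk individual leaders through exactly the cells where their view diverges from their peers'.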
IT investment value, senior management perception, Repertory Grid, heat maps, CIO, strategic alignment, social alignment
How WashTec Explored Digital Business Models

How WashTec Explored Digital Business Models

Christian Ritter, Anna Maria Oberländer, Bastian Stahl, Björn Häckel, Carsten Klees, Ralf Koeppe, and Maximilian Röglinger
This case study describes how WashTec, a global leader in the car wash industry, successfully explored and developed new digital business models. The paper outlines the company's structured four-phase exploration approach—Activation, Inspiration, Evaluation, and Monetization—which serves as a blueprint for digital innovation. This process offers a guide for other established, incumbent companies seeking to navigate their own digital transformation.

Problem Many established companies excel at enhancing their existing business models but struggle to explore and develop entirely new digital ones. This creates a significant challenge for traditional, hardware-centric firms needing to adapt to a digital landscape. The study addresses how an incumbent company can overcome this inertia and systematically innovate to create new value propositions and maintain a competitive edge.

Outcome - WashTec developed a structured four-phase approach (Activation, Inspiration, Evaluation, Monetization) that enabled the successful exploration of digital business models.
- The process resulted in three distinct digital business models: Automated Chemical Supply, a Digital Wash Platform, and In-Car Washing Services.
- The study offers five recommendations for other incumbent firms: set clear boundaries for exploration, utilize digital-savvy pioneers while involving the whole organization, anchor the process with strategic symbols, consider value beyond direct revenue, and integrate exploration objectives into the core business.
digital transformation, business model innovation, incumbent firms, case study, WashTec, digital strategy, exploration
How to Successfully Navigate Crisis-Driven Digital Transformations

How to Successfully Navigate Crisis-Driven Digital Transformations

Ralf Plattfaut, Vincent Borghoff
This study investigates how digital transformations initiated by a crisis, such as the COVID-19 pandemic, differ from transformations under normal circumstances. Through case studies of three German small and medium-sized organizations (the 'Mittelstand'), the research identifies challenges to established transformation 'logics' and provides recommendations for successfully managing these events.

Problem While digital transformation is widely studied, there is little understanding of how the process works when driven by an external crisis rather than strategic planning. The COVID-19 pandemic created an urgent, unprecedented need for businesses to digitize their operations, but existing frameworks were ill-suited for this high-pressure, uncertain environment.

Outcome - The trigger for digital transformation in a crisis is the external shock itself, not the emergence of new technology.
- Decision-making shifts from slow, consensus-based strategic planning to rapid, top-down ad-hoc reactions to ensure survival.
- Major organizational restructuring is deferred; instead, companies form small, agile steering groups to manage the transformation efforts.
- Normal organizational barriers like inertia and resistance to change significantly decrease during the crisis due to the clear and urgent need for action.
- After the crisis, companies must actively work to retain the agile practices learned and manage the potential re-emergence of resistance as urgency subsides.
Digital Transformation, Crisis Management, Organizational Change, German Mittelstand, SMEs, COVID-19, Business Resilience
How to Design a Better Cybersecurity Readiness Program

How to Design a Better Cybersecurity Readiness Program

Kaveh Abhari, Morteza Safaei Pour, Hossein Shirazi
This study explores the common pitfalls of four types of cybersecurity training by interviewing employees at large accounting firms. It identifies four unintended negative consequences of mistraining and overtraining and, in response, proposes the LEAN model, a new framework for designing more effective cybersecurity readiness programs.

Problem Organizations invest heavily in cybersecurity readiness programs, but these initiatives often fail due to poor design, leading to mistraining and overtraining. This not only makes the training ineffective but can also create adverse effects like employee anxiety and fatigue, paradoxically amplifying an organization's cyber vulnerabilities instead of reducing them.

Outcome - Conventional cybersecurity training often leads to four adverse effects on employees: threat anxiety, security fatigue, risk passivity, and cyber hesitancy.
- These individual effects cause significant organizational problems, including erosion of individual performance, fragmentation of team dynamics, disruption of client experiences, and stagnation of the security culture.
- The study proposes the LEAN model to counteract these issues, based on four strategies: Localize, Empower, Activate, and Normalize.
- The LEAN model recommends tailoring training to specific roles (Localize), fostering ownership and authority (Empower), promoting coordinated action through collaborative exercises (Activate), and embedding security into daily operations to build a proactive culture (Normalize).
cybersecurity training, cybersecurity readiness, mistraining, security culture, employee behavior, LEAN model
How Siemens Democratized Artificial Intelligence

How Siemens Democratized Artificial Intelligence

Benjamin van Giffen, Helmuth Ludwig
This paper presents an in-depth case study on how the global technology company Siemens successfully moved artificial intelligence (AI) projects from pilot stages to full-scale, value-generating applications. The study analyzes Siemens' journey through three evolutionary stages, focusing on the concept of 'AI democratization', which involves integrating the unique skills of domain experts, data scientists, and IT professionals. The findings provide a framework for how other organizations can build the necessary capabilities to adopt and scale AI technologies effectively.

Problem Many companies invest in artificial intelligence but struggle to progress beyond small-scale prototypes and pilot projects. This failure to scale prevents them from realizing the full business value of AI. The core problem is the difficulty in making modern AI technologies broadly accessible to employees, which is necessary to identify, develop, and implement valuable applications across the organization.

Outcome - Siemens successfully scaled AI by evolving through three stages: 1) Tactical AI pilots, 2) Strategic AI enablement, and 3) AI democratization for business transformation.
- Democratizing AI, defined as the collaborative integration of domain experts, data scientists, and IT professionals, is crucial for overcoming key adoption challenges such as defining AI tasks, managing data, accepting probabilistic outcomes, and addressing 'black-box' fears.
- Key initiatives that enabled this transformation included establishing a central AI Lab to foster co-creation, an AI Academy for upskilling employees, and developing a global AI platform to support scaling.
- This approach allowed Siemens to transform manufacturing processes with predictive quality control and create innovative healthcare products like the AI-Rad Companion.
- The study concludes that democratizing AI creates value by rooting AI exploration in deep domain knowledge and reduces costs by creating scalable infrastructures and processes.
Artificial Intelligence, AI Democratization, Digital Transformation, Organizational Capability, Case Study, AI Adoption, Siemens
How Shell Fueled Digital Transformation by Establishing DIY Software Development

How Shell Fueled Digital Transformation by Establishing DIY Software Development

Noel Carroll, Mary Maher
This paper presents a case study on how the international energy company Shell successfully implemented a large-scale digital transformation. It details their 'Do It Yourself' (DIY) program, which empowers employees to create their own software applications using low-code/no-code platforms. The study analyzes Shell's approach and provides recommendations for other organizations looking to leverage citizen development to drive digital initiatives.

Problem Many organizations struggle with digital transformation, facing high failure rates and uncertainty. These initiatives often fail to engage the broader workforce, creating a bottleneck within the IT department and a disconnect from immediate business needs. This study addresses how a large, traditional company can overcome these challenges by democratizing technology and empowering its employees to become agents of change.

Outcome - Shell successfully drove digital transformation by establishing a 'Do It Yourself' (DIY) citizen development program, empowering non-technical employees to build their own applications.
- A structured four-phase process (Sensemaking, Stakeholder Participation, Collective Action, Evaluating Progress) was critical for normalizing and scaling the program across the organization.
- Implementing a risk-based governance framework, the 'DIY Zoning Model', allowed Shell to balance employee autonomy and innovation with necessary security and compliance controls.
- The DIY program delivered significant business value, including millions of dollars in cost savings, improved operational efficiency and safety, and increased employee engagement.
- Empowering employees with low-code tools not only solved immediate business problems but also helped attract and retain new talent from the 'digital generation'.
Digital Transformation, Citizen Development, Low-Code/No-Code, Change Management, Case Study, Shell, Organizational Culture
How Large Companies Can Help Small and Medium-Sized Enterprise (SME) Suppliers Strengthen Cybersecurity

Jillian K. Kwong, Keri Pearlson
This study investigates the cybersecurity challenges faced by small and medium-sized enterprise (SME) suppliers and proposes actionable strategies for large companies to help them improve. Based on interviews with executives and cybersecurity experts, the paper identifies key barriers SMEs encounter and outlines five practical actions large firms can take to strengthen their supply chain's cyber resilience.

Problem Large companies increasingly require their smaller suppliers to meet the same stringent cybersecurity standards they do, creating a significant burden for SMEs with limited resources. This gap creates a major security vulnerability, as attackers often target less-secure SMEs as a backdoor to access the networks of larger corporations, posing a substantial third-party risk to entire supply chains.

Outcome - SME suppliers are often unable to meet the security standards of their large partners due to four key barriers: unfriendly regulations, organizational culture clashes, variability in cybersecurity frameworks, and misalignment of business processes.
- Large companies can proactively strengthen their supply chain by providing SMEs with the resources and expertise needed to understand and comply with regulations.
- Creating incentives for meeting security benchmarks is more effective than penalizing suppliers for non-compliance.
- Large firms should develop programs to help SMEs elevate their cybersecurity culture and align security processes with their own.
- Coordinating with other large companies to standardize cybersecurity frameworks and assessment procedures can significantly reduce the compliance burden on SMEs.
Cybersecurity, Supply Chain Management, Third-Party Risk, Small and Medium-Sized Enterprises (SMEs), Cyber Resilience, Vendor Risk Management
How Boards of Directors Govern Artificial Intelligence

Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.

Problem Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.

Outcome - Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence.
- Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings.
- Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions.
- To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges.
- Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
Fueling Digital Transformation with Citizen Developers and Low-Code Development

Ainara Novales, Rubén Mancha
This study examines how organizations can leverage low-code development platforms and citizen developers (non-technical employees) to accelerate digital transformation. Through in-depth case studies of two early adopters, Hortilux and Volvo Group, along with interviews from seven other firms, the paper identifies key strategies and challenges. The research provides five actionable recommendations for business leaders to successfully implement low-code initiatives.

Problem Many organizations struggle to keep pace with digital innovation due to a persistent shortage and high cost of professional software developers. This creates a significant bottleneck in application development, slowing down responsiveness to customer needs and hindering digital transformation goals. The study addresses how to overcome this resource gap by empowering business users to create their own software solutions.

Outcome - Set a clear strategy for selecting the right use cases for low-code development, starting with simple, low-complexity tasks like process automation.
- Identify, assign, and provide training to upskill tech-savvy employees into citizen developers, ensuring they have the support and guidance needed.
- Establish a dedicated low-code team or department to provide organization-wide support, training, and governance for citizen development initiatives.
- Ensure the low-code architecture is extendable, reusable, and up-to-date to avoid creating complex, siloed applications that are difficult to maintain.
- Evaluate the technical requirements and constraints of different solutions to select the low-code platform that best fits the organization's specific needs.
low-code development, citizen developers, digital transformation, IT strategy, application development, software development bottleneck, case study
F. Warren McFarlan's Pioneering Role in Impacting IT Management Through Academic Research

Blake Ives, Mary Lacity, Jeanne Ross
This article chronicles the distinguished career of F. Warren McFarlan, a seminal figure in the field of IT management. Based on interviews with McFarlan and his colleagues, as well as archival material, the paper details his immense contribution to bridging the divide between academic research and practical IT management. It highlights his methods, influential frameworks, and enduring legacy in educating generations of IT practitioners and researchers.

Problem There is often a significant gap between academic research and the practical needs of business managers. Academics typically focus on theory and description, while business leaders require actionable, prescriptive insights. This paper addresses this challenge by examining the career of F. Warren McFarlan as a case study in how to successfully produce practice-based research that is valuable to both the academic and business communities.

Outcome - F. Warren McFarlan was a foundational figure who played a pioneering role in establishing IT management as a respected academic and business discipline.
- He effectively bridged the gap between academia and industry by developing practical frameworks and using the case study method to teach senior executives how to manage technology strategically.
- Through his extensive body of research, including over 300 cases and numerous influential articles, he provided managers with accessible tools to assess IT project risk and align technology with business strategy.
- McFarlan was instrumental in championing academic outlets for practice-based research, notably serving as editor-in-chief of MIS Quarterly during a critical period to ensure its survival and relevance.
- His legacy includes not only his own research but also his mentorship of junior faculty and his role in building the IT management program at Harvard Business School.
F. Warren McFarlan, IT Management, Practice-Based Research, Academic-Practitioner Gap, Case Study Research, Harvard Business School, Strategic IT
Experiences and Lessons Learned at a Small and Medium-Sized Enterprise (SME) Following Two Ransomware Attacks

Donald Wynn, Jr., W. David Salisbury, Mark Winemiller
This paper presents a case study of a small U.S. manufacturing company that suffered two distinct ransomware attacks four years apart, despite strengthening its cybersecurity after the first incident. The study analyzes both attacks, the company's response, and the lessons learned from the experiences. The goal is to provide actionable recommendations to help other small and medium-sized enterprises (SMEs) improve their defenses and recovery strategies against evolving cyber threats.

Problem Small and medium-sized enterprises (SMEs) face unique cybersecurity challenges due to significant resource constraints compared to larger corporations. They often lack the financial capacity, specialized expertise, and trained workforce to implement and maintain adequate technical and procedural controls. This vulnerability is increasingly exploited by cybercriminals, with a high percentage of ransomware attacks specifically targeting these smaller, less-defended businesses.

Outcome - All businesses are targets: The belief in 'security by obscurity' is a dangerous misconception; any online presence makes a business a potential target for cyberattacks.
- Comprehensive backups are essential: Backups must include not only data but also system configurations and software to enable a full and timely recovery.
- Management buy-in is critical: Senior leadership must understand the importance of cybersecurity and provide the necessary funding and organizational support for robust defense measures.
- People are a key vulnerability: Technical defenses can be bypassed by human error, as demonstrated by the second attack which originated from a phishing email, underscoring the need for continuous employee training.
- Cybercrime is an evolving 'arms race': Attackers are becoming increasingly sophisticated, professional, and organized, requiring businesses to continually adapt and strengthen their defenses.
ransomware, cybersecurity, SME, case study, incident response, cyber attack, information security
Evolution of the Metaverse

Mary Lacity, Jeffrey K. Mullins, Le Kuai
This paper explores the potential opportunities and risks of the emerging metaverse for business and society through interviews with leading researchers. The study analyzes the current state of metaverse technologies and their potential business applications, and highlights critical governance and ethical considerations for IT practitioners.

Problem Following renewed corporate interest and massive investment, the concept of the metaverse has generated significant hype, but businesses lack clarity on its definition, tangible value, and long-term impact. This creates uncertainty for leaders about how to approach the technology, differentiate it from past virtual worlds, and navigate the significant risks of surveillance, data privacy, and governance.

Outcome - The business value of the metaverse centers on providing richer, safer experiences for customers and employees, reducing costs, and meeting organizational goals through applications like immersive training, virtual collaboration, and digital twins.
- Companies face a critical choice between centralized 'Web 2' platforms, which monetize user data, and decentralized 'Web 3' models that offer users more control over their digital assets and identity.
- The metaverse can improve employee onboarding, training for dangerous tasks, and collaboration, offering a greater sense of presence than traditional videoconferencing.
- Key challenges include the lack of a single, interoperable metaverse (which is likely over a decade away), limited current capabilities of decentralized platforms, and the potential for negative consequences like addiction and surveillance.
- Businesses are encouraged to explore potential use cases, participate in creating open standards, and consider both the immense promise and potential perils before making significant investments.
Metaverse, Virtual Worlds, Augmented Reality, Web 3.0, Digital Twin, Business Strategy, Governance
Boundary Management Strategies for Leading Digital Transformation in Smart Cities

Jocelyn Cranefield, Jan Pries-Heje
This study investigates the leadership challenges inherent in smart city digital transformations. Based on in-depth interviews with leaders from 12 cities, the research identifies common obstacles and describes three 'boundary management' strategies leaders use to overcome them and drive sustainable change.

Problem Cities struggle to scale up smart city initiatives beyond the pilot stage because of a fundamental conflict between traditional, siloed city bureaucracy and the integrated, data-driven logic of a smart city. This clash creates significant organizational, political, and cultural barriers that impede progress and prevent the realization of long-term benefits for citizens.

Outcome - Identifies eight key challenges for smart city leaders, including misalignment of municipal structures, restrictive data policies, resistance to innovation, and city politics.
- Finds that successful smart city leaders act as expert 'boundary spanners,' navigating the divide between the traditional institutional logic of city governance and the emerging logic of smart cities.
- Proposes a framework of three boundary management strategies leaders use: 1) Boundary Bridging to generate buy-in and knowledge, 2) Boundary Buffering to protect projects from resistance, and 3) Boundary Building to create new, sustainable governance structures.
smart cities, digital transformation, leadership, boundary management, institutional logic, urban governance, innovation
Adopt Agile Cybersecurity Policymaking to Counter Emerging Digital Risks

Masoud Afshari-Mofrad, Alireza Amrollahi, Babak Abedin
This study investigates the need for flexibility and speed in creating and updating cybersecurity rules within organizations. Through in-depth interviews with cybersecurity professionals, the research identifies key areas of digital risk and provides practical recommendations for businesses to develop more agile and adaptive security policies.

Problem In the face of rapidly evolving cyber threats, many organizations rely on static, outdated cybersecurity policies that are only updated after a security breach occurs. This reactive approach leaves them vulnerable to new attack methods, risks from new technologies, and threats from business partners, creating a significant security gap.

Outcome - Update cybersecurity policies to address risks from outdated legacy systems by implementing modern digital asset and vulnerability management.
- Adapt policies to address emerging technologies like AI by enhancing technology scouting and establishing a resilient cyber risk management framework.
- Strengthen policies for third-party vendors by conducting agile risk assessments and regularly reviewing security controls in contracts.
- Build flexible policies for disruptive external events (like pandemics or geopolitical tensions) through continuous employee training and robust business continuity plans.
agile cybersecurity, cybersecurity policymaking, digital risk, adaptive security, risk management, third-party risk, legacy systems
Promoting Cybersecurity Information Sharing Across the Extended Value Chain

Olga Biedova, Lakshmi Goel, Justin Zhang, Steven A. Williamson, Blake Ives
This study analyzes an alternative cybersecurity information-sharing forum centered on the extended value chain of a single company in the forest and paper products industry. The paper explores the forum's design, execution, and challenges to provide recommendations for similar company-specific collaborations. The goal is to enhance cybersecurity resilience across interconnected business partners by fostering a more trusting and relevant environment for sharing best practices.

Problem As cyberthreats become more complex, industries with interconnected information and operational technologies (IT/OT) face significant vulnerabilities. Despite government and industry calls for greater collaboration, inter-organizational cybersecurity information sharing remains sporadic due to concerns over confidentiality, competitiveness, and lack of trust. Standard sector-based sharing initiatives can also be too broad to address the specific needs of a company and its unique value chain partners.

Outcome - A company-led, value-chain-specific cybersecurity forum is an effective alternative to broader industry groups, fostering greater trust and more relevant discussions among business partners.
- Key success factors for such a forum include inviting the right participants (security strategy leaders), establishing clear ground rules to encourage open dialogue, and using external facilitators to ensure neutrality.
- The forum successfully shifted the culture from one of distrust to one of transparency and collaboration, leading participants to be more open about sharing experiences, including previous security breaches.
- Participants gained valuable insights into the security maturity of their partners, leading to tangible improvements in cybersecurity practices, such as updating security playbooks, adopting new risk metrics, and enhancing third-party risk management.
- The collaborative model strengthens the entire value chain, as companies learn from each other's strategies, tools, and policies to collectively improve their defense against common threats.
cybersecurity, information sharing, extended value chain, supply chain security, cyber resilience, forest products industry, inter-organizational collaboration
Unraveling the Role of Cyber Insurance in Fortifying Organizational Cybersecurity

Wojciech Strzelczyk, Karolina Puławska
This study explores how cyber insurance serves as more than just a financial tool for compensating victims of cyber incidents. Based on in-depth interviews with insurance industry experts and policy buyers, the research analyzes how insurance improves an organization's cybersecurity across three distinct stages: pre-purchase, post-purchase, and post-cyberattack.

Problem As businesses increasingly rely on digital technologies, they face a growing risk of cyberattacks that can lead to severe financial losses, reputational harm, and regulatory penalties. Many companies possess inadequate cybersecurity measures, and there is a need to understand how external mechanisms like insurance can proactively strengthen defenses rather than simply covering losses after an attack.

Outcome - Cyber insurance actively enhances an organization's security posture, not just providing financial compensation after an incident.
- The pre-purchase underwriting process forces companies to rigorously evaluate and improve their cybersecurity practices to even qualify for a policy.
- Post-purchase, insurers require continuous improvement through audits and training, often providing resources and expertise to help clients strengthen their defenses.
- Following an attack, cyber insurance provides access to critical incident management services, including expert support for damage containment, system restoration, and post-incident analysis to prevent future breaches.
cyber insurance, cybersecurity, risk management, organizational cybersecurity, incident response, underwriting
How HireVue Created "Glass Box" Transparency for its AI Application

Monideepa Tarafdar, Irina Rets, Lindsey Zuloaga, Nathan Mondragon
This paper presents a case study on HireVue, a company that provides an AI application for assessing job interviews. It describes the transparency-related challenges HireVue faced and explains how it addressed them by developing a "glass box" approach, which focuses on making the entire system of AI development and deployment understandable, rather than just the technical algorithm.

Problem AI applications used for critical decisions, such as hiring, are often perceived as technical "black boxes." This lack of clarity creates significant challenges for businesses in trusting the technology, ensuring fairness, mitigating bias, and complying with regulations, which hinders the responsible adoption of AI in recruitment.

Outcome - The study introduces a "glass box" model for AI transparency, which shifts focus from the technical algorithm to the broader sociotechnical system, including design processes, client interactions, and organizational functions.
- HireVue implemented five types of transparency practices: pre-deployment client-focused, internal, post-deployment client-focused, knowledge-related, and audit-related.
- This multi-faceted approach helps build trust with clients, regulators, and applicants by providing clarity on the AI's application, limitations, and validation processes.
- The findings serve as a practical guide for other AI software companies on how to create effective and comprehensive transparency for their own applications, especially in high-stakes fields.
AI transparency, algorithmic hiring, glass box model, ethical AI, recruitment technology, HireVue, case study
How Germany Successfully Implemented Its Intergovernmental FLORA System

Julia Amend, Simon Feulner, Alexander Rieger, Tamara Roth, Gilbert Fridgen, Tobias Guggenberger
This paper presents a case study on Germany's implementation of FLORA, a blockchain-based IT system designed to manage the intergovernmental processing of asylum seekers. It analyzes how the project navigated legal and technical challenges across different government levels. Based on the findings, the study offers three key recommendations for successfully deploying similar complex, multi-agency IT systems in the public sector.

Problem Governments face significant challenges in digitalizing services that require cooperation across different administrative layers, such as federal and state agencies. Legal mandates often require these layers to maintain separate IT systems, which complicates data exchange and modernization. Germany's asylum procedure previously relied on manually sharing Excel-based lists between agencies, a process that was slow, error-prone, and created data privacy risks.

Outcome - FLORA replaced inefficient Excel-based lists with a decentralized system, enabling a more efficient and secure exchange of procedural information between federal and state agencies.
- The system created a 'single procedural source of truth,' which significantly improved the accuracy, completeness, and timeliness of information for case handlers.
- By streamlining information exchange, FLORA reduced the time required for initial stages of the asylum procedure by up to 50%.
- The blockchain-based architecture enhanced legal compliance by reducing procedural errors and providing a secure way to manage data that adheres to strict GDPR privacy requirements.
- The study recommends that governments consider decentralized IT solutions to avoid the high hidden costs of centralized systems, deploy modular solutions to break down legacy architectures, and use a Software-as-a-Service (SaaS) model to lower initial adoption barriers for agencies.
intergovernmental IT systems, digital government, blockchain, public sector innovation, case study, asylum procedure, Germany
The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems

Oliver Krancher, Per Rådberg Nagbøl, Oliver Müller
This study examines the strategies employed by the Danish Business Authority (DBA), a pioneering public-sector adopter of AI, for the continuous evaluation of its AI systems. Through a case study of the DBA's practices and their custom X-RAI framework, the paper provides actionable recommendations for other organizations on how to manage AI systems responsibly after deployment.

Problem AI systems can degrade in performance over time, a phenomenon known as model drift, leading to inaccurate or biased decisions. Many organizations lack established procedures for the ongoing monitoring and evaluation of AI systems post-deployment, creating risks of operational failures, financial losses, and non-compliance with regulations like the EU AI Act.

Outcome - Organizations need a multi-faceted approach to AI evaluation, as single strategies like human oversight or periodic audits are insufficient on their own.
- The study presents the DBA's three-stage evaluation process: pre-production planning, in-production monitoring, and formal post-implementation evaluations.
- A key strategy is 'enveloping' AI systems and their evaluations, which means setting clear, pre-defined boundaries for the system's use and how it will be monitored to prevent misuse and ensure accountability.
- The DBA uses an MLOps platform and an 'X-RAI' (Transparent, Explainable, Responsible, Accurate AI) framework to ensure traceability, automate deployments, and guide risk assessments.
- Formal evaluations should use deliberate sampling, including random and negative cases, and 'blind' reviews (where caseworkers assess a case without seeing the AI's prediction) to mitigate human and machine bias.
AI evaluation, AI governance, model drift, responsible AI, MLOps, public sector AI, case study
How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts

Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder environments such as humanitarian crises. Researchers analyzed the deployment of six AI tools, identifying significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.

Problem Traditional approaches to AI safety focus on objective, technical risks like hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks the subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. These unresolved conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended negative consequences for vulnerable populations.

Outcome - The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.
Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework
Promises and Perils of Generative AI in Cybersecurity

Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. It explores the dual nature of GenAI as a tool for both attackers and defenders, presenting a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.

Problem With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.

Outcome - GenAI is a double-edged sword, capable of both triggering and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture.
- Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education.
- Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly.
- A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset.
- Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification.
- Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
How to Operationalize Responsible Use of Artificial Intelligence

Lorenn P. Ruster, Katherine A. Daniell
This study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices. Based on participatory action research with two startups, the paper provides a roadmap for crafting specific responsibility pledges and embedding them into organizational processes, moving beyond abstract ethical statements.

Problem Many organizations are committed to the responsible use of AI but struggle with how to implement it practically, creating a significant "principle-to-practice gap". This confusion can lead to inaction or superficial efforts known as "ethics-washing," where companies appear ethical without making substantive changes. The study addresses the lack of clear, actionable guidance for businesses, especially smaller ones, on where to begin.

Outcome - Presents a five-phase process for operationalizing responsible AI: 1) Buy-in, 2) Intuition-building, 3) Pledge-crafting, 4) Pledge-communicating, and 5) Pledge-embedding.
- Argues that responsible AI should be approached as a systems problem, considering organizational mindsets, culture, and processes, not just technical fixes.
- Recommends that organizations create contextualized, action-oriented "pledges" rather than simply adopting generic AI principles.
- Finds that investing in responsible AI practices early, even in small projects, helps build organizational capability and transfers to future endeavors.
- Provides a framework for businesses to navigate communication challenges, balancing transparency with commercial interests to build user trust.
Responsible AI, AI Ethics, Operationalization, Systems Thinking, AI Governance, Pledge-making, Startups
Successfully Mitigating AI Management Risks to Scale AI Globally

Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.

Problem Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.

Outcome - The study identifies five critical technology management risks that undermine AI scaling:
- Missing or falsely evaluated potential AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
How Siemens Empowered Workforce Re- and Upskilling Through Digital Learning

Leonie Rebecca Freise, Eva Ritz, Ulrich Bretschneider, Roman Rietsche, Gunter Beitinger, and Jan Marco Leimeister
This case study examines how Siemens successfully implemented a human-centric, bottom-up approach to employee reskilling and upskilling through digital learning. The paper presents a four-phase model for leveraging information systems to address skill gaps and provides five key recommendations for organizations to foster lifelong learning in dynamic manufacturing environments.

Problem The rapid digital transformation in manufacturing is creating a significant skills gap, with a high percentage of companies reporting shortages. Traditional training methods are often not scalable or adaptable enough to meet these evolving demands, presenting a major challenge for organizations trying to build a future-ready workforce.

Outcome - The study introduces a four-phase model for developing human-centric digital learning: 1) Recognizing employee needs, 2) Identifying key employee traits (like self-regulation and attitude), 3) Developing tailored strategies, and 4) Aligning strategies with organizational goals.
- Key employee needs for successful digital learning include task-oriented courses, peer exchange, on-the-job training, regular feedback, personalized learning paths, and micro-learning formats ('learning nuggets').
- The paper proposes four distinct learning strategies based on employees' attitude and self-regulated learning skills, ranging from community mentoring for those low in both, to personalized courses for those high in both.
- Five practical recommendations for companies are provided: 1) Foster a lifelong learning culture, 2) Tailor digital learning programs, 3) Create dedicated spaces for collaboration, 4) Incorporate flexible training formats, and 5) Use analytics to provide feedback.
digital learning, upskilling, reskilling, workforce development, human-centric, manufacturing, case study
A Three-Layer Model for Successful Organizational Digital Transformation

Ferry Nolte, Alexander Richter, Nadine Guhr
This study analyzes the digital transformation journey on the shop floor of automotive supplier Continental AG. Based on this case study, the paper proposes a practical three-layer model—IT evolution, work practices evolution, and mindset evolution—to guide organizations through successful digital transformation. The model provides recommended actions for aligning these layers to reduce implementation risks and improve outcomes.

Problem Many industrial companies struggle with digital transformation, particularly on the shop floor, where environments are often poorly integrated with digital technology. These transformation efforts are frequently implemented as a 'big bang,' overwhelming workers with new technologies and revised work practices, which can lead to resistance, failure to adopt new systems, and the loss of experienced employees.

Outcome - Successful digital transformation requires a coordinated and synchronized evolution across three interdependent layers: IT, work practices, and employee mindset.
- The paper introduces a practical three-layer model (IT Evolution, Work Practices Evolution, and Mindset Evolution) as a roadmap for managing the complexities of organizational change.
- A one-size-fits-all approach fails; organizations must provide tailored support, tools, and training that cater to the diverse skill levels and starting points of all employees, especially lower-skilled workers.
- To ensure adoption, work processes and performance metrics must be strategically adapted to integrate new digital tools, rather than simply layering technology on top of old workflows.
- A cultural shift is fundamental; success depends on moving away from rigid hierarchies to a culture that empowers employees, encourages experimentation, and fosters a collective readiness for continuous change.
Digital Transformation, Organizational Change, Change Management, Shop Floor Digitalization, Three-Layer Model, Case Study, Dynamic Capabilities
Transforming Energy Management with an AI-Enabled Digital Twin

Hadi Ghanbari, Petter Nissinen
This paper reports on a case study of how one of Europe's largest district heating providers, called EnergyCo, implemented an AI-assisted digital twin to improve energy efficiency and sustainability. The study details the implementation process and its outcomes, providing six key recommendations for executives in other industries who are considering adopting digital twin technology.

Problem Large-scale energy providers face significant challenges in managing complex district heating networks due to fluctuating energy prices, the shift to decentralized renewable energy sources, and operational inefficiencies from siloed departments. Traditional control systems lack the comprehensive, real-time view needed to optimize the entire network, leading to energy loss, higher costs, and difficulties in achieving sustainability goals.

Outcome - The AI-enabled digital twin provided a comprehensive, real-time representation of the entire district heating network, replacing fragmented views from legacy systems.
- It enabled advanced simulation and optimization, allowing the company to improve operational efficiency, manage fluctuating energy prices, and move toward its carbon neutrality goals.
- The system facilitated scenario-based decision-making, helping operators forecast demand, optimize temperatures and pressures, and reduce heat loss.
- The digital twin enhanced cross-departmental collaboration by providing a shared, holistic view of the network's operations.
- It enabled a shift from reactive to proactive maintenance by using predictive insights to identify potential equipment failures before they occur, reducing costs and downtime.
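The shift to proactive maintenance rests on detecting unusual sensor behavior before equipment fails. A minimal sketch of that idea follows; the sensor series, window size, and threshold are invented for illustration and are not from EnergyCo's actual system.

```python
# Minimal sketch of predictive maintenance on digital-twin sensor data.
# All names, values, and thresholds here are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """Flag readings that deviate strongly from the recent window --
    a simple proxy for spotting failures before they occur."""
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flags.append(i)  # index of the suspicious reading
    return flags

# A stable pump-temperature series with one sudden spike:
temps = [60.0 + 0.1 * (i % 5) for i in range(48)]
temps[40] = 95.0  # simulated failure precursor
print(flag_anomalies(temps))  # → [40]
```

A production digital twin would of course use learned models over many correlated signals, but the principle is the same: compare live telemetry against an expected baseline and alert on drift.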
Digital Twin, Energy Management, District Heating, AI, Cyber-Physical Systems, Sustainability, Case Study
Transforming to Digital Product Management

R. Ryan Nelson
This study analyzes the successful digital transformations of CarMax and The Washington Post to advocate for a strategic shift from traditional IT project management to digital product management. It demonstrates how adopting practices like Agile and DevOps, combined with empowered, cross-functional teams, enables companies to become nimbler and more adaptive in a fast-changing digital landscape. The research is based on extensive field research, including interviews with senior executives from the case study companies.

Problem Many businesses struggle to adapt and innovate because their traditional IT project management methods are too slow and rigid for the modern digital economy. This project-based approach often results in high failure rates, misaligned business and IT goals, and an inability to respond quickly to market changes or new competitors. This gap prevents organizations from realizing the full value of their technology investments and puts them at risk of becoming obsolete.

Outcome - A shift from a project-oriented to a product-oriented mindset is essential for business agility and continuous innovation.
- Successful transformations rely on creating durable, empowered, cross-functional teams that manage a digital product's entire lifecycle, focusing on business outcomes rather than project outputs.
- Adopting practices like dual-track Agile and DevOps enables teams to discover the right solutions for customers while delivering value incrementally and consistently.
- The transition to digital product management is a long-term cultural and organizational journey requiring strong executive buy-in, not a one-time project.
- Organizations should differentiate which initiatives are best suited for a project approach (e.g., migrations, compliance) versus a product approach (e.g., customer-facing applications, e-commerce platforms).
digital product management, IT project management, digital transformation, agile development, DevOps, organizational change, case study
How a Utility Company Established a Corporate Data Culture for Data-Driven Decision Making

Philipp Staudt, Rainer Hoffmann
This paper presents a case study of a large German utility company's successful transition to a data-driven organization. It outlines the strategy, which involved three core transformations: enabling the workforce, improving the data lifecycle, and implementing employee-centered data management. The study provides actionable recommendations for industrial organizations facing similar challenges.

Problem Many industrial companies, particularly in the utility sector, struggle to extract value from their data. The ongoing energy transition, with the rise of renewable energy sources and electric vehicles, has made traditional, heuristic-based decision-making obsolete, creating an urgent need for a robust corporate data culture to manage increasing complexity and ensure grid stability.

Outcome - A data culture was successfully established through three intertwined transformations: enabling the workforce, improving the data lifecycle, and transitioning to employee-centered data management.
- Enabling the workforce involved upskilling programs ('Data and AI Multipliers'), creating platforms for knowledge sharing, and clear communication to ensure widespread buy-in and engagement.
- The data lifecycle was improved by establishing new data infrastructure for real-time data, creating a central data lake, and implementing a strong data governance framework with new roles like 'data officers' and 'data stewards'.
- An employee-centric approach, featuring cross-functional teams, showcasing quick wins to demonstrate value, and transparent communication, was crucial for overcoming resistance and building trust.
- The transformation resulted in the deployment of over 50 data-driven solutions that replaced outdated processes and improved decision-making in real-time operations, maintenance, and long-term planning.
data culture, data-driven decision making, utility company, energy transition, change management, data governance, case study
How the Odyssey Project Is Using Old and Cutting-Edge Technologies for Financial Inclusion

Samia Cornelius Bhatti, Dorothy E. Leidner
This paper presents a case study of The Odyssey Project, a fintech startup aiming to increase financial inclusion for the unbanked. It details how the company combines established SMS technology with modern innovations like blockchain and AI to create an accessible and affordable digital financial solution, particularly for users in underdeveloped countries without smartphones or consistent internet access.

Problem Approximately 1.7 billion adults globally remain unbanked, lacking access to formal financial services. This financial exclusion is often due to the high cost of services, geographical distance to banks, and the requirement for expensive smartphones and internet data, creating a significant barrier to economic participation and stability.

Outcome - The Odyssey Project developed a fintech solution that integrates old technology (SMS) with cutting-edge technologies (blockchain, AI, cloud computing) to serve the unbanked.
- The platform, named RoyPay, uses an SMS-based chatbot (RoyChat) as the user interface, making it accessible on basic mobile phones without an internet connection.
- Blockchain technology is used for the core payment mechanism to ensure secure, transparent, and low-cost transactions, eliminating many traditional intermediary fees.
- The system is built on a scalable and cost-effective infrastructure using cloud services, open-source software, and containerization to minimize operational costs.
- The study demonstrates a successful model for creating context-specific technological solutions that address the unique needs and constraints of underserved populations.
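The accessibility claim, a full payment interface over plain SMS, can be sketched as a tiny command parser. The command grammar below is hypothetical; the paper does not publish RoyChat's actual syntax.

```python
# Illustrative sketch of an SMS command parser in the spirit of RoyChat.
# The commands ("SEND", "BAL") and their grammar are invented assumptions,
# not the actual RoyPay interface.

def parse_sms(text):
    """Parse a plain-text SMS into a structured payment command."""
    parts = text.strip().upper().split()
    if not parts:
        return {"action": "ERROR", "reason": "empty message"}
    if parts[0] == "SEND" and len(parts) == 3:
        # e.g. "SEND 25 +254700000001" -> transfer request
        try:
            return {"action": "SEND", "amount": float(parts[1]), "to": parts[2]}
        except ValueError:
            return {"action": "ERROR", "reason": "bad amount"}
    if parts[0] == "BAL":
        return {"action": "BALANCE"}
    return {"action": "ERROR", "reason": "unknown command"}

print(parse_sms("send 25 +254700000001"))
# → {'action': 'SEND', 'amount': 25.0, 'to': '+254700000001'}
```

The point of the design is that the client side needs nothing beyond text messaging; all blockchain settlement and AI-driven support happens server-side, which is what keeps the solution usable on basic phones.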
financial inclusion, fintech, blockchain, unbanked, SMS technology, mobile payments, developing economies
Leveraging Information Systems for Environmental Sustainability and Business Value

Anne Ixmeier, Franziska Wagner, Johann Kranz
This study analyzes 31 articles from practitioner journals to understand how businesses can use Information Systems (IS) to enhance environmental sustainability. Based on a comprehensive literature review, the research provides five practical recommendations for managers to bridge the gap between sustainability goals and actual implementation, ultimately creating business value.

Problem Many businesses face growing pressure to improve their environmental sustainability but struggle to translate sustainability initiatives into tangible business value. Managers are often unclear on how to effectively leverage information systems to achieve both environmental and financial goals, a challenge referred to as the 'sustainability implementation gap'.

Outcome - Legitimize sustainability by using IS to create awareness and link environmental metrics to business value.
- Optimize processes, products, and services by using IS to reduce environmental impact and improve eco-efficiency.
- Internalize sustainability by integrating it into core business strategies and decision-making, informed by data from environmental management systems.
- Standardize sustainability data by establishing robust data governance to ensure information is accessible, comparable, and transparent across the value chain.
- Collaborate with external partners by using IS to build strategic partnerships and ecosystems that can collectively address complex sustainability challenges.
Information Systems, Environmental Sustainability, Green IS, Business Value, Corporate Strategy, Sustainability Implementation
The Hidden Causes of Digital Investment Failures

Joe Peppard, R. M. Bastien
This study analyzes hundreds of digital projects to uncover the subtle, hidden root causes behind their frequent failure or underachievement. It moves beyond commonly cited symptoms, like budget overruns, to identify five fundamental organizational and structural issues that prevent companies from realizing value from their technology investments. The analysis is supported by an illustrative case study of a major insurance company's large-scale transformation program.

Problem Organizations invest heavily in digital technology expecting significant returns, but most struggle to achieve their goals, and project success rates have not improved over time. Despite an abundance of project management frameworks and best practices, companies often address the symptoms of failure rather than the underlying problems. This research addresses the gap by identifying the deep-rooted, often surprising causes for these persistent investment failures.

Outcome - The Illusion of Control: Business leaders believe they are controlling projects through metrics and governance, but this is an illusion that masks a lack of real influence over value creation.
- The Fallacy of the “Working System”: The primary goal becomes delivering a functional IT system on time and on budget, rather than achieving the intended business performance improvements.
- Conflicts of Interest: The conventional model of a single, centralized IT department creates inherent conflicts of interest, as the same group is responsible for designing, building, and quality-assuring systems.
- The IT Amnesia Syndrome: A project-by-project focus leads to a collective organizational memory loss about why and how systems were built, creating massive complexity and technical debt for future projects.
- Managing Expenses, Not Assets: Digital systems are treated as short-term expenses to be managed rather than long-term productive assets whose value must be cultivated over their entire lifecycle.
digital investment, project failure, IT governance, root cause analysis, business value, single-counter IT model, technical debt
Applying the Rite of Passage Approach to Ensure a Successful Digital Business Transformation

Nkosi Leary, Lorry Perkins, Umang Thakkar, Gregory Gimpel
This study examines how a U.S. recruiting company, ASK Consulting, successfully managed a major digital overhaul by treating the employee transformation as a 'rite of passage.' Based on this case study, the paper outlines a three-stage approach (separation, transition, integration) and provides actionable recommendations for leaders, or 'masters of ceremonies,' to guide their workforce through profound organizational change.

Problem Many digital transformation initiatives fail because they focus on technology and business processes while neglecting the crucial human element. This creates a gap where companies struggle to convert their existing workforce from legacy mindsets and manual processes to a future-ready, digitally empowered culture, leading to underwhelming results.

Outcome - Framing a digital transformation as a three-stage 'rite of passage' (separation, transition, integration) can successfully manage the human side of organizational change.
- The initial 'separation' from old routines and physical workspaces is critical for creating an environment where employees are open to new mindsets and processes.
- During the 'transition' phase, strong leadership (a 'master of ceremonies') is needed to foster a new sense of community, establish data-driven norms, and test employees' ability to adapt to the new digital environment.
- The final 'integration' stage solidifies the transformation by making changes permanent, restoring stability, and using the newly transformed employees to train new hires, thereby cementing the new culture.
- By implementing this approach, the case study company successfully automated core operations, which led to significant increases in productivity and revenue with a smaller workforce.
digital transformation, change management, rite of passage, employee transformation, organizational culture, leadership, case study
Strategies for Managing Citizen Developers and No-Code Tools

Olga Biedova, Blake Ives, David Male, Michael Moore
This study examines the use of no-code and low-code development tools by citizen developers (non-IT employees) to accelerate productivity and bypass traditional IT bottlenecks. Based on the experiences of several organizations, the paper identifies the strengths, risks, and misalignments between citizen developers and corporate IT departments. It concludes by providing recommended strategies for managing these tools and developers to enhance organizational agility.

Problem Organizations face a growing demand for digital transformation, which often leads to significant IT bottlenecks and costly delays. Hiring professional developers is expensive and can be ineffective due to a lack of specific business insight. This creates a gap where business units need to rapidly deploy new applications but are constrained by the capacity and speed of their central IT departments.

Outcome - No-code tools offer significant benefits, including circumventing IT backlogs, reducing costs, enabling rapid prototyping, and improving alignment between business needs and application development.
- Key challenges include finding talent with the right mindset, dependency on smaller tool vendors, security and privacy risks from 'shadow IT,' and potential for poor data architecture in citizen-developed applications.
- A fundamental misalignment exists between IT departments and citizen developers regarding priorities, timelines, development methodologies, and oversight, often leading to friction.
- Successful adoption requires organizations to strategically manage citizen development by identifying and supporting 'problem solvers' within the business, providing resources, and establishing clear guidelines rather than overly policing them.
- While no-code tools are crucial for agility in early-stage innovation, scaling these applications requires the architectural expertise of a formal IT department to ensure reliability and performance.
citizen developers, no-code tools, low-code development, IT bottleneck, digital transformation, shadow IT, organizational agility
How Audi Scales Artificial Intelligence in Manufacturing

André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
This paper presents a case study on how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality inspection in its manufacturing press shops. It analyzes Audi's four-year journey, from initial exploration to multi-site deployment, to identify key strategies and challenges. The study provides actionable recommendations for senior leaders aiming to capture business value by scaling AI innovations.

Problem Many organizations struggle to move their AI initiatives from the pilot phase to full-scale operational use, failing to realize the technology's full economic potential. This is a particular challenge in manufacturing, where integrating AI with legacy systems and processes presents significant barriers. This study addresses how a company can overcome these challenges to successfully scale an AI solution and unlock long-term business value.

Outcome - Audi successfully scaled an AI-based system to automate the detection of cracks in sheet metal parts, a crucial quality control step in its press shops.
- The success was driven by a strategic four-stage approach: Exploring, Developing, Implementing, and Scaling, with a focus on designing for scalability from the outset.
- Key success factors included creating a single, universal AI model for multiple deployments, leveraging data from various sources to improve the model, and integrating the solution into the broader Volkswagen Group's digital production platform to create synergies.
- The study highlights the importance of decoupling value from cost, which Audi achieved by automating monitoring and deployment pipelines, thereby scaling operations without proportionally increasing expenses.
- Recommendations for other businesses include making AI scaling a strategic priority, fostering collaboration between AI experts and domain specialists, and streamlining operations through automation and robust governance.
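The "decoupling value from cost" point, one universal model supervised across many sites through automated monitoring, can be sketched roughly. The site names, flag rates, and alert threshold below are invented; Audi's actual pipeline is not published.

```python
# Rough sketch of automated cross-site model monitoring, in the spirit of
# running one universal crack-detection model at many press shops.
# Site names, rates, and the alert threshold are illustrative assumptions.

def monitor_sites(site_predictions, baseline_rate=0.02, tolerance=3.0):
    """Alert when a site's predicted-crack rate drifts far above baseline,
    so one team can supervise many deployments without per-site effort."""
    alerts = []
    for site, preds in site_predictions.items():
        rate = sum(preds) / len(preds)  # fraction of parts flagged cracked
        if rate > baseline_rate * tolerance:
            alerts.append((site, round(rate, 3)))
    return alerts

predictions = {
    "press_shop_a": [0] * 98 + [1] * 2,   # 2% flagged: normal
    "press_shop_b": [0] * 90 + [1] * 10,  # 10% flagged: drift
}
print(monitor_sites(predictions))  # → [('press_shop_b', 0.1)]
```

The economics follow from the structure: adding a site adds a dictionary entry, not a new monitoring team, which is how operations scale without proportional expense.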
Artificial Intelligence, AI Scaling, Manufacturing, Automotive Industry, Case Study, Digital Transformation, Quality Inspection
Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation

Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.

Problem Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.

Outcome - Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process and avoiding the overwhelming removal of all routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines
Establishing a Low-Code/No-Code-Enabled Citizen Development Strategy

Björn Binzer, Edona Elshan, Daniel Fürstenau, Till J. Winkler
This study analyzes the low-code/no-code adoption journeys of 24 different companies to understand the challenges and best practices of citizen development. Drawing on these insights, the paper proposes a seven-step strategic framework designed to guide organizations in effectively implementing and managing these powerful tools. The framework helps structure critical design choices to empower employees with little or no IT background to create digital solutions.

Problem There is a significant gap between the high demand for digital solutions and the limited availability of professional software developers, which constrains business innovation and problem-solving. While low-code/no-code platforms enable non-technical employees (citizen developers) to build applications, organizations often lack a coherent strategy for their adoption. This leads to inefficiencies, security risks, compliance issues, and wasted investments.

Outcome - The study introduces a seven-step framework for creating a citizen development strategy: Coordinate Architecture, Launch a Development Hub, Establish Rules, Form the Workforce, Orchestrate Liaison Actions, Track Successes, and Iterate the Strategy.
- Successful implementation requires a balance between centralized governance and individual developer autonomy, using 'guardrails' rather than rigid restrictions.
- Key activities for scaling the strategy include the '5E Cycle': Evangelize, Enable, Educate, Encourage, and Embed citizen development within the organization's culture.
- Recommendations include automating governance tasks, promoting business-led development initiatives, and encouraging the use of these tools by IT professionals to foster a collaborative relationship between business and IT units.
Citizen Development, Low-Code, No-Code, Digital Transformation, IT Strategy, Governance Framework, Upskilling
The Promise and Perils of Low-Code AI Platforms

Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.

Problem As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.

Outcome - The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations

Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.

Problem A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.

Outcome - Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management.
- These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'.
- The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions.
- Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
Governing Citizen Development to Address Low-Code Platform Challenges

Governing Citizen Development to Address Low-Code Platform Challenges

Altus Viljoen, Marija Radić, Andreas Hein, John Nguyen, Helmut Krcmar
This study investigates how companies can effectively manage 'citizen development'—where employees with minimal technical skills use low-code platforms to build applications. Drawing on 30 interviews with citizen developers and platform experts across two firms, the research provides a practical governance framework to address the unique challenges of this approach.

Problem Companies face a significant shortage of skilled software developers, leading them to adopt low-code platforms that empower non-IT employees to create applications. However, this trend introduces serious risks, such as poor software quality, unmonitored development ('shadow IT'), and long-term maintenance burdens ('technical debt'), which organizations are often unprepared to manage.

Outcome - Citizen development introduces three primary risks: substandard software quality, shadow IT, and technical debt.
- Effective governance requires a more nuanced understanding of roles, distinguishing between 'traditional citizen developers' and 'low-code champions,' and three types of technical experts who support them.
- The study proposes three core sets of recommendations for governance: 1) strategically manage project scope and complexity, 2) organize effective collaboration through knowledge bases and proper tools, and 3) implement targeted education and training programs.
- Without strong governance, the benefits of rapid, decentralized development are quickly outweighed by escalating risks and costs.
citizen development, low-code platforms, IT governance, shadow IT, technical debt, software quality, case study
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant

How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant

Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.

Problem Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.

Outcome - Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources.
- Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns.
- Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation.
- Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success.
- The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME
EMERGENCE OF IT IMPLEMENTATION CONSEQUENCES IN ORGANIZATIONS: AN ASSEMBLAGE APPROACH

EMERGENCE OF IT IMPLEMENTATION CONSEQUENCES IN ORGANIZATIONS: AN ASSEMBLAGE APPROACH

Abdul Sesay, Elena Karahanna, and Marie-Claude Boudreau
This study investigates how the effects of new technology, specifically body-worn cameras (BWCs), unfold within organizations over time. Using a multi-site case study of three U.S. police departments, the research develops a process model to explain how the consequences of IT implementation emerge. The study identifies three key phases in this process: individuation (selecting the technology and related policies), composition (combining the technology with users), and actualization (using the technology in real-world interactions).

Problem When organizations implement new technology, the results are often unpredictable, with outcomes varying widely between different settings. Existing research has not fully explained why a technology can be successful in one organization but fail in another. This study addresses the gap in understanding how the consequences of a new technology, like police body-worn cameras, actually develop and evolve into established organizational practices.

Outcome - The process through which technology creates new behaviors and practices is complex and non-linear, occurring in three distinct phases (individuation, composition, and actualization).
- Successful implementation is not guaranteed; it depends on the careful alignment of the technology itself (material components) with policies, training, and user adoption (expressive components) at each stage.
- The study found that of the three police departments, only one successfully implemented body cameras because it carefully selected high-quality equipment, developed specific policies for its use, and ensured officers were trained and held accountable.
- The other two departments experienced failure or delays due to poor quality equipment, generic policies, and inconsistent use, which prevented new, positive practices from taking hold.
- The model shows that outcomes emerge over time and may require continuous adjustments, demonstrating that success is an ongoing process, not a one-time event.
IT implementation, Assemblage theory, body-worn camera, organizational change, police technology, process model
SUPPORTING COMMUNITY FIRST RESPONDERS IN AGING IN PLACE: AN ACTION DESIGN FOR A COMMUNITY-BASED SMART ACTIVITY MONITORING SYSTEM

SUPPORTING COMMUNITY FIRST RESPONDERS IN AGING IN PLACE: AN ACTION DESIGN FOR A COMMUNITY-BASED SMART ACTIVITY MONITORING SYSTEM

Carmen Leong, Carol Hsu, Nadee Goonawardene, Hwee-Pink Tan
This study details the development of a smart activity monitoring system designed to help elderly individuals live independently at home. Using a three-year action design research approach, it deployed a sensor-based system in a community setting to understand how to best support community first responders—such as neighbors and volunteers—who lack professional healthcare training.

Problem As the global population ages, more elderly individuals wish to remain in their own homes, but this raises safety concerns like falls or medical emergencies going unnoticed. This study addresses the specific challenge of designing monitoring systems that provide remote, non-professional first responders with the right information (situational awareness) to accurately assess an emergency alert and respond effectively.

Outcome - Technology adaptation alone is insufficient; the system design must also encourage the elderly person to adapt their behavior, such as carrying a beacon when leaving home, to ensure data accuracy.
- Instead of relying on simple automated alerts, the system should provide responders with contextual information, like usual sleep times or last known activity, to support human-based assessment and reduce false alarms.
- To support teams of responders, the system must integrate communication channels, allowing all actions and updates related to an alert to be logged in a single, closed-loop thread for better coordination.
- Long-term activity data can be used for proactive care, helping identify subtle changes in behavior (e.g., deteriorating mobility) that may signal future health risks before an acute emergency occurs.
Activity monitoring systems, community-based model, elderly care, situational awareness, IoT, sensor-based monitoring systems, action design research
What it takes to control AI by design: human learning

What it takes to control AI by design: human learning

Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.

Problem Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.

Outcome - The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive.
- Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'.
- A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode.
- This approach results in high-performance AI systems that are unbiased and aligned with human values.
- The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
Human-AI system, Control, Reciprocal learning, Feedback, Oversight
Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity

Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity

Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.

Problem Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to manage human risk.

Outcome - Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk.
- Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive.
- Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers.
- Effective tools for changing behavior include interactive methods like phishing simulations that provide immediate feedback, gamification, and fostering a culture where security is a shared responsibility.
- The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
Cybersecurity, Human Risk, Fear Appeals, Security Awareness, User Actions, Management Interventions, Data Breaches
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective

Design Knowledge for Virtual Learning Companions from a Value-centered Perspective

Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews, workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.

Problem Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.

Outcome - The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value
REGULATING EMERGING TECHNOLOGIES: PROSPECTIVE SENSEMAKING THROUGH ABSTRACTION AND ELABORATION

REGULATING EMERGING TECHNOLOGIES: PROSPECTIVE SENSEMAKING THROUGH ABSTRACTION AND ELABORATION

Stefan Seidel, Christoph J. Frick, Jan vom Brocke
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.

Problem Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.

Outcome - Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' where diverse stakeholders build a shared understanding over time.
- This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation.
- Elaboration involves specifying details and requirements to provide legal certainty and protect users.
- Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.
Technology regulation, prospective sensemaking, sensemaking, institutional construction, emerging technology, blockchain, token economy