Living knowledge for digital leadership

Understanding the Ethics of Generative AI: Established and New Ethical Principles

Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.

Problem: The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.

Outcome:
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring.
Keywords: Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing

TSAW Drones: Revolutionizing India's Drone Logistics with Digital Technologies

Rakesh Gupta, Sujeet Kumar Sharma, Stevelal Stevelal
This case study examines TSAW Drones, an Indian startup transforming the country's logistics sector with advanced drone technology. It explores how the company leverages the Internet of Things (IoT), big data, cloud computing, and artificial intelligence (AI) to deliver essential supplies, particularly in the healthcare sector, to remote and inaccessible locations. The paper analyzes TSAW's technological evolution, its position in the competitive market, and the strategic choices it faces for future growth.

Problem: India's diverse and challenging geography creates significant logistical hurdles, especially for the timely delivery of critical medical supplies to remote rural areas. Traditional transportation networks are often inefficient or non-existent in these regions, leading to delays and inadequate healthcare access. This study addresses how TSAW Drones tackles this problem by creating a 'fifth mode of transportation' to bridge these infrastructure gaps and ensure rapid, reliable delivery of essential goods.

Outcome:
- TSAW Drones successfully leveraged a combination of digital technologies, including AI, IoT, and a Drone Cloud Intelligence System (DCIS), to establish itself as a key player in India's healthcare logistics.
- The company pioneered critical services, such as delivering medical supplies to high-altitude locations and transporting oncological tissues mid-surgery, proving the viability of drones for time-sensitive healthcare needs.
- The study highlights the strategic crossroads faced by TSAW: whether to deepen its specialization within the complex healthcare vertical or to expand horizontally into other growing sectors like agriculture and infrastructure.
- Favorable government policies and the rapid evolution of smart-connected product (SCP) technologies are identified as key drivers for the growth of India's drone industry and companies like TSAW.
Keywords: Drone Logistics, Drone Technology, Artificial Intelligence, Cloud Computing, Smart Connected Products (SCPs), Case Study, Logistics Innovation

Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees

Ashneet Kaur, Sudhanshu Maheshwari, Indranil Bose, Simarjeet Singh
This study conducts a systematic literature review to comprehensively explore the implications of Artificial Intelligence (AI) on employee privacy. It utilizes the privacy calculus framework to analyze the trade-offs organizations and employees face when integrating AI technologies in the workplace. The research evaluates how different types of AI technologies compromise or safeguard privacy and discusses their varying impacts.

Problem: The rapid and pervasive adoption of AI in the workplace has enhanced efficiency but also raised significant concerns regarding employee privacy. There is a research gap in holistically understanding the broad implications of advancing AI technologies on employee privacy, as previous studies often focus on narrow applications without a comprehensive theoretical framework.

Outcome:
- The integration of AI in the workplace presents a trade-off, offering benefits like objective performance evaluation while posing significant risks such as over-surveillance and erosion of trust.
- The study categorizes AI into four advancing types (descriptive, predictive, prescriptive, and autonomous), each progressively increasing the complexity of privacy challenges and altering the employee privacy calculus (see the sketch after this list).
- As AI algorithms become more advanced and opaque, it becomes more difficult for employees to understand how their data is used, leading to feelings of powerlessness and potential resistance.
- The paper identifies a significant lack of empirical research specifically on AI's impact on employee privacy, as opposed to the more widely studied area of consumer privacy.
- To mitigate privacy risks, the study recommends practical strategies for organizations, including transparent communication about data practices, involving employees in AI system design, and implementing strong ethical AI frameworks.
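The privacy calculus named above weighs the perceived benefits of disclosure against its perceived risks. As a toy illustration of how the four AI types from the outcomes might shift that balance, here is a minimal Python sketch; the weights and multipliers are invented for illustration and are not from the study.

```python
# Invented risk multipliers reflecting the claim that privacy challenges
# grow as AI advances from descriptive toward autonomous systems.
RISK_MULTIPLIER = {
    "descriptive": 1.0,
    "predictive": 1.5,
    "prescriptive": 2.0,
    "autonomous": 3.0,
}

def privacy_calculus(benefits: float, risks: float, ai_type: str) -> float:
    """Net perceived value of disclosure; negative values predict resistance."""
    return benefits - risks * RISK_MULTIPLIER[ai_type]

for ai_type in RISK_MULTIPLIER:
    net = privacy_calculus(benefits=5.0, risks=2.0, ai_type=ai_type)
    print(f"{ai_type}: net value = {net:+.1f}")
```

With fixed benefits, the net value turns negative only for the autonomous type, mirroring the paper's point that more advanced AI tips the calculus toward resistance.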
Keywords: Artificial Intelligence, Employee Privacy, Privacy Calculus, Systematic Review, Workplace Surveillance, AI Ethics

IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) the Answer?

Abhinav Shekhar, Rakesh Gupta, Sujeet Kumar Sharma
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.

Problem: Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.

Outcome:
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias (illustrated in the sketch after this list).
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
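The generalization failure described above, where a model trained on one population underperforms on another, can be made visible with a simple per-population evaluation. The Python sketch below uses synthetic data, not Watson's actual data or method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, shift):
    """Synthetic patients; `shift` moves both the features and the outcome boundary."""
    X = rng.normal(loc=shift, size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5 * shift).astype(int)
    return X, y

X_train, y_train = make_population(2000, shift=0.0)  # the "training" population
model = LogisticRegression().fit(X_train, y_train)

for name, shift in [("training-like", 0.0), ("shifted", 1.0)]:
    X, y = make_population(1000, shift=shift)
    print(f"{name} population accuracy: {model.score(X, y):.2f}")
```

Accuracy holds up on the population the model was trained on and drops sharply on the shifted one, the same pattern reported when US-trained recommendations were applied internationally.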
Keywords: Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics

Reinventing French Agriculture: The Era of Farmers 4.0, Technological Innovation and Sustainability

Claude Chammaa, Fatma Fourati-Jamoussi, Lucian Ceapraz, Valérie Leroux
This study investigates the behavioral, contextual, and economic factors that influence French farmers' adoption of innovative agricultural technologies. Using a mixed-methods approach that combines qualitative interviews and quantitative surveys, the research proposes and validates the French Farming Innovation Adoption (FFIA) model, an agricultural adaptation of the UTAUT2 model, to explain technology usage.

Problem: The agricultural sector is rapidly transforming with digital innovation, but the factors driving technology adoption among farmers, particularly in cost-sensitive and highly regulated environments like France, are not fully understood. Existing technology acceptance models often fail to capture the central role of economic viability, leaving a gap in explaining how sustainability goals and policy supports translate into practical adoption.

Outcome:
- The most significant direct predictor of technology adoption is 'Price Value'; farmers prioritize innovations they perceive as economically beneficial and cost-effective.
- Traditional drivers like government subsidies (Facilitating Conditions), expected performance, and social influence do not directly impact technology use. Instead, their influence is indirect, mediated through the farmer's perception of the technology's price value (see the mediation sketch after this list).
- Perceived sustainability benefits alone do not significantly drive adoption. For farmers to invest, environmental advantages must be clearly linked to economic gains, such as reduced costs or increased yields.
- Economic appraisal is the critical filter through which farmers evaluate new technologies, making it the central consideration in their decision-making process.
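Full mediation of the kind described above is conventionally tested with two regressions: the mediator (price value) on the antecedent, then usage on both. A minimal Python sketch on synthetic data; the variable names and coefficients are invented for illustration and are not the FFIA estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
subsidy = rng.normal(size=n)                      # e.g., facilitating conditions
price_value = 0.6 * subsidy + rng.normal(size=n)  # path a: antecedent -> mediator
use = 0.7 * price_value + rng.normal(size=n)      # path b only: full mediation

def ols(y, *regressors):
    """Least-squares coefficients, intercept in position 0."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(price_value, subsidy)[1]               # antecedent -> mediator
_, direct, b = ols(use, subsidy, price_value)  # direct path and mediator path
print(f"a={a:.2f}, b={b:.2f}, indirect a*b={a * b:.2f}, direct={direct:.2f}")
```

A sizeable indirect effect alongside a near-zero direct effect is the signature of the full mediation the study reports.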
Keywords: Farmers 4.0, Technology Adoption, Sustainability, Agricultural Innovation, UTAUT2, Price Value, Artificial Intelligence

Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective

Pramod K. Patnaik, Kunal Rao, Gaurav Dixit
This study investigates the factors that enable the use of Generative AI (GenAI) tools in rural educational settings within developing countries. Using a mixed-method approach that combines in-depth interviews and the Grey DEMATEL decision-making method, the research identifies and analyzes these enablers through a socio-technical lens to understand their causal relationships.

Problem: Marginalized rural communities in developing countries face significant challenges in education, including a persistent digital divide that limits access to modern learning tools. This research addresses the gap in understanding how Generative AI can be practically leveraged to overcome these education-related challenges and improve learning quality in under-resourced regions.

Outcome:
- The study identified fifteen key enablers for using Generative AI in rural education, grouped into social and technical categories.
- 'Policy initiatives at the government level' was found to be the most critical enabler, directly influencing other key factors like GenAI training for teachers and students, community awareness, and school leadership commitment.
- Six novel enablers were uncovered through interviews, including affordable internet data, affordable telecommunication networks, and the provision of subsidized devices for lower-income groups.
- An empirical framework was developed to illustrate the causal relationships among the enablers, helping stakeholders prioritize interventions for effective GenAI adoption.
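Grey DEMATEL derives exactly this kind of cause-effect structure from expert influence ratings. Below is a minimal Python sketch of the crisp DEMATEL core on an invented three-enabler matrix; the grey variant would first 'whiten' interval-valued expert ratings into crisp numbers before the same computation:

```python
import numpy as np

# Invented direct-influence matrix; entry [i, j] is the rated influence
# of enabler i on enabler j on a 0-4 scale.
names = ["policy initiatives", "GenAI training", "community awareness"]
A = np.array([[0, 4, 3],
              [1, 0, 2],
              [1, 2, 0]], dtype=float)

# Normalize by the largest row or column sum.
D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())

# Total-relation matrix T = D(I - D)^-1 accumulates indirect influence paths.
T = D @ np.linalg.inv(np.eye(len(A)) - D)

r, c = T.sum(axis=1), T.sum(axis=0)
for name, prominence, relation in zip(names, r + c, r - c):
    group = "cause" if relation > 0 else "effect"
    print(f"{name}: prominence={prominence:.2f}, net relation={relation:+.2f} ({group})")
```

Enablers with a positive net relation form the cause group; in the study, policy initiatives emerged as the strongest such driver.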
Keywords: Generative AI, Rural, Education, Digital Divide, Interviews, Socio-technical Theory

Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective

David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.

Problem: Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.

Outcome:
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Keywords: Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory

Building an Artificial Intelligence Explanation Capability

Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.

Problem: Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks like algorithmic bias, prevent unintended negative consequences, and prove the technology's business value, ultimately hindering widespread and successful deployment.

Outcome:
- AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability comprises four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI); decision tracing is sketched in code after this list.
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.
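As a concrete illustration of decision tracing, one common approach is to render a model's learned rules as a readable trace. The Python sketch below uses an inherently interpretable decision tree on a stock dataset; it is an assumed stand-in, not the method used at the case organizations, and for genuinely opaque models post-hoc attribution tools such as SHAP play the analogous role:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# export_text renders the learned rules as a human-readable trace, so a
# reviewer can follow exactly why a given input receives its prediction.
print(export_text(model, feature_names=list(iris.feature_names)))
```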
Keywords: AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation

Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development

Ersin Dincelli, Haadi Jafarian
This study explores how an 'agentic metaverse'—an immersive virtual world powered by intelligent AI agents—can be used for cybersecurity training. The researchers presented an AI-driven metaverse prototype to 53 cybersecurity professionals to gather qualitative feedback on its potential for transforming workforce development.

Problem: Traditional cybersecurity training methods, such as classroom instruction and static online courses, are struggling to keep up with the fast-evolving threat landscape and high demand for skilled professionals. These conventional approaches often lack the realism and adaptivity needed to prepare individuals for the complex, high-pressure situations they face in the real world, contributing to a persistent skills gap.

Outcome:
- The concept of an AI-driven agentic metaverse was met with strong enthusiasm: 92% of the professionals surveyed believed it would be effective for workforce training.
- The study identified five core challenges: significant infrastructure demands, the complexity of designing realistic multi-agent AI scenarios, security and privacy, governance of social dynamics, and change management around user adoption.
- Six practical recommendations are provided for organizations to guide implementation, focusing on building a scalable infrastructure, developing realistic training scenarios, and embedding security, privacy, and safety by design.
Keywords: Agentic Metaverse, Cybersecurity Training, Workforce Development, AI Agents, Immersive Learning, Virtual Reality, Training Simulation

Possible, Probable and Preferable Futures for Integrating Artificial Intelligence into Talent Acquisition

Laura Bayor, Christoph Weinert, Tina Ilek, Christian Maier, Tim Weitzel
This study explores the integration of Artificial Intelligence (AI) into the talent acquisition (TA) process to guide organizations toward a better future of work. Using a Delphi study with C-level TA experts, the research identifies, evaluates, and categorizes AI opportunities and challenges into possible, probable, and preferable futures, offering actionable recommendations.

Problem: Acquiring skilled employees is a major challenge for businesses, and traditional talent acquisition processes are often labor-intensive and inefficient. While AI offers a solution, many organizations are uncertain about how to effectively integrate it, facing the risk of falling behind competitors if they fail to adopt the right strategies.

Outcome:
- The study identifies three primary business goals for integrating AI into talent acquisition: finding the best-fit candidates, making HR tasks more efficient, and attracting new applicants.
- Key preferable AI opportunities include automated interview scheduling, AI-assisted applicant ranking, identifying and reaching out to passive candidates ('cold talent'), and optimizing job posting content for better reach and diversity.
- Significant challenges that organizations must mitigate include data privacy and security issues, employee and stakeholder distrust of AI, technical integration hurdles, potential for bias in AI systems, and ethical concerns.
- The paper recommends immediate actions such as implementing AI recommendation agents and chatbots, and future actions like standardizing internal data, ensuring AI transparency, and establishing clear lines of accountability for AI-driven hiring decisions.
Keywords: Artificial Intelligence, Talent Acquisition, Human Resources, Recruitment, Delphi Study, Future of Work, Strategic HR Management

Implementing AI into ERP Software

Siar Sarferaz
This study investigates how to systematically integrate Artificial Intelligence (AI) into complex Enterprise Resource Planning (ERP) systems. Through an analysis of real-world use cases, the author identifies key challenges and proposes a comprehensive DevOps (Development and Operations) framework to standardize and streamline the entire lifecycle of AI applications within an ERP environment.

Problem: While integrating AI into ERP software offers immense potential for automation and optimization, organizations lack a systematic approach to do so. This absence of a standardized framework leads to inconsistent, inefficient, and costly implementations, creating significant barriers to adopting AI capabilities at scale within enterprise systems.

Outcome:
- Identified 20 specific, recurring gaps in the development and operation of AI applications within ERP systems, including complex setup, heterogeneous development, and insufficient monitoring.
- Developed a comprehensive DevOps framework that standardizes the entire AI lifecycle into six stages: Create, Check, Configure, Train, Deploy, and Monitor (sketched in code after this list).
- The proposed framework provides a systematic, self-service approach for business users to manage AI models, reducing the reliance on specialized technical teams and lowering the total cost of ownership.
- A quantitative evaluation across 10 real-world AI scenarios demonstrated that the framework reduced processing time by 27%, increased cost savings by 17%, and improved outcome quality by 15%.
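A minimal Python sketch of how the six named stages could be encoded as a standardized, self-service pipeline; the stage names come from the paper, while the handler mechanism and example values are illustrative assumptions:

```python
from enum import Enum
from typing import Callable, Dict

class Stage(Enum):
    CREATE = "create"        # define the AI scenario and its ERP touchpoints
    CHECK = "check"          # validate prerequisites such as data availability
    CONFIGURE = "configure"  # set scenario parameters and thresholds
    TRAIN = "train"          # fit the model on the customer's business data
    DEPLOY = "deploy"        # activate the model inside the ERP process
    MONITOR = "monitor"      # track quality and trigger retraining when needed

def run_lifecycle(handlers: Dict[Stage, Callable[[dict], dict]]) -> dict:
    """Run every stage in order, threading a shared context through each handler."""
    context: dict = {}
    for stage in Stage:  # Enum members iterate in definition order
        context = handlers.get(stage, lambda ctx: ctx)(context)
        print(f"completed: {stage.value}")
    return context

# Example: only two stages are customized; the rest pass the context through.
run_lifecycle({
    Stage.CHECK: lambda ctx: {**ctx, "data_ok": True},
    Stage.TRAIN: lambda ctx: {**ctx, "model": "demand-forecast-v1"},
})
```

Encoding the lifecycle once, with pluggable handlers per scenario, is one way to get the consistency across AI use cases that the framework aims for.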
Keywords: Enterprise Resource Planning, Artificial Intelligence, DevOps, Software Integration, AI Development, AI Operations, Enterprise AI

Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law

Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.

Problem: While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.

Outcome:
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- A higher level of trust in the AI assistant directly leads to an increased intention among professionals to use the tool in their work.
Keywords: Generative Artificial Intelligence, Human-GenAI Collaboration, Trust, GenAI Adoption

There is AI in SustAInability – A Taxonomy Structuring AI For Environmental Sustainability

Feline Schnaak, Katharina Breiter, Henner Gimpel
This study develops a structured framework to organize the growing field of artificial intelligence for environmental sustainability (AIfES). Through an iterative process involving literature reviews and real-world examples, the researchers created a multi-layer taxonomy. This framework is designed to help analyze and categorize AI systems based on their context, technical setup, and usage.

Problem: Artificial intelligence is recognized as a powerful tool for promoting environmental sustainability, but the existing research and applications are fragmented and lack a cohesive structure. This disorganization makes it difficult for researchers and businesses to holistically understand, compare, and develop effective AI solutions. There is a clear need for a systematic framework to guide the analysis and deployment of AI in this critical domain.

Outcome:
- The study introduces a comprehensive, multi-layer taxonomy for AI systems for environmental sustainability (AIfES).
- This taxonomy is structured into three layers: context (the sustainability challenge), AI setup (the technology and data), and usage (risks and end-users); a data-structure sketch follows this list.
- It provides a systematic tool for researchers, developers, and policymakers to analyze, classify, and benchmark AI applications, enhancing transparency and understanding.
- The framework supports the responsible design and development of impactful AI solutions by highlighting key dimensions and characteristics for evaluation.
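A sketch of the three layers as a Python data structure; the layer names follow the paper, while the individual fields are assumed examples of the dimensions each layer might capture:

```python
from dataclasses import dataclass

@dataclass
class Context:               # layer 1: the sustainability challenge addressed
    environmental_goal: str  # e.g., "emission reduction"
    application_domain: str  # e.g., "energy", "agriculture"

@dataclass
class AISetup:               # layer 2: the technology and data behind the system
    ai_method: str           # e.g., "supervised learning"
    data_source: str         # e.g., "satellite imagery"

@dataclass
class Usage:                 # layer 3: risks and end-users of the deployed system
    end_user: str            # e.g., "policymaker"
    key_risk: str            # e.g., "rebound effects"

@dataclass
class AIfESClassification:   # one classified AI system, spanning all three layers
    context: Context
    setup: AISetup
    usage: Usage

example = AIfESClassification(
    Context("emission reduction", "energy"),
    AISetup("supervised learning", "smart-meter data"),
    Usage("utility operator", "model drift"),
)
print(example)
```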
Keywords: Artificial Intelligence, AI for Sustainability, Environmental Sustainability, Green IS, Taxonomy

Navigating Generative AI Usage Tensions in Knowledge Work: A Socio-Technical Perspective

Anna Gieß, Sofia Schöbel, and Frederik Möller
This study explores the complex challenges and advantages of integrating Generative Artificial Intelligence (GenAI) into knowledge-based work. Using socio-technical systems theory, the researchers conducted a systematic literature review and qualitative interviews with 18 knowledge workers to identify key points of conflict. The paper proposes solutions like human-in-the-loop models and robust AI governance policies to foster responsible and efficient GenAI usage.

Problem: As organizations rapidly adopt GenAI to boost productivity, they face significant tensions between efficiency, reliability, and data privacy. There is a need to understand these conflicting forces to develop strategies that maximize the benefits of GenAI while mitigating risks related to ethics, data protection, and over-reliance on the technology.

Outcome:
- Productivity-Reflection Tension: GenAI increases efficiency but can lead to blind reliance and reduced critical thinking on the content it generates.
- Availability-Reliability Contradiction: While GenAI offers constant access to information, its output is not always reliable, increasing the risk of misinformation.
- Efficiency-Traceability Dilemma: Content is produced quickly, but the lack of clear source references makes verification difficult in professional settings.
- Usefulness-Transparency Tension: The utility of GenAI is limited by a lack of transparency in how it generates outputs, which reduces user trust.
- Convenience-Data Protection Tension: GenAI simplifies tasks but creates significant concerns about the privacy and security of sensitive information.
Keywords: Generative AI, Knowledge work, Tensions, Socio-technical systems theory

Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making

Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.

Problem: When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.

Outcome:
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance.
- This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all.
- Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology.
- The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
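A minimal Python sketch of the two interaction patterns the experiment compares; the function names and flow are illustrative, not the study's implementation:

```python
def sequential_flow(task, user_decide, genai_assist):
    """Sequential condition: the user commits to an answer before AI help appears."""
    initial = user_decide(task, suggestion=None)      # forced independent judgment
    suggestion = genai_assist(task)                   # AI help arrives only afterwards
    final = user_decide(task, suggestion=suggestion)  # reflective revision step
    return initial, final

def concurrent_flow(task, user_decide, genai_assist):
    """Concurrent condition: AI help is available from the start."""
    return user_decide(task, suggestion=genai_assist(task))
```

The nudge lives entirely in the ordering: by the time the suggestion arrives, the user already holds a position to compare it against, which invites reflection rather than uncritical adoption.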
Keywords: Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making

Adopting Generative AI in Industrial Product Companies: Challenges and Early Pathways

Vincent Paffrath, Manuel Wlcek, and Felix Wortmann
This study investigates the adoption of Generative AI (GenAI) within industrial product companies by identifying key challenges and potential solutions. Based on expert interviews with industry leaders and technology providers, the research categorizes findings into technological, organizational, and environmental dimensions to bridge the gap between expectation and practical implementation.

Problem: While GenAI is transforming many industries, its adoption by industrial product companies is particularly difficult. Unlike software firms, these companies often lack deep digital expertise, are burdened by legacy systems, and must integrate new technologies into complex hardware and service environments, making it hard to realize GenAI's full potential.

Outcome:
- Technological challenges like AI model 'hallucinations' and inconsistent results are best managed through enterprise grounding (using company data to improve accuracy) and standardized testing procedures; see the grounding sketch after this list.
- Organizational hurdles include the difficulty of calculating ROI and managing unrealistic expectations. The study suggests focusing on simple, non-financial KPIs (like user adoption and time saved) and providing realistic employee training to demystify the technology.
- Environmental risks such as vendor lock-in and complex new regulations can be mitigated by creating model-agnostic systems that allow switching between providers and establishing standardized compliance frameworks for all AI use cases.
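Enterprise grounding is commonly implemented as retrieval-augmented generation: relevant company documents are retrieved and prepended to the prompt so the model answers from verified data rather than its own parametric memory. A minimal Python sketch with TF-IDF retrieval standing in for a production vector store; the documents and query are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical internal documents a production system would index.
documents = [
    "Pump model X200 requires maintenance every 500 operating hours.",
    "Warranty claims must be filed within 24 months of delivery.",
    "The X200 control unit supports Modbus and OPC UA protocols.",
]

def ground_prompt(query: str, top_k: int = 2) -> str:
    """Retrieve the most relevant documents and build a grounded prompt."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    context = "\n".join(f"- {documents[i]}" for i in scores.argsort()[::-1][:top_k])
    return (
        "Answer using only the company context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(ground_prompt("How often does the X200 need maintenance?"))
```

Keeping the retrieval layer separate from the language model is also what makes the model-agnostic, switchable setup mentioned in the environmental findings feasible.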
Keywords: GenAI, AI Adoption, Industrial Product Companies, AI in Manufacturing, Digital Transformation

AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams

Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.

Problem: While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.

Outcome:
- The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams.
- GenAI usage has a direct positive impact on overall team performance.
- The performance improvement is primarily driven by the team's improved ability to apply knowledge, rather than just the transfer of knowledge alone.
- The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Keywords: Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management

Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective

Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.

Problem: As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges. It becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.

Outcome:
- Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Keywords: Artificial Intelligence (AI), Responsibility Gap, Responsibility in Human-AI collaboration, Decision-Making, Sociomateriality, Affective Agency