Living knowledge for digital leadership
A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation

Luca Deck, Max-Paul Förster, Raimund Weidlich, and Niklas Kühl
This study reviews existing methods for marking, detecting, and labeling deepfakes to assess their effectiveness under new EU regulations. Based on a multivocal literature review, the paper finds that individual methods are insufficient. Consequently, it proposes a novel multi-level strategy that combines the strengths of existing approaches for more scalable and practical content moderation on online platforms.

Problem: The increasing availability of deepfake technology poses a significant risk to democratic societies by enabling the spread of political disinformation. While the European Union has enacted regulations to enforce transparency, there is a lack of effective industry standards for implementation. This makes it challenging for online platforms to moderate deepfake content at scale, as current individual methods fail to meet regulatory and practical requirements.

Outcome:
- Individual methods for marking, detecting, and labeling deepfakes are insufficient to meet EU regulatory and practical requirements alone.
- The study proposes a multi-level strategy that combines the strengths of various methods (e.g., technical detection, trusted sources) to create a more robust and effective moderation process.
- A simple scoring mechanism is introduced to ensure the strategy is scalable and practical for online platforms managing massive amounts of content (a minimal sketch follows this list).
- The proposed framework is designed to be adaptable to new types of deepfake technology and allows for context-specific risk assessment, such as for political communication.
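The paper's scoring details are not reproduced in this summary, so what follows is only a minimal sketch of how a weighted, multi-level score could drive moderation actions; every signal name, weight, and threshold below is a hypothetical illustration, not the authors' specification.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str     # moderation level that produced the signal
    score: float  # 0.0 (likely authentic) .. 1.0 (likely deepfake)
    weight: float # trust placed in this level

def moderate(signals, label_at=0.5, review_at=0.8):
    """Aggregate per-level scores into one weighted score, then pick an action."""
    total_weight = sum(s.weight for s in signals)
    score = sum(s.score * s.weight for s in signals) / total_weight
    if score >= review_at:
        return score, "escalate to human review"
    if score >= label_at:
        return score, "label as AI-generated"
    return score, "no action"

# Hypothetical signals from three levels of such a strategy: provenance
# marking, ML-based detection, and trusted-source verification.
signals = [
    Signal("watermark_check", 0.9, 3.0),  # provenance mark missing or invalid
    Signal("detector_model", 0.7, 2.0),   # deepfake-detector confidence
    Signal("trusted_source", 0.2, 1.0),   # reputable uploader lowers the risk
]
score, action = moderate(signals)
print(f"score={score:.2f} -> {action}")
```

Because each item reduces to a single score, thresholds can be tuned per context, for example a stricter review threshold for political content.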
Keywords: Deepfakes, EU Regulation, Online Platforms, Content Moderation, Political Communication
Boundary Resources – A Review

David Rochholz
This study conducts a systematic literature review to analyze the current state of research on 'boundary resources,' the tools, such as APIs and SDKs, that connect digital platforms with third-party developers. By examining 89 publications, the paper identifies major themes and significant gaps in the academic literature. The goal is to consolidate existing knowledge and propose a clear research agenda for the future.

Problem: Digital platforms rely on third-party developers to create value, but the tools (boundary resources) that enable this collaboration are not well understood. Research is fragmented and often overlooks critical business aspects, such as the financial reasons for opening a platform and how to monetize these resources. Furthermore, most studies focus on consumer apps, ignoring the unique challenges of business-to-business (B2B) platforms and the rise of AI-driven developers.

Outcome:
- Identifies four key gaps in current research: the financial impact of opening platforms, the overemphasis on consumer (B2C) versus business (B2B) contexts, the lack of a clear definition for what constitutes a platform, and the limited understanding of modern developers, including AI agents.
- Proposes a research agenda focused on monetization strategies, platform valuation, and the distinct dynamics of B2B ecosystems.
- Emphasizes the need to understand how the role of developers is changing with the advent of generative AI.
- Concludes that future research must create better frameworks to help businesses manage and profit from their platform ecosystems in a more strategic way.
Keywords: Boundary Resource, Platform, Complementor, Research Agenda, Literature Review
You Only Lose Once: Blockchain Gambling Platforms

Lorenz Baum, Arda Güler, and Björn Hanneke
This study investigates user behavior on emerging blockchain-based gambling platforms to provide insights for regulators and user protection. The researchers analyzed over 22,800 gambling rounds from YOLO, a smart contract-based platform, involving 3,306 unique users. A generalized linear mixed model was used to identify the effects of users' cognitive biases on their on-chain gambling activities.
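The study's exact model specification is not reproduced in this summary; the sketch below only illustrates the general shape of such a mixed-effects logistic regression, fit with statsmodels on synthetic data. Every column name and bias proxy is a hypothetical stand-in for the study's actual variables.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Synthetic round-level data: one row per gambling round.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "continues": rng.integers(0, 2, n),          # 1 = user places another bet
    "same_stake": rng.integers(0, 2, n),         # anchoring proxy: stake repeats the prior bet
    "loss_streak": rng.integers(0, 6, n),        # gambler's-fallacy proxy: losses in a row
    "user": rng.integers(0, 50, n).astype(str),  # grouping factor for random effects
})

# Mixed-effects logistic regression: fixed effects for the bias proxies,
# plus a random intercept per user to absorb individual gambling propensity.
model = BinomialBayesMixedGLM.from_formula(
    "continues ~ same_stake + loss_streak",  # fixed effects
    {"user": "0 + C(user)"},                 # random intercepts by user
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

The random intercept matters because rounds from the same user are not independent; without it, a few persistent gamblers could inflate the apparent effect of the bias proxies.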

Problem: Online gambling revenues are increasing, which exacerbates societal problems and often evades regulatory oversight. The rise of decentralized, blockchain-based gambling platforms aggravates these issues by promising transparency while lacking user protection measures, making it easier to exploit users' cognitive biases and harder for authorities to enforce regulations.

Outcome:
- Cognitive biases like the 'anchoring effect' (repeatedly betting the same amount) and the 'gambler's fallacy' (believing a losing streak makes a win more likely) significantly increase the probability that a user will continue gambling.
- The study confirms that blockchain platforms can exploit these psychological biases, leading to sustained gambling and substantial financial losses for users, with a sample of 3,306 users losing a total of $5.1 million.
- Due to the decentralized and permissionless nature of these platforms, traditional regulatory measures like deposit limits, age verification, and self-exclusion are nearly impossible to enforce.
- The findings highlight the urgent need for new regulatory approaches and user protection mechanisms tailored to the unique challenges of decentralized gambling environments, such as on-chain monitoring for risky behavior.
Keywords: gambling platform, smart contract, gambling behavior, cognitive bias, user behavior
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem: While Generative AI offers a powerful way to help property hosts create compelling listings, platforms do not yet know the most effective way to implement these tools. It is unclear whether AI assistance is more impactful at the beginning or end of the writing process, or whether users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome:
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Keywords: Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
Algorithmic Control in Non-Platform Organizations – Workers' Legitimacy Judgments and the Impact of Individual Character Traits

Felix Hirsch
This study investigates how employees in traditional, non-platform companies perceive algorithmic control (AC) systems that manage their work. Using fuzzy-set Qualitative Comparative Analysis (fsQCA), it specifically examines how a worker's individual competitiveness influences whether they judge these systems as legitimate in terms of fairness, autonomy, and professional development.
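fsQCA rests on two standard quantities, consistency and coverage, computed over calibrated set memberships in [0, 1]; the paper's calibration choices are not detailed in this summary. Below is a minimal sketch of the textbook formulas, with invented membership values purely for illustration.

```python
import numpy as np

def consistency(x, y):
    """How far condition X is sufficient for outcome Y: sum(min(x, y)) / sum(x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """How much of outcome Y is covered by X: sum(min(x, y)) / sum(y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / y.sum()

# Hypothetical calibrated memberships: competitiveness (condition) and a
# positive legitimacy judgment (outcome), one value per surveyed worker.
competitiveness = [0.9, 0.8, 0.2, 0.6, 0.1, 0.95]
legitimacy = [0.8, 0.9, 0.3, 0.7, 0.4, 0.9]

print(f"consistency: {consistency(competitiveness, legitimacy):.2f}")
print(f"coverage:    {coverage(competitiveness, legitimacy):.2f}")
```

A consistency above roughly 0.8 is the conventional cutoff for treating a condition as sufficient, which is the kind of evidence behind claims such as 'competitive workers judge algorithmic control positively'.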

Problem: While the use of algorithms to manage workers is expanding from the platform economy to traditional organizations, little is known about why employees react so differently to it. Existing research has focused on organizational factors, largely neglecting how individual personality traits impact workers' acceptance and judgment of these new management systems.

Outcome:
- A worker's personality, specifically their competitiveness, is a major factor in how they perceive algorithmic management.
- Competitive workers generally judge algorithmic control positively, particularly in relation to fairness, autonomy, and competence development.
- Non-competitive workers tend to judge algorithmic systems negatively, often rejecting them as unhelpful for their professional growth.
- The findings show a clear distinction: competitive workers perceive AC, and rating systems in particular, as fair, while non-competitive workers view it as unfair.
Keywords: Algorithmic Control, Legitimacy Judgments, Non-Platform Organizations, fsQCA, Worker Perception, Character Traits
The Promise and Perils of Low-Code AI Platforms

Maria Kandaurova, Daniel A. Skog, and Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.

Problem: As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.

Outcome:
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Keywords: Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
Governing Citizen Development to Address Low-Code Platform Challenges

Altus Viljoen, Marija Radić, Andreas Hein, John Nguyen, and Helmut Krcmar
This study investigates how companies can effectively manage 'citizen development'—where employees with minimal technical skills use low-code platforms to build applications. Drawing on 30 interviews with citizen developers and platform experts across two firms, the research provides a practical governance framework to address the unique challenges of this approach.

Problem: Companies face a significant shortage of skilled software developers, leading them to adopt low-code platforms that empower non-IT employees to create applications. However, this trend introduces serious risks, such as poor software quality, unmonitored development ('shadow IT'), and long-term maintenance burdens ('technical debt'), which organizations are often unprepared to manage.

Outcome:
- Citizen development introduces three primary risks: substandard software quality, shadow IT, and technical debt.
- Effective governance requires a more nuanced understanding of roles, distinguishing between 'traditional citizen developers' and 'low-code champions,' and three types of technical experts who support them.
- The study proposes three core sets of recommendations for governance: 1) strategically manage project scope and complexity, 2) organize effective collaboration through knowledge bases and proper tools, and 3) implement targeted education and training programs.
- Without strong governance, the benefits of rapid, decentralized development are quickly outweighed by escalating risks and costs.
Keywords: citizen development, low-code platforms, IT governance, shadow IT, technical debt, software quality, case study