A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation
Luca Deck, Max-Paul Förster, Raimund Weidlich, and Niklas Kühl
This study reviews existing methods for marking, detecting, and labeling deepfakes to assess their effectiveness under new EU regulations. Based on a multivocal literature review, the paper finds that individual methods are insufficient. Consequently, it proposes a novel multi-level strategy that combines the strengths of existing approaches for more scalable and practical content moderation on online platforms.
Problem
The increasing availability of deepfake technology poses a significant risk to democratic societies by enabling the spread of political disinformation. While the European Union has enacted regulations to enforce transparency, there is a lack of effective industry standards for implementation. This makes it challenging for online platforms to moderate deepfake content at scale, as current individual methods fail to meet regulatory and practical requirements.
Outcome
- Individual methods for marking, detecting, and labeling deepfakes are insufficient to meet EU regulatory and practical requirements alone.
- The study proposes a multi-level strategy that combines the strengths of various methods (e.g., technical detection, trusted sources) to create a more robust and effective moderation process.
- A simple scoring mechanism is introduced to ensure the strategy is scalable and practical for online platforms managing massive amounts of content.
- The proposed framework is designed to be adaptable to new types of deepfake technology and allows for context-specific risk assessment, such as for political communication.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world flooded with digital content, telling fact from fiction is harder than ever. Today, we're diving into the heart of this challenge: deepfakes.
Host: We're looking at a fascinating new study titled "A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation." Here to help us unpack it is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Glad to be here, Anna.
Host: This study seems to be proposing a new playbook for online platforms. It reviews current methods for spotting deepfakes, finds them lacking under new EU laws, and suggests a new, combined strategy. Is that the gist?
Expert: That's it exactly. The key takeaway is that no single solution is a silver bullet. To tackle deepfakes effectively, especially at scale, platforms need a much smarter, layered approach.
Host: So let's start with the big problem. We hear about deepfakes constantly, but what's the specific challenge this study is addressing?
Expert: The problem is the massive risk they pose to our societies, particularly through political disinformation. The study mentions how deepfake technology is already being used to manipulate public opinion, citing a fake video of a German chancellor that caused a huge stir.
Host: And with major elections always on the horizon, the threat is very real. The European Union has regulations like the AI Act and the Digital Services Act to fight this, correct?
Expert: They do. The EU is mandating transparency. The AI Act requires creators of AI systems to *mark* deepfakes, and the Digital Services Act requires very large online platforms to *label* them for users. But here's the billion-dollar question the study highlights: how?
Host: The law says what to do, but not how to do it?
Expert: Precisely. There’s a huge gap between the legal requirement and a practical industry standard. The individual methods platforms currently use—like watermarking or simple technical detection—can't keep up with the volume and sophistication of deepfakes. They fail to meet the regulatory demands in the real world.
Host: So how did the researchers come up with a better way? What was their approach in this study?
Expert: They conducted what's called a multivocal literature review. In simple terms, they looked beyond just academic research and also analyzed official EU guidelines, industry reports, and other practical documents. This gave them a 360-degree view of the legal rules, the technical tools, and the real-world business challenges.
Host: A very pragmatic approach. So what were the key findings? The study proposes this "multi-level strategy." Can you break that down for us?
Expert: Of course. Think of it as a two-stage process. The first level is a fast, simple check for embedded "markers." Does the video have a reliable digital watermark saying it's AI-generated? Or, conversely, does it have a marker from a trusted source verifying it’s authentic? This helps sort the easy cases quickly.
Host: Okay, but what about the difficult cases, the ones without clear markers?
Expert: That's where the second level, a much more sophisticated analysis, kicks in. This is the core of the strategy. It doesn't rely on just one signal. Instead, it combines three things: the results of technical detection algorithms, information from trusted human sources like fact-checkers, and an assessment of the content's "downstream risk."
Host: Downstream risk? What does that mean?
Expert: It's all about context. A deepfake of a cat singing is low-risk entertainment. A deepfake of a political leader declaring a national emergency is an extremely high-risk threat. The strategy weighs the potential for real-world harm, giving more scrutiny to content involving things like political communication.
Host: And all of this gets rolled into a simple score for the platform's moderation team?
Expert: Exactly. The scores from the technical, trusted, and risk inputs are combined. Based on that final score, the platform can apply a clear label for its users, like "Warning" for a probable deepfake, or "Verified" for authenticated content. It makes the monumental task of moderation both scalable and defensible.
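To make the scoring idea concrete, here is a minimal Python sketch of how the three inputs described above (technical detection, trusted-source signals, and downstream risk) could be folded into one score that maps onto labels such as "Warning" or "Verified." The study does not publish a formula, so the signal ranges, weights, and thresholds below are illustrative assumptions, not the authors' actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    detector_score: float   # technical detection: 0 (authentic) .. 1 (likely deepfake)
    trusted_source: float   # trusted-source verdict: -1 (verified authentic) .. 1 (flagged fake)
    downstream_risk: float  # context risk, e.g. political communication: 0 (low) .. 1 (high)

def moderation_score(s: ContentSignals,
                     w_detect: float = 0.5,
                     w_trusted: float = 0.3,
                     w_risk: float = 0.2) -> float:
    """Combine the three inputs into a single score in roughly [-1, 1]."""
    return (w_detect * (2 * s.detector_score - 1)   # rescale detector to [-1, 1]
            + w_trusted * s.trusted_source
            + w_risk * s.downstream_risk)

def label(score: float) -> str:
    """Map the combined score onto user-facing labels (thresholds are illustrative)."""
    if score >= 0.5:
        return "Warning: probable deepfake"
    if score <= -0.5:
        return "Verified"
    return "Unlabeled: route to human review"

# Example: an unmarked political clip with a strong detector hit and no trusted-source verdict yet
print(label(moderation_score(ContentSignals(detector_score=0.9, trusted_source=0.0, downstream_risk=1.0))))
```

In practice a platform would tune such weights and thresholds per content category, which is where the study's context-specific risk assessment (e.g., extra scrutiny for political communication) would come in.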
Host: This is the most important part for our audience, Alex. Why does this framework matter for business, especially for companies that aren't giant social media platforms?
Expert: For any large online platform operating in the EU, this is a direct roadmap for complying with the AI Act and the Digital Services Act. Having a robust, logical process like this isn't just about good governance; it's about mitigating massive legal and financial risks.
Host: So it's a compliance and risk-management tool. What else?
Expert: It’s fundamentally about trust. No brand wants its platform to be known for spreading disinformation. That erodes user trust and drives away advertisers. Implementing a smart, transparent moderation strategy like this one protects the integrity of your digital environment and, ultimately, your brand's reputation.
Host: And what's the takeaway for smaller businesses?
Expert: The principles are universal. Even if you don't fall under these specific EU regulations, if your business relies on user-generated content, or even just wants to secure its internal communications, this risk-based approach is best practice. It provides a systematic way to think about and manage the threat of manipulated media.
Host: Let's summarize. The growing threat of deepfakes is being met with new EU regulations, but platforms lack a practical way to comply.
Host: This study finds that single detection methods are not enough. It proposes a multi-level strategy that combines technical detection, trusted sources, and a risk assessment into a simple, scalable scoring system.
Host: For businesses, this offers a clear path toward compliance, protects invaluable brand trust, and provides a powerful framework for managing the modern risk of digital disinformation.
Host: Alex, thank you for making such a complex topic so clear. This strategy seems like a crucial step in the right direction.
Expert: My pleasure, Anna. It’s a vital conversation to be having.
Host: And thank you to our listeners for joining us on A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
Deepfakes, EU Regulation, Online Platforms, Content Moderation, Political Communication
Boundary Resources – A Review
David Rochholz
This study conducts a systematic literature review to analyze the current state of research on 'boundary resources,' which are the tools like APIs and SDKs that connect digital platforms with third-party developers. By examining 89 publications, the paper identifies major themes and significant gaps in the academic literature. The goal is to consolidate existing knowledge and propose a clear research agenda for the future.
Problem
Digital platforms rely on third-party developers to create value, but the tools (boundary resources) that enable this collaboration are not well understood. Research is fragmented and often overlooks critical business aspects, such as the financial reasons for opening a platform and how to monetize these resources. Furthermore, most studies focus on consumer apps, ignoring the unique challenges of business-to-business (B2B) platforms and the rise of AI-driven developers.
Outcome
- Identifies four key gaps in current research: the financial impact of opening platforms, the overemphasis on consumer (B2C) versus business (B2B) contexts, the lack of a clear definition for what constitutes a platform, and the limited understanding of modern developers, including AI agents.
- Proposes a research agenda focused on monetization strategies, platform valuation, and the distinct dynamics of B2B ecosystems.
- Emphasizes the need to understand how the role of developers is changing with the advent of generative AI.
- Concludes that future research must create better frameworks to help businesses manage and profit from their platform ecosystems in a more strategic way.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a study called "Boundary Resources – A Review." It’s all about the tools, like APIs and SDKs, that form the bridge between digital platforms and the third-party developers who build on them.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let’s start with the big picture. We hear about platforms like the Apple App Store or Salesforce all the time. They seem to be working, so what’s the problem this study is trying to solve?
Expert: That's the perfect question. The problem is that while these platforms are hugely successful, we don't fully understand *why* on a strategic level. The tools that connect the platform to outside developers—what the study calls 'boundary resources'—are often treated as a technical afterthought.
Expert: But they are at the core of a huge strategic trade-off. Open up too much, and you risk losing control, like Facebook did with the Cambridge Analytica scandal. Open up too little, and you stifle the innovation that makes your platform valuable in the first place.
Host: So businesses are walking this tightrope without a clear map.
Expert: Exactly. The research is fragmented. It often overlooks the crucial business questions, like what are the financial reasons for opening a platform? And how do you actually make money from these resources? The knowledge is just not consolidated.
Host: To get a handle on this, what approach did the researchers take?
Expert: They conducted what’s called a systematic literature review. Instead of running a new experiment, they analyzed 89 existing academic publications on the topic. It allowed them to create a comprehensive map of what we know, and more importantly, what we don’t.
Host: It sounds like they found some significant gaps in that map. What were the key findings?
Expert: There were four big ones. First, as I mentioned, the money. There’s a surprising lack of research on the financial motivations and monetization strategies for opening a platform. Everyone talks about growth, but not enough about profit.
Host: That’s a massive blind spot for any business. What was the second gap?
Expert: The second was an overemphasis on consumer-facing, or B2C, platforms. Think app stores for your phone. But business-to-business, or B2B, platforms operate under completely different conditions. The strategies that work for a mobile game developer won't necessarily work for a company integrating enterprise software.
Host: That makes sense. You can’t just copy and paste the playbook.
Expert: Right. The third finding was even more fundamental: a lack of a clear definition of what a platform even is. Does any software that offers an API automatically become a platform? The study found the lines are very blurry, which makes creating a sound strategy incredibly difficult.
Host: And the fourth finding feels very relevant for our show. It has to do with who is using these resources.
Expert: It does. The final gap is that most research assumes the developer—the ‘complementor’—is human. But with the rise of generative AI, that’s no longer true. AI agents are now acting as developers, creating code and integrations. Our current tools and governance models simply weren't designed for them.
Host: This is fascinating. Let’s shift to the big "so what" question. Why does this matter for business leaders listening right now?
Expert: It matters immensely. First, on monetization. This study is a call to action for businesses to move beyond vague ideas of ‘ecosystem growth’ and develop concrete strategies for how their boundary resources will generate revenue.
Host: So, think of your API not just as a tool for others, but as a product in itself.
Expert: Precisely. Second, for anyone in the B2B space, the takeaway is that you need a distinct strategy. The dynamics of trust, integration, and value capture are completely different from the B2C world. You need your own playbook.
Host: And what about that fuzzy definition of a platform you mentioned?
Expert: The practical advice there is to have strategic clarity. Leaders need to ask: *why* are we opening our platform? Is it to drive innovation? To control a market? Or to create a new revenue stream? Answering that question clarifies what your boundary resources need to do.
Host: Finally, the point about A.I. is a look into the future.
Expert: It is. The key takeaway is to start future-proofing your platform now. Business leaders need to ask how their APIs, their documentation, and their support systems will serve AI-driven developers. If you don't, you risk being left behind as your competitors build ecosystems that are faster, more efficient, and more automated.
Host: So to summarize: businesses need to be crystal clear on the financial and strategic 'why' behind their platform, build a dedicated B2B strategy if applicable, and start designing for a future where your key partners might be AI agents.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning into A.I.S. Insights. Join us next time as we continue to connect research with results.
Boundary Resource, Platform, Complementor, Research Agenda, Literature Review
You Only Lose Once: Blockchain Gambling Platforms
Lorenz Baum, Arda Güler, and Björn Hanneke
This study investigates user behavior on emerging blockchain-based gambling platforms to provide insights for regulators and user protection. The researchers analyzed over 22,800 gambling rounds from YOLO, a smart contract-based platform, involving 3,306 unique users. A generalized linear mixed model was used to identify the effects of users' cognitive biases on their on-chain gambling activities.
Problem
Online gambling revenues are increasing, which exacerbates societal problems and often evades regulatory oversight. The rise of decentralized, blockchain-based gambling platforms aggravates these issues by promising transparency while lacking user protection measures, making it easier to exploit users' cognitive biases and harder for authorities to enforce regulations.
Outcome
- Cognitive biases like the 'anchoring effect' (repeatedly betting the same amount) and the 'gambler's fallacy' (believing a losing streak makes a win more likely) significantly increase the probability that a user will continue gambling.
- The study confirms that blockchain platforms can exploit these psychological biases, leading to sustained gambling and substantial financial losses for users, with a sample of 3,306 users losing a total of $5.1 million.
- Due to the decentralized and permissionless nature of these platforms, traditional regulatory measures like deposit limits, age verification, and self-exclusion are nearly impossible to enforce.
- The findings highlight the urgent need for new regulatory approaches and user protection mechanisms tailored to the unique challenges of decentralized gambling environments, such as on-chain monitoring for risky behavior.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. Today we're diving into a fascinating new study called "You Only Lose Once: Blockchain Gambling Platforms".
Host: It investigates user behavior on these emerging, decentralized gambling sites to understand the risks and how we might better protect users. I have our analyst, Alex Ian Sutherland, here to break it down for us. Alex, welcome to the show.
Expert: Thanks for having me, Anna.
Host: So, Alex, this sounds like a deep dive into the Vegas of the blockchain world. What is the core problem this study is trying to address?
Expert: Well, the online gambling industry is already huge, generating almost 100 billion dollars in revenue, and it brings a host of societal problems. But blockchain platforms take the risks to a whole new level.
Host: How so? I thought blockchain was all about transparency and fairness.
Expert: It is, and that’s the lure. But these platforms operate via 'smart contracts', meaning there's no central company in charge. This makes it almost impossible to enforce the usual user protections we see in traditional gambling, like age verification, deposit limits, or self-exclusion tools. It’s essentially a regulatory wild west, where technology can be used to exploit users' psychological vulnerabilities.
Host: That sounds incredibly difficult to track. So how did the researchers approach this?
Expert: The key is that the blockchain, while decentralized, is also public. The researchers analyzed the public transaction data from a specific gambling platform on the Ethereum blockchain called YOLO.
Expert: They looked at over 22,800 gambling rounds, involving more than 3,300 unique users over a six-month period. They then used a statistical model to pinpoint exactly what factors and behaviors led people to continue gambling, even when they were losing.
Host: And what did they find? Do these platforms really manipulate our psychology?
Expert: The evidence is clear: yes, they do. The study confirmed that classic cognitive biases are very much at play, and these platforms can amplify them.
Host: Cognitive biases? Can you give us an example?
Expert: A great example is the 'anchoring effect'. The study found that users who repeatedly bet the same amount were significantly more likely to continue gambling. That repeated bet size becomes a mental 'anchor', making it easier to just hit 'play again' without stopping to think.
Host: And what about that classic gambler's mindset of "I've lost this much, I must be due for a win"?
Expert: That's called the 'gambler's fallacy', and it's a powerful driver. The study showed that after a streak of losses, users who believed a win was just around the corner were much more likely to keep playing. The platform's design doesn't stop them; in fact, it enables this kind of loss-chasing behavior.
Host: This sounds incredibly dangerous. What was the financial damage to the users in the study?
Expert: It’s staggering. For this sample of just over 3,300 users, the total losses added up to 5.1 million US dollars. It shows these are not small-stakes games, and the potential for real financial harm is substantial.
Host: Okay, this is clearly a major issue. So what are the key takeaways for our business audience? Why does this matter for them?
Expert: This is a critical lesson in ethical platform design, especially for anyone in the Web3 space. The study shows how specific features can be used to exploit user psychology. A business could easily design a platform that pre-sets high bet amounts to trigger that 'anchoring effect'. This is a major cautionary tale about responsible innovation.
Host: Beyond ethics, are there other business implications?
Expert: Absolutely. For the compliance and risk management sectors, this is a wake-up call. The study confirms that traditional regulatory tools are useless here. You can't enforce a deposit limit on a pseudonymous crypto wallet. This creates a huge challenge, but also an opportunity for innovation.
Host: An opportunity? How do you mean?
Expert: The study suggests new approaches based on the blockchain's transparency. Because all the data is public, you can build new 'Regulatory Tech' or 'RegTech' solutions. Imagine a service that provides on-chain monitoring to automatically flag wallets that are showing signs of addictive gambling behavior. This could be a new market for businesses focused on creating a safer decentralized environment.
Host: So to summarize, these blockchain gambling platforms are a new frontier, but they’re amplifying old problems by exploiting human psychology in a regulatory vacuum.
Expert: Exactly. And the very nature of the blockchain gives us a perfect, permanent ledger to study this behavior and find new ways to address it.
Host: And for businesses, this is both a stark warning about the ethics of platform design and a signal of new opportunities in technology built to manage risk in this new digital world. Alex, this has been incredibly insightful. Thank you for breaking it down.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the vital intersection of business and technology.
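To illustrate the on-chain monitoring idea raised at the end of this conversation, here is a minimal, hypothetical Python sketch of how a RegTech service might flag the two behavioral patterns the study links to continued gambling: anchoring (repeatedly betting the same amount) and loss-chasing after losing streaks. The data shape, field names, and thresholds are assumptions for illustration; a real monitor would decode these rounds from the platform's public smart contract events.

```python
from typing import Dict, List

def flag_wallet(rounds: List[Dict], anchor_run: int = 10, loss_run: int = 8) -> List[str]:
    """Scan a wallet's chronological betting history for two simple risk markers.

    `rounds` is a list like [{"bet": 0.05, "won": False}, ...]; the field names
    are hypothetical stand-ins for decoded on-chain events.
    """
    flags = set()
    same_bet_streak = 1
    loss_streak = 0
    for prev, cur in zip(rounds, rounds[1:]):
        # Anchoring: the user keeps re-betting exactly the same amount.
        same_bet_streak = same_bet_streak + 1 if cur["bet"] == prev["bet"] else 1
        if same_bet_streak >= anchor_run:
            flags.add("anchoring: long run of identical bet sizes")
        # Loss-chasing: the user places another bet despite a long losing streak.
        loss_streak = loss_streak + 1 if not prev["won"] else 0
        if loss_streak >= loss_run:
            flags.add("loss-chasing: continued betting after many consecutive losses")
    return sorted(flags)

# Example: a wallet that keeps betting 0.05 ETH through a losing streak gets both flags
history = [{"bet": 0.05, "won": False}] * 12
print(flag_wallet(history))
```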
gambling platform, smart contract, gambling behavior, cognitive bias, user behavior
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes
Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.
Problem
While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It's unclear if AI assistance is more impactful at the beginning or end of the writing process, or if users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.
Outcome
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into the world of e-commerce and artificial intelligence, looking at a fascinating new study titled: "The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes".
Host: That’s a mouthful, so we have our analyst, Alex Ian Sutherland, here to break it down for us. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, in simple terms, what is this study all about?
Expert: It’s about finding the best way for platforms like Airbnb to use Generative AI to help hosts write their property descriptions. The researchers wanted to know if it matters *when* the AI offers help, and *how* it offers that help—for example, automatically or only when the user asks for it.
Host: And that's a real challenge for these companies, isn't it? They have this powerful AI technology, but they don't necessarily know the most effective way to deploy it.
Expert: Exactly. The core problem is this: if you're a host on a rental platform, a great listing description is crucial. It can be the difference between getting a booking or not. AI can help, but if it's implemented poorly, it can backfire.
Host: How so?
Expert: Well, the study points out that if a platform fully automates the writing process, it risks creating generic, homogenized content. All the listings start to sound the same, losing that unique, personal touch which is a key advantage of peer-to-peer platforms. It can even erode guest trust if the descriptions feel inauthentic.
Host: So the goal is collaboration with the AI, not a complete takeover. How did the researchers test this?
Expert: They ran a clever experiment with 244 participants using a simulated Airbnb-like interface. Each person was asked to write a property listing.
Expert: The researchers then changed two key things for different groups. First, the timing. Some people got AI suggestions *before* they started writing, some got them halfway *during*, and others only *after* they had finished their own draft.
Expert: The second factor was interactivity. For some, the AI suggestions popped up automatically. For others, they had to actively click a button to ask the AI for help.
Host: A very controlled environment. So, what did they find? What's the magic formula?
Expert: The clearest finding was about timing. Offering AI suggestions earlier in the writing process significantly increases how much people rely on them.
Host: Why do you think that is?
Expert: The study brings up a concept called "psychological ownership." Once you've spent time and effort writing your own description, you feel attached to it. An AI suggestion that comes in late feels more like an intrusive criticism. But when it comes in at the start, on a blank page, it feels like a helpful starting point.
Host: That makes perfect sense. And what about that second factor, being prompted versus having it appear automatically?
Expert: The results there showed that allowing users to actively prompt the AI for assistance leads to a slightly higher reliance. It wasn't a huge effect, but it points to the importance of user control. When people feel like they're in the driver's seat, they are more receptive to the AI's input.
Host: Fascinating. So, let's get to the most important part for our listeners. Alex, what does this mean for business? What are the practical takeaways?
Expert: There are a few crucial ones. First, if you're integrating a generative AI writing tool, design it to engage users right at the beginning of the task. Don't wait. A "help me write the first draft" button is much more effective than a "let me edit what you've already done" button.
Expert: Second, empower your users. Give them agency. Designing features that allow users to request AI help, rather than just pushing it on them, can foster more trust and better adoption of the tool.
Expert: And finally, a key finding was that when users felt a high cognitive load—meaning they were feeling mentally drained by the task—their reliance on the AI actually went down. So a well-designed tool should be simple, intuitive, and reduce the user's mental effort, not add to it.
Host: So the big lesson is that implementation truly matters. It's not just about having the technology, but about integrating it in a thoughtful, human-centric way.
Expert: Precisely. The goal isn't to replace the user, but to create an effective human-AI collaboration that makes their job easier while preserving the quality and authenticity of the final product.
Host: Fantastic insights. So to recap: for the best results, bring the AI in early, give users control, and focus on true collaboration.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Algorithmic Control in Non-Platform Organizations – Workers' Legitimacy Judgments and the Impact of Individual Character Traits
Felix Hirsch
This study investigates how employees in traditional, non-platform companies perceive algorithmic control (AC) systems that manage their work. Using fuzzy-set Qualitative Comparative Analysis (fsQCA), it specifically examines how a worker's individual competitiveness influences whether they judge these systems as legitimate in terms of fairness, autonomy, and professional development.
Problem
While the use of algorithms to manage workers is expanding from the platform economy to traditional organizations, little is known about why employees react so differently to it. Existing research has focused on organizational factors, largely neglecting how individual personality traits impact workers' acceptance and judgment of these new management systems.
Outcome
- A worker's personality, specifically their competitiveness, is a major factor in how they perceive algorithmic management.
- Competitive workers generally judge algorithmic control positively, particularly in relation to fairness, autonomy, and competence development.
- Non-competitive workers tend to have negative judgments towards algorithmic systems, often rejecting them as unhelpful for their professional growth.
- The findings show a clear distinction: competitive workers see AC as fair, especially rating systems, while non-competitive workers view it as unfair.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re looking at a fascinating shift in the workplace. We all know about algorithms managing gig workers, but what happens when this A.I. boss shows up in a traditional office or warehouse?
Host: We’re diving into a study titled "Algorithmic Control in Non-Platform Organizations – Workers' Legitimacy Judgments and the Impact of Individual Character Traits." It explores how employees in traditional companies perceive these systems and, crucially, how their personality affects whether they see this new form of management as legitimate.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So, Alex, set the scene for us. What's the big problem this study is trying to solve?
Expert: The problem is that as algorithmic management expands beyond the Ubers and Lyfts of the world into logistics, retail, and even professional services, we're seeing very different reactions from employees. Some embrace it, some resist it.
Expert: Businesses are left wondering why a system that boosts productivity in one team causes morale to plummet in another. Most of the focus has been on the technology itself, but this study points out that we've been neglecting a huge piece of the puzzle: the individual worker.
Host: You mean their personality?
Expert: Exactly. The study argues that who the employee is as a person—specifically, how competitive they are—is a critical factor in whether they accept or reject being managed by an algorithm.
Host: That’s a really interesting angle. So how did the researchers actually study this connection?
Expert: They surveyed 92 workers from logistics and warehousing centers, which are prime examples of where these algorithmic systems are already in heavy use.
Expert: They used a sophisticated method that goes beyond simple correlation to identify complex patterns. It essentially allowed them to see which specific combinations of algorithmic control—like monitoring, rating, or recommending tasks—and worker competitiveness lead to a positive judgment on things like fairness and autonomy.
Host: And what were those key findings? Is there a specific type of person who thrives under an A.I. manager?
Expert: There absolutely is. The clearest finding is that a worker’s personality, particularly their competitiveness, is a major predictor of how they perceive algorithmic management.
Host: Let me guess, competitive people love it?
Expert: You've got it. Competitive workers generally judge these systems very positively. They tend to see algorithmic rating systems, like leaderboards, as fair. They feel it gives them more autonomy and helps them develop their skills by providing clear feedback and recommendations for improvement.
Host: And what about their less competitive colleagues?
Expert: It’s the polar opposite. Non-competitive workers tend to have negative judgments. They often reject the systems, especially in relation to their own professional growth. They don't see the algorithm as a helpful coach; they see it as an unfair judge. That same rating system a competitive person finds motivating, they perceive as deeply unfair.
Host: That’s a stark difference. So, Alex, this brings us to the most important question for our listeners. What does this all mean for business leaders? Why does this matter?
Expert: It matters immensely. The biggest takeaway is that there is no 'one-size-fits-all' solution when it comes to algorithmic management. A company can't just buy a piece of software and expect it to work for everyone.
Host: So what should they be doing instead?
Expert: First, they need to think about system design. The study suggests that just as human managers adapt their style to different employees, algorithmic systems need to be designed with that same flexibility.
Expert: For a sales team full of competitive people, a public leaderboard might be fantastic. But for a collaborative, creative team, the system should probably focus more on providing helpful recommendations rather than constant ratings.
Host: That makes sense. Are there any hidden risks leaders should be aware of?
Expert: Yes, a big one. The study warns that if your system only rewards and promotes competitive behavior, you risk creating a self-reinforcing cycle. Non-competitive workers may become disengaged or even leave. Over time, you could unintentionally build a hyper-competitive, high-turnover culture and lose a diversity of thought and work styles.
Host: It sounds like the human manager isn't obsolete just yet.
Expert: Far from it. Their role becomes even more critical. They need to be the bridge between the algorithm and the employee, understanding who needs encouragement and who thrives on the data-driven competition the system provides.
Host: Fantastic insights. Let’s quickly summarize. Algorithmic management is making its way into traditional companies, but its success isn't guaranteed.
Host: Employee acceptance depends heavily on individual personality, especially competitiveness. Competitive workers tend to see these systems as fair and helpful, while non-competitive workers often see them as the opposite.
Host: For businesses, this means ditching the one-size-fits-all approach and designing flexible systems that account for the diverse nature of their workforce.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights. Join us next time as we continue to explore the latest in business and technology.
The Promise and Perils of Low-Code AI Platforms
Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.
Problem
As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.
Outcome
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a very timely topic for any business looking to innovate: the real-world challenges of adopting new technology. We’ll be discussing a fascinating study titled "The Promise and Perils of Low-Code AI Platforms."
Host: This study looks at how four major corporations adopted a low-code conversational AI platform, and it uncovers some crucial, and often incorrect, assumptions that businesses make about these powerful tools. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Businesses are constantly hearing about AI and automation. What’s the core problem that these low-code AI platforms are supposed to solve?
Expert: The problem is a classic one: a gap between ambition and resources. Companies want to automate processes, build chatbots, and leverage AI, but they often lack large teams of specialized AI developers. Low-code platforms are marketed as the perfect solution.
Host: The 'democratization' of AI we hear so much about.
Expert: Exactly. The promise is that you can use a simple, visual, drag-and-drop interface to build complex AI applications, empowering your existing business-focused employees to innovate without needing to write a single line of code. But as the study found, that promise often doesn't match the reality.
Host: So how did the researchers investigate this gap between promise and reality?
Expert: They took a very practical approach. They didn't just survey people; they conducted an in-depth case study. They followed the journey of four large multinational companies—in the energy, automotive, and retail sectors—as they all tried to implement the very same low-code conversational AI platform.
Host: That’s great. So by studying the same platform across different industries, they could really pinpoint the common challenges. What were the main findings?
Expert: The findings centered on three major false assumptions businesses made. The first was about usability. The assumption was that ‘low-code’ meant anyone could do it.
Host: And that wasn't the case?
Expert: Not at all. While the IT staff found it user-friendly, the business-side employees—the ones who were supposed to be empowered—faced a much steeper learning curve than anyone anticipated. One domain expert in the study described the experience as being "like Greek," saying it was far more complex than just "dragging and dropping."
Host: So you still need a foundational level of technical knowledge. What was the second false assumption?
Expert: It was about adaptability. The idea was that you could easily tailor these platforms to any specific business need. But creating applications to handle complex, real-world customer queries proved incredibly challenging and time-consuming.
Host: Why was that?
Expert: Because real business processes are often messy and rely on human intuition. The study found that before companies could automate a process, they first had to invest heavily in understanding and standardizing it. You can't teach an AI a process that isn't clearly defined.
Host: That makes sense. You have to clean your house before you can automate the cleaning. What was the final key finding?
Expert: This one is huge for any CIO: integration. The belief was that these platforms would be a simple 'plug-and-play' solution that could easily connect to existing company databases and systems.
Host: I have a feeling it wasn't that simple.
Expert: Far from it. The companies ran into major roadblocks trying to connect the platform to their legacy systems. They faced incompatible data formats and a lack of a unified data strategy. The study showed that you often need someone with knowledge of coding and APIs to build the bridges between the new platform and the old systems.
Host: So, Alex, this is the crucial part for our listeners. If a business leader is considering a low-code AI tool, what are the key takeaways? What should they do differently?
Expert: The study provides a clear roadmap. First, thoroughly test the platform before you buy it. Don't just watch the vendor's demo. Have your actual employees—the business users—try to build a real-world application with it. This will reveal the true learning curve.
Host: A 'try before you buy' approach. What else?
Expert: Second, success requires cross-functional collaboration. It’s not an IT project or a business project; it's both. The study highlighted that the most successful implementations happened when IT experts and business domain experts worked together in blended teams from day one.
Host: So break down those internal silos.
Expert: Absolutely. And finally, be prepared to change your processes, not just your tools. You can't just layer AI on top of existing workflows. You need to re-evaluate and often redesign your processes to align with the capabilities of the AI. It's as much about business process re-engineering as it is about technology.
Host: This is incredibly insightful. It seems low-code AI platforms are powerful, but they are certainly not a magic bullet.
Host: To sum it up: the promise of simplicity with these platforms often hides significant challenges in usability, adaptation, and integration. Success depends less on the drag-and-drop interface and more on a strategic approach that involves rigorous testing, deep collaboration between teams, and a willingness to rethink your fundamental business processes.
Host: Alex, thank you so much for shedding light on the perils, and the real promise, of these platforms.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning into A.I.S. Insights. We’ll see you next time.
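As a small illustration of the "bridge-building" the study says legacy integration usually requires, here is a hypothetical Python sketch of the kind of glue code involved: normalizing a legacy CSV export into the JSON records a conversational AI platform might ingest. All field names, formats, and the target schema are assumptions for illustration and do not come from the platform studied in the paper.

```python
import csv
import io
import json

# A few rows of a hypothetical legacy CRM export (semicolon-separated, DD.MM.YYYY dates).
LEGACY_EXPORT = """CUST_NO;REQUEST_TEXT;REQ_DATE
10042;Where is my invoice?;03.11.2024
10087;Please update my address;15.11.2024
"""

def legacy_rows_to_platform_payload(raw_csv: str) -> list:
    """Map legacy CSV rows onto the JSON records a low-code conversational AI
    platform might expect (field names on both sides are assumptions)."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv), delimiter=";"):
        records.append({
            "customerId": row["CUST_NO"].strip(),
            "question": row["REQUEST_TEXT"].strip(),
            # Legacy dates are DD.MM.YYYY; convert to ISO 8601 (YYYY-MM-DD).
            "createdAt": "-".join(reversed(row["REQ_DATE"].split("."))),
        })
    return records

print(json.dumps(legacy_rows_to_platform_payload(LEGACY_EXPORT), indent=2))
```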
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
Governing Citizen Development to Address Low-Code Platform Challenges
Altus Viljoen, Marija Radić, Andreas Hein, John Nguyen, Helmut Krcmar
This study investigates how companies can effectively manage 'citizen development'—where employees with minimal technical skills use low-code platforms to build applications. Drawing on 30 interviews with citizen developers and platform experts across two firms, the research provides a practical governance framework to address the unique challenges of this approach.
Problem
Companies face a significant shortage of skilled software developers, leading them to adopt low-code platforms that empower non-IT employees to create applications. However, this trend introduces serious risks, such as poor software quality, unmonitored development ('shadow IT'), and long-term maintenance burdens ('technical debt'), which organizations are often unprepared to manage.
Outcome
- Citizen development introduces three primary risks: substandard software quality, shadow IT, and technical debt.
- Effective governance requires a more nuanced understanding of roles, distinguishing between 'traditional citizen developers' and 'low-code champions,' and three types of technical experts who support them.
- The study proposes three core sets of recommendations for governance: 1) strategically manage project scope and complexity, 2) organize effective collaboration through knowledge bases and proper tools, and 3) implement targeted education and training programs.
- Without strong governance, the benefits of rapid, decentralized development are quickly outweighed by escalating risks and costs.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating area where business and IT are blurring lines: citizen development. We’re looking at a new study titled "Governing Citizen Development to Address Low-Code Platform Challenges".
Host: It investigates how companies can effectively manage employees who, with minimal technical skills, are now building their own applications using what are called low-code platforms. With me to break it all down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let’s start with the big picture. Why are companies turning to their own non-technical employees to build software in the first place? What’s the problem this study is trying to solve?
Expert: The core problem is a massive, ongoing shortage of skilled software developers. Companies have huge backlogs of IT projects, but they can't hire developers fast enough. So, they turn to low-code platforms, which are tools with drag-and-drop interfaces that let almost anyone build a simple application.
Host: That sounds like a perfect solution. Democratize development and get things done faster.
Expert: It sounds perfect, but the study makes it clear that this introduces a whole new set of serious risks that organizations are often unprepared for. They identified three major challenges.
Host: And what are they?
Expert: First is simply substandard software quality. An app built by someone in marketing might look fine, but as the study found, it could be running "slow queries" or be "badly planned," hurting the performance of the entire system.
Expert: Second is the rise of 'shadow IT'. Employees build things on their own without oversight, which can lead to security issues, data protection breaches, or simply chaos. One developer in the study noted they had a role that was "almost as powerful as a normal developer" and could "damage a few things" if they weren't careful.
Expert: And third is technical debt. An employee builds a useful tool, then they leave the company. The study asks, who maintains it? Often, nobody. Or people just keep creating duplicate apps, leading to a messy and expensive digital junkyard.
Host: So, how did the researchers get to the bottom of this? What was their approach?
Expert: They took a very practical, real-world approach. They conducted 30 in-depth interviews across two different firms. One was a company using a low-code platform, and the other was a company that actually provides a low-code platform. This gave them a 360-degree view from both the user and the expert perspective.
Host: It sounds comprehensive. So, after all those conversations, what were the key findings? What's the solution here?
Expert: The biggest finding is that simply having "developers" and "non-developers" is the wrong way to think about it. Effective governance requires a much more nuanced understanding of the roles people play.
Host: What kind of roles did they find?
Expert: They identified two key types of citizen developers. You have your 'traditional citizen developer,' who builds a simple app for their team. But more importantly, they found what they call 'low-code champions.' These are business users who become passionate experts and act as a bridge between their colleagues and IT. They become the "poster children" for the program.
Host: That’s a powerful idea. So it’s about nurturing internal talent, not just letting everyone run wild.
Expert: Exactly. And to support them, the study proposes a clear, three-part governance framework. First, strategically manage project scope. Don’t let citizen developers build highly complex, mission-critical systems. Guide them to appropriate, simpler use cases.
Expert: Second, organize effective collaboration. This means creating a central knowledge base with answers to common questions and using standard collaboration tools so people aren't constantly reinventing the wheel or flooding experts with the same support tickets.
Expert: And third, implement targeted education. This isn't just about teaching them to use the software. It’s about training on best practices, data security, and identifying those enthusiastic employees who can become your next 'low-code champions.'
Host: This is the crucial part for our listeners. What does this all mean for business leaders? What are the key takeaways?
Expert: The first takeaway is this: don't just buy a low-code platform, build a program around it. Governance isn't about restriction; it's about creating the guardrails for success. The study warns that without it, the benefits of speed are "quickly outweighed by escalating risks and costs."
Expert: The second, and I think most important, is to actively identify and empower your 'low-code champions'. These people are your force multipliers. They can handle onboarding, answer basic questions, and promote best practices within their business units, which frees up your IT team to focus on bigger things.
Expert: And finally, start small and be strategic. The goal of citizen development shouldn't be to replace your IT department, but to supplement it. Empowering a sales team to automate its own reporting workflow is a huge win. Asking them to rebuild the company’s CRM is a disaster waiting to happen.
Host: Incredibly clear advice. The promise of empowering your workforce with these tools is real, but it requires a thoughtful strategy to avoid the pitfalls.
Host: To summarize, success with citizen development hinges on a strong governance framework. That means strategically managing what gets built, organizing how people collaborate and get support, and investing in targeted education to create internal champions.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic into such actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights. We'll see you next time.
citizen development, low-code platforms, IT governance, shadow IT, technical debt, software quality, case study