International Conference on Wirtschaftsinformatik (2025)
Generative AI Usage of University Students: Navigating Between Education and Business
Fabian Walke, Veronika Föller
This study investigates how university students who also work professionally use Generative AI (GenAI) in both their academic and business lives. Using a grounded theory approach, the researchers interviewed eleven part-time students from a distance learning university to understand the characteristics, drivers, and challenges of their GenAI usage.
Problem
While much research has explored GenAI in education or in business separately, there is a significant gap in understanding its use at the intersection of these two domains. Specifically, the unique experiences of part-time students who balance professional careers with their studies have been largely overlooked.
Outcome
- GenAI significantly enhances productivity and learning for students balancing work and education, helping with tasks like writing support, idea generation, and summarizing content.
- Students express concerns about the ethical implications, reliability of AI-generated content, and the risk of academic misconduct or being falsely accused of plagiarism.
- A key practical consequence is that GenAI tools like ChatGPT are replacing traditional search engines for many information-seeking tasks due to their speed and directness.
- The study highlights a strong need for universities to provide clear guidelines, regulations, and formal training on using GenAI effectively and ethically.
- User experience is a critical factor; a positive, seamless interaction with a GenAI tool promotes continuous usage, while a poor experience diminishes willingness to use it.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "Generative AI Usage of University Students: Navigating Between Education and Business."
Host: It explores a very specific group: university students who also hold professional jobs. It investigates how they use Generative AI tools like ChatGPT in both their academic and work lives. And here to help us unpack it is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why focus on this particular group of working students? What’s the problem this study is trying to solve?
Expert: Well, there's a lot of research on GenAI in the classroom and a lot on GenAI in the workplace, but very little on the bridge between them.
Expert: These part-time students are a unique group. They are under immense time pressure, juggling deadlines for both their studies and their jobs. The study wanted to understand if GenAI is helping them cope, how they use it, and what challenges they face.
Expert: Essentially, their experience is a sneak peek into the future of a workforce that will be constantly learning and working with AI.
Host: So, how did the researchers get these insights? What was their approach?
Expert: They took a very direct, human-centered approach. Instead of a broad survey, they conducted in-depth, one-on-one interviews with eleven of these working students.
Expert: This allowed them to move beyond simple statistics and really understand the nuances, the strategies, and the genuine concerns people have when using these powerful tools in their day-to-day lives.
Host: That makes sense. So let's get to it. What were the key findings?
Expert: The first major finding, unsurprisingly, is that GenAI is a massive productivity booster for them. They use it for everything from summarizing articles and generating ideas for papers to drafting emails and even debugging code for work. It saves them precious time.
Host: But I imagine it's not all smooth sailing. Were there concerns?
Expert: Absolutely. That was the second key finding. Students are very aware of the risks. They worry about the accuracy of the information, with one participant noting, "You can't blindly trust everything he says."
Expert: There’s also a significant fear around academic integrity. They’re anxious about being falsely accused of plagiarism, especially when university guidelines are unclear. As one student put it, "I think that's a real shame because you use Google or even your parents to correct your work and... that is absolutely allowed."
Host: That’s a powerful point. Did any other user behaviors stand out?
Expert: Yes, and this one is huge. For many information-seeking tasks, GenAI is actively replacing traditional search engines like Google.
Expert: Nearly all the students said they now turn to ChatGPT first. It’s faster. Instead of sifting through pages of links, they get a direct, synthesized answer. One student even said, "Googling is a skill itself," implying it's a skill they need less often now.
Host: That's a fundamental shift. So bringing all these findings together, what's the big takeaway for businesses? Why does this study matter for our listeners?
Expert: It matters immensely, Anna, for several reasons. First, this is your incoming workforce. New graduates and hires will arrive expecting to use AI tools. They'll be looking for companies that don't just permit it, but actively integrate it into workflows to boost efficiency.
Host: So businesses need to be prepared for that. What else?
Expert: Training and guidelines are non-negotiable. This study screams that users need and want direction. Companies can’t afford a free-for-all.
Expert: They need to establish clear policies on what data can be used, how to verify AI-generated content, and how to use it ethically. One student worked at a bank where public GenAI tools were banned due to sensitive customer data. That's a risk every company needs to assess. Proactive training isn't just a nice-to-have; it's essential risk management.
Host: That seems critical, especially with data privacy. Any final takeaway for business leaders?
Expert: Yes: user experience is everything. The study found that a smooth, intuitive, and fast AI tool encourages continuous use, while a clunky interface kills adoption.
Expert: If you're building or buying AI solutions for your team, the quality of the user experience is just as important as the underlying model. If it's not easy to use, your employees simply won't use it.
Host: So, to recap: we have an incoming AI-native workforce, a critical need for clear corporate guidelines and training, and the lesson that user experience will determine success or failure.
Host: Alex, this has been incredibly insightful. Thank you for breaking down this study for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
International Conference on Wirtschaftsinformatik (2025)
The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration
Christian Meske, Tobias Hermanns, Florian Brachten
This study investigates how traditional models of team collaboration, known as Transactive Memory Systems (TMS), manifest when humans work with Generative AI. Through in-depth interviews with 14 knowledge workers, the research analyzes the unique dynamics of expertise recognition, trust, and coordination that emerge in these partnerships.
Problem
While Generative AI is increasingly used as a collaborative tool, our understanding of teamwork is based on human-to-human interaction. This creates a knowledge gap, as the established theories do not account for an AI partner that operates on algorithms rather than social cues, potentially leading to inefficient and frustrating collaborations.
Outcome
- Human-AI collaboration is asymmetrical: Humans learn the AI's capabilities, but the AI fails to recognize and remember human expertise beyond a single conversation.
- Trust in GenAI is ambivalent and requires verification: Users simultaneously see the AI as an expert yet doubt its reliability, forcing them to constantly verify its outputs, a step not typically taken with trusted human colleagues.
- Teamwork is hierarchical, not mutual: Humans must always take the lead and direct a passive AI that lacks initiative, creating a 'boss-employee' dynamic rather than a reciprocal partnership where both parties contribute ideas.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled, "The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration."
Host: In simple terms, it explores how our traditional ideas of teamwork hold up when one of our teammates is a Generative AI. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Glad to be here, Anna.
Host: Alex, we see Generative AI being adopted everywhere. What's the core problem this study is trying to solve for businesses?
Expert: The problem is that our understanding of effective teamwork is based entirely on how humans interact. We build trust, learn who's good at what, and coordinate tasks based on social cues. This is what researchers call a Transactive Memory System—a shared understanding of 'who knows what'.
Expert: But GenAI doesn't operate on social cues. It runs on algorithms. So, when we insert it into a team, the established rules of collaboration can break down, leading to frustration and inefficiency. This study investigates that breakdown.
Host: So how did the researchers get inside this new dynamic? Did they run simulations?
Expert: Not at all, they went straight to the source. They conducted in-depth interviews with 14 professionals—people in fields from computer science to psychology—who use GenAI in their daily work. They wanted to understand the real-world experience of collaborating with these tools on complex tasks.
Host: Let's get to it then. What was the first major finding from those conversations?
Expert: The first key finding is that the collaboration is completely asymmetrical. The human user spends significant time learning the AI's capabilities, its strengths, and its quirks. But the AI learns almost nothing about the human's expertise beyond the immediate conversation.
Expert: As one participant put it, "As soon as I go to a different chat, it's lost again. I have to start from the beginning again. So it's always like a restart." It’s like working with a colleague who has severe short-term memory loss.
Host: That sounds incredibly inefficient. This must have a huge impact on trust, which is vital for any team.
Expert: It absolutely does, and that's the second major finding: trust in GenAI is ambivalent. Users see the AI as a powerful expert, yet they deeply doubt its reliability.
Expert: This creates a paradox. With a trusted human colleague, especially a senior one, you generally accept their output. But with GenAI, users feel forced to constantly verify its work, especially for factual information. One person said the AI is "very reliable at spreading fake news."
Host: So we learn about the AI, but it doesn't learn about us. And we have to double-check all its work. How does that change the actual dynamic of getting things done?
Expert: It creates a strict hierarchy, which was the third key finding. Instead of a partnership, it becomes a 'boss-employee' relationship. The human must always be the initiator, giving commands to a passive AI that waits for instructions.
Expert: The study found that GenAI rarely challenges our thinking or pushes a conversation in a new direction. It just executes tasks. This is the opposite of a proactive human teammate who might say, "Have we considered this alternative approach?"
Host: This paints a very different picture from the seamless AI partner we often hear about. For the business leaders listening, what are the crucial takeaways? Why does this matter?
Expert: It matters immensely. First, businesses need to manage expectations. GenAI, in its current form, is not a strategic partner. It’s a powerful, but deeply flawed, assistant. We should structure workflows around it being a high-level tool, not an autonomous teammate.
Host: So, treat it more like a sophisticated piece of software than a new hire.
Expert: Exactly. Second, the need for verification is not a bug; it's a feature of working with current GenAI. Businesses must build mandatory human oversight and verification steps into any process that uses AI-generated content. Assuming the output is correct is a recipe for disaster.
Host: And looking forward?
Expert: The study gives us a clear roadmap for what's needed. For AI to become a true collaborator, it needs a persistent memory of its human counterpart's skills and context. It needs to be more proactive. So, when businesses are evaluating new AI tools, they should be asking: "Does this system just follow commands, or does it actually help me think better?"
Host: Let's do a quick recap. The human-AI partnership today is asymmetrical, requires constant verification, and functions as a top-down hierarchy.
Host: The key for businesses is to manage AI as a powerful tool, not a true colleague, by building in the right checks and balances until the technology evolves.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.
International Conference on Wirtschaftsinformatik (2025)
LLMs for Intelligent Automation - Insights from a Systematic Literature Review
David Sonnabend, Mahei Manhai Li and Christoph Peters
This study conducts a systematic literature review to examine how Large Language Models (LLMs) can enhance Intelligent Automation (IA). The research aims to overcome the limitations of traditional Robotic Process Automation (RPA), such as handling unstructured data and workflow changes, by systematically investigating the integration of LLMs.
Problem
Traditional Robotic Process Automation (RPA) struggles with complex tasks involving unstructured data and dynamic workflows. While Large Language Models (LLMs) show promise in addressing these issues, there has been no systematic investigation into how they can specifically advance the field of Intelligent Automation (IA), creating a significant research gap.
Outcome
- LLMs are primarily used to process complex inputs, such as unstructured text, within automation workflows.
- They are leveraged to generate automation workflows directly from natural language commands, simplifying the creation process.
- LLMs are also used to guide goal-oriented Graphical User Interface (GUI) navigation, making automation more adaptable to interface changes.
- A key research gap was identified in the lack of systems that combine these different capabilities and enable continuous learning at runtime.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into the world of Intelligent Automation. We're looking at a fascinating new study titled "LLMs for Intelligent Automation - Insights from a Systematic Literature Review."
Host: It explores how Large Language Models, or LLMs, can supercharge business automation and overcome the limitations of older technologies. Here to help us unpack it all is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. Automation isn't new. Many companies use something called Robotic Process Automation, or RPA. What’s the problem with it that this study is trying to address?
Expert: That's the perfect place to start. Traditional RPA is fantastic for simple, repetitive, rule-based tasks. Think copying data from one spreadsheet to another. But the study points out its major weaknesses. It struggles with anything unstructured, like reading the text of an email or understanding a scanned invoice that isn't perfectly formatted.
Host: So it’s brittle? If something changes, it breaks?
Expert: Exactly. If a button on a website moves, or the layout of a form changes, the RPA bot often fails. This makes them high-maintenance. The study highlights that despite being promoted as 'low-code', these systems often need highly skilled, and expensive, developers to build and maintain them.
Host: Which creates a bottleneck. So, how did the researchers investigate how LLMs can solve this? What was their approach?
Expert: They conducted a systematic literature review. Essentially, they did a deep scan of all the relevant academic research published since 2022, which is really when models like ChatGPT made LLMs a practical tool for businesses. They started with over two thousand studies and narrowed it down to the 19 most significant ones to get a clear, consolidated view of the state of the art.
Host: And what did that review find? What are the key ways LLMs are being used to create smarter automation today?
Expert: The study organized the findings into three main categories. First, LLMs are being used to process complex, unstructured inputs. This is a game-changer. Instead of needing perfectly structured data, an LLM-powered system can read an email, understand its intent and attachments, and take the right action.
Host: Can you give me a real-world example?
Expert: The study found several, from analyzing medical records to generate treatment recommendations, to digitizing handwritten immigration forms. These are tasks that involve nuance and interpretation that would completely stump a traditional RPA bot.
Host: That’s a huge leap. What was the second key finding?
Expert: The second role is using LLMs to *build* the automation workflows themselves. Instead of a developer spending hours designing a process, a business manager can simply describe what they need in plain English. For example, "When a new order comes in via email, extract the product name and quantity, update the inventory system, and send a confirmation to the customer."
Host: So you’re automating the creation of automation. That must dramatically speed things up.
Expert: It does, and it also lowers the technical barrier. Suddenly, the people who actually understand the business process can be the ones to create the automation for it. The third key finding is all about adaptability.
Host: This goes back to that problem of bots breaking when a website changes?
Expert: Precisely. The study highlights new approaches where LLMs are used to guide navigation in graphical user interfaces, or GUIs. They can understand the screen visually, like a person does. They look for the "submit button" based on its label and context, not its exact coordinates on the screen. This makes the automation far more robust and resilient to software updates.
Host: It sounds like LLMs are solving all of RPA's biggest problems. Did the review find any gaps or areas that are still underdeveloped?
Expert: It did, and it's a critical point. The researchers found a significant gap in systems that can learn and improve over time from feedback. Most current systems are static. More importantly, very few tools combine all three of these capabilities—understanding complex data, building workflows, and adapting to interfaces—into a single, unified platform.
Host: This is the most important part for our listeners. Alex, what does this all mean for business? What are the practical takeaways for a manager or executive?
Expert: There are three big ones. First, the scope of what you can automate has just exploded. Processes that always needed a human in the loop because they involved unstructured data or complex decision-making are now prime candidates for automation. Businesses should be re-evaluating their core processes.
Host: So, think bigger than just data entry.
Expert: Exactly. The second takeaway is agility. Because you can now create workflows with natural language, you can deploy automations faster and empower your non-technical staff to build their own solutions, which frees up your IT department to focus on more strategic work.
Host: And the third?
Expert: A lower total cost of ownership. By building more resilient bots that don't break every time an application is updated, you drastically reduce ongoing maintenance costs, which has always been a major hidden cost of traditional RPA.
Host: It sounds incredibly promising.
Expert: It is. But the study also offers a word of caution. It's still early days, and human oversight is crucial. The key is to see this not as replacing humans, but as building powerful tools that augment your team's capabilities, allowing them to offload repetitive work and focus on what matters most.
Host: So to summarize: Large Language Models are making business automation smarter, easier to build, and far more robust. The technology can now handle complex data and adapt to a changing environment, opening up new possibilities for efficiency.
Host: Alex, thank you so much for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
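As a concrete illustration of the first capability discussed in the transcript, using an LLM to turn an unstructured order email into structured fields that a downstream automation can act on, here is a minimal sketch. It is not code from any of the reviewed systems; it assumes the OpenAI Python client, and the model name, prompt wording, and field names are placeholder choices.

```python
# Minimal sketch of the "process unstructured inputs" pattern: an LLM turns a
# free-text order email into structured fields for a downstream automation.
# Not from the reviewed systems; assumes the OpenAI Python client, and the
# model name, prompt, and field names are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

EMAIL = (
    "Hi team, please send 3 units of the X200 sensor to our Hamburg plant "
    "by Friday. Invoice it to the usual cost center. Thanks, J. Meyer"
)

prompt = (
    "Extract the order details from the email below and reply with JSON only, "
    "using exactly these keys: product, quantity, destination, deadline.\n\n" + EMAIL
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# A production workflow would validate this output before acting on it.
order = json.loads(response.choices[0].message.content)
print(order)  # e.g. {"product": "X200 sensor", "quantity": 3, ...}

# From here, conventional automation takes over: update inventory, send confirmation.
```

In the study's terms, the LLM handles only the interpretation step that traditional RPA cannot, while the deterministic parts of the workflow remain conventional automation.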
Large Language Models (LLMs), Intelligent Process Automation (IPA), Intelligent Automation (IA), Cognitive Automation (CA), Tool Learning, Systematic Literature Review, Robotic Process Automation (RPA)
International Conference on Wirtschaftsinformatik (2025)
Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace
Dugaxhin Xhigoli
This qualitative study examines how an employee's personality, professional identity, and company culture influence their engagement with generative AI (GenAI). Through 23 expert interviews, the research explores the underlying factors that shape different AI adoption behaviors, from transparent integration to strategic concealment.
Problem
As companies rapidly adopt generative AI, they encounter a wide range of employee responses, yet there is limited understanding of what drives this variation. This study addresses the research gap by investigating why employees differ in their AI usage, specifically focusing on how individual psychology and the organizational environment interact to shape these behaviors.
Outcome
- The study identified four key dimensions influencing GenAI adoption: Personality-driven usage behavior, AI-driven changes to professional identity, organizational culture factors, and the organizational risks of unmanaged AI use.
- Four distinct employee archetypes were identified: 'Innovative Pioneers' who openly use and identify with AI, 'Hidden Users' who identify with AI but conceal its use for competitive advantage, 'Transparent Users' who openly use AI as a tool, and 'Critical Skeptics' who remain cautious and avoid it.
- Personality traits, particularly those from the 'Dark Triad' like narcissism, and competitive work environments significantly drive the strategic concealment of AI use.
- A company's culture is critical; open, innovative cultures foster ethical and transparent AI adoption, whereas rigid, hierarchical cultures encourage concealment and the rise of risky 'Shadow AI'.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study that looks beyond the technology of generative AI and focuses on the people using it.
Host: The study is titled, "Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace." It examines how an employee's personality, their professional identity, and the company culture they work in all shape how they engage with tools like ChatGPT. With me to break it all down is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. Companies everywhere are racing to integrate generative AI. What’s the core problem this study is trying to solve?
Expert: The problem is that as companies roll out these powerful tools, they're seeing a huge range of reactions from employees. Some are jumping in headfirst, while others are hiding their usage, and some are pushing back entirely. Until now, there hasn't been much understanding of *why* this variation exists.
Host: So it's about the human element behind the technology. How did the researchers investigate this?
Expert: They took a qualitative approach. Instead of a broad survey, they conducted in-depth interviews with 23 experts from diverse fields like AI startups, consulting, and finance. This allowed them to get past surface-level answers and really understand the nuanced motivations and behaviors at play.
Host: And what were the key findings from these conversations? What did they uncover?
Expert: The study identified four key dimensions, but the most compelling finding was the identification of four distinct employee archetypes when it comes to using GenAI. It’s a really practical way to think about the workforce.
Host: Four archetypes. That’s fascinating. Can you walk us through them?
Expert: Absolutely. First, you have the 'Innovative Pioneers'. These are employees who strongly identify with AI and are open about using it. They see it as a core part of their work and a driver of innovation.
Host: Okay, so they're the champions. Who's next?
Expert: Next are the 'Transparent Users'. They also openly use AI, but they see it purely as a tool. It helps them do their job, but it's not part of their professional identity. They don’t see it as a transformative part of who they are at work.
Host: That makes sense. A practical approach. What about the other two? They sound a bit more complex.
Expert: They are. Then we have the 'Critical Skeptics'. These are the employees who remain cautious. They don't identify with AI, and they generally avoid using it, often due to ethical concerns or a belief in traditional methods.
Host: And the last one?
Expert: This is the one that poses the biggest challenge for organizations: the 'Hidden Users'. These employees identify strongly with AI and use it frequently, but they conceal their usage. They might do this to maintain a competitive edge over colleagues or to make their own output seem more impressive than it is.
Host: Hiding AI use seems risky. The study must have looked into what drives that kind of behavior.
Expert: It did. The findings suggest that certain personality traits, sometimes referred to as the 'Dark Triad'—like narcissism or Machiavellianism—are strong drivers of this concealment. But it's not just personality. The organizational culture is critical. In highly competitive or rigid, top-down cultures, employees are much more likely to hide their AI use to avoid scrutiny.
Host: This is the crucial part for our audience. What does this all mean for business leaders? Why does it matter if you have a 'Hidden User' versus an 'Innovative Pioneer'?
Expert: It matters immensely. The biggest takeaway is that you can’t have a one-size-fits-all AI strategy. Leaders need to recognize these different archetypes exist in their teams and tailor their training and policies accordingly.
Host: So, understanding your people is step one. What’s the next practical step?
Expert: The next step is to actively shape your culture. The study clearly shows that open, innovative cultures encourage transparent and ethical AI use. In contrast, hierarchical, risk-averse cultures unintentionally create what's known as 'Shadow AI'—where employees use unapproved AI tools in secret. This opens the company up to huge risks, from data breaches to compliance violations.
Host: So the business imperative is to build a culture of transparency?
Expert: Exactly. Leaders need to create psychological safety where employees can experiment, ask questions, and even fail with AI without fear. This involves setting clear ethical guidelines, providing ongoing training, and fostering open dialogue. If you don't, you're not managing your company's AI adoption; your employees are, in secret.
Host: A powerful insight. So to summarize, successfully integrating generative AI is less about the technology itself and more about understanding the complex interplay of personality, identity, and, most importantly, organizational culture.
Host: Leaders need to be aware of the four archetypes—Pioneers, Transparent Users, Skeptics, and Hidden Users—and build an open culture to encourage ethical use and avoid the significant risks of 'Shadow AI'.
Host: Alex, thank you for making this complex topic so clear and actionable for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
Generative AI, Personality Traits, AI Identity, Organizational Culture, AI Adoption
International Conference on Wirtschaftsinformatik (2025)
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes
Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.
Problem
While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It's unclear if AI assistance is more impactful at the beginning or end of the writing process, or if users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.
Outcome
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into the world of e-commerce and artificial intelligence, looking at a fascinating new study titled: "The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes".
Host: That’s a mouthful, so we have our analyst, Alex Ian Sutherland, here to break it down for us. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, in simple terms, what is this study all about?
Expert: It’s about finding the best way for platforms like Airbnb to use Generative AI to help hosts write their property descriptions. The researchers wanted to know if it matters *when* the AI offers help, and *how* it offers that help—for example, automatically or only when the user asks for it.
Host: And that's a real challenge for these companies, isn't it? They have this powerful AI technology, but they don't necessarily know the most effective way to deploy it.
Expert: Exactly. The core problem is this: if you're a host on a rental platform, a great listing description is crucial. It can be the difference between getting a booking or not. AI can help, but if it's implemented poorly, it can backfire.
Host: How so?
Expert: Well, the study points out that if a platform fully automates the writing process, it risks creating generic, homogenized content. All the listings start to sound the same, losing that unique, personal touch which is a key advantage of peer-to-peer platforms. It can even erode guest trust if the descriptions feel inauthentic.
Host: So the goal is collaboration with the AI, not a complete takeover. How did the researchers test this?
Expert: They ran a clever experiment with 244 participants using a simulated Airbnb-like interface. Each person was asked to write a property listing.
Expert: The researchers then changed two key things for different groups. First, the timing. Some people got AI suggestions *before* they started writing, some got them halfway *during*, and others only *after* they had finished their own draft.
Expert: The second factor was interactivity. For some, the AI suggestions popped up automatically. For others, they had to actively click a button to ask the AI for help.
Host: A very controlled environment. So, what did they find? What's the magic formula?
Expert: The clearest finding was about timing. Offering AI suggestions earlier in the writing process significantly increases how much people rely on them.
Host: Why do you think that is?
Expert: The study brings up a concept called "psychological ownership." Once you've spent time and effort writing your own description, you feel attached to it. An AI suggestion that comes in late feels more like an intrusive criticism. But when it comes in at the start, on a blank page, it feels like a helpful starting point.
Host: That makes perfect sense. And what about that second factor, being prompted versus having it appear automatically?
Expert: The results there showed that allowing users to actively prompt the AI for assistance leads to a slightly higher reliance. It wasn't a huge effect, but it points to the importance of user control. When people feel like they're in the driver's seat, they are more receptive to the AI's input.
Host: Fascinating. So, let's get to the most important part for our listeners. Alex, what does this mean for business? What are the practical takeaways?
Expert: There are a few crucial ones. First, if you're integrating a generative AI writing tool, design it to engage users right at the beginning of the task. Don't wait. A "help me write the first draft" button is much more effective than a "let me edit what you've already done" button.
Expert: Second, empower your users. Give them agency. Designing features that allow users to request AI help, rather than just pushing it on them, can foster more trust and better adoption of the tool.
Expert: And finally, a key finding was that when users felt a high cognitive load—meaning they were feeling mentally drained by the task—their reliance on the AI actually went down. So a well-designed tool should be simple, intuitive, and reduce the user's mental effort, not add to it.
Host: So the big lesson is that implementation truly matters. It's not just about having the technology, but about integrating it in a thoughtful, human-centric way.
Expert: Precisely. The goal isn't to replace the user, but to create an effective human-AI collaboration that makes their job easier while preserving the quality and authenticity of the final product.
Host: Fantastic insights. So to recap: for the best results, bring the AI in early, give users control, and focus on true collaboration.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
International Conference on Wirtschaftsinformatik (2025)
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments
Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.
Problem
Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context—such as the user's expertise, the AI's design, and the nature of the task—which is crucial for explaining these inconsistencies.
Outcome
- The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring how to build better, more effective partnerships between people and artificial intelligence in the workplace.
Host: We're diving into a fascinating study titled "A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments."
Host: In short, it analyzes dozens of research studies to create one unified guide for understanding the complex relationship between humans and the AI tools they use for decision-making.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Businesses are adopting AI everywhere, but the results are sometimes mixed. What’s the core problem this study tackles?
Expert: The problem is all about trust, or more specifically, the *miscalibration* of trust. In business, we see people either trusting AI too much—what we call overreliance—or trusting it too little, which is underreliance.
Host: And both of those can be dangerous, right?
Expert: Exactly. If you over-rely on AI, you might follow flawed advice without question, leading to costly errors. If you under-rely, you might ignore perfectly good, data-driven insights and miss huge opportunities.
Host: So why has this been so hard to get right?
Expert: Because, as the study argues, previous research has often ignored the single most important element: context. It’s not just about whether an AI is "good" or not. It's about who is using it, for what purpose, and under what conditions. Without that context, the findings were all over the map.
Host: So, how did the researchers build a more complete picture? What was their approach?
Expert: They conducted a massive systematic review. They synthesized the findings from 59 different empirical studies on this topic. By looking at all this data together, they were able to identify the patterns and core factors that consistently appeared across different scenarios.
Host: And what were those key patterns? What did they find?
Expert: They developed a comprehensive framework that boils it all down to three critical categories of factors that influence our trust in AI.
Host: What are they?
Expert: First, there are Human-related factors. Second, AI-related factors. And third, Decision-related factors. Trust is formed by the interplay of these three.
Host: Can you give us a quick example of each?
Expert: Of course. A human-related factor is user expertise. An experienced doctor interacting with a diagnostic AI will trust it differently than a medical student will.
Host: Okay, that makes sense. What about an AI-related factor?
Expert: That could be the AI’s explainability. Can the AI explain *why* it made a certain recommendation? A "black box" AI that just gives an answer with no reasoning is much harder to trust than one that shows its work.
Host: And finally, a decision-related factor?
Expert: Think about risk. You're going to rely on an AI very differently if it's recommending a movie versus advising on a multi-million dollar corporate merger. The stakes of the decision itself are a huge piece of the puzzle.
Host: This framework sounds incredibly useful for researchers. But let's bring it into the boardroom. Why does this matter for business leaders?
Expert: It matters immensely because it provides a practical roadmap for deploying AI successfully. The biggest takeaway is that a one-size-fits-all approach to AI will fail.
Host: So what should a business leader do instead?
Expert: They can use this framework as a guide. When implementing a new AI system, ask these three questions. One: Who are our users? What is their expertise and what are their biases? That's the human factor.
Expert: Two: Is our AI transparent? Does it perform reliably, and can we explain its outputs? That's the AI factor.
Expert: And three: What specific, high-stakes decisions will this AI support? That's the decision factor.
Expert: Answering these questions helps you design a system that encourages the *right* level of trust, avoiding those costly mistakes of over- or under-reliance. You get better collaboration and, ultimately, better, more accurate decisions.
Host: So, to wrap it up, trust in AI isn't just a vague feeling. It’s a dynamic outcome based on the specific context of the user, the tool, and the task.
Host: To get the most value from AI, businesses need to think critically about that entire ecosystem, not just the technology itself.
Host: Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We'll see you next time.
International Conference on Wirtschaftsinformatik (2025)
Generative AI in Business Process Optimization: A Maturity Analysis of Business Applications
Ralf Mengele
This study analyzes the current state of Generative AI (GAI) in the business world by systematically reviewing scientific literature. It identifies where GAI applications have been explored or implemented across the value chain and evaluates the maturity of these use cases. The goal is to provide managers and researchers with a clear overview of which business areas can already benefit from GAI and which require further development.
Problem
While Generative AI holds enormous potential for companies, its recent emergence means it is often unclear where the technology can be most effectively applied. Businesses lack a comprehensive, systematic overview that evaluates the maturity of GAI use cases across different business processes, making it difficult to prioritize investment and adoption.
Outcome
- The most mature and well-researched applications of Generative AI are in product development and in maintenance and repair within the manufacturing sector.
- The manufacturing segment as a whole exhibits the most mature GAI use cases compared to other parts of the business value chain.
- Technical domains show a higher level of GAI maturity and successful implementation than process areas dominated by interpersonal interactions, such as marketing and sales.
- GAI models like Generative Adversarial Networks (GANs) are particularly mature, proving highly effective for tasks like generating synthetic data for early damage detection in machinery.
- Research into GAI is still in its early stages for many business areas, with fields like marketing, sales, and human resources showing low implementation and maturity.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new analysis titled "Generative AI in Business Process Optimization: A Maturity Analysis of Business Applications."
Host: With us is our expert analyst, Alex Ian Sutherland. Alex, this study aims to give managers a clear overview of which business areas can already benefit from Generative AI and which still need more work. Is that right?
Expert: That's exactly it, Anna. It’s about cutting through the hype and creating a strategic roadmap for GAI adoption.
Host: Great. Let's start with the big problem. We hear constantly about the enormous potential of Generative AI, but for many business leaders, it's a black box. Where do you even begin?
Expert: That's the core issue the study addresses. The technology is so new that companies struggle to see where it can be most effectively applied. They lack a systematic overview that evaluates how mature the GAI solutions are for different business processes.
Host: So they don't know whether to invest in GAI for marketing, for manufacturing, or somewhere else entirely.
Expert: Precisely. Without that clarity, it's incredibly difficult to prioritize investment and adoption. Businesses risk either missing out or investing in applications that just aren't ready yet.
Host: So how did the researchers tackle this? What was their approach?
Expert: They conducted a systematic literature review. In simple terms, they analyzed 64 different scientific publications to see where GAI has been proposed or, more importantly, actually implemented in the business world.
Expert: They then categorized every application they found based on two things: which part of the business it fell into—like manufacturing or sales—and its level of maturity, from just a proposal to a fully successful implementation.
Host: It sounds like they created a map of the current GAI landscape. So, after all that analysis, what were the key findings? Where is GAI actually working today?
Expert: The results were very clear. The most mature and well-researched applications of Generative AI are overwhelmingly found in one sector: manufacturing.
Host: Manufacturing? That’s interesting. Not marketing or customer service?
Expert: Not yet. Within manufacturing, two areas stood out: product development and maintenance and repair. These technical domains show a much higher level of GAI maturity than areas that rely more on interpersonal interactions.
Host: Why is that? What makes manufacturing so different?
Expert: A few things. Technical fields are often more data-rich, which is the fuel for any AI. Also, the study suggests employees in these domains are more accustomed to adopting new technologies as part of their job.
Expert: There’s also the maturity of specific GAI models. For example, Generative Adversarial Networks, or GANs, have been around since 2014, and they are proving incredibly effective.
Host: Can you give us an example?
Expert: A fantastic one from the study is in predictive maintenance. It's hard to train an AI to detect machine failures because, hopefully, failures are rare, so you don't have much data.
Expert: But you can use a GAN to generate vast amounts of realistic, synthetic data of what a machine failure looks like. You then use that data to train another AI model to detect the real thing. It’s a powerful and proven application that's saving companies significant money.
Host: That’s a brilliant real-world application. So, Alex, this brings us to the most important question for our listeners: why does this matter for their business? What are the key takeaways?
Expert: The first takeaway is for leaders in manufacturing or other technical industries. The message is clear: GAI is ready for you. You should be actively looking at mature applications in product design, process optimization, and predictive maintenance. The technology is proven.
Host: And what about for those in other areas, like marketing or H.R., where the study found lower maturity?
Expert: For them, the takeaway is different. It’s not about ignoring GAI, but understanding that you're in an earlier phase. This is the time for experimentation and pilot projects, not for expecting a mature, off-the-shelf solution. The study identifies these areas as promising, but they need more research.
Host: So it helps businesses manage their expectations and their strategy.
Expert: Exactly. This analysis provides a data-driven roadmap. It shows you where the proven wins are today and where you should be watching for the breakthroughs of tomorrow. It helps you invest with confidence.
Host: Fantastic. So, to summarize: a comprehensive study on Generative AI's business use cases reveals that the technology is most mature in manufacturing, particularly for product development and maintenance.
Host: Technical, data-heavy domains are leading the way, while areas like marketing and sales are still in their early stages. For business leaders, this provides a clear guide on where to invest now and where to experiment for the future.
Host: Alex, thank you for breaking that down for us. It’s incredibly valuable insight.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights. We'll see you next time.
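To make the GAN-based synthetic-data idea from the conversation more tangible, here is a minimal, self-contained sketch. It is not taken from the reviewed studies; it assumes PyTorch is available, and the 1-D "sensor" signal, network sizes, and training settings are illustrative assumptions only.

```python
# Minimal sketch: use a GAN to synthesize rare machine-failure signatures so a
# downstream detector has enough positive examples to train on.
# Not from the reviewed studies; data shapes and architecture are illustrative.
import torch
import torch.nn as nn

LATENT_DIM, SIGNAL_LEN, BATCH = 16, 64, 32

def real_failure_batch(n: int) -> torch.Tensor:
    """Stand-in for scarce real failure recordings: a decaying oscillation plus noise."""
    t = torch.linspace(0, 1, SIGNAL_LEN)
    fault = torch.sin(40 * t) * torch.exp(-5 * t)
    return fault + 0.1 * torch.randn(n, SIGNAL_LEN)

generator = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, SIGNAL_LEN))
discriminator = nn.Sequential(nn.Linear(SIGNAL_LEN, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(2000):
    real = real_failure_batch(BATCH)
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Discriminator learns to separate real failure signatures from generated ones.
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to produce signatures the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Synthetic failure examples to augment the training set of a separate detection model.
synthetic_failures = generator(torch.randn(1000, LATENT_DIM)).detach()
print(synthetic_failures.shape)  # torch.Size([1000, 64])
```

The generated signatures would then augment the handful of real failure examples used to train a separate detection model, which is the augmentation pattern the expert describes.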
Generative AI, Business Processes, Optimization, Maturity Analysis, Literature Review, Manufacturing
MIS Quarterly Executive (2023)
How Siemens Democratized Artificial Intelligence
Benjamin van Giffen, Helmuth Ludwig
This paper presents an in-depth case study on how the global technology company Siemens successfully moved artificial intelligence (AI) projects from pilot stages to full-scale, value-generating applications. The study analyzes Siemens' journey through three evolutionary stages, focusing on the concept of 'AI democratization', which involves integrating the unique skills of domain experts, data scientists, and IT professionals. The findings provide a framework for how other organizations can build the necessary capabilities to adopt and scale AI technologies effectively.
Problem
Many companies invest in artificial intelligence but struggle to progress beyond small-scale prototypes and pilot projects. This failure to scale prevents them from realizing the full business value of AI. The core problem is the difficulty in making modern AI technologies broadly accessible to employees, which is necessary to identify, develop, and implement valuable applications across the organization.
Outcome
- Siemens successfully scaled AI by evolving through three stages: 1) Tactical AI pilots, 2) Strategic AI enablement, and 3) AI democratization for business transformation.
- Democratizing AI, defined as the collaborative integration of domain experts, data scientists, and IT professionals, is crucial for overcoming key adoption challenges such as defining AI tasks, managing data, accepting probabilistic outcomes, and addressing 'black-box' fears.
- Key initiatives that enabled this transformation included establishing a central AI Lab to foster co-creation, an AI Academy for upskilling employees, and developing a global AI platform to support scaling.
- This approach allowed Siemens to transform manufacturing processes with predictive quality control and create innovative healthcare products like the AI-Rad Companion.
- The study concludes that democratizing AI creates value by rooting AI exploration in deep domain knowledge and reduces costs by creating scalable infrastructures and processes.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge where we break down complex research into actionable business strategy. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study titled "How Siemens Democratized Artificial Intelligence." It’s an in-depth look at how a global giant like Siemens successfully moved AI projects from small pilots to full-scale, value-generating applications.
Host: With me is our analyst, Alex Ian Sutherland. Alex, great to have you.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. We hear a lot about companies investing in AI, but the study suggests many are hitting a wall. What's the core problem they're facing?
Expert: That's right. The problem is often called 'pilot purgatory'. Companies get excited, they run a few small-scale AI prototypes, and they work. But then, they get stuck. They fail to scale these projects across the organization, which means they never see the real business value.
Host: Why is scaling so hard? What’s the roadblock?
Expert: The study identifies a few key challenges. First, defining the right tasks for AI. This requires deep business knowledge. Second, dealing with data—you need massive amounts for training, and it has to be the *right* data.
Expert: And perhaps the biggest hurdles are cultural. AI systems give probabilistic answers—'maybe' or 'likely'—not the black-and-white answers traditional software provides. That requires a shift in mindset. Plus, there’s the 'black-box' fear: if you don’t understand how the AI works, how can you trust it?
Host: That makes sense. It's as much a people problem as a technology problem. So how did the researchers in this study figure out how Siemens cracked this code?
Expert: They conducted an in-depth case study, looking at Siemens' journey over several years. They interviewed key leaders and practitioners across different divisions, from healthcare to manufacturing, to build a comprehensive picture of their transformation.
Host: And what did they find? What was the secret sauce for Siemens?
Expert: The key finding is that Siemens succeeded by intentionally evolving through three distinct stages. They didn't just jump into the deep end.
Host: Can you walk us through those stages?
Expert: Of course. Stage one, before 2016, was called "Let a thousand flowers bloom." It was very tactical. Lots of small, isolated AI pilot projects were happening, but they weren't connected to a larger strategy.
Expert: Then came stage two, "Strategic AI Enablement." This is when senior leadership got serious, communicating that AI was critical for the company's future. They created an AI Lab to bring business experts and data scientists together to co-create solutions.
Host: And the final stage?
Expert: The third and current stage is "AI Democratization for Business Transformation." This is the real game-changer. The goal is to make AI accessible and usable for everyone, not just a small group of specialists.
Host: The study uses that term a lot—'AI Democratization'. Can you break down what that means in practice?
Expert: It’s not about giving everyone coding tools. It’s about creating a collaborative structure that integrates the unique skills of three specific groups: the domain experts—these are your engineers, doctors, or factory managers who know the business problems inside and out.
Expert: Then you have the data scientists, who build the models. And finally, the IT professionals, who build the platforms and infrastructure to scale the solutions securely. Democratization is the process of making these three groups work together seamlessly.
Host: This sounds great in theory. So, why does this matter for businesses listening right now? What is the practical takeaway?
Expert: This is the most crucial part. The study frames the business impact in two ways: driving value and reducing cost.
Expert: First, on the value side, democratization roots AI in deep domain knowledge. The study highlights a case at a Siemens factory where they initially just gave data scientists a huge amount of production data and said, "find the golden nugget." It didn't work.
Host: Why not?
Expert: Because the data scientists didn't have the context. It was only when they teamed up with the process engineers—the domain experts—that they could identify the most valuable problems to solve, like predicting quality control bottlenecks. Value comes from solving real problems, and your business experts are the ones who know those problems best.
Host: Okay, so involving business experts drives value. What about the cost side?
Expert: Democratization lowers the long-term cost of AI. By creating centralized resources—like an AI Academy to upskill employees and a global AI platform—you create a scalable foundation. Instead of every department reinventing the wheel for each new project, you have shared tools, shared knowledge, and a common infrastructure. This makes deploying new AI applications faster and much more cost-efficient.
Host: So it's about building a sustainable, company-wide capability, not just a collection of one-off projects.
Expert: Exactly. That's how you escape pilot purgatory and start generating real, transformative value.
Host: Fantastic. So, to sum it up for our listeners: the promise of AI isn't just about hiring brilliant data scientists. According to this study, the key to unlocking its real value is 'democratization'.
Host: This means moving through stages, from scattered experiments to a strategic, collaborative approach that empowers your business experts, data scientists, and IT teams to work as one. This not only creates more valuable solutions but also builds a scalable, cost-effective foundation for the future.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning into A.I.S. Insights. Join us next time as we continue to translate research into results.
Artificial Intelligence, AI Democratization, Digital Transformation, Organizational Capability, Case Study, AI Adoption, Siemens
MIS Quarterly Executive (2025)
Promises and Perils of Generative AI in Cybersecurity
Pratim Datta, Tom Acton
This paper presents a case study of a fictional insurance company, based on real-life events, to illustrate how generative artificial intelligence (GenAI) can be used for both offensive and defensive cybersecurity purposes. It explores the dual nature of GenAI as a tool for both attackers and defenders, presenting a significant dilemma for IT executives. The study provides actionable recommendations for developing a comprehensive cybersecurity strategy in the age of GenAI.
Problem
With the rapid adoption of Generative AI by both cybersecurity defenders and malicious actors, IT leaders face a critical challenge. GenAI significantly enhances the capabilities of attackers to create sophisticated, large-scale, and automated cyberattacks, while also offering powerful new tools for defense. This creates a high-stakes 'AI arms race,' forcing organizations to decide how to strategically embrace GenAI for defense without being left vulnerable to adversaries armed with the same technology.
Outcome
- GenAI is a double-edged sword, capable of both triggering and defending against sophisticated cyberattacks, requiring a proactive, not reactive, security posture. - Organizations must integrate a 'Defense in Depth' (DiD) strategy that extends beyond technology to include processes, a security-first culture, and continuous employee education. - Robust data governance is crucial to manage and protect data, the primary target of attacks, by classifying its value and implementing security controls accordingly. - A culture of continuous improvement is essential, involving regular simulations of real-world attacks (red-team/blue-team exercises) and maintaining a zero-trust mindset. - Companies must fortify defenses against AI-powered social engineering by combining advanced technical filtering with employee training focused on skepticism and verification. - Businesses should embrace proactive, AI-driven defense mechanisms like AI-powered threat hunting and adaptive honeypots to anticipate and neutralize threats before they escalate.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a critical topic for every business leader: cybersecurity in the age of artificial intelligence. Host: We'll be discussing a fascinating study from the MIS Quarterly Executive, titled "Promises and Perils of Generative AI in Cybersecurity." Host: It explores how GenAI has become a tool for both attackers and defenders, creating a significant dilemma for IT executives. Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. The study summary mentions an 'AI arms race'. What is the core problem that business leaders are facing right now? Expert: The problem is that the game has fundamentally changed. For years, cyberattacks were something IT teams reacted to. But Generative AI has supercharged the attackers. Expert: Malicious actors are now using what the study calls 'black-hat GenAI' to create incredibly sophisticated, large-scale, and automated attacks that are faster and more convincing than anything we've seen before. Expert: Think of phishing emails that perfectly mimic your CEO's writing style, or malware that can change its own code in real-time to avoid detection. This technology makes it easy for even non-technical criminals to launch devastating attacks. Host: So, how did the researchers actually go about studying this fast-moving threat? Expert: They used a very practical approach. The study presents a detailed case study of a fictional insurance company, "Surine," that suffers one of these advanced attacks. Expert: But what's crucial is that this fictional story is based on real-life events and constructed from interviews with actual cybersecurity professionals and their clients. It’s not just theory; it’s a reflection of what’s happening in the real world. Host: That's a powerful way to illustrate the risk. So, after analyzing this case, what were the main findings? Expert: The first, and most important, is that GenAI is a double-edged sword. It’s an incredible weapon for attackers, but it's also an essential shield for defenders. This means companies can no longer afford to be reactive. They must be proactive. Host: What does being proactive look like in this context? Expert: It means adopting what the study calls a 'Defense in Depth' strategy. This isn't just about buying the latest security software. It’s a holistic approach that integrates technology, processes, and people. Host: And that people element seems critical. The study mentions that GenAI is making social engineering, like phishing attacks, much more dangerous. Expert: Absolutely. In the Surine case, the attackers used GenAI to craft a perfectly convincing email, supposedly from the CIO, complete with a deepfake video. It tricked employees into giving up their credentials. Expert: This is why the study emphasizes the need for a security-first culture and continuous employee education. We need to train our teams to have a healthy skepticism. Host: It sounds like fighting an AI-powered attacker requires an AI-powered defender. Expert: Precisely. The other key finding is the need to embrace proactive, AI-driven defense. The company in the study fought back using AI-powered 'honeypots'. Host: Honeypots? Can you explain what those are? Expert: Think of them as smart traps. They are decoy systems designed to look like valuable targets. 
A defensive AI uses them to lure the attacking AI, study its methods, and learn how to defeat it—all without putting real company data at risk. It’s literally fighting fire with fire. Host: This is all so fascinating. Alex, let’s bring it to our audience. What are the key takeaways for business leaders listening right now? Why does this matter to them? Expert: First, recognize that cybersecurity is no longer just an IT problem; it’s a core business risk. It requires a company-wide culture of security, championed from the C-suite down. Expert: Second, you must know what you're protecting. The study stresses the importance of robust data governance. Classify your data, understand its value, and focus your defenses on your most critical assets. Expert: Third, you have to shift from a reactive to a proactive mindset. This means investing in continuous training, running real-world attack simulations, and adopting a 'zero-trust' culture where every access attempt is verified. Expert: And finally, you have to leverage AI in your defense. In this new landscape, human teams alone can't keep up with the speed and scale of AI-driven attacks. You need AI to help anticipate and neutralize threats before they escalate. Host: So the message is clear: the threat has evolved, and so must our defense. Generative AI is both a powerful weapon and an essential shield. Host: Business leaders need a holistic, culture-first strategy and must be proactive, using AI to fight AI. Host: Alex Ian Sutherland, thank you for sharing these invaluable insights with us today. Expert: My pleasure, Anna. Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.
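To make the adaptive honeypot idea from this episode more concrete, here is a minimal Python sketch of a decoy service that logs every interaction and applies a toy heuristic to flag likely probes. The port, the signature list, and the log format are illustrative assumptions, not details from the study; a production honeypot driven by a defensive AI would learn attack patterns from observed behaviour rather than use a fixed list.

```python
"""Minimal decoy-service sketch: listen on an unused port, log every
interaction, and apply a toy heuristic to flag likely automated probes.
All ports, signatures, and thresholds are illustrative assumptions."""

import json
import socket
import time

DECOY_PORT = 8443  # assumption: an unused port dressed up as a real service
SUSPICIOUS_TOKENS = (b"SELECT", b"../", b"<script", b"cmd.exe")  # toy signatures


def classify(payload: bytes) -> str:
    """Label a request with a toy heuristic; a real defensive AI would learn
    these patterns from observed attacker behaviour instead."""
    if any(token in payload for token in SUSPICIOUS_TOKENS):
        return "likely-attack"
    return "unclassified"


def run_decoy(log_path: str = "honeypot_log.jsonl") -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", DECOY_PORT))
        server.listen()
        while True:
            conn, addr = server.accept()
            with conn:
                payload = conn.recv(4096)
                record = {
                    "time": time.time(),
                    "source": addr[0],
                    "verdict": classify(payload),
                    "sample": payload[:200].decode("utf-8", errors="replace"),
                }
                with open(log_path, "a", encoding="utf-8") as log:
                    log.write(json.dumps(record) + "\n")
                # Answer like a bland, real-looking service so the probe continues.
                conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")


if __name__ == "__main__":
    run_decoy()
```

In the spirit of the Surine case, the value of such a decoy is the log it produces: defenders can study an attacker's methods without exposing any real systems or data.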
Generative AI, Cybersecurity, Black-hat AI, White-hat AI, Threat Hunting, Social Engineering, Defense in Depth
MIS Quarterly Executive (2025)
Successfully Mitigating AI Management Risks to Scale AI Globally
Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.
Problem
Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.
Outcome
- The study identifies five critical AI management risks: missing or falsely evaluated potential AI use case opportunities; algorithmic training and data quality issues; task-specific system complexities; mismanagement of system stakeholders; and threats from provider and system dependencies. - For each risk, it derives mitigation practices from Siemens' experience, including hub-and-spoke use-case scouting, company-wide data-sharing principles, a modular 'model zoo' architecture, stepwise stakeholder integration, and dual-sourcing of critical AI providers.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers. Today, we're diving into one of the biggest challenges facing businesses: how to move artificial intelligence from a small-scale experiment to a global, value-creating engine.
Host: We're exploring a new study titled "Successfully Mitigating AI Management Risks to Scale AI Globally." It's an in-depth look at the industrial pioneer Siemens AG to understand how companies can effectively scale AI systems, identifying the critical risks and providing practical recommendations. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: It's great to be here, Anna.
Host: Alex, the study opens with a pretty stark statistic: over 70% of AI projects fail to create a measurable business impact. Why is it so difficult for companies to get this right?
Expert: It's a huge problem. The study points out that modern AI, which is based on machine learning, is fundamentally different from traditional software. It's not programmed with rigid rules; it learns from data in a probabilistic way. This amplifies old technology management challenges and creates entirely new ones that most firms are simply unprepared to handle.
Host: So to understand how to succeed, the researchers took a closer look at a company that is succeeding. What was their approach?
Expert: They conducted an in-depth case study of Siemens. Siemens is an ideal subject because they're a global industrial leader that has been working with AI for over 50 years—from early expert systems in the 70s to the predictive and generative AI we see today. This long journey provides a rich, real-world playbook of what works and what doesn't when you're trying to scale.
Host: By studying a success story, we can learn what to do right. So, what were the main risks the study uncovered?
Expert: The researchers identified five critical risk categories. The first is missing or falsely evaluating potential AI opportunities. The field moves so fast that it’s hard to even know what's possible, let alone which ideas will actually create value.
Host: Okay, so just finding the right project is the first hurdle. What's next?
Expert: The second risk is all about data. Specifically, algorithmic training and data quality issues. Every business leader has heard the phrase "garbage in, garbage out," and for AI, this is make-or-break. The study emphasizes that high-quality data is a strategic resource, but it's often siloed away in different departments, incomplete, or biased.
Host: That makes sense. What's the third risk?
Expert: Task-specific system complexities. AI doesn't operate in a vacuum. It has to be integrated into existing, often messy, technological landscapes—hardware, cloud servers, enterprise software. Even a small change in the real world, like new lighting in a factory, can degrade an AI's performance if it isn't retrained.
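Below is a minimal sketch of the kind of performance-drift check that would catch such degradation and trigger retraining. The baseline value, the tolerance, and the visual-inspection example are assumptions made for illustration, not a description of Siemens' actual monitoring setup.

```python
"""Minimal performance-drift check: compare a deployed model's accuracy on a
recent labelled sample against its baseline and flag when retraining is due.
The threshold and the example data are illustrative assumptions."""

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class DriftCheck:
    baseline_accuracy: float      # accuracy measured at deployment time
    tolerated_drop: float = 0.05  # assumption: 5 percentage points before reacting

    def needs_retraining(self, predict: Callable, inputs: Sequence, labels: Sequence) -> bool:
        """Return True when accuracy on fresh data has drifted below tolerance."""
        correct = sum(1 for x, y in zip(inputs, labels) if predict(x) == y)
        current_accuracy = correct / len(labels)
        return (self.baseline_accuracy - current_accuracy) > self.tolerated_drop


# Usage sketch: a visual-inspection model degrading after the factory lighting changed.
check = DriftCheck(baseline_accuracy=0.97)
if check.needs_retraining(lambda x: "ok", inputs=["img1", "img2"], labels=["ok", "defect"]):
    print("Accuracy drifted - schedule retraining with freshly labelled images.")
```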
Host: So it’s about the tech integration. What about the human side?
Expert: That's exactly the fourth risk: mismanagement of system stakeholders. This is about people. To succeed, you need buy-in from everyone—engineers, sales teams, customers, and even regulators. If people don't trust the AI or see it as a threatening "black box," the project is doomed to fail, no matter how good the technology is.
Host: And the final risk?
Expert: The fifth risk is threats from provider and system dependencies. This is essentially getting locked into a single external vendor for a critical AI model or service. It limits your flexibility, can be incredibly costly, and puts you at the mercy of another company's roadmap.
Host: Those are five very real business risks. So, Alex, for our listeners—the business leaders and managers—what are the key takeaways? How can they actually mitigate these risks?
Expert: The study provides some excellent, practical recommendations. To avoid missing opportunities, they suggest a "hub-and-spoke" model. Have a central AI team, but also empower decentralized teams in different business units to scout for use cases that solve their specific problems.
Host: So, democratize the innovation process. What about the data problem?
Expert: You have to treat data as a strategic asset. The key is to implement company-wide data-sharing principles to break down those silos. Siemens is creating a centralized data warehouse so their experts can find and use the data they need. And critically, they focus on owning and protecting their most valuable data sources.
Host: And for managing the complexity of these systems?
Expert: The recommendation is to build for modularity. Siemens uses what they call a "model zoo"—a library of reusable AI components. This way, you can update or swap out parts of a system without having to rebuild it from scratch. It makes the whole architecture more agile and future-proof.
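The 'model zoo' is essentially a registry of interchangeable components. Here is a minimal Python sketch of that pattern; the component names and the dictionary stand-ins for real models are invented for illustration and are not Siemens' actual library.

```python
"""Minimal 'model zoo' sketch: a registry of interchangeable model components,
so one part of a pipeline can be swapped without rebuilding the rest.
Component names and model stand-ins are invented for illustration."""

from typing import Callable, Dict

MODEL_ZOO: Dict[str, Callable[[], object]] = {}


def register(name: str):
    """Decorator that adds a model factory to the shared registry."""
    def wrap(factory: Callable[[], object]):
        MODEL_ZOO[name] = factory
        return factory
    return wrap


@register("defect-detector-v1")
def defect_detector_v1():
    return {"kind": "vision", "version": 1}  # stand-in for a real model object


@register("defect-detector-v2")
def defect_detector_v2():
    return {"kind": "vision", "version": 2}  # drop-in replacement, same interface


def build_pipeline(detector_name: str):
    """Downstream code asks the zoo for a component by name instead of hard-coding it."""
    return MODEL_ZOO[detector_name]()


# Swapping the component is a one-line configuration change, not a rebuild.
pipeline = build_pipeline("defect-detector-v2")
```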
Host: I like that idea of a 'model zoo'. Let's touch on the last two. How do you manage stakeholders and avoid being locked into a vendor?
Expert: For stakeholders, the advice is to integrate them into the development process step-by-step. Educate them through workshops and hands-on "playground" sessions to build trust. Siemens even cultivates internal "AI ambassadors" who champion the technology among their peers.
Expert: And to avoid dependency, the strategy is simple but powerful: dual-sourcing. For any critical AI project, partner with at least two comparable providers. This maintains competition, gives you leverage, and ensures you're never completely reliant on a single external company.
Host: Fantastic advice, Alex. So to summarize for our listeners: successfully scaling AI means systematically scouting for the right opportunities, treating your data as a core strategic asset, building for modularity and change, bringing your people along on the journey, and actively avoiding vendor lock-in.
Host: Alex Ian Sutherland, thank you so much for breaking down this crucial research for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights. Join us next time as we explore the future of work in the age of intelligent automation.
AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study
MIS Quarterly Executive (2024)
The Promise and Perils of Low-Code AI Platforms
Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.
Problem
As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.
Outcome
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge. - Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first. - Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy. - Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a very timely topic for any business looking to innovate: the real-world challenges of adopting new technology. We’ll be discussing a fascinating study titled "The Promise and Perils of Low-Code AI Platforms." Host: This study looks at how four major corporations adopted a low-code conversational AI platform, and it uncovers some crucial, and often incorrect, assumptions that businesses make about these powerful tools. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: Alex, let's start with the big picture. Businesses are constantly hearing about AI and automation. What’s the core problem that these low-code AI platforms are supposed to solve? Expert: The problem is a classic one: a gap between ambition and resources. Companies want to automate processes, build chatbots, and leverage AI, but they often lack large teams of specialized AI developers. Low-code platforms are marketed as the perfect solution. Host: The 'democratization' of AI we hear so much about. Expert: Exactly. The promise is that you can use a simple, visual, drag-and-drop interface to build complex AI applications, empowering your existing business-focused employees to innovate without needing to write a single line of code. But as the study found, that promise often doesn't match the reality. Host: So how did the researchers investigate this gap between promise and reality? Expert: They took a very practical approach. They didn't just survey people; they conducted an in-depth case study. They followed the journey of four large multinational companies—in the energy, automotive, and retail sectors—as they all tried to implement the very same low-code conversational AI platform. Host: That’s great. So by studying the same platform across different industries, they could really pinpoint the common challenges. What were the main findings? Expert: The findings centered on three major false assumptions businesses made. The first was about usability. The assumption was that ‘low-code’ meant anyone could do it. Host: And that wasn't the case? Expert: Not at all. While the IT staff found it user-friendly, the business-side employees—the ones who were supposed to be empowered—faced a much steeper learning curve than anyone anticipated. One domain expert in the study described the experience as being "like Greek," saying it was far more complex than just "dragging and dropping." Host: So you still need a foundational level of technical knowledge. What was the second false assumption? Expert: It was about adaptability. The idea was that you could easily tailor these platforms to any specific business need. But creating applications to handle complex, real-world customer queries proved incredibly challenging and time-consuming. Host: Why was that? Expert: Because real business processes are often messy and rely on human intuition. The study found that before companies could automate a process, they first had to invest heavily in understanding and standardizing it. You can't teach an AI a process that isn't clearly defined. Host: That makes sense. You have to clean your house before you can automate the cleaning. What was the final key finding? Expert: This one is huge for any CIO: integration. The belief was that these platforms would be a simple 'plug-and-play' solution that could easily connect to existing company databases and systems. 
Host: I have a feeling it wasn't that simple. Expert: Far from it. The companies ran into major roadblocks trying to connect the platform to their legacy systems. They faced incompatible data formats and a lack of a unified data strategy. The study showed that you often need someone with knowledge of coding and APIs to build the bridges between the new platform and the old systems. Host: So, Alex, this is the crucial part for our listeners. If a business leader is considering a low-code AI tool, what are the key takeaways? What should they do differently? Expert: The study provides a clear roadmap. First, thoroughly test the platform before you buy it. Don't just watch the vendor's demo. Have your actual employees—the business users—try to build a real-world application with it. This will reveal the true learning curve. Host: A 'try before you buy' approach. What else? Expert: Second, success requires cross-functional collaboration. It’s not an IT project or a business project; it's both. The study highlighted that the most successful implementations happened when IT experts and business domain experts worked together in blended teams from day one. Host: So break down those internal silos. Expert: Absolutely. And finally, be prepared to change your processes, not just your tools. You can't just layer AI on top of existing workflows. You need to re-evaluate and often redesign your processes to align with the capabilities of the AI. It's as much about business process re-engineering as it is about technology. Host: This is incredibly insightful. It seems low-code AI platforms are powerful, but they are certainly not a magic bullet. Host: To sum it up: the promise of simplicity with these platforms often hides significant challenges in usability, adaptation, and integration. Success depends less on the drag-and-drop interface and more on a strategic approach that involves rigorous testing, deep collaboration between teams, and a willingness to rethink your fundamental business processes. Host: Alex, thank you so much for shedding light on the perils, and the real promise, of these platforms. Expert: My pleasure, Anna. Host: And a big thank you to our audience for tuning into A.I.S. Insights. We’ll see you next time.
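The 'bridges' Alex mentions typically end up as small adapter services written by someone who knows both the legacy data format and the platform's API. The sketch below shows the general shape of such a bridge; the legacy record layout, field names, and endpoint are invented for illustration and do not reflect the platform studied.

```python
"""Minimal integration-bridge sketch: translate a legacy CRM export record into
the JSON shape a conversational-AI platform might expect. The record layout,
field names, endpoint, and target schema are invented for illustration."""

import json
import urllib.request

LEGACY_RECORD = "10023|MUELLER;ANNA|DE|1997-04-02"  # pipe/semicolon legacy export


def to_platform_payload(raw: str) -> dict:
    """Convert one legacy record into the (assumed) schema of the AI platform."""
    customer_id, name, country, birthdate = raw.split("|")
    last, first = name.split(";")
    return {
        "customerId": customer_id,
        "profile": {"firstName": first.title(), "lastName": last.title()},
        "locale": country.lower(),
        "birthdate": birthdate,
    }


def push_to_platform(payload: dict, url: str = "https://example.invalid/api/contacts") -> None:
    """POST the converted record; the URL is a placeholder, not a real endpoint."""
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    print(json.dumps(to_platform_payload(LEGACY_RECORD), indent=2))
```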
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
MIS Quarterly Executive (2024)
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant
Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.
Problem
Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.
Outcome
- Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources. - Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns. - Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation. - Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success. - The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a fascinating case study called "How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant". Host: It explores how a medium-sized company built its first AI product using a low-code platform, and what that journey reveals about the strategic trade-offs of this popular approach. Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Thanks for having me, Anna. Host: Alex, let's start with the big picture. What's the real-world problem this study is tackling? Expert: The problem is something many businesses, especially small and medium-sized enterprises or SMEs, are facing. They know they need to adopt AI to stay competitive, but they often lack the massive budgets or specialized teams of data scientists and AI engineers to build solutions from scratch. Host: And I imagine off-the-shelf products can be too restrictive? Expert: Exactly. They’re often not a perfect fit. Low-code platforms promise a middle ground—a way to "democratize" AI development. But there's been a gap in understanding what really happens when a company takes this path. This study fills that gap. Host: So how did the researchers approach this? What did they do? Expert: They conducted an in-depth case study. They followed a German software provider, GuideCom, for over 16 months as they developed their first AI product—a smart assistant for HR services—using a low-code platform called Cognigy.AI. Host: It sounds like they had a front-row seat to the entire process. So, what were the key findings? Did the low-code platform live up to the hype? Expert: It was a story of enablers and constraints. On the positive side, the platform absolutely enabled AI development. Its visual, drag-and-drop interface dramatically reduced complexity. Host: How did that help in practice? Expert: It was crucial for fostering collaboration. Suddenly, the business experts from the HR department could work directly with the IT developers. They could see the logic, understand the process, and contribute meaningfully, which is often a huge challenge in tech projects. It also saved a significant amount of resources. Host: That sounds fantastic. But you also mentioned constraints. What were the challenges? Expert: The constraints were very real. The first was architectural integration. Getting the AI tool, built on an external platform, to work smoothly with GuideCom’s existing software suite was a major hurdle. Host: And what else? Expert: Security and expandability. They needed to ensure the client’s data was secure, and they wanted the product to be scalable for many different clients, each with unique needs. The platform had limitations that made this complex. Host: So 'low-code' doesn't mean 'no-skills needed'? Expert: That's perhaps the most critical finding. GuideCom's existing software development skills were absolutely essential. They had to write custom code and re-engineer parts of the solution to overcome the platform's limitations and meet their security and integration needs. The promise of 'no-code' wasn't the reality. Host: This brings us to the most important question for our listeners: why does this matter for business? What are the practical takeaways? Expert: The biggest takeaway is that adopting a low-code AI platform is a strategic trade-off, not a magic bullet. 
It brilliantly lowers the barrier to entry, allowing companies to start innovating with AI without a massive upfront investment. That’s a game-changer. Host: But there's a 'but'. Expert: Yes. But you must manage the trade-offs. Firstly, you become dependent on the platform provider, so you need to choose your partner carefully. Secondly, you cannot neglect in-house technical skills. You still need people who can code to handle customization and integration. Host: The study also mentioned the importance of partnerships, didn't it? Expert: It was a crucial factor for success. GuideCom built a strong knowledge network. They had a close relationship with the platform provider, Cognigy, for technical support, and they partnered with a major bank as their first client. This client provided invaluable domain expertise and real-world data to train the AI. Host: A powerful combination of technical and business partners. Expert: Precisely. You need both to succeed. Host: This has been incredibly insightful. So to summarize for our listeners: Low-code platforms can be a powerful gateway for companies to start building AI solutions, as they reduce complexity and foster collaboration. Host: However, it's a strategic trade-off. Businesses must be prepared for challenges with integration and security, retain in-house software skills for customization, and build a strong network with both the platform provider and innovation partners. Host: Alex, thank you so much for breaking this down for us. Expert: My pleasure, Anna. Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.
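To illustrate why in-house development skills still matter, here is a sketch of the kind of custom HTTP endpoint a low-code conversational flow might call when the visual builder cannot express a business rule. The route, payload shape, and the vacation-day rule are assumptions for illustration and are not GuideCom's or Cognigy.AI's actual interfaces.

```python
"""Sketch of a custom webhook a low-code conversational flow could call out to.
The route, payload, and HR rule are invented for illustration only."""

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ANNUAL_ALLOWANCE = 30  # assumption: 30 vacation days per year


class VacationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        days_taken = int(body.get("daysTaken", 0))
        remaining = max(ANNUAL_ALLOWANCE - days_taken, 0)
        # The low-code flow reads this field and phrases the answer for the employee.
        response = json.dumps({"remainingVacationDays": remaining}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), VacationHandler).serve_forever()
```

Keeping logic like this in a small, versioned service of your own is one way to work around platform limitations while remaining testable, which mirrors the study's point that existing software development skills remain critical.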
low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME