Communications of the Association for Information Systems (2025)
The Impact of Gamification on Cybersecurity Learning: Multi-Study Analysis
J.B. (Joo Baek) Kim, Chen Zhong, Hong Liu
This paper systematically assesses the impact of gamification on cybersecurity education through a four-semester, multi-study approach. The research compares learning outcomes between gamified and traditional labs, analyzes student perceptions and motivations using quantitative methods, and explores learning experiences through qualitative interviews. The goal is to provide practical strategies for integrating gamification into cybersecurity courses.
Problem
The cybersecurity workforce gap is critical and expanding, underscoring the need for more effective, practical, and engaging training methods. Traditional educational approaches often struggle to motivate students and to build the hands-on, problem-solving skills required for the complex and dynamic field of cybersecurity.
Outcome
- Gamified cybersecurity labs led to significantly better student learning outcomes than traditional, non-gamified labs.
- Well-designed game elements, such as appropriate challenges and competitiveness, positively influence student motivation. Intrinsic motivation (driven by challenge) enhanced learning outcomes, while extrinsic motivation (driven by competition) increased career interest.
- Students found gamified labs more engaging thanks to features like instant feedback, leaderboards, clear step-by-step instructions, and story-driven scenarios that connect learning to real-world applications.
- Gamification helps bridge the gap between theoretical knowledge and practical skills, fostering deeper learning, critical thinking, and greater interest in pursuing cybersecurity careers.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: In a world of ever-growing digital threats, how can businesses train a more effective cybersecurity workforce? Today, we're diving into a fascinating multi-study analysis titled "The Impact of Gamification on Cybersecurity Learning."
Host: This study systematically assesses how using game-like elements in training can impact learning, motivation, and even career interest in cybersecurity.
Host: And to help us break it down, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The problem is massive, and it's growing every year. It’s the cybersecurity workforce gap. The study cites a 2024 report showing the global shortage of professionals has expanded to nearly 4.8 million.
Host: Almost 5 million people. That’s a staggering number.
Expert: It is. And the core issue is that traditional educational methods often fail. They can be dry, theoretical, and they don't always build the practical, hands-on problem-solving skills needed to fight modern cyber threats. Companies need people who are not just knowledgeable, but also engaged and motivated.
Host: So how did the researchers approach this challenge? How do you even begin to measure the impact of something like gamification?
Expert: They used a really comprehensive mixed-method approach over four university semesters. It was essentially three studies in one.
Host: Tell us about them.
Expert: First, they directly compared the performance of students in gamified labs against those in traditional, non-gamified labs. They measured this with quizzes and final exam scores.
Host: So, a direct A/B test on learning outcomes.
Expert: Exactly. Second, they used quantitative surveys to understand the "why" behind the performance. They looked at what motivated the students — things like challenge and competition — and how that affected their learning and career interests.
Host: And the third part?
Expert: That was qualitative. The researchers conducted in-depth interviews with students to get rich, subjective feedback on their actual learning experience. They wanted to know what it felt like, in the students' own words.
Host: So, after all that research, what were the key findings? Did making cybersecurity training a 'game' actually work?
Expert: It worked, and in very specific ways. The first major finding was clear: students in the gamified labs achieved significantly better learning outcomes. Their scores were higher.
Host: And the study gave some clues as to why?
Expert: It did. This is the second key finding. Well-designed game elements had a powerful effect on motivation, but it's important to distinguish between two types.
Host: Intrinsic and extrinsic?
Expert: Precisely. Intrinsic motivation—the internal drive from feeling challenged and a sense of accomplishment—was found to directly enhance learning outcomes. Students learned the material better because they enjoyed the puzzle.
Host: And extrinsic motivation? The external rewards?
Expert: That’s things like leaderboards and points. The study found that this type of motivation, driven by competition, had a huge impact on increasing students' interest in pursuing a career in cybersecurity.
Host: That’s a fascinating distinction. So one drives learning, the other drives career interest. What did the students themselves say made the gamified labs so much more engaging?
Expert: From the interviews, three things really stood out. First, instant feedback. Knowing immediately if they solved a challenge correctly was highly rewarding. Second, the use of story-driven scenarios. It made the tasks feel like real-world problems, not just abstract exercises. And third, breaking down complex topics into clear, step-by-step instructions. It made difficult concepts much less intimidating.
Host: This is all incredibly insightful. Let’s get to the bottom line: why does this matter for business? What are the key takeaways for leaders and managers?
Expert: This is the most important part. For any business struggling with the cybersecurity skills gap, this study provides a clear, evidence-based path forward.
Host: So, what’s the first step?
Expert: Acknowledge that gamification is not just about making training 'fun'; it's a powerful tool for building your talent pipeline. By incorporating competitive elements, you can actively spark career interest and identify promising internal candidates you didn't know you had.
Host: And for designing the training itself?
Expert: The takeaway is that design is everything. Corporate training programs should use realistic, story-driven scenarios to bridge the gap between theory and practice. Provide instant feedback mechanisms and break down complex tasks into manageable challenges. This fosters deeper learning and real, applicable skills.
Host: It sounds like it helps create the on-the-job experience that hiring managers are looking for.
Expert: Exactly. Finally, businesses need to understand that motivation isn't one-size-fits-all. The most effective training programs will offer a blend of challenges that appeal to intrinsic learners and competitive elements that engage extrinsic learners. It’s about creating a rich, diverse learning environment.
Host: Fantastic. So, to summarize for our listeners: the cybersecurity skills gap is a serious business threat, but this study shows that well-designed gamified training is a proven strategy to fight it. It improves learning, boosts both intrinsic and extrinsic motivation, and can directly help build a stronger talent pipeline.
Host: Alex, thank you so much for breaking down this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
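The head-to-head comparison described above (gamified vs. traditional quiz and exam scores) is a standard two-sample setting. As a sketch only — the scores below are invented for illustration, since the paper's raw data is not reproduced here — Welch's t statistic can be computed with the standard library:

```python
import statistics as st

# Hypothetical quiz scores for the two lab conditions; these numbers are
# made up purely to illustrate the comparison, not taken from the study.
gamified    = [88, 92, 79, 95, 85, 91, 87, 90]
traditional = [72, 80, 75, 68, 83, 77, 74, 70]

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = st.variance(a), st.variance(b)          # sample variances (n-1)
    se = (va / len(a) + vb / len(b)) ** 0.5          # standard error of the difference
    return (st.mean(a) - st.mean(b)) / se

t = welch_t(gamified, traditional)
print(round(t, 2))  # a large positive t favors the gamified group
```

In practice one would also compute the Welch-Satterthwaite degrees of freedom and a p-value; the point here is only the shape of the comparison.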
Communications of the Association for Information Systems (2025)
Procuring Accessible Third-Party Web-Based Software Applications for Inclusivity: A Socio-technical Approach
Niamh Daly, Ciara Heavin, James Northridge
This study investigates how universities can improve their decision-making processes when procuring third-party web-based software to enhance accessibility for students and staff. Using a socio-technical systems framework, the research conducts a case study at a single university, employing qualitative interviews with procurement experts and users to evaluate current practices.
Problem
The procurement process for web-based software in higher education often fails to adequately consider web accessibility standards. This oversight creates barriers for an increasingly diverse student population, including those with disabilities, and represents a failure to integrate equality, diversity, and inclusion into critical technology-related decisions.
Outcome
- Procurement processes often lack standardized, early-stage accessibility testing, with some evaluations occurring only after the software has already been acquired.
- A significant misalignment exists between the accessibility testing practices of software vendors and the actual needs of the higher education institution.
- Individuals with disabilities are not typically involved in the initial evaluation phase; their feedback may be sought after implementation, leading to reactive rather than proactive solutions.
- Accessible software directly improves student engagement and fosters a more inclusive campus environment, benefiting the entire university community.
- The research proposes the SEIPS 2.0 model as a structured framework to map the procurement work system, improve accessibility evaluation, and better integrate diverse expertise into the decision-making process.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we break down cutting-edge research for today’s business leaders. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study from the Communications of the Association for Information Systems titled, "Procuring Accessible Third-Party Web-Based Software Applications for Inclusivity: A Socio-technical Approach".
Host: It investigates how large organizations, specifically universities in this case, can make better decisions when buying software to ensure it’s accessible and inclusive for everyone. Here to unpack it all is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, let's start with the big picture. When a company or a university buys new software, they're looking at cost, features, and security. Why is accessibility often an afterthought, and what problem does that create?
Expert: That’s the core of the issue. The study found that the typical procurement process often fails to properly consider web accessibility standards. This creates significant barriers for a growing number of people, including those with disabilities. It’s a failure to integrate equality and inclusion into critical technology decisions.
Host: It sounds like a classic case of not thinking about all the end-users from the start.
Expert: Exactly. The researchers found that crucial accessibility evaluations often happen *after* the software has already been bought and paid for. One professional in the study put it perfectly, saying their team often has "no say in that until the software actually arrives." At that point, fixing the problems is far more costly and complex than getting it right from the beginning.
Host: So how did the researchers get inside this complex process to understand what’s going wrong?
Expert: They took a really interesting approach called a socio-technical systems framework. In simple terms, they didn't just look at the technology itself. They mapped out the entire system: the people involved, the tasks they perform, the organizational rules, and the tools they use.
Host: And they did this within a real-world setting?
Expert: Yes, they conducted a case study at a large university. They interviewed ten key people, from the IT and procurement experts who buy the software, to the students and staff with disabilities who actually use it every day. This gave them a 360-degree view of where the process was breaking down.
Host: A 360-degree view often reveals some surprising things. What were the key findings?
Expert: There were a few that really stood out. First, as we mentioned, accessibility testing happens far too late, if at all. It's not a standardized, early-stage checkpoint.
Host: So it's reactive, not proactive.
Expert: Precisely. The second key finding was a major misalignment between what software vendors say about accessibility and what the organization actually needs. There's a lack of rigorous, standardized testing.
Host: And what about the users themselves? Were they part of the process?
Expert: That was the third major finding. Individuals with disabilities—the real expert users—are almost never involved in the initial evaluation. Their feedback might be sought after the tool is already implemented, but by then it’s about patching problems, not choosing the right solution from the start.
Host: That seems like a huge missed opportunity. But the study also found a silver lining, right? When the software *is* accessible, what’s the impact?
Expert: The impact is huge. Accessible software directly improves engagement and creates a more inclusive environment. One user in the study said, "I now want to actively participate in class. I'm not sitting there panicked... I now realize that I know what I'm doing, and I can participate easier." That’s a powerful testament to getting it right.
Host: It absolutely is. Alex, this study was based in a university, but our listeners are in the corporate world. Why does this matter for a CEO, a CTO, or a product manager?
Expert: This is the most crucial part. The lessons are universal. First, businesses need to reframe accessibility not as a legal compliance checkbox, but as a core design value and a strategic advantage. It expands your potential customer base and strengthens your brand.
Host: So it’s a market opportunity, not just a requirement.
Expert: Exactly. Second, proactive procurement is a powerful risk management tool. The study highlights the high cost of retrofitting. By building accessibility into your purchasing process from day one, you avoid expensive re-engineering projects down the line. It’s simply smart business.
Host: That makes perfect sense. What else can businesses take away?
Expert: The idea that inclusive design is simply good design. One of the professionals interviewed noted that when you make content more accessible for an inclusive community, you "enhance the quality of the content for all of the community." A clear, simple interface designed for accessibility benefits every single user.
Host: So, to wrap this up, what is the single most important action a business leader can take away from this research?
Expert: It's about changing the process. Don't just ask vendors if their product is accessible; demand proof. More importantly, bring your actual users—including those with disabilities—into the evaluation process early. Their insight is invaluable and will save you from making costly mistakes.
Host: In short: prioritize accessibility from the start, involve your users, and recognize it not just as a compliance issue, but as a strategic driver for better products and a more inclusive culture.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we translate another key piece of research into actionable business intelligence.
Communications of the Association for Information Systems (2025)
Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective
Pramod K. Patnaik, Kunal Rao, Gaurav Dixit
This study investigates the factors that enable the use of Generative AI (GenAI) tools in rural educational settings within developing countries. Using a mixed-method approach that combines in-depth interviews and the Grey DEMATEL decision-making method, the research identifies and analyzes these enablers through a socio-technical lens to understand their causal relationships.
Problem
Marginalized rural communities in developing countries face significant challenges in education, including a persistent digital divide that limits access to modern learning tools. This research addresses the gap in understanding how Generative AI can be practically leveraged to overcome these education-related challenges and improve learning quality in under-resourced regions.
Outcome
- The study identified fifteen key enablers for using Generative AI in rural education, grouped into social and technical categories.
- 'Policy initiatives at the government level' was the most critical enabler, directly influencing other key factors such as GenAI training for teachers and students, community awareness, and school leadership commitment.
- Six novel enablers were uncovered through interviews, including affordable internet data, affordable telecommunication networks, and subsidized devices for lower-income groups.
- An empirical framework illustrates the causal relationships among the enablers, helping stakeholders prioritize interventions for effective GenAI adoption.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're looking at how Generative AI can transform education, not in Silicon Valley, but in some of the most under-resourced corners of the world.
Host: We're diving into a fascinating new study titled "Unveiling Enablers to the Use of Generative AI Artefacts in Rural Educational Settings: A Socio-Technical Perspective". It investigates the key factors that can help bring powerful AI tools to classrooms in developing countries. With me today is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna. It's a critical topic.
Host: Let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The core problem is the digital divide. In many marginalized rural communities, especially in developing nations, students and teachers face huge educational challenges. We're talking about a lack of resources, infrastructure, and access to modern learning tools. While we see Generative AI changing industries in developed countries, there's a real risk these rural communities get left even further behind.
Host: So the question is, can GenAI be a bridge across that divide, instead of making it wider?
Expert: Exactly. The study specifically looks at how we can practically leverage these AI tools to overcome those long-standing challenges and actually improve the quality of education where it's needed most.
Host: So how did the researchers approach such a complex issue? It must be hard to study on the ground.
Expert: It is, and they used a really smart mixed-method approach. First, they went directly to the source, conducting in-depth interviews with teachers, government officials, and community members in rural India. This gave them rich, qualitative data—the real stories and challenges. Then, they took all the factors they identified and used a quantitative analysis to find the causal relationships between them.
Host: So it’s not just a list of problems, but a map of how one factor influences another?
Expert: Precisely. It allows them to say, 'If you want to achieve X, you first need to solve for Y'. It creates a clear roadmap for intervention.
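The cause-and-effect mapping described here comes from the study's Grey DEMATEL method. As a rough illustration, the sketch below runs the crisp DEMATEL core (the grey variant adds interval arithmetic on top); the three enablers and all influence scores are hypothetical, not the study's data:

```python
# Crisp-DEMATEL sketch: derive net "cause" vs. "effect" roles from a
# direct-influence matrix. Rows/cols here are hypothetical enablers:
# 0 = government policy, 1 = teacher training, 2 = affordable data.
D = [
    [0, 4, 3],   # policy strongly drives training and affordability
    [1, 0, 1],
    [1, 2, 0],
]

n = len(D)
s = max(sum(row) for row in D)           # largest row sum
N = [[x / s for x in row] for row in D]  # normalized direct-influence matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Total-relation matrix T = N + N^2 + N^3 + ... ; the normalization keeps
# the spectral radius below 1, so truncating the series is good enough.
T = [[0.0] * n for _ in range(n)]
P = [row[:] for row in N]
for _ in range(200):
    T = [[T[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    P = matmul(P, N)

r = [sum(T[i][j] for j in range(n)) for i in range(n)]  # influence given
c = [sum(T[i][j] for i in range(n)) for j in range(n)]  # influence received
relation = [r[i] - c[i] for i in range(n)]  # > 0 marks a net "cause" factor

# With these toy numbers, policy (index 0) comes out as the dominant cause,
# mirroring the study's headline finding about government policy.
```

The `relation` scores are what let researchers say "solve Y before X": net-cause factors (positive scores) are the ones to target first.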
Host: That sounds powerful. What were the key findings? What are the biggest levers we can pull?
Expert: The study identified fifteen key 'enablers', which are the critical ingredients for success. But the single most important finding, the one that drives almost everything else, is 'Policy initiatives at the government level'.
Host: That's surprising. I would have guessed something more technical, like internet access.
Expert: And that's crucial, but the study shows that strong government policy is the 'cause' factor. It directly enables other key things like funding, GenAI training for teachers and students, creating community awareness, and getting school leadership on board. Without that top-down strategic support, everything else struggles.
Host: What other enablers stood out?
Expert: The interviews uncovered some really practical, foundational needs that go beyond just theory. Things we might take for granted, like affordable internet data plans, reliable telecommunication networks, and providing subsidized devices like laptops or tablets for lower-income families. It highlights that access isn't just about availability; it’s about affordability.
Host: This is the most important question for our listeners, Alex. This research is clearly vital for educators and policymakers, but why should business professionals pay attention? What are the takeaways for them?
Expert: I see three major opportunities here. First, this study is essentially a market-entry roadmap for a massive, untapped audience. For EdTech companies, telecoms, and hardware manufacturers, it lays out exactly what is needed to succeed in these emerging markets. It points directly to opportunities for public-private partnerships to provide those subsidized devices and affordable data plans we just talked about.
Host: So it’s a blueprint for doing business in these regions.
Expert: Absolutely. Second, it's a guide for product development. The study found that 'ease of use' and 'localized language support' are critical enablers. This tells tech companies that you can't just parachute in a complex, English-only product. Your user interface needs to be simple, intuitive, and available in local languages to gain any traction. That’s a direct mandate for product and design teams.
Host: That makes perfect sense. What’s the third opportunity?
Expert: It redefines effective Corporate Social Responsibility, or CSR. Instead of just one-off donations, a company can use this framework to make strategic investments. They could fund teacher training programs or develop technical support hubs in rural areas. This creates sustainable, long-term impact, builds immense brand loyalty, and helps develop the very ecosystem their business will depend on in the future.
Host: So to sum it up: Generative AI holds incredible promise for bridging the educational divide in rural communities, but technology alone isn't the answer.
Expert: That's right. Success hinges on a foundation of supportive government policy, which then enables crucial factors like training, awareness, and true affordability.
Host: And for businesses, this isn't just a social issue—it’s a clear roadmap for market opportunity, product design, and creating strategic, high-impact investments. Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business, technology, and groundbreaking research.
Generative AI, Rural, Education, Digital Divide, Interviews, Socio-technical Theory
MIS Quarterly Executive (2025)
Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development
Ersin Dincelli, Haadi Jafarian
This study explores how an 'agentic metaverse'—an immersive virtual world powered by intelligent AI agents—can be used for cybersecurity training. The researchers presented an AI-driven metaverse prototype to 53 cybersecurity professionals to gather qualitative feedback on its potential for transforming workforce development.
Problem
Traditional cybersecurity training methods, such as classroom instruction and static online courses, are struggling to keep up with the fast-evolving threat landscape and high demand for skilled professionals. These conventional approaches often lack the realism and adaptivity needed to prepare individuals for the complex, high-pressure situations they face in the real world, contributing to a persistent skills gap.
Outcome
- The concept of an AI-driven agentic metaverse for training was met with strong enthusiasm: 92% of the professionals believed it would be effective for professional training.
- The study identified five core implementation challenges: infrastructure demands, the complexity of designing realistic multi-agent scenarios, security and privacy, governance of social dynamics, and change management/user adoption.
- Six practical recommendations guide implementation, focusing on building scalable infrastructure, developing realistic training scenarios, and embedding security, privacy, and safety by design.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development." With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: This study sounds like it’s straight out of science fiction. Can you break it down for us? What exactly is an ‘agentic metaverse’ and how does it relate to cybersecurity training?
Expert: Absolutely. Think of it as a super-smart, immersive virtual world. The 'metaverse' part is the 3D, interactive environment, like a sophisticated simulation. The 'agentic' part means it's populated by intelligent AI agents that can think, adapt, and act on their own to create dynamic training scenarios.
Host: So, we're talking about a virtual reality training ground run by AI. Why is this needed? What's wrong with how we train cybersecurity professionals right now?
Expert: That’s the core of the problem the study addresses. The cyber threat landscape is evolving at an incredible pace. Traditional methods, like classroom lectures or static online courses, just can't keep up.
Host: They’re too slow?
Expert: Exactly. They lack realism and the ability to adapt. Real cyber attacks are high-pressure, collaborative, and unpredictable. A multiple-choice quiz doesn’t prepare you for that. This contributes to a massive global skills gap and high burnout rates among professionals. We need a way to train for the real world, in a safe environment.
Host: So how did the researchers actually test this idea of an agentic metaverse?
Expert: They built a functional prototype. It was an AI-driven, 3D environment that simulated cybersecurity incidents. They then presented this prototype to a group of 53 experienced cybersecurity professionals to get their direct feedback.
Host: They let the experts kick the tires, so to speak.
Expert: Precisely. The professionals could see firsthand how AI agents could play the role of attackers, colleagues, or even mentors, creating quests and scenarios that adapt in real-time based on the trainee's actions. It makes abstract threats feel tangible and urgent.
Host: And what was the verdict from these professionals? Were they impressed?
Expert: The response was overwhelmingly positive. A massive 92% of them believed this approach would be effective for professional training. They highlighted how engaging and realistic the scenarios felt, calling it a "great learning tool."
Host: That’s a strong endorsement. But I imagine it’s not all smooth sailing. What are the hurdles to actually implementing this in a business?
Expert: You're right. The enthusiasm was matched with a healthy dose of pragmatism. The study identified five core challenges for businesses to consider.
Host: And what are they?
Expert: First, infrastructure. Running a persistent, immersive 3D world with multiple AIs is computationally expensive. Second is scenario design. Creating AI-driven narratives that are both realistic and effective for learning is incredibly complex.
Host: That makes sense. It's not just programming; it's like directing an intelligent, interactive movie.
Expert: Exactly. The other key challenges were ensuring security and privacy within the training environment itself, managing the social dynamics in an immersive world, and finally, the big one: change management and user adoption. There's a learning curve, especially for employees who aren't gamers.
Host: This is the crucial question for our listeners, Alex. Given those challenges, why should a business leader care? What are the practical takeaways here?
Expert: This is where the study provides a clear roadmap. The biggest takeaway is that this technology can create a hyper-realistic, safe space for your teams to practice against advanced threats. It's like a flight simulator for cyber defenders.
Host: So it moves training from theory to practice.
Expert: It’s a complete shift. The AI agents can simulate anything from a phishing attack to a nation-state adversary, adapting their tactics based on your team's response. This allows you to identify skills gaps proactively and build real muscle memory for crisis situations.
Host: What's the first step for a company that finds this interesting?
Expert: The study recommends starting with small, focused pilot programs. Don't try to build a massive corporate metaverse overnight. Target a specific, high-priority training need, like incident response for a junior analyst team. Measure the results, prove the value, and then scale.
Host: And it’s crucial to involve more than just the IT department, right?
Expert: Absolutely. This has to be a cross-functional effort. You need your cybersecurity experts, your AI developers, your instructional designers from HR, and legal to think about privacy from day one. It's about building a scalable, secure, and truly effective training ecosystem. The payoff is a more resilient and adaptive workforce.
Host: A fascinating look into the future of professional development. So, to sum it up: traditional cybersecurity training is falling behind. The 'agentic metaverse' offers a dynamic, AI-powered solution that’s highly realistic and engaging. While significant challenges in infrastructure and design exist, the potential to effectively close the skills gap is immense.
Host: Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We’ll see you next time.
Agentic Metaverse, Cybersecurity Training, Workforce Development, AI Agents, Immersive Learning, Virtual Reality, Training Simulation
International Conference on Wirtschaftsinformatik (2025)
To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education
Nadine Bisswang, Georg Herzwurm, Sebastian Richter
This study proposes a taxonomy to help educators in higher education systematically assess whether virtual reality (VR) is suitable for specific learning content. The taxonomy is grounded in established theoretical frameworks and was developed through a multi-stage process involving literature reviews and expert interviews. Its utility is demonstrated through an illustrative scenario where an educator uses the framework to evaluate a specific course module.
Problem
Despite the increasing enthusiasm for using virtual reality (VR) in education, its suitability for specific topics remains unclear. University lecturers, particularly those without prior VR experience, lack a structured approach to decide when and why VR would be an effective teaching tool. This gap leads to uncertainty about its educational benefits and hinders its effective adoption.
Outcome
- Developed a taxonomy that structures the reasons for and against using VR in higher education across five dimensions: learning objective, learning activities, learning assessment, social influence, and hedonic motivation.
- The taxonomy provides a balanced overview by organizing 24 distinct characteristics into factors that favor VR use ('+') and factors that argue against it ('-').
- The framework serves as a practical decision-support tool that lets lecturers make an informed initial assessment of VR's suitability for their learning content without prior technical experience.
- The study demonstrates the taxonomy's utility by applying it to a 'warehouse logistics management' learning scenario, showing how it can guide educators' decisions.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the world of virtual reality in education and training, looking at a study titled, "To VR or not to VR? A Taxonomy for Assessing the Suitability of VR in Higher Education".
Host: With me is our analyst, Alex Ian Sutherland. Alex, this study seems timely. It proposes a framework to help educators systematically assess if VR is actually the right tool for specific learning content.
Expert: That's right, Anna. It’s about moving beyond the hype and making informed decisions.
Host: So, let's start with the big problem. We hear constantly that VR is the future, but what's the real-world challenge this study is addressing?
Expert: The core problem is uncertainty. An educator, or a corporate trainer for that matter, might be excited by VR's potential, but they lack a clear, structured way to decide if it's genuinely effective for their specific topic.
Host: So they’re asking themselves, "Should I invest time and money into creating a VR module for this?"
Expert: Exactly. And without a framework, that decision is often based on gut feeling rather than evidence. This can lead to ineffective adoption, where the technology doesn't actually improve learning outcomes, or it gets used for the wrong things.
Host: It’s the classic ‘shiny new toy’ syndrome. So how did the researchers create a tool to solve this? What was their approach?
Expert: It was a very practical, multi-stage process. They didn't just theorize. They combined established educational frameworks with real-world experience. They conducted sixteen in-depth interviews with experts—university lecturers with years of VR experience and the developers who actually build these applications.
Host: So they grounded the theory in practical wisdom.
Expert: Precisely. This allowed them to build a comprehensive framework that is both academically sound and relevant to the people who would actually use it.
Host: And this framework is what the study calls a 'taxonomy'. For our listeners, what does that actually look like?
Expert: Think of it as a detailed decision-making checklist. It organizes the reasons for and against using VR across five key dimensions.
Host: What are those dimensions?
Expert: The first three are directly about the teaching process: the **Learning Objective**—what you want people to learn; the **Learning Activities**—how they will learn it; and the **Learning Assessment**—how you’ll measure if they've learned it.
Host: That makes sense. Objective, activity, and assessment. What are the other two?
Expert: The other two are about the human and social context. One is **Social Influence**, which considers whether colleagues and the organization support the use of VR. The other is **Hedonic Motivation**, which is really about whether people are personally and professionally motivated to use the technology.
Host: And I understand the framework gives a balanced view, right?
Expert: Yes, and that’s a key strength. For each of those five areas, the taxonomy lists characteristics that favor using VR—marked with a plus—and those that argue against it—marked with a minus. It gives you a clear, balanced scorecard to inform your decision.
Host: This is fascinating. While the study focuses on higher education, the implications for the business world seem enormous, particularly for corporate training. What is the key takeaway for a business leader?
Expert: The takeaway is that this framework provides a strategic tool for investing in training technology. You can substitute 'corporate L&D manager' for 'lecturer,' and the challenges are identical. It helps a business move from asking, "Should we use VR?" to the much smarter question, "Where will VR deliver the best return on investment for us?"
Host: Could you walk us through a business example?
Expert: Of course. The study uses the example of teaching 'warehouse logistics management.' For a large retail or logistics company, training new employees on the layout and flow of a massive fulfillment center is a real challenge. It can be costly, disruptive to operations, and even unsafe.
Host: So how would the taxonomy help here?
Expert: A training manager would see a strong case for VR. The *learning objective* is to understand a complex physical space. The *learning activity* is exploration. VR allows a new hire to do that safely, on-demand, and without setting foot on a busy warehouse floor. It makes training scalable and reduces disruption.
Host: And importantly, it also helps identify where *not* to use VR.
Expert: Exactly. If your training module is on new compliance regulations or software that's purely text and forms, the taxonomy would quickly show that VR is overkill. You don't need an immersive, 3D world for that. This prevents companies from wasting money on VR for tasks where a simple video or e-learning module is more effective.
Host: So, in essence, it’s not about being for or against VR, but about being strategic in its application. This framework gives organizations a clear, evidence-based method to decide where this powerful technology truly fits.
Host: A brilliant tool for any business leader exploring immersive learning technologies. Alex Ian Sutherland, thank you for breaking down this study for us.
Expert: My pleasure, Anna.
Host: And to our audience, thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
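The scorecard logic described in the interview can be sketched as a simple checklist tally. The five dimension names come from the study; the example characteristics, the +1/-1 scoring, and the function names below are illustrative assumptions, not the study's actual 24 items or a prescribed scoring rule.

```python
# Hypothetical sketch of the VR taxonomy as a decision checklist.
# Dimension names are from the study; the characteristics and the
# simple +/- tally are illustrative assumptions.

DIMENSIONS = (
    "learning objective",
    "learning activities",
    "learning assessment",
    "social influence",
    "hedonic motivation",
)

def assess(checked):
    """Tally '+' characteristics (favoring VR, +1) and '-' characteristics
    (against VR, -1) that an educator marked as applicable."""
    per_dim = {}
    for dim in DIMENSIONS:
        per_dim[dim] = sum(sign for _, sign in checked.get(dim, []))
    return {"per_dimension": per_dim, "overall": sum(per_dim.values())}

# Example: the warehouse-logistics scenario might mark items like these.
scenario = {
    "learning objective": [("understand a complex physical space", +1)],
    "learning activities": [("exploration of the environment", +1)],
    "social influence": [("colleagues skeptical of VR", -1)],
}
result = assess(scenario)
print(result["overall"])  # a positive total suggests VR merits a closer look
```

The point is the balanced view: the educator sees where the minuses cluster, not just a single score.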
International Conference on Wirtschaftsinformatik (2025)
Generative AI Usage of University Students: Navigating Between Education and Business
Fabian Walke, Veronika Föller
This study investigates how university students who also work professionally use Generative AI (GenAI) in both their academic and business lives. Using a grounded theory approach, the researchers interviewed eleven part-time students from a distance learning university to understand the characteristics, drivers, and challenges of their GenAI usage.
Problem
While much research has explored GenAI in education or in business separately, there is a significant gap in understanding its use at the intersection of these two domains. Specifically, the unique experiences of part-time students who balance professional careers with their studies have been largely overlooked.
Outcome
- GenAI significantly enhances productivity and learning for students balancing work and education, helping with tasks like writing support, idea generation, and summarizing content.
- Students express concerns about the ethical implications, reliability of AI-generated content, and the risk of academic misconduct or being falsely accused of plagiarism.
- A key practical consequence is that GenAI tools like ChatGPT are replacing traditional search engines for many information-seeking tasks due to their speed and directness.
- The study highlights a strong need for universities to provide clear guidelines, regulations, and formal training on using GenAI effectively and ethically.
- User experience is a critical factor; a positive, seamless interaction with a GenAI tool promotes continuous usage, while a poor experience diminishes willingness to use it.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "Generative AI Usage of University Students: Navigating Between Education and Business."
Host: It explores a very specific group: university students who also hold professional jobs. It investigates how they use Generative AI tools like ChatGPT in both their academic and work lives. And here to help us unpack it is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why focus on this particular group of working students? What’s the problem this study is trying to solve?
Expert: Well, there's a lot of research on GenAI in the classroom and a lot on GenAI in the workplace, but very little on the bridge between them.
Expert: These part-time students are a unique group. They are under immense time pressure, juggling deadlines for both their studies and their jobs. The study wanted to understand if GenAI is helping them cope, how they use it, and what challenges they face.
Expert: Essentially, their experience is a sneak peek into the future of a workforce that will be constantly learning and working with AI.
Host: So, how did the researchers get these insights? What was their approach?
Expert: They took a very direct, human-centered approach. Instead of a broad survey, they conducted in-depth, one-on-one interviews with eleven of these working students.
Expert: This allowed them to move beyond simple statistics and really understand the nuances, the strategies, and the genuine concerns people have when using these powerful tools in their day-to-day lives.
Host: That makes sense. So let's get to it. What were the key findings?
Expert: The first major finding, unsurprisingly, is that GenAI is a massive productivity booster for them. They use it for everything from summarizing articles and generating ideas for papers to drafting emails and even debugging code for work. It saves them precious time.
Host: But I imagine it's not all smooth sailing. Were there concerns?
Expert: Absolutely. That was the second key finding. Students are very aware of the risks. They worry about the accuracy of the information, with one participant noting, "You can't blindly trust everything he says."
Expert: There’s also a significant fear around academic integrity. They’re anxious about being falsely accused of plagiarism, especially when university guidelines are unclear. As one student put it, "I think that's a real shame because you use Google or even your parents to correct your work and... that is absolutely allowed."
Host: That’s a powerful point. Did any other user behaviors stand out?
Expert: Yes, and this one is huge. For many information-seeking tasks, GenAI is actively replacing traditional search engines like Google.
Expert: Nearly all the students said they now turn to ChatGPT first. It’s faster. Instead of sifting through pages of links, they get a direct, synthesized answer. One student even said, "Googling is a skill itself," implying it's a skill they need less often now.
Host: That's a fundamental shift. So bringing all these findings together, what's the big takeaway for businesses? Why does this study matter for our listeners?
Expert: It matters immensely, Anna, for several reasons. First, this is your incoming workforce. New graduates and hires will arrive expecting to use AI tools. They'll be looking for companies that don't just permit it, but actively integrate it into workflows to boost efficiency.
Host: So businesses need to be prepared for that. What else?
Expert: Training and guidelines are non-negotiable. This study screams that users need and want direction. Companies can’t afford a free-for-all.
Expert: They need to establish clear policies on what data can be used, how to verify AI-generated content, and how to use it ethically. One student worked at a bank where public GenAI tools were banned due to sensitive customer data. That's a risk every company needs to assess. Proactive training isn't just a nice-to-have; it's essential risk management.
Host: That seems critical, especially with data privacy. Any final takeaway for business leaders?
Expert: Yes: user experience is everything. The study found that a smooth, intuitive, and fast AI tool encourages continuous use, while a clunky interface kills adoption.
Expert: If you're building or buying AI solutions for your team, the quality of the user experience is just as important as the underlying model. If it's not easy to use, your employees simply won't use it.
Host: So, to recap: we have an incoming AI-native workforce, a critical need for clear corporate guidelines and training, and the lesson that user experience will determine success or failure.
Host: Alex, this has been incredibly insightful. Thank you for breaking down this study for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. We’ll see you next time.
International Conference on Wirtschaftsinformatik (2025)
Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback
Maximilian May, Konstantin Hopf, Felix Haag, Thorsten Staake, and Felix Wortmann
This study examines the effectiveness of social normative feedback in improving student engagement within a flipped classroom setting. Through a randomized controlled trial with 140 undergraduate students, researchers provided one group with emails comparing their assignment progress to their peers, while a control group received no such feedback during the main study period.
Problem
The flipped classroom model requires students to be self-regulated, but many struggle with procrastination, leading to late submissions of graded assignments and underuse of voluntary learning materials. This behavior negatively affects academic performance, creating a need for scalable digital interventions that can encourage more timely and active student participation.
Outcome
- The social normative feedback intervention significantly reduced late submissions of graded assignments by 8.4 percentage points (an 18.5% decrease) compared to the control group.
- Submitting assignments earlier was strongly correlated with higher correctness rates and better academic performance.
- The feedback intervention helped mitigate the decline in assignment quality that was observed in later course modules for the control group.
- The intervention did not have a significant effect on students' engagement with optional, voluntary assignments during the semester.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a study that has some fascinating implications for how we motivate people, not just in the classroom, but in the workplace too.
Host: It’s titled, "Fostering Active Student Engagement in Flipped Classroom Teaching with Social Normative Feedback," and it explores how a simple psychological nudge can make a big difference.
Host: With me is our analyst, Alex Ian Sutherland, who has looked deep into this study. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. What's the real-world problem this study is trying to solve?
Expert: The problem is something many of us can relate to: procrastination. The study focuses on the "flipped classroom" model, which is becoming very common in both universities and corporate training.
Host: And a flipped classroom is where you watch lectures or read materials on your own time, and then use class time for more hands-on, collaborative work, right?
Expert: Exactly. It puts a lot of responsibility on the learner to be self-motivated. But what often happens is the "student syndrome"—people postpone their work until the last minute. This leads to late assignments, cramming, and ultimately, poorer performance.
Host: It sounds like a common headache for any organization running online training programs. So how did the researchers try to tackle this?
Expert: They ran a randomized controlled trial with 140 university students. They split the students into two groups. One was the control group, who just went through the course as usual.
Expert: The other, the treatment group, received a simple intervention: a weekly email. This email included a visual progress bar showing them how many assignments they had correctly completed compared to their peers.
Host: So it showed them where they stood? Like, 'you are here' in relation to the average student?
Expert: Precisely. It showed them their progress relative to the median and the top 10% of their classmates who were active in the module. It’s a classic behavioral science technique called social normative feedback—a gentle nudge using our inherent desire to keep up with the group.
Host: A simple email nudge... it sounds almost too simple. Did it actually work? What were the key findings?
Expert: It was surprisingly effective, but in specific ways. First, for graded assignments, the feedback worked wonders. The group receiving the emails reduced their late submissions by 18.5%.
Host: Wow, that's a significant drop just from knowing how they compared to others.
Expert: Yes, and that timing is critical. The study confirmed what you’d expect: students who submitted their work earlier also had higher scores. So the nudge didn't just change timing, it indirectly improved performance.
Host: What else did they find?
Expert: They also noticed that over the semester, the quality of work from the control group—the ones without the emails—started to decline slightly. The feedback nudge helped the other group maintain a higher quality of work throughout the course.
Host: That’s interesting. But I hear a 'but' coming. Where did the intervention fall short?
Expert: It didn't have any real effect on optional, voluntary assignments. Students were still putting those off. The takeaway seems to be that when people are busy, they focus on the mandatory, graded tasks. The social nudge was powerful, but not powerful enough to get them to do the 'extra credit' work during a busy semester.
Host: That makes a lot of sense. This is fascinating for education, but we're a business and tech podcast. Alex, why does this matter for our listeners in the business world?
Expert: This is the most exciting part, Anna. The applications are everywhere. First, think about corporate training and employee onboarding. So many companies use self-paced digital learning platforms and struggle with completion rates.
Host: The same procrastination problem.
Expert: Exactly. This study provides a blueprint for a low-cost, automated solution. Imagine a new hire getting a weekly email saying, "You've completed 3 of 5 onboarding modules. You're right on track with 70% of your new-hire cohort." It’s a scalable way to keep people engaged and moving forward.
Host: That's a great point. It applies a bit of positive social pressure. Where else could this be used?
Expert: In performance management and sales. Instead of just showing a salesperson their individual progress to quota, a dashboard could anonymously show them where they are relative to the team median. It can motivate the middle performers to catch up without creating a cutthroat environment.
Host: So it's about using data to provide context for performance.
Expert: Right. But the key is to apply it correctly. Remember how the nudge failed with optional tasks? For businesses, this means these interventions are most effective when tied to core responsibilities and key performance indicators—the things that really matter—not optional, 'nice-to-have' activities.
Host: So focus the nudges on the KPIs. That’s a crucial takeaway.
Expert: One last thing—this is huge for digital product design. Anyone building a fitness app, a financial planning tool, or any platform that relies on user engagement can use this. A simple message like, "You’ve saved more this month than 60% of users your age," can be a powerful driver of behavior and retention.
Host: So, to summarize, this study shows that simple, automated social feedback is a powerful tool to combat procrastination and boost performance on critical tasks.
Host: And for business leaders, the lesson is that these light-touch nudges can be applied in training, performance management, and product design to drive engagement, as long as they're focused on what truly counts.
Host: Alex Ian Sutherland, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
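The weekly nudge described in the episode, a progress bar plus comparison against the cohort median and top 10%, can be sketched in a few lines. The function name, message wording, and simple percentile arithmetic below are illustrative assumptions, not the study's actual email template.

```python
# Illustrative sketch of a social-normative-feedback message: a learner's
# completed assignments shown against the cohort median and the top 10%.
# Wording and field names are assumptions for illustration.

def feedback_message(name, completed, total, cohort):
    """Build a weekly nudge email body from a learner's progress and the
    distribution of completed-assignment counts across the cohort."""
    ranked = sorted(cohort)
    median = ranked[len(ranked) // 2]          # simple median for odd/even cohorts
    top10 = ranked[int(len(ranked) * 0.9)]     # rough threshold for the top 10%
    bar = "#" * completed + "-" * (total - completed)  # text progress bar
    return (
        f"Hi {name}, you have correctly completed {completed} of {total} "
        f"assignments [{bar}]. The cohort median is {median}; the top 10% "
        f"have completed {top10} or more."
    )

print(feedback_message("Sam", 3, 5, [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]))
```

In a real deployment the same message generator would run on a schedule against the learning platform's completion data; the study's point is that this comparison alone, not any new content, shifted submission timing.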
Flipped Classroom, Social Normative Feedback, Self Regulated Learning, Digital Interventions, Student Engagement, Higher Education
MIS Quarterly Executive (2024)
How to Design a Better Cybersecurity Readiness Program
This study explores the common pitfalls of four types of cybersecurity training by interviewing employees at large accounting firms. It identifies four unintended negative consequences of mistraining and overtraining and, in response, proposes the LEAN model, a new framework for designing more effective cybersecurity readiness programs.
Problem
Organizations invest heavily in cybersecurity readiness programs, but these initiatives often fail due to poor design, leading to mistraining and overtraining. This not only makes the training ineffective but can also create adverse effects like employee anxiety and fatigue, paradoxically amplifying an organization's cyber vulnerabilities instead of reducing them.
Outcome
- Conventional cybersecurity training often leads to four adverse effects on employees: threat anxiety, security fatigue, risk passivity, and cyber hesitancy.
- These individual effects cause significant organizational problems, including erosion of individual performance, fragmentation of team dynamics, disruption of client experiences, and stagnation of the security culture.
- The study proposes the LEAN model to counteract these issues, based on four strategies: Localize, Empower, Activate, and Normalize.
- The LEAN model recommends tailoring training to specific roles (Localize), fostering ownership and authority (Empower), promoting coordinated action through collaborative exercises (Activate), and embedding security into daily operations to build a proactive culture (Normalize).
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge with business innovation. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study called "How to Design a Better Cybersecurity Readiness Program." With me is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: This study explores the common pitfalls of cybersecurity training, looking at what happens when we mistrain or overtrain employees. More importantly, it proposes a new framework for getting it right.
Host: So, Alex, let's start with the big picture. Companies are pouring billions into cybersecurity training. What's the problem this study identified?
Expert: The problem is that much of that investment is wasted. The study shows that poorly designed training doesn't just fail to work; it can actually make things worse.
Host: Worse? How so?
Expert: Instead of reducing risk, it can create what the study calls adverse effects, like extreme anxiety about security, or a kind of burnout called security fatigue. Paradoxically, this can amplify an organization's vulnerabilities.
Host: So our attempts to build a human firewall are actually creating cracks in it. How did the researchers uncover this? What was their approach?
Expert: They went straight to the source. They conducted in-depth interviews with 23 employees at the four major U.S. accounting firms—organizations that are on the front lines of handling sensitive client data.
Host: And what were the key findings from those interviews? What are these negative side effects you mentioned?
Expert: The study identified four main consequences. The first is Threat Anxiety, where employees become so hyper-aware and fearful of making a mistake that their productivity drops. They second-guess every email they open.
Host: I can imagine that. What's next?
Expert: Second is Security Fatigue. This is cognitive burnout from constant alerts, repetitive training, and complex rules. Employees get overwhelmed and simply tune out, which is incredibly dangerous.
Host: It sounds like alarm fatigue for the inbox.
Expert: Exactly. The third is Risk Passivity, which is a paradoxical outcome. Some employees become so desensitized by constant warnings they start ignoring real threats. Others become paralyzed by the perceived risk of every action.
Host: And the last one?
Expert: The fourth is Cyber Hesitancy. This is a reluctance to use new tools or even collaborate with colleagues for fear of blame. It creates a culture of suspicion, not security. The study found this fragments team dynamics and stalls innovation.
Host: These sound like serious cultural issues, not just IT problems. This brings us to the most important question for our listeners: Why does this matter for business, and what's the solution?
Expert: It matters because the old approach is broken. The study proposes a new framework to fix it, called the LEAN model. It's an acronym for four key strategies.
Host: Okay, break it down for us. What does LEAN stand for?
Expert: The 'L' is for Localize. It means stop the one-size-fits-all training. Tailor the content to an employee's specific role. What an accountant needs to know is different from someone in marketing.
Host: That makes sense. What about 'E'?
Expert: 'E' is for Empower. This is about fostering ownership. Instead of just pushing rules, involve employees in creating and improving security protocols. This gives them a real stake in the outcome.
Host: From passive recipient to active participant. I like it. What's 'A'?
Expert: 'A' is for Activate. This means moving beyond solo quizzes to collaborative, team-based exercises. Let teams practice responding to a simulated threat together, fostering coordinated action and mastery.
Host: And finally, 'N'?
Expert: 'N' is for Normalize. This is the goal: embed security so deeply into daily operations that it becomes a natural part of the workflow, not a separate, dreaded task. It reframes security as a business enabler, not a barrier.
Host: So, to summarize, it seems the core message is that our cybersecurity training is often counterproductive, creating negative effects like fatigue and anxiety.
Host: The solution is a more human-focused, LEAN approach: Localize the training, Empower employees to take ownership, Activate teamwork through practice, and Normalize security into the company culture.
Host: Alex, thank you for breaking that down for us. It’s a powerful new way to think about security.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we explore the latest research impacting your business.
How Siemens Empowered Workforce Re- and Upskilling Through Digital Learning
Leonie Rebecca Freise, Eva Ritz, Ulrich Bretschneider, Roman Rietsche, Gunter Beitinger, and Jan Marco Leimeister
This case study examines how Siemens successfully implemented a human-centric, bottom-up approach to employee reskilling and upskilling through digital learning. The paper presents a four-phase model for leveraging information systems to address skill gaps and provides five key recommendations for organizations to foster lifelong learning in dynamic manufacturing environments.
Problem
The rapid digital transformation in manufacturing is creating a significant skills gap, with a high percentage of companies reporting shortages. Traditional training methods are often not scalable or adaptable enough to meet these evolving demands, presenting a major challenge for organizations trying to build a future-ready workforce.
Outcome
- The study introduces a four-phase model for developing human-centric digital learning: 1) Recognizing employee needs, 2) Identifying key employee traits (like self-regulation and attitude), 3) Developing tailored strategies, and 4) Aligning strategies with organizational goals.
- Key employee needs for successful digital learning include task-oriented courses, peer exchange, on-the-job training, regular feedback, personalized learning paths, and micro-learning formats ('learning nuggets').
- The paper proposes four distinct learning strategies based on employees' attitude and self-regulated learning skills, ranging from community mentoring for those low in both, to personalized courses for those high in both.
- Five practical recommendations for companies are provided: 1) Foster a lifelong learning culture, 2) Tailor digital learning programs, 3) Create dedicated spaces for collaboration, 4) Incorporate flexible training formats, and 5) Use analytics to provide feedback.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we break down complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating case study called "How Siemens Empowered Workforce Re- and Upskilling Through Digital Learning." It examines how the manufacturing giant successfully implemented a human-centric, bottom-up approach to employee training in the digital age. With me to unpack this is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. We hear about digital transformation constantly, but this study highlights a serious challenge that comes with it. What's the core problem they're addressing?
Expert: The core problem is a massive and growing skills gap. As manufacturing becomes more automated and digitized, the skills employees need are changing faster than ever. The study notes that in Europe alone, a staggering 77% of companies report skills shortages.
Expert: The old model of sending employees to a week-long training course once a year just doesn't work anymore. It's not scalable, it's not adaptable, and it often doesn't stick. Companies are struggling to build a future-ready workforce.
Host: So how did the researchers get inside this problem to find a solution? What was their approach?
Expert: They conducted an in-depth case study at Siemens Digital Industries. This wasn't about looking at spreadsheets from a distance. They went right to the source, conducting detailed interviews with employees from all levels—from the factory floor to management—to understand their genuine needs, challenges, and motivations when it comes to digital learning.
Host: Taking a human-centric approach to the research itself. So, what did they find? What were the key takeaways from those conversations?
Expert: They uncovered several critical insights, which they organized into a four-phase model for success. The first and most important finding is that you have to start by recognizing what employees actually need, not what the organization thinks they need.
Host: And what do employees say they need? Is it just more training courses?
Expert: Not at all. They need task-oriented training that’s directly relevant to their job. They want opportunities to exchange knowledge with their peers and mentors. And they really value flexible, bite-sized learning—what Siemens calls 'learning nuggets'. These are short, focused videos or tutorials they can access right on the factory floor during a short production stop.
Host: That makes so much sense. It's about integrating learning into the workflow. What else stood out?
Expert: A crucial finding was that a one-size-fits-all approach is doomed to fail because employees are not all the same. The research identified two key traits that determine how a person engages with learning: their attitude, meaning how motivated they are, and their skill at self-regulated learning, which is their ability to manage their own progress.
Expert: Based on those two traits, the study proposes four distinct strategies. For an employee with a great attitude and high self-regulation, you can offer a rich library of personalized courses and let them drive. But for someone with a low attitude and weaker self-regulation skills, you need to start with community mentoring and guided support to build their confidence.
Host: This is the most important part for our listeners. Alex, what does this all mean for a business leader? Why does this matter and how can they apply these lessons?
Expert: It matters because it offers a clear roadmap to solving the skills gap, and it creates immense business value through a more engaged and capable workforce. The study boils it down to five key recommendations. First, you have to foster a lifelong learning culture. Siemens's company-wide slogan is "Making learning a habit." It has to be a core value, not just an HR initiative.
Host: Okay, so culture is number one. What’s next?
Expert: Second, tailor the learning programs. Move away from generic content and use technology to create personalized learning paths for different roles and skill levels. This is far more cost-efficient and effective.
Host: You mentioned peer exchange. How does that fit in?
Expert: That’s the third recommendation: create dedicated spaces for collaboration. This can be digital or physical. Siemens successfully uses "digi-coaches"—employees who are trained to help their peers use the digital learning tools. It builds a supportive ecosystem.
Expert: The fourth is to incorporate flexible training formats. Those 'learning nuggets' are a perfect example. It respects the employee's time and workflow, which boosts engagement.
Expert: And finally, number five: use analytics to provide feedback. This isn't for surveillance, but to help employees track their own progress and for managers to identify where support is needed. It helps make learning a positive, data-informed journey.
Host: So, to summarize, the old top-down training model is broken. This study of Siemens proves that the path forward is a human-centric, bottom-up strategy. It's about truly understanding your employees' needs and tailoring learning to them.
Host: It seems that by empowering the individual, you empower the entire organization. Alex, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. Join us next time as we continue to connect knowledge with opportunity.
digital learning, upskilling, reskilling, workforce development, human-centric, manufacturing, case study
MIS Quarterly Executive (2024)
Establishing a Low-Code/No-Code-Enabled Citizen Development Strategy
Björn Binzer, Edona Elshan, Daniel Fürstenau, Till J. Winkler
This study analyzes the low-code/no-code adoption journeys of 24 different companies to understand the challenges and best practices of citizen development. Drawing on these insights, the paper proposes a seven-step strategic framework designed to guide organizations in effectively implementing and managing these powerful tools. The framework helps structure critical design choices to empower employees with little or no IT background to create digital solutions.
Problem
There is a significant gap between the high demand for digital solutions and the limited availability of professional software developers, which constrains business innovation and problem-solving. While low-code/no-code platforms enable non-technical employees (citizen developers) to build applications, organizations often lack a coherent strategy for their adoption. This leads to inefficiencies, security risks, compliance issues, and wasted investments.
Outcome
- The study introduces a seven-step framework for creating a citizen development strategy: Coordinate Architecture, Launch a Development Hub, Establish Rules, Form the Workforce, Orchestrate Liaison Actions, Track Successes, and Iterate the Strategy.
- Successful implementation requires a balance between centralized governance and individual developer autonomy, using 'guardrails' rather than rigid restrictions.
- Key activities for scaling the strategy include the '5E Cycle': Evangelize, Enable, Educate, Encourage, and Embed citizen development within the organization's culture.
- Recommendations include automating governance tasks, promoting business-led development initiatives, and encouraging the use of these tools by IT professionals to foster a collaborative relationship between business and IT units.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating new study titled "Establishing a Low-Code/No-Code-Enabled Citizen Development Strategy".
Host: It explores how companies can strategically empower their own employees—even those with no IT background—to create digital solutions using low-code and no-code tools. Joining me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let’s start with the big picture. Why is a study like this so necessary right now? What’s the core problem businesses are facing?
Expert: The problem is a classic case of supply and demand. The demand for digital solutions, for workflow automations, for new apps, is skyrocketing. But the supply of professional software developers is extremely limited and expensive. This creates a huge bottleneck that slows down innovation.
Host: And companies are turning to low-code platforms as a solution?
Expert: Exactly. They hope to turn regular employees into “citizen developers.” The issue is, most companies just buy the software and hope for the best, a sort of "build it and they will come" approach.
Expert: But without a real strategy, this can lead to chaos. We're talking security risks, compliance issues, duplicated efforts, and ultimately, wasted money. It's like giving everyone power tools without any blueprints or safety training.
Host: That’s a powerful analogy. So how did the researchers in this study figure out what the right approach should be?
Expert: They went straight to the source. They conducted in-depth interviews with leaders, managers, and citizen developers at 24 different companies that were already on this journey. They analyzed their successes, their failures, and the best practices that emerged.
Host: A look inside the real-world lab. What were some of the key findings that came out of that?
Expert: The study's main outcome is a seven-step strategic framework. It covers everything from coordinating the technology architecture to launching a central support hub and tracking successes.
Host: Can you give us an example?
Expert: One of the most critical findings was the need for balance between control and freedom. The study found that rigid, restrictive rules don't work. Instead, successful companies create ‘guardrails.’
Expert: One manager used a great analogy, saying, "if the guardrails are only 50 centimeters apart, I can only ride through with a bicycle, not a truck. Ultimately, we want to achieve that at least cars can drive through." It’s about enabling people safely, not restricting them.
Host: I love that. So it's not just about rules, but about creating the right environment.
Expert: Precisely. The study also identified what it calls the ‘5E Cycle’: Evangelize, Enable, Educate, Encourage, and Embed. This is a process for making citizen development part of the company’s DNA, to build a culture where people are excited and empowered to innovate.
Host: This is where it gets really practical. Let's talk about why this matters for a business leader. What are the key takeaways they can act on?
Expert: The first big takeaway is to promote business-led citizen development. This shouldn't be just another IT project. The study shows that the most successful initiatives are driven by the business units themselves, with 'digital leads' or champions who understand their department's specific needs.
Host: So, ownership moves from the IT department to the business itself. What else?
Expert: The second is to automate governance wherever possible. Instead of manual checks for every new app, companies can use automated tools—often built with low-code itself—to check for security issues or compliance. This frees up IT to focus on bigger problems and empowers citizen developers to move faster.
Host: And the final key takeaway?
Expert: It’s about fostering a new, symbiotic relationship between business and IT. For decades, IT has often been seen as the department of "no." This study shows how citizen development can be a bridge. One leader admitted that building trust was their biggest hurdle, but now IT is seen as a valuable partner that enables transformation.
Host: It sounds like this is about much more than just technology; it’s a fundamental shift in how work gets done.
Expert: Absolutely. It’s about democratizing digital innovation.
Host: Fantastic insights, Alex. To sum it up for our listeners: the developer shortage is a major roadblock, but simply buying low-code tools isn't the answer.
Host: This study highlights the need for a clear strategy, one that uses flexible guardrails, builds a supportive culture, and transforms the relationship between business and IT from a source of friction to a true partnership.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping the future of business.
Citizen Development, Low-Code, No-Code, Digital Transformation, IT Strategy, Governance Framework, Upskilling
Communications of the Association for Information Systems (2024)
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews, workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.
Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.
Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research to real-world business strategy, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a topic that’s becoming increasingly relevant in our AI-driven world: how to make our digital tools not just smarter, but more supportive. We’re diving into a study titled "Design Knowledge for Virtual Learning Companions from a Value-centered Perspective".
Host: In simple terms, it's about creating AI-powered chatbots that act as true companions, helping students with the very human challenges of motivation and time management. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a fascinating study with huge implications.
Host: Let's start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: Well, think about anyone trying to learn something new while juggling a job and a personal life. It could be a university student working part-time or an employee trying to upskill. The biggest hurdles often aren't the course materials themselves, but staying motivated and managing time effectively.
Host: That’s a struggle many of our listeners can probably relate to.
Expert: Exactly. And while we have powerful AI tools like ChatGPT that can answer questions, they function like a know-it-all tutor. They provide information, but they don't provide companionship. They don't check in on you, encourage you when you're struggling, or help you plan your week. This study addresses that gap.
Host: So it's about making AI more of a partner than just a tool. How did the researchers go about figuring out how to build something like that?
Expert: They used a very hands-on approach called design science research. Instead of just theorizing, they went through multiple cycles of building and testing. They started by conducting in-depth interviews with working students to understand their real needs. Then, they held workshops, designed a couple of conceptual prototypes, and eventually built and coded a fully functional AI companion that they tested with different student groups.
Host: So it’s a methodology that’s really grounded in user feedback. What were the key findings? What did they learn from all this?
Expert: The main outcome is a powerful framework for designing these Virtual Learning Companions, or VLCs. The big idea is that the companion's value is created through the interaction itself, which they break down into three distinct but connected layers.
Host: Three layers. Can you walk us through them?
Expert: Of course. First is the Relationship Layer. This is all about creating a human-like, trustworthy companion. The AI should be able to show empathy, maybe use a bit of humor, and build a sense of connection with the user over time. It’s the foundation.
Host: Okay, so it’s about the personality and the bond. What's next?
Expert: The second is the Matching Layer. This is about adaptation and personalization. The study found that a one-size-fits-all approach fails. The VLC needs to adapt to the user's individual learning style, their personality, and even their current mood or context.
Host: And the third layer?
Expert: That's the Service Layer. This is where the more functional support comes in. It includes features for time management, like creating to-do lists and setting reminders, as well as providing supportive learning content and creating a motivational environment, perhaps with gentle nudges or rewards.
Host: This all sounds great in theory, but did they see it work in practice?
Expert: They did, and they also uncovered a critical insight. When they tested their prototype, they found that full-time university students thought the AI’s language was too informal and colloquial. But a group of working professionals in a continuing education program found the exact same AI to be too formal!
Host: Wow, that’s a direct confirmation of what you said about the Matching Layer. The companion has to be adaptable.
Expert: Precisely. It proves that to be effective, these tools must be tailored to their specific audience and context.
Host: Alex, this is the crucial part for our audience. Why does this matter for business? What are the practical takeaways?
Expert: The implications are huge, Anna, and they go way beyond the classroom. Think about corporate training and HR. Imagine a new employee getting an AI companion that doesn't just teach them software systems, but helps them manage the stress of their first month and checks in on their progress and motivation. That could have a massive impact on engagement and retention.
Host: I can see that. It’s a much more holistic approach to onboarding. Where else?
Expert: For any EdTech company, this framework is a blueprint for building more effective and engaging products. It's about moving from simple content delivery to creating a supportive learning ecosystem. But you can also apply these principles to customer-facing bots. An AI that can build a relationship and adapt to a customer's technical skill or frustration level will provide far better service and build long-term loyalty.
Host: So the key business takeaway is to shift our thinking.
Expert: Exactly. The value of AI in these roles isn't just in the functional task it completes, but in the supportive, adaptive relationship it builds with the user. It’s the difference between an automated tool and a true digital partner.
Host: A fantastic insight. So, to summarize: today's professionals face real challenges with motivation and time management. This study gives us a three-layer framework—Relationship, Matching, and Service—to build AI companions that truly help. For businesses, this opens up new possibilities in corporate training, EdTech, and even customer relations.
Host: Alex, thank you so much for translating this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in. This has been A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable knowledge for your business.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value