Smart Bins: Case study-based benefit evaluation of filling level sensors in smart waste containers
David Hoffmann, Ruben Franz, Florian Hawlitschek, Nico Jahn
This study evaluates the potential benefits of using filling level sensors in waste containers, transforming them into "smart bins" for more efficient waste management. Through a multiple case study with three German waste management companies, the paper explores the practical application of different sensor technologies to identify key challenges, provide recommendations for pilot projects, and outline requirements for future development.
Problem
Traditional waste management relies on emptying containers at fixed intervals, regardless of how full they are. This practice is inefficient, leading to unnecessary costs and emissions from premature collections or overflowing bins and littering from late collections. Furthermore, existing research on smart bin technology is fragmented and often limited to simulations, lacking practical insights from real-world deployments.
Outcome
- Pilot studies revealed significant optimization potential, with analyses showing that some containers were only 50% full at their scheduled collection time.
- The implementation of sensor technology requires substantial effort in planning, installation, calibration, and maintenance, including the need for manual data collection to train algorithms.
- Fill-level sensors are not precision instruments and are prone to outliers, but they are sufficiently accurate for waste management when used to classify fill levels into broad categories (e.g., quartiles).
- Different sensor types are suitable for different waste materials; for example, vibration-based sensors proved 94.5% accurate for paper and cardboard, which can expand after being discarded.
- Major challenges include the lack of technical standards for sensor installation and data interfaces, as well as the difficulty of integrating proprietary sensor platforms with existing logistics and IT systems.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re digging into a topic that affects every city and nearly every business: waste management. We've all seen overflowing public trash cans or collection trucks emptying bins that are practically empty.
Host: We're looking at a fascinating study titled "Smart Bins: Case study-based benefit evaluation of filling level sensors in smart waste containers".
Host: It explores how turning regular bins into "smart bins" with sensors can make waste management much more efficient. To help us understand the details, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What is the fundamental problem with the way we've traditionally handled waste collection?
Expert: The core problem is inefficiency. Most waste management operates on fixed schedules. A truck comes every Tuesday, for example, regardless of whether a bin is 10% full or 110% full and overflowing.
Host: And that creates two different problems, I imagine.
Expert: Exactly. If the truck collects a half-empty bin, you've wasted fuel, labor costs, and created unnecessary emissions. If it's collected too late, you get overflowing containers, which leads to littering and public health concerns. The study points out that much of the existing research on this was based on simulations, not real-world data.
Host: So this study took a more hands-on approach. How did the researchers actually test this technology?
Expert: They conducted practical pilot projects with three different waste management companies in Germany. They installed various types of sensors in a range of containers—from public litter bins to large depot containers for glass and paper—to see how they performed in the real world.
Host: A real-world stress test. So, what were the most significant findings? Was there real potential for optimization?
Expert: The potential is massive. The analysis from one pilot showed that some containers were only 50% full at their scheduled collection time. That's a huge window for efficiency gains.
Host: That's a significant number. But I'm guessing it's not as simple as just plugging in a sensor and saving money.
Expert: You're right. A key finding was that the implementation requires substantial effort. We're talking about the whole lifecycle: planning, physical installation, and importantly, calibration. To make the sensors accurate, they had to manually collect data on fill levels to train the system's algorithms.
Host: That's a hidden cost for sure. How reliable is the sensor data itself?
Expert: That was another critical insight. These fill-level sensors are not precision instruments. They can have outliers, for instance, if a piece of trash lands directly on the sensor.
Host: So they're not perfectly accurate?
Expert: They don't have to be. The study found they are more than accurate enough for waste management if you reframe the goal. You don't need to know if a bin is 71% full versus 72%. You just need to classify it into broad categories, like quartiles—empty, 25%, 50%, 75%, or full. That's enough to make a smart collection decision.
Host: That makes a lot of sense. Did they find that certain sensors work better for certain types of waste?
Expert: Absolutely. This was one of the most interesting findings. For paper and cardboard, which can often expand after being discarded, a standard ultrasonic sensor might get a false reading. The study found that vibration-based sensors, which detect the vibrations of new waste being thrown in, proved to be 94.5% accurate for those materials.
Host: Fascinating. So let's get to the most important part for our audience: why does this matter for business? What are the key takeaways?
Expert: The primary takeaway is the move from static to dynamic logistics. Instead of a fixed route, a company can generate an optimized collection route each day based only on the bins that are actually full. This directly translates to savings in fuel, vehicle maintenance, and staff hours, while also reducing a company's carbon footprint.
Host: The return on investment seems clear. But what are the major challenges a business leader should be aware of before diving in?
Expert: The study highlights two major hurdles. The first is integration. Many sensor providers offer their own proprietary software platforms. Getting this new data to integrate smoothly with a company's existing logistics and IT systems is a significant technical challenge.
Expert: The second hurdle is the lack of industry standards. There are no common rules for how sensors should be installed or what format the data should be in. This complicates deployment, especially at a large scale.
Host: So it's powerful technology, but the ecosystem around it is still maturing.
Expert: Precisely. The takeaway for businesses is to view this not as a simple plug-and-play device, but as a strategic logistics project. It requires upfront investment in planning and calibration, but the potential for long-term efficiency and sustainability gains is enormous.
Host: A perfect summary. So, to recap: Traditional waste collection is inefficient. Smart bins with sensors offer a powerful way to optimize routes, saving money and reducing emissions. However, businesses must be prepared for significant implementation challenges, especially around calibrating the system and integrating it with existing software.
Host: Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we decode another key study for your business.
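To make the quartile classification discussed above concrete, here is a minimal sketch of how noisy fill-level readings might be smoothed and bucketed into broad categories. The median window, thresholds, and reading format are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: damp one-off outliers with a rolling median, then map the
# smoothed fill level (0-100%) to a coarse quartile bucket. All thresholds
# and the data format are assumptions for illustration.
from statistics import median

def fill_category(readings: list[float], window: int = 5) -> str:
    """Map recent fill-level readings (0-100%) to a quartile bucket."""
    smoothed = median(readings[-window:])  # median filter rejects stray spikes
    for threshold, label in ((12.5, "empty"), (37.5, "25%"),
                             (62.5, "50%"), (87.5, "75%")):
        if smoothed < threshold:
            return label
    return "full"

# One stray reading (e.g., a bag resting on the sensor) barely moves the
# median, so the bin is still classified as roughly half full.
print(fill_category([48.0, 52.0, 99.0, 50.0, 47.0]))  # -> "50%"
```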
Waste management, Smart bins, Filling level measurement, Sensor technology, Internet of Things
Personnel Review (2024)
Beyond the office: an examination of remote work, social and job features on individual satisfaction and engagement
Rossella Cappetta, Sara Lo Cascio, Massimo Magni, Alessia Marsico
This study examines the effects of remote work on employees' satisfaction and engagement, aiming to identify which factors enhance these outcomes. The research is based on a survey of 1,879 employees and 262 managers within a large company that utilizes a hybrid work model.
Problem
The rapid and widespread adoption of remote work has fundamentally transformed work environments and disrupted traditional workplace dynamics. However, its effects on individual employees remain inconclusive, with conflicting evidence on whether it is a source of support or discomfort, creating a need to understand the key drivers of satisfaction and engagement in this new context.
Outcome
- Remote work frequency is negatively associated with employee engagement and has no significant effect on job satisfaction.
- Positive social features, such as supportive team and leader relationships, significantly increase both job satisfaction and engagement.
- Job features like autonomy were found to be significant positive drivers for employees, but not for managers.
- A high-quality relationship between a leader and an employee (leader-member exchange) can alleviate the negative effects of exhaustion on satisfaction and engagement.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, where we translate complex research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we're looking at a new study that tackles one of the biggest questions in the modern workplace. It’s titled, "Beyond the office: an examination of remote work, social and job features on individual satisfaction and engagement".
Host: Essentially, it takes a deep dive into how remote and hybrid work models are really affecting employees, aiming to identify the specific factors that make them thrive. With me today to unpack this is our analyst, Alex Ian Sutherland.
Expert: Great to be here, Anna.
Host: Alex, we've all lived through this massive shift to remote work. The big question on every leader's mind is: is it actually working for our people? The conversation seems so polarized.
Expert: It is, and that’s the core problem this study addresses. The evidence has been contradictory. Some praise remote work for its flexibility, while others point to widespread burnout and isolation. The researchers call this the "telecommuting paradox."
Expert: Businesses need to cut through that noise to understand what truly drives satisfaction and engagement in this new environment. It’s no longer a perk for a select few; it’s a fundamental part of how we operate.
Host: So how did the researchers go about solving this paradox? What was their approach?
Expert: They went straight to the source with a large-scale survey. They collected data from nearly 1,900 employees and over 260 managers, all within a large company that uses a flexible hybrid model.
Expert: This gave them a fantastic real-world snapshot of how different variables—from the number of days someone works remotely to the quality of their team relationships—actually connect to those feelings of satisfaction and engagement.
Host: Let's get right to the findings then. What was the most surprising result?
Expert: The big surprise was that the frequency of remote work, meaning the number of days spent working from home, was actually negatively associated with employee engagement.
Host: So, working from home more often meant people felt less engaged?
Expert: Exactly. And even more surprisingly, it had no significant effect on their overall job satisfaction. People weren't necessarily happier, and they were measurably less connected to their work.
Host: That seems completely counterintuitive. Why would that be?
Expert: The study suggests that satisfaction is a short-term, day-to-day feeling. The benefits of remote work, like no commute, likely balance out the negatives, like social isolation, so satisfaction stays neutral.
Expert: But engagement is different. It’s a deeper, long-term emotional and intellectual connection to your work, your team, and the company's mission. That connection appears to weaken with sustained physical distance.
Host: If it’s not the schedule, then what does boost satisfaction and engagement?
Expert: It all comes down to people. The study was very clear on this. Positive social features, especially having a high-quality, supportive relationship with your direct manager, were the most powerful drivers of both satisfaction and engagement. Good team relationships were also very important.
Host: And what about the work itself? Did things like autonomy play a role?
Expert: They did, but in a nuanced way. For employees, having autonomy—more control over how and when they do their work—was a significant positive factor. But for managers, their own autonomy wasn't as critical for their personal satisfaction.
Expert: And there was one more critical finding related to this: a strong leader-employee relationship acts as a buffer. It can actually alleviate the negative impact of exhaustion and burnout on an employee's well-being.
Host: This is incredibly useful. Let's move to the bottom line. What are the key takeaways for business leaders listening to us right now?
Expert: The first and most important takeaway is to shift the conversation. Stop focusing obsessively on the number of days in or out of the office. The real leverage is in building and maintaining strong social fabric and supportive relationships within your teams.
Host: And how can leaders practically do that in a hybrid setting?
Expert: By investing in their middle managers. They are the lynchpin. The study's implications show that managers need to be trained to lead differently—to foster collaboration and psychological safety, not just monitor tasks. This means encouraging meaningful, regular conversations that go beyond simple status updates.
Host: That makes sense, especially for those employees who might be at higher risk of feeling isolated.
Expert: Precisely. Leaders should pay special attention to new hires, younger workers, and anyone working mostly remotely, as they have fewer opportunities to build those crucial networks organically.
Host: And what about that finding on burnout and the role of the manager as a buffer?
Expert: It means that a supportive manager is one of your best defenses against burnout. When an employee feels exhausted, a good leader can be the critical factor that keeps them satisfied and engaged. This means training leaders to recognize the signs of burnout and empowering them to offer real support.
Host: So, to summarize: the success of a remote or hybrid model isn't about finding the perfect schedule. It’s about cultivating the quality of our connections, ensuring our leaders are supportive, and giving employees autonomy over their work.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: It was my pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to translate research into results.
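For readers who want to see the shape of the analysis, here is a hypothetical sketch of the kind of moderation test described above: regressing engagement on remote-work frequency, social and job features, and an exhaustion-by-LMX interaction. The column names, input file, and model form are assumptions for illustration, not the authors' actual specification.

```python
# Hypothetical moderation analysis, assuming a per-respondent CSV with the
# columns named below; not the authors' model or data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # assumed file: one row per employee

model = smf.ols(
    "engagement ~ remote_days + team_support + autonomy + exhaustion * lmx",
    data=df,
).fit()
print(model.summary())
# A significant exhaustion:lmx interaction in the buffering direction would
# mirror the study's finding that a high-quality leader relationship softens
# the toll of exhaustion.
```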
Remote work, Social exchanges, Job characteristics, Job satisfaction, Engagement
International Conference on Wirtschaftsinformatik (2023)
Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics
Jeannette Stark, Thure Weimann, Felix Reinsch, Emily Hickmann, Maren Kählig, Carola Gißke, and Peggy Richter
This study reviews the psychological requirements for forming habits and analyzes how these requirements are implemented in existing mobile habit-tracking apps. Through a content analysis of 57 applications, the research identifies key design gaps and proposes a set of principles to inform the creation of more effective Digital Therapeutics (DTx) for long-term behavioral change.
Problem
Noncommunicable diseases (NCDs), a leading cause of death, often require sustained lifestyle and behavioral changes. While many digital apps aim to support habit formation, they often fail to facilitate the entire process, particularly the later stages where a habit becomes automatic and reliance on technology should decrease, creating a gap in effective long-term support.
Outcome
- Conventional habit apps primarily support the first two stages of habit formation: deciding on a habit and translating it into an initial behavior.
- Most apps neglect the crucial later stages of habit strengthening, where technology use should be phased out to allow the habit to become truly automatic.
- A conflict of interest was identified, as the commercial need for continuous user engagement in many apps contradicts the goal of making a user's new habit independent of the technology.
- The research proposes specific design principles for Digital Therapeutics (DTx) to better support all four stages of habit formation, offering a pathway for developing more effective tools for NCD prevention and treatment.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics".
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, in a nutshell, what is this study about?
Expert: Hi Anna. This study looks at the psychology behind how we form habits and then analyzes how well current mobile habit-tracking apps actually support that process. It identifies some major design gaps and proposes a new set of principles for creating more effective health apps, known as Digital Therapeutics.
Host: Let's start with the big picture problem. Why is building better habits so critical?
Expert: It's a huge issue. The study highlights that noncommunicable diseases like diabetes and heart disease are the leading cause of death worldwide, and many are directly linked to our daily lifestyle choices.
Host: So things like diet and exercise. And we have countless apps that promise to help us with that.
Expert: We do, and that's the core of the problem this study addresses. While thousands of apps aim to help us build good habits, they often fail to support the entire journey. They're good at getting you started, but they don't help you finish.
Host: What do you mean by "finish"? Isn't habit formation an ongoing thing?
Expert: It is, but the end goal is for the new behavior to become automatic—something you do without thinking. The study finds that current apps often fail in those crucial later stages, where your reliance on technology should actually decrease, not increase.
Host: That’s a really interesting point. How did the researchers go about studying this?
Expert: Their approach was very methodical. First, they reviewed psychological research to map out a clear, four-stage model of habit formation. It starts with the decision to act and ends with the habit becoming fully automatic.
Expert: Then, they performed a detailed content analysis of 57 popular habit-tracking apps. They downloaded them, used them, and systematically scored their features against the requirements of those four psychological stages.
Host: And what were the key findings from that analysis?
Expert: The results were striking. The vast majority of apps are heavily focused on the first two stages: deciding on a habit and starting the behavior. They excel at things like daily reminders and tracking streaks.
Host: But they're missing the later stages?
Expert: Almost completely. For example, the study found that not a single one of the 57 apps they analyzed had features to proactively phase out reminders or rewards as a user's habit gets stronger. They keep you hooked on the app's triggers.
Host: Why would that be? It seems counterintuitive to the goal of forming a real habit.
Expert: It is, and that points to the second major finding: a fundamental conflict of interest. The business model for most of these apps relies on continuous user engagement. They need you to keep opening the app every day.
Expert: But the psychological goal of habit formation is for the behavior to become independent of the app. So the app’s commercial need is often directly at odds with the user's health goal.
Host: Okay, this is the critical part for our listeners. What does this mean for businesses in the health-tech space? Why does this matter?
Expert: It matters immensely because it reveals a massive opportunity. The study positions this as a blueprint for a more advanced category of apps called Digital Therapeutics, or DTx.
Host: Remind us what those are.
Expert: DTx are essentially "prescription apps"—software that is clinically validated and prescribed by a doctor to treat or prevent a disease. Because they have a clear medical purpose, their goal isn't just engagement; it's a measurable health outcome.
Host: So they can be designed to make themselves obsolete for a particular habit?
Expert: Precisely. A DTx doesn't need to keep a user forever. Its success is measured by the patient getting better. The study provides a roadmap with specific design principles for this, like building in features for "tapered reminding," where notifications fade out over time.
Host: So the business takeaway is to shift the focus from engagement metrics to successful user "graduation"?
Expert: Exactly. For any company in the digital health or wellness space, the future isn't just about keeping users, it's about proving you can create lasting, independent behavioral change. That is a far more powerful value proposition for patients, doctors, and insurance providers.
Host: A fascinating perspective. So, to summarize: today's habit apps get us started but often fail at the finish line due to a conflict between their business model and our psychological needs.
Host: This study, however, provides a clear roadmap for the next generation of Digital Therapeutics to bridge that gap, focusing on clinical outcomes rather than just app usage.
Host: Alex, thank you for making that so clear for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable insights from the world of research.
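As a concrete illustration of the "tapered reminding" principle mentioned above, here is a minimal sketch in which reminder probability decays as a user's streak grows. The decay schedule and function names are assumptions for illustration, not a design taken from the paper.

```python
# Minimal sketch of tapered reminding: notifications fade out as the habit
# strengthens, so the behavior can become independent of the app. The
# geometric decay rate is an assumed parameter, not a prescription.
import random

def should_remind(streak: int, taper_rate: float = 0.15) -> bool:
    """Remind with a probability that decays as the completion streak grows."""
    return random.random() < (1 - taper_rate) ** streak

# Early on (streak = 0) the user is reminded almost daily; after a 30-day
# streak the reminder probability has faded below 1%.
for streak in (0, 7, 30):
    print(f"streak={streak:2d}  p(remind) ~ {(1 - 0.15) ** streak:.2%}")
```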
Behavioral Change, Digital Therapeutics, Habits, Habit Apps, Non-communicable diseases
Journal of the Association for Information Systems (2025)
Uncovering the Structural Assurance Mechanisms in Blockchain Technology-Enabled Online Healthcare Mutual Aid Platforms
Zhen Shao, Lin Zhang, Susan A. Brown, Jose Benitez
This study investigates how to build user trust in online healthcare mutual aid platforms that use blockchain technology. Drawing on institutional trust theory, the research examines how policy and technology assurances influence users' intentions and actual usage by conducting a two-part field survey with users of a real-world platform.
Problem
Online healthcare mutual aid platforms, which act as a form of peer-to-peer insurance, struggle with user adoption due to widespread distrust. Frequent incidents of fraud, false claims, and misappropriation of funds have created skepticism, making it a significant challenge to facilitate user trust and ensure the sustainable growth of these platforms.
Outcome
- Both strong institutional policies (policy assurance) and reliable technical features enabled by blockchain (technology assurance) significantly increase users' trust in the platform.
- Higher user trust is directly linked to a greater intention to use the online healthcare mutual aid platform.
- The intention to use the platform positively influences actual usage behaviors, such as the frequency and intensity of use.
- Trust acts as a full mediator, meaning that the platform's assurances build trust, which in turn drives user intention and behavior.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world of digital services, how do you build user trust from the ground up? Today, we’re exploring a fascinating study that tackles this very question.
Host: It’s titled, "Uncovering the Structural Assurance Mechanisms in Blockchain Technology-Enabled Online Healthcare Mutual Aid Platforms". In short, it’s about how to build user trust in new peer-to-peer insurance platforms that are using blockchain technology.
Host: Here to unpack this for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, let’s start with the big picture. What are these online healthcare mutual aid platforms, and why is trust such a huge challenge for them?
Expert: These platforms are essentially a form of peer-to-peer insurance. A group of people joins a digital pool to support each other financially if someone gets sick. It's a great concept, but it has been plagued by a massive trust issue.
Host: What’s driving that distrust?
Expert: The study points to frequent and highly public incidents of fraud. We’re talking about everything from people making false claims to the outright misappropriation of funds. The researchers highlight news reports where, for example, a person needed about seven thousand yuan for treatment but raised three hundred thousand on a platform and used it for personal expenses.
Host: Wow, that would definitely make me hesitant to contribute.
Expert: Exactly. These incidents create widespread skepticism. In fact, one report cited in the study found that over 70 percent of potential donors harbored distrust for these platforms, which is a huge barrier to adoption and growth.
Host: It’s a classic problem for any new marketplace. So how did the researchers go about studying a solution? How do you scientifically measure something like trust?
Expert: They took a very practical approach. They conducted a two-part field survey with over 200 actual users of a real-world platform in China called Xianghubao. In the first phase, they measured the users' perceptions of the platform's safety features and their level of trust.
Expert: Then, six months later, they followed up with those same users to capture their actual usage behavior—how often they were using the platform and which features they engaged with. This allowed them to statistically connect the dots between the platform's design, the user's feeling of trust, and their real-world actions.
Host: A two-part study sounds really thorough. So, Alex, what were the key findings? What actually works to build that trust?
Expert: The study found two critical components. The first is what they call 'policy assurance'. These are the institutional structures—clear rules, contractual guarantees, and transparent legal policies that show the platform is well-governed and accountable.
Expert: The second component is 'technology assurance'. In this case, that means the specific, reliable features enabled by blockchain.
Host: So it's not just about having the latest tech. The company's old-fashioned rules and promises matter just as much.
Expert: Precisely. And both of them were shown to significantly increase users' trust in the platform. That higher trust, in turn, was directly linked to a greater intention to use the platform, which then translated into actual, sustained usage.
Host: The summary of the study mentions that trust acts as a 'full mediator'. What does that mean in simple terms for a business leader?
Expert: It’s a really important point. It means that having great policies and secure technology isn't enough on its own. Those features don't directly make people use your service. Their primary function is to build trust. It is that feeling of trust that then drives user behavior. So, for any business, the goal of your safety mechanisms should be to make the user *feel* secure, because that feeling is what actually powers the business.
Host: That’s a powerful insight. Trust is the engine, not just a nice-to-have feature. So, let’s get to the bottom line. What are the key takeaways for businesses, even those outside of healthcare or blockchain?
Expert: The first takeaway is that you need a two-pronged approach. You can't just rely on cutting-edge technology, and you can't just rely on a good rulebook. The study shows you need both strong policy assurances and strong technology assurances working together.
Host: And how do you make those assurances effective?
Expert: That’s the second key takeaway: make them tangible. For policy assurance, this means establishing and clearly communicating your auditing rules, your feedback policies, and any user protections. Don't hide them in the fine print.
Expert: For technology assurance, it means giving users a way to see the security in action. The platform they studied, Xianghubao, uses blockchain to let users view a tamper-proof record of how funds are used for every single claim. This transparency moves the platform from saying "trust us" to showing "here is the proof."
Host: So, the lesson for any business launching a new digital service is to actively demonstrate both your operational integrity through clear policies and your technical security through features the user can actually see and understand.
Expert: Exactly that. It’s about building a system where trust is an outcome of transparent design, not a leap of faith.
Host: This is incredibly relevant for so many emerging business models. To recap: building user trust in a skeptical environment requires a combination of strong, clear policies and transparent, verifiable technology. And crucially, these assurances work by building user trust, which is the real engine for adoption and usage.
Host: Alex, thank you for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in. Join us next time on A.I.S. Insights.
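To illustrate the full-mediation logic in code, here is a rough sketch in the classic Baron and Kenny style: assurances predict trust, and trust in turn predicts intention once the assurances are controlled for. The dataset and variable names are placeholders; the authors' actual estimation approach may differ.

```python
# Rough mediation sketch: assurance -> trust -> usage intention. The CSV and
# column names are assumed for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("platform_survey.csv")  # assumed per-user survey data

# Path a: do the two assurances predict trust?
a_path = smf.ols("trust ~ policy_assurance + tech_assurance", data=df).fit()

# Paths b and c': does trust predict intention, controlling for assurances?
b_path = smf.ols(
    "intention ~ trust + policy_assurance + tech_assurance", data=df
).fit()

# Full mediation: trust stays significant in the second model while the
# direct assurance effects on intention shrink toward zero.
print(a_path.params)
print(b_path.params)
```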
Journal of the Association for Information Systems (2025)
Responsible AI Design: The Authenticity, Control, Transparency Theory
Andrea Rivera, Kaveh Abhari, Bo Xiao
This study explores how to design Artificial Intelligence (AI) responsibly from the perspective of AI designers. Using a grounded theory approach based on interviews with industry professionals, the paper develops the Authenticity, Control, Transparency (ACT) theory as a new framework for creating ethical AI.
Problem
Current guidelines for responsible AI are fragmented and lack a cohesive theory to guide practice, leading to inconsistent outcomes. Existing research often focuses narrowly on specific attributes like algorithms or harm minimization, overlooking the broader design decisions that shape an AI's behavior from its inception.
Outcome
- The study introduces the Authenticity, Control, and Transparency (ACT) theory as a practical framework for responsible AI design.
- It identifies three core mechanisms—authenticity, control, and transparency—that translate ethical design decisions into responsible AI behavior.
- These mechanisms are applied across three key design domains: the AI's architecture, its algorithms, and its functional affordances (capabilities offered to users).
- The theory shifts the focus from merely minimizing harm to also maximizing the benefits of AI, providing a more balanced approach to ethical design.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a foundational topic: how to build Artificial Intelligence responsibly from the ground up. We'll be discussing a fascinating study from the Journal of the Association for Information Systems titled, "Responsible AI Design: The Authenticity, Control, Transparency Theory".
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. We hear a lot about AI ethics and responsible AI, but this study suggests there’s a fundamental problem with how we're approaching it. What's the issue?
Expert: The core problem is fragmentation. Right now, companies get bombarded with dozens of different ethical guidelines, principles, and checklists. It’s like having a hundred different recipes for the same dish, all with slightly different ingredients. It leads to confusion and inconsistent results.
Host: And the study argues this misses the point somehow?
Expert: Exactly. It points out three major misconceptions. First, we treat responsibility like a feature to be checked off a list, rather than a behavior designed into the AI's core. Second, we focus almost exclusively on the algorithm, ignoring the AI’s overall architecture and the actual capabilities it offers to users.
Host: And the third misconception?
Expert: It's that we're obsessed with only minimizing harm. That’s crucial, of course, but it's only half the story. True responsible design should also focus on maximizing the benefits and the value the AI provides.
Host: So how did the researchers get past these misconceptions to find a solution? What was their approach?
Expert: They went directly to the source. They conducted in-depth interviews with 24 professional AI designers—the people actually in the trenches, making the decisions that shape these systems every day. By listening to them, they built a theory from the ground up based on real-world practice, not just abstract ideals.
Host: That sounds incredibly practical. What were the key findings that emerged from those conversations?
Expert: The main outcome is a new framework called the Authenticity, Control, and Transparency theory—or ACT theory for short. It proposes that for an AI to behave responsibly, its design must be guided by these three core mechanisms.
Host: Okay, let's break those down. What do they mean by Authenticity?
Expert: Authenticity means the AI does what it claims to do, reliably and effectively. It’s about ensuring the AI's performance aligns with its intended purpose and ethical values. It has to be dependable and provide genuine utility.
Host: That makes sense. What about Control?
Expert: Control is about empowering users. It means giving people meaningful agency over the AI's behavior and its outputs. This could be anything from customization options to clear data privacy controls, ensuring the user is in the driver's seat.
Host: And the final piece, Transparency?
Expert: Transparency is about making the AI's operations clear and understandable. It’s not just about seeing the code, but understanding how the AI works, why it makes certain decisions, and what its limitations are. It’s the foundation for accountability and trust.
Host: So the ACT theory combines Authenticity, Control, and Transparency. Alex, this is the most important question for our listeners: why does this matter for business? What are the practical takeaways?
Expert: For business leaders, the ACT theory provides a clear, actionable roadmap. It moves responsible AI out of a siloed ethics committee and embeds it directly into the product design lifecycle. It gives your design, engineering, and product teams a shared language to build better AI.
Host: So it's about making responsibility part of the process, not an afterthought?
Expert: Precisely. And that has huge business implications. An AI that is authentic, controllable, and transparent is an AI that customers will trust. And in the digital economy, trust is everything. It drives adoption, enhances brand reputation, and ultimately, creates more valuable and successful products.
Host: It sounds like it’s a framework for building a competitive advantage.
Expert: It absolutely is. By adopting a framework like ACT, businesses aren't just managing risk or preparing for future regulation; they are actively designing better, safer, and more user-centric products that can win in the market.
Host: A powerful insight. To summarize for our listeners: the current approach to responsible AI is often fragmented. This study offers a solution with the ACT theory—a practical framework built on Authenticity, Control, and Transparency that can help businesses build AI that is not only ethical but more trustworthy and valuable.
Host: Alex Ian Sutherland, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We'll see you next time.
Responsible AI, AI Ethics, AI Design, Authenticity, Transparency, Control, Algorithmic Accountability
Journal of the Association for Information Systems (2025)
An Organizational Routines Theory of Employee Well-Being: Explaining the Love-Hate Relationship Between Electronic Health Records and Clinicians
Ankita Srivastava, Surya Ayyalasomayajula, Chenzhang Bao, Sezgin Ayabakan, Dursun Delen
This study investigates the causes of clinician burnout by analyzing over 55,000 online reviews from clinicians on Glassdoor.com. Using topic mining and econometric modeling, the research proposes and tests a new theory on how integrating various Electronic Health Record (EHR) applications to streamline organizational routines affects employee well-being.
Problem
Clinician burnout is a critical problem in healthcare, often attributed to the use of Electronic Health Records (EHRs). However, the precise reasons for this contentious relationship are not well understood, and there is a research gap in explaining how organizational-level IT decisions, such as how different systems are integrated, contribute to clinician stress or satisfaction.
Outcome
- Routine operational issues, such as workflow and staffing, were more frequently discussed by clinicians as sources of dissatisfaction than EHR-specific factors like usability.
- Integrating applications to streamline clinical workflows across departments (e.g., emergency, lab, radiology) significantly improved clinician well-being.
- In contrast, integrating applications focused solely on documentation did not show a significant impact on clinician well-being.
- The positive impact of workflow integration was stronger in hospitals with good work-life balance policies and weaker in hospitals with high patient-to-nurse ratios, highlighting the importance of organizational context.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're exploring the friction between technology and employee well-being in a high-stakes environment: healthcare. With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: We're diving into a study titled, "An Organizational Routines Theory of Employee Well-Being: Explaining the Love-Hate Relationship Between Electronic Health Records and Clinicians". It investigates the causes of clinician burnout by analyzing a massive dataset of online employee reviews.
Expert: That’s right. It uses over 55,000 reviews from clinicians on Glassdoor to understand how the technology choices hospitals make impact the day-to-day stress of their staff.
Host: Clinician burnout is a critical issue, and we often hear that Electronic Health Records, or EHRs, are the main culprit. But this study suggests the problem is more complex, right?
Expert: Exactly. EHRs are often blamed for increasing workloads and causing frustration, but the precise reasons for this love-hate relationship aren't well understood. The real issue the study tackles is the gap in our knowledge about how high-level IT decisions—like which software systems a hospital buys and how they are connected—trickle down to affect the well-being of the nurses and physicians on the front lines.
Host: So it's not just about one piece of software, but the entire digital ecosystem. How did the researchers get to the bottom of such a complex issue?
Expert: They used a very clever, data-driven approach. Instead of traditional surveys, they turned to Glassdoor, where clinicians leave anonymous and often very candid reviews about their employers. They used topic mining and other analytical methods to identify the most common themes in what clinicians praised or complained about over a nine-year period.
Host: It’s like listening in on the real breakroom conversation. So what did they find? Was it all about clunky software and bad user interfaces?
Expert: Surprisingly, no. That was one of the most interesting findings. When clinicians talked about dissatisfaction, they focused far more on routine operational issues—things like inefficient workflows, staffing shortages, and poor coordination between departments—than they did on the specific usability of the EHR software itself.
Host: So it's less about the tool, and more about how the work itself is structured.
Expert: Precisely. And that led to the study's most powerful finding. When hospitals used technology to streamline workflows *across* departments—for example, making sure the systems in the emergency room, the lab, and radiology all communicated seamlessly—clinician well-being significantly improved.
Host: That makes perfect sense. A smooth handoff of information prevents a lot of headaches. What about other types of tech integration?
Expert: This is where it gets really insightful. In contrast, when hospitals integrated applications that were focused only on documentation, it had no significant impact on well-being. So, just digitizing paperwork isn’t the answer. The real value comes from connecting the systems that support the actual flow of patient care.
Host: That’s a crucial distinction. The study also mentioned that the hospital’s environment played a role.
Expert: It was a massive factor. The positive impact of that workflow integration was much stronger in hospitals that already had good work-life balance policies. But in hospitals with high patient-to-nurse ratios, where staff were stretched thin, the benefits of the technology were much weaker.
Host: So, Alex, this brings us to the most important question for our listeners. These findings are from healthcare, but the lessons seem universal. What are the key business takeaways?
Expert: There are three big ones. First, focus on the workflow, not just the tool. When you're rolling out new technology, the most important question isn't "is this good software?", it's "how does this software improve our core operational routines and make collaboration between teams easier?" The real return on investment comes from smoothing out the friction between departments.
Host: That's a great point. What's the second takeaway?
Expert: Technology is a complement, not a substitute. You cannot use technology to solve fundamental organizational problems. The best integrated system in the world won't make up for understaffing or a culture that burns people out. You have to invest in your people and your processes right alongside your technology.
Host: And the third?
Expert: Listen for the "real" feedback. Employees might not complain directly about the new CRM software, but they will complain about the new hurdles in their daily routines. This study's use of Glassdoor reviews is a lesson for all leaders: find ways to understand how your decisions are affecting the ground-level workflow. The problem might not be the tech itself, but the operational chaos it’s inadvertently creating.
Host: Fantastic insights. So to recap: Clinician burnout isn't just about bad software, but about broken operational routines. The key is to strategically integrate technology to streamline how teams work together. And critically, that technology is only truly effective when it's built on a foundation of a supportive work environment.
Host: Alex Ian Sutherland, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge.
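As a concrete illustration of the topic-mining approach described in this episode, here is a minimal sketch that fits an LDA model over free-text reviews to surface recurring themes such as workflow, staffing, or usability. The input file, parameters, and preprocessing are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative topic-mining pass: LDA over employer reviews to surface the
# dominant themes. File name and hyperparameters are assumed.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = pd.read_csv("clinician_reviews.csv")["review_text"]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(doc_term)

# Print the top words per topic so themes can be labeled manually.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-8:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```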
Journal of the Association for Information Systems (2025)
Sunk Cost Fallacy, Price Adjustment, and Subscription Services for Information Goods
Mingyue Zhang, Jesse Bockstedt, Tingting Song, Xuan Wei
This study investigates how adjusting the upfront subscription price for information goods, like a movie service, influences customer consumption behavior. Using a quasi-natural experiment involving a movie subscription service's sudden price drop and a follow-up randomized experiment, the research analyzes the impact on movie-watching habits through the lens of the sunk cost fallacy.
Problem
Subscription services often adjust their pricing, but it remains unclear how changes in the fixed upfront fee—a sunk cost for the consumer—affect subsequent consumption. While traditional economic theory suggests sunk costs should be ignored, behavioral economics indicates people often try to 'get their money's worth'. This study addresses this gap by examining how a significant price reduction impacts user consumption and whether it's a profitable strategy for providers.
Outcome
- A sharp downward price adjustment of a movie subscription fee increased box office revenues for an average movie by 12% to 35% in the following six months.
- The price drop primarily attracted highly price-conscious consumers who are more susceptible to the sunk cost fallacy, leading them to increase their consumption to justify the initial fee.
- Niche information goods, particularly those with high quality and narrow appeal, benefited the most from the price adjustment strategy.
- The impact of the price change on consumption decreases over time, a phenomenon known as 'payment depreciation,' as consumers gradually adapt to the initial cost.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "Sunk Cost Fallacy, Price Adjustment, and Subscription Services for Information Goods."
Host: It explores how adjusting the upfront price for a subscription, like a movie service, can dramatically influence how much customers actually use that service. With us to unpack the details is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Subscription pricing is everywhere, from streaming services to software. What's the core business problem this study tackles?
Expert: The problem is a clash between two ideas. Traditional economic theory says that an upfront, non-refundable fee—what economists call a sunk cost—shouldn't affect your future decisions. Once the money's gone, it's gone.
Expert: But behavioral economics tells us that people are not always that rational. We feel a psychological need to ‘get our money's worth’.
Host: So you pay for a gym membership and feel guilty if you don't go.
Expert: Exactly. The big question for businesses was: what happens if you suddenly drop your subscription price? Does the lower sunk cost mean people will use the service less? For some businesses, where usage creates a real cost, like a movie-ticket subscription, getting that answer right is critical to profitability.
Host: So how did the researchers figure this out? What was their approach?
Expert: They had a perfect real-world test case: a movie subscription service called MoviePass. In 2017, MoviePass suddenly slashed its monthly price from around fifty dollars down to just under ten.
Expert: This created what's called a quasi-natural experiment. The researchers could compare movie consumption in the U.S., where the price drop happened, with consumption in a similar market like Australia, where it didn't. This allowed them to isolate the impact of the price change.
Expert: They also followed up with a controlled, randomized experiment to confirm the psychological reasons behind the behavior they observed.
Host: A real-world business decision providing the data. So, the moment of truth: when the price dropped, what happened to movie-watching?
Expert: This is the first key finding, and it’s a bit counterintuitive. The sharp price drop actually *increased* overall consumption. The study found that box office revenues for an average movie increased by 12% to 35% in the six months following the price cut.
Host: Wow. So paying less made people watch *more* movies? Why on earth would that happen?
Expert: It's all about who you attract. The second finding is that the much lower price primarily brought in a new type of customer: highly price-conscious consumers. And it turns out, this group is more susceptible to the sunk cost fallacy.
Expert: Even though the ten-dollar fee was small, these new customers were intensely motivated to justify that expense, so they went to the movies more often to feel like they were getting a good deal.
Host: That is fascinating. Did this apply to all movies equally? Did people just watch more blockbusters?
Expert: No, and this is the third major finding. The strategy most benefited niche information goods. In this case, that meant movies with high quality ratings but narrower appeal.
Expert: Essentially, the subscription model made new, price-conscious users more adventurous. The psychological cost of trying a movie they might not like was zero, so they explored beyond the big hits.
Host: So the effect is an increase in consumption, driven by price-conscious users, especially for niche products. Was this effect permanent?
Expert: It was not. The final key finding was a phenomenon the study calls 'payment depreciation'. The impact of the price change on consumption was strongest at the beginning and then decreased over time as subscribers got used to the cost. The psychological weight of that initial payment simply faded.
Host: This is where it gets really important for our listeners. Alex, what are the key business takeaways here? Why does this matter?
Expert: There are three big ones. First, think of your subscription price not just as a revenue lever, but as a customer segmentation tool. A lower price doesn't just make your service cheaper; it can attract a fundamentally different user base that is more motivated to engage.
Expert: Second, if your business has a large catalog of high-quality, long-tail content—not just a few big hits—a low-cost subscription can be a powerful strategy. It encourages users to explore your entire library, increasing the perceived value of the service.
Expert: And third, businesses must manage that 'payment depreciation' effect. The boost in engagement is strongest right after a customer pays. That's the critical window to onboard them, recommend content, and solidify the habit before the feeling of that sunk cost wears off.
Host: Let's quickly recap. A strategic price drop can paradoxically boost consumption by attracting price-conscious customers who are more motivated by the sunk cost fallacy. This particularly benefits high-quality, niche products, but businesses should remember that this engagement boost is strongest just after payment.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
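For a sense of the quasi-natural-experiment logic, here is a simplified difference-in-differences sketch comparing a treated market (the US, where the price dropped) with a control market before and after the cut. The data layout, cut date, and model are illustrative assumptions, not the study's specification.

```python
# Simplified difference-in-differences sketch. Columns and the exact price-cut
# date are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_box_office.csv")  # assumed: market, week, revenue

df["treated"] = (df["market"] == "US").astype(int)
df["post"] = (pd.to_datetime(df["week"]) >= "2017-08-15").astype(int)

# The treated:post coefficient is the diff-in-diff estimate of the
# consumption lift attributable to the price adjustment.
did = smf.ols("revenue ~ treated * post", data=df).fit()
print(did.params["treated:post"])
```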
Journal of the Association for Information Systems (2025)
Continuous Contracting in Software Outsourcing: Towards A Configurational Theory
Thomas Huber, Kalle Lyytinen
This study investigates how governance configurations are formed, evolve, and influence outcomes in software outsourcing projects that use continuous contracting. Through a longitudinal, multimethod analysis of 33 governance episodes across three projects, the research identifies how different combinations of contract design and project control achieve alignment and flexibility. The methodology combines thematic analysis with crisp-set qualitative comparative analysis (csQCA) to develop a new theory.
Problem
Contemporary software outsourcing increasingly relies on continuous contracting, where an initial umbrella agreement is followed by periodic contracts. However, there is a significant gap in understanding how managers should combine contract design and project controls to balance the competing needs for project alignment and operational flexibility, and how these choices evolve to impact overall project performance.
Outcome
- Identified eight distinct governance configurations, each consistently linked to specific outcomes of alignment and flexibility.
- Found that project outcomes depend on how governance elements interact within a configuration, either by substituting for each other or compensating for each other's limitations.
- Showed that as trust and knowledge accumulate, managers' governance strategies evolve from simple configurations (achieving either alignment or flexibility) to more sophisticated ones that achieve both simultaneously.
- Concluded that by deliberately evolving governance configurations, managers can better steer projects and enhance overall performance.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In today's complex business world, outsourcing software development is common, but making it work is anything but simple. Today, we're diving into a fascinating study titled "Continuous Contracting in Software Outsourcing: Towards A Configurational Theory."
Host: It explores how companies can better manage these relationships, not through a single, rigid contract, but as an evolving partnership. With me to break it all down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. When a company outsources a major software project, what's the core problem this research is trying to solve?
Expert: The central problem is a classic business tension: you need to ensure the project stays on track and meets its goals, which we call 'alignment'. But you also need to be able to adapt to changes and new ideas, which is 'flexibility'.
Host: And traditional contracts aren't great at handling both, are they?
Expert: Exactly. A traditional, iron-clad contract might be good for alignment, but it's too rigid. So, many companies now use 'continuous contracting'—an initial umbrella agreement followed by smaller, periodic contracts or statements of work. The challenge is, there's been very little guidance on how managers should actually combine the contract details with day-to-day project management to get that balance right.
Host: It sounds like a real juggling act. So how did the researchers get inside these complex relationships to figure out what works?
Expert: They conducted a really deep, multi-year study of three large software projects. They analyzed 33 different contracting periods, or 'episodes', looking at all the contractual documents and project plans. Crucially, they also conducted in-depth interviews with managers from both the client and the vendor side to understand their thinking and the results of their decisions.
Host: So they weren't just looking at the documents; they were looking at the entire process in action. What were the key findings?
Expert: They had a few big 'aha' moments. First, there is no single 'best' way to manage an outsourcing contract. Instead, they identified eight distinct recipes, or what they call 'governance configurations'. Each one is a specific mix of contract design and project controls that consistently leads to a predictable outcome.
Host: And these outcomes relate back to that tension you mentioned between alignment and flexibility?
Expert: Precisely. Some of these recipes were great at achieving alignment, keeping the project strictly on task. Others were designed to maximize flexibility, allowing for innovation. But the most interesting finding was how the different elements within a recipe work together.
Host: What do you mean by that?
Expert: Some elements can substitute for each other. For instance, if your contract isn't very detailed, you can substitute for that with very close, hands-on project monitoring. Other elements compensate for each other's weaknesses. A detailed contract might provide alignment, but you can compensate for its rigidity by including a 'task buffer' that gives the vendor freedom to solve unforeseen problems.
Host: That makes sense. It’s about the combination, not just the individual parts. Was there another key finding?
Expert: Yes, and it’s a crucial one. These configurations evolve over time. The study showed that as trust and project-specific knowledge build between the client and the vendor, their approach matures. They might start with simple setups that achieve only alignment *or* flexibility, but they learn to use more sophisticated recipes that achieve both at the same time.
Host: This is the part our listeners are waiting for. What does this all mean for a business leader managing an outsourcing partner?
Expert: The most important takeaway is to stop seeing contracts as static legal documents that you file away. You need to see contracting as an active, dynamic management tool. It’s a set of levers you can pull throughout the project.
Host: So managers need to be more strategic and deliberate.
Expert: Exactly. Be deliberate about the recipe you're using. Ask yourself: in this phase of the project, do I need to prioritize alignment, flexibility, or both? Then, choose the right combination of tools—like how specific the contract is, whether you grant the vendor autonomy on certain tasks, and how you formalize changes.
Host: And what about the role of trust that you mentioned?
Expert: It's fundamental. The study clearly shows that investing time and effort in building a trusting relationship and shared knowledge pays dividends. It literally expands your management toolkit, allowing you to use those more advanced, high-performing configurations that deliver better results in the long run.
Host: So, to summarize: managers should view software outsourcing contracts not as a single event, but as a continuous management process. Success comes from deliberately choosing the right recipe of contract and control elements for the job. And by investing in the relationship, you can evolve that recipe over time to achieve both tight alignment and crucial flexibility, driving superior project performance.
Host: Alex Ian Sutherland, thank you for bringing this research to life for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights, powered by Living Knowledge.
Journal of the Association for Information Systems (2025)
Do Good and Do No Harm Too: Employee-Related Corporate Social (Ir)responsibility and Information Security Performance
Qian Wang, Dan Pienta, Shenyang Jiang, Eric W. T. Ngai, Jason Bennett Thatcher
This study investigates the relationship between a company's social performance toward its employees and its information security outcomes. Using an eight-year analysis of publicly listed firms and a scenario-based experiment, the research examines how both positive actions (employee-related Corporate Social Responsibility) and negative actions (employee-related Corporate Social Irresponsibility) affect a firm's security risks.
Problem
Information security breaches are frequently caused by human error, which often stems from a misalignment between employee goals and a firm's security objectives. This study addresses the gap in human-centric security strategies by exploring whether improving employee well-being and social treatment can align these conflicting interests, thereby reducing security vulnerabilities and data breaches.
Outcome
- A firm's engagement in positive, employee-related corporate social responsibility (CSR) is associated with reduced information security risks. - Conversely, a firm's involvement in socially irresponsible activities toward employees (CSiR) is positively linked to an increase in security risks. - The impact of these positive and negative actions on security is amplified when the actions are unique compared to industry peers. - Experimental evidence confirmed that these effects are driven by changes in employees' security commitment, willingness to monitor peers for security compliance, and overall loyalty to the firm.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a study that connects two areas of business we don't often talk about together: human resources and cybersecurity.
Host: The study is titled, "Do Good and Do No Harm Too: Employee-Related Corporate Social (Ir)responsibility and Information Security Performance."
Host: In short, it investigates whether a company's social performance toward its employees is directly linked to its information security. With me to unpack this is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, we all hear about massive data breaches in the news. We tend to imagine sophisticated external hackers. But this study points the finger in a different direction, doesn't it?
Expert: It certainly does. The real-world problem is that the vast majority of information security breaches—one report from Verizon suggests over 80%—involve a human element inside the company.
Host: So, it's not always malicious?
Expert: Rarely, in fact. It's often unintentional human error or negligence. The study highlights a fundamental misalignment: for the company, security is paramount. For an employee, security protocols can feel like an obstacle to just getting their job done.
Host: The classic example being someone who writes their password on a sticky note.
Expert: Exactly. That employee isn't trying to harm the company; they're just trying to log in quickly. The study frames this using what's known as principal-agent theory—the goals of the company, the principal, aren't automatically aligned with the goals of the employee, the agent. This research asks if treating employees better can fix that misalignment.
Host: A fascinating question. So how did the researchers connect the dots between something like an employee wellness program and the risk of a data breach?
Expert: They used a really robust multi-study approach. First, they conducted a large-scale analysis, looking at eight years of data from thousands of publicly listed firms. They matched up data on employee treatment—both positive and negative—with records of data breaches.
Host: So that established a correlation.
Expert: Correct. But to understand the "why," they followed it up with a scenario-based experiment. They presented participants with stories about a fictional company that either treated its employees very well or very poorly, and then measured how the participants would behave regarding security in that environment.
Host: Let's get to the results then. What were the key findings from this work?
Expert: The connection was incredibly clear and worked in both directions. First, a firm's engagement in positive, employee-related corporate social responsibility, or CSR, was directly associated with reduced information security risks.
Host: So, doing good is good for security. What about the opposite?
Expert: The opposite was just as true. Firms involved in socially irresponsible activities toward their employees—think labor disputes or safety violations—had a significantly higher risk of data breaches. The study calls this CSiR, with an 'i' for irresponsibility.
Host: That's a powerful link. Was there anything else that stood out?
Expert: Yes, a really intriguing finding on what they called 'uniqueness'. The impact was amplified when a company's actions stood out from their industry peers.
Host: What do you mean?
Expert: If your company offers benefits that are uniquely good for your sector, employees value that more, and the positive security effect is even stronger. Conversely, if your company treats employees in a way that is uniquely bad compared to competitors, the negative security risk goes up even more. Being an outlier really matters.
Host: This is the critical part for our audience, Alex. Why does this matter for business leaders, and what should they do with this information?
Expert: The most crucial takeaway is that investing in employee well-being is not just an HR or ethics initiative—it is a core cybersecurity strategy. You cannot simply buy more technology to solve this problem; you have to invest in your people.
Host: So a company's Chief People Officer should be in close contact with their Chief Information Security Officer.
Expert: Absolutely. The experimental part of the study showed why this works. When employees feel valued, three things happen: their personal commitment to security goes up; they become more willing to monitor their peers and foster a security-conscious culture; and their overall loyalty to the firm increases.
Host: And that loyalty prevents both carelessness and, in worst-case scenarios, actual data theft by disgruntled employees.
Expert: Precisely. For a leader listening now, the advice is twofold. First, you have to play both offense and defense. Promoting positive programs isn't enough; you must actively prevent and address negative behaviors. Second, benchmark against your industry and strive to be a uniquely good employer. That differentiation is a powerful, and often overlooked, security advantage.
Host: So, to summarize this fascinating study: how you treat your people is a direct predictor of your vulnerability to a data breach. Doing good reduces risk, doing harm increases it, and being an exceptional employer can give you an exceptional edge in security.
Host: It's a compelling case that your employees truly are your first and most important line of defense. Alex, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We'll see you next time.
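For readers who want to picture the archival stage Alex describes, here is a minimal simulated sketch of a firm-year panel analysis: a logistic regression of breach incidence on employee-related CSR and CSiR scores. All variable names, data, and effect sizes are hypothetical stand-ins; the study's actual specification is considerably richer.

```python
# Simulated firm-year panel in the spirit of the study's archival analysis.
# All columns, values, and effects are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400                              # hypothetical firm-years
csr = rng.uniform(0, 1, n)           # employee-related CSR score
csir = rng.uniform(0, 1, n)          # employee-related CSiR score
size = rng.normal(8, 1, n)           # log firm size (a simple control)

# Simulated breach propensity: CSR protective, CSiR harmful (toy effects).
logit_p = -1.0 - 1.5 * csr + 2.0 * csir + 0.1 * (size - 8)
breach = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

panel = pd.DataFrame({"csr_score": csr, "csir_score": csir,
                      "firm_size": size, "breach": breach})

# Logistic regression: does employee treatment predict breach risk?
model = smf.logit("breach ~ csr_score + csir_score + firm_size",
                  data=panel).fit(disp=False)
print(model.params)  # expect a negative CSR and a positive CSiR coefficient
```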
Information Security, Data Breach, Employee-Related Social Performance, Corporate Social Responsibility, Agency Theory, Cybersecurity Risk
Journal of the Association for Information Systems (2025)
What Is Augmented? A Metanarrative Review of AI-Based Augmentation
Inès Baer, Lauren Waardenburg, Marleen Huysman
This paper conducts a comprehensive literature review across five research disciplines to clarify the concept of AI-based augmentation. Using a metanarrative review method, the study identifies and analyzes four distinct targets of what AI augments: the body, cognition, work, and performance. Based on this framework, the authors propose an agenda for future research in the field of Information Systems.
Problem
In both academic and public discussions, Artificial Intelligence is often described as a tool for 'augmentation' that helps humans rather than replacing them. However, this popular term lacks a clear, agreed-upon definition, and there is little discussion about what specific aspects of human activity are the targets of this augmentation. This research addresses the fundamental question: 'What is augmented by AI?'
Outcome
- The study identified four distinct metanarratives, or targets, of AI-based augmentation: the body (enhancing physical and sensory functions), cognition (improving decision-making and knowledge), work (creating new employment opportunities and improving work practices), and performance (increasing productivity and innovation). - Each augmentation target is underpinned by a unique human-AI configuration, ranging from human-AI symbiosis for body augmentation to mutual learning loops for cognitive augmentation. - The paper reveals tensions and counternarratives for each target, showing that augmentation is not purely positive; for example, it can lead to over-dependence on AI, deskilling, or a loss of human agency. - The four augmentation targets are interconnected, creating potential conflicts (e.g., prioritizing performance over meaningful work) or dependencies (e.g., cognitive augmentation relies on augmenting bodily senses).
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: We hear it all the time: AI isn't here to replace us, but to *augment* us. It's a reassuring idea, but what does it actually mean?
Host: Today, we're diving into a fascinating new study from the Journal of the Association for Information Systems. It's titled, "What Is Augmented? A Metanarrative Review of AI-Based Augmentation."
Host: The study looks across multiple research fields to clarify this very concept. It identifies four distinct things that AI can augment: our bodies, our cognition, our work, and our performance.
Host: To help us unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So Alex, let's start with the big problem. Why did we need a study to define a word we all think we understand?
Expert: That's the core of the issue. In business, 'augmentation' has become a popular, optimistic buzzword. It's used to ease fears about automation and job loss.
Expert: But the study points out that the term is incredibly vague. When a company says it's using AI for augmentation, it's not clear what they're actually trying to improve.
Expert: The researchers ask a simple but powerful question that's often overlooked: if we're making something 'more,' what is that something? More skills? More productivity? This lack of clarity is a huge barrier to forming an effective AI strategy.
Host: So the first step is to get specific. How did the study go about creating a clearer picture?
Expert: They took a really interesting approach. Instead of just looking at one field, they analyzed research from five different disciplines, including computer science, management, and economics.
Expert: They were looking for the big, overarching storylines—or metanarratives—that different experts tell about AI augmentation. This allowed them to cut through the jargon and identify the fundamental targets of what's being augmented.
Host: And that led them to the key findings. What were these big storylines they uncovered?
Expert: They distilled it all down to four clear targets. The first is augmenting the **body**. This is about enhancing our physical and sensory functions—think of a surgeon using a robotic arm for greater precision or an engineer using AR glasses to see schematics overlaid on real-world equipment.
Host: Okay, so a very direct, physical enhancement. What's the second?
Expert: The second is augmenting **cognition**. This is about improving our thinking and decision-making. For example, AI can help financial analysts identify subtle market patterns or assist doctors in making a faster, more accurate diagnosis. It's about enhancing our mental capabilities.
Host: That makes sense. And the third?
Expert: Augmenting **work**. This focuses on changing the nature of jobs and tasks. A classic example is an AI chatbot handling routine customer queries. This doesn't replace the human agent; it frees them up to handle more complex, emotionally nuanced problems, making their work potentially more fulfilling.
Host: And the final target?
Expert: That would be augmenting **performance**. This is the one many businesses default to, and it's all about increasing productivity, efficiency, and innovation at a systemic level. Think of AI optimizing a global supply chain or accelerating the R&D process for a new product.
Host: That's a fantastic framework. But the study also found that augmentation isn't a purely positive story, is it?
Expert: Exactly. This is a critical insight. For each of those four targets, the study identified tensions or counternarratives.
Expert: For example, augmenting cognition can lead to over-dependence and deskilling if we stop thinking for ourselves. Augmenting work can backfire if AI dictates every action, turning an employee into someone who just follows a script, which reduces their agency and job satisfaction.
Host: This brings us to the most important question, Alex. Why does this matter for business leaders? How can they use this framework?
Expert: It matters immensely. First, it forces strategic clarity. A leader can now move beyond saying "we're using AI to augment our people." They should ask, "Which of the four targets are we aiming for?"
Expert: Is the goal to augment the physical abilities of our warehouse team? That's a **body** strategy. Is it to improve the decisions of our strategy team? That's a **cognition** strategy. Being specific is the first step.
Host: And what comes after getting specific?
Expert: Understanding the trade-offs. The study shows these targets can be in conflict. A strategy that relentlessly pursues **performance** by automating everything possible might directly undermine a goal to augment **work** by making jobs more meaningful. Leaders need to see this tension and make conscious choices about their priorities.
Host: So it's about choosing a target and understanding its implications.
Expert: Yes, and finally, it's about designing the right kind of human-AI partnership. Augmenting the body implies a tight, almost symbiotic relationship. Augmenting cognition requires creating mutual learning loops, where humans train the AI and the AI provides insights that train the humans. It's not one-size-fits-all.
Host: So to sum up, it seems the key message for business leaders is to move beyond the buzzword.
Host: This study gives us a powerful framework for doing just that. By identifying whether you are trying to augment the body, cognition, work, or performance, you can build a much smarter, more intentional AI strategy.
Host: You can anticipate the risks, navigate the trade-offs, and ultimately create a more effective collaboration between people and technology.
Host: Alex, thank you for making that so clear for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Journal of the Association for Information Systems (2025)
Corporate Nomads: Working at the Boundary Between Corporate Work and Digital Nomadism
Julian Marx, Milad Mirbabaie, Stefan Stieglitz
This study explores the emerging phenomenon of 'corporate nomads'—individuals who maintain permanent employment while adopting a nomadic, travel-based lifestyle. Through qualitative interviews with 37 corporate nomads, the research develops a process model to understand how these employees and their organizations negotiate the boundaries between traditional corporate structures and the flexibility of digital nomadism.
Problem
Highly skilled knowledge workers increasingly desire the flexibility of a nomadic lifestyle, a concept traditionally seen as incompatible with permanent corporate employment. This creates a tension for organizations that need to attract and retain top talent but are built on location-dependent work models, leading to a professional paradox for employees wanting both stability and freedom.
Outcome
- The study develops a three-phase process model (splintering, calibrating, and harmonizing) that explains how corporate nomads and their organizations successfully negotiate this new work arrangement. - The integration of corporate nomads is not a one-sided decision but a mutual process of 'boundary work' requiring engagement, negotiation, and trade-offs from both the employee and the company. - Corporate nomads operate as individual outliers who change their personal work boundaries (e.g., location and time) without transforming the entire organization's structure. - Information Technology (IT) is crucial in managing the inherent tensions of this lifestyle, helping to balance organizational control with employee autonomy and enabling integration from a distance.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In today's episode, we're diving into the future of work with a fascinating new study titled "Corporate Nomads: Working at the Boundary Between Corporate Work and Digital Nomadism". It explores how some people are successfully combining a permanent corporate job with a globetrotting lifestyle. To help us unpack this, we have our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: So Alex, let's start with the big picture. We hear a lot about the 'great resignation' and the demand for flexibility. What's the specific problem this study addresses?
Expert: It tackles a real tension in the modern workplace. You have highly skilled professionals who want the freedom and travel of a digital nomad, but also the stability and benefits of a permanent job. For decades, those two things were seen as completely incompatible.
Host: A professional paradox, wanting both stability and total freedom.
Expert: Exactly. And companies are caught in the middle. They need to attract and retain this top talent, but their entire structure—from HR policies to tax compliance—is built for employees who are in a specific location. This study explores how some employees and companies are actually making this paradox work.
Host: So how did the researchers figure out how they're making it work? What was their approach?
Expert: They went straight to the source. The research team conducted in-depth, qualitative interviews with 37 of these ‘corporate nomads’. They collected detailed stories about their journeys, their negotiations with their bosses, and the challenges they faced, which allowed them to build a model based on real-world experience.
Host: And what did that model reveal? What are the key findings?
Expert: The study found that successfully integrating a corporate nomad isn't just a simple decision; it's a mutual process that unfolds in three distinct phases: splintering, calibrating, and harmonizing.
Host: Splintering, calibrating, harmonizing. That sounds very methodical. Can you walk us through what each of those means?
Expert: Of course. 'Splintering' is the initial break from the norm. It’s when an employee, as an individual, starts to deviate from the company's standard location-based practices. This often begins as a test period, maybe a three-month 'workation', to see if it's feasible.
Host: So it’s a trial run, not a sudden, permanent change.
Expert: Precisely. Next comes 'calibrating'. This is the negotiation phase where both the employee and the company establish the new rules. It involves trade-offs. For example, the employee might agree to overlap their working hours with the home office, while the company agrees to manage them based on output, not hours spent online.
Host: And the final phase, 'harmonizing'?
Expert: Harmonizing is when the arrangement becomes the new, stable reality for that individual. New habits and communication rituals are established, often heavily reliant on technology. It’s a crucial finding that these corporate nomads operate as individual outliers; their arrangement doesn't transform the entire company, but it proves it’s possible.
Host: You mentioned technology. I assume IT is the glue that holds all of this together?
Expert: Absolutely. Technology is what makes this entire concept viable. The study highlights that IT tools, from communication platforms like Slack to project management software, are essential for balancing organizational control with the employee’s need for autonomy. It allows for integration from a distance.
Host: This brings us to the most important question for our listeners, Alex. Why does this matter for business? What are the practical takeaways for managers and leaders?
Expert: This is incredibly relevant. The first and biggest takeaway is about talent. In the fierce competition for skilled workers, offering this level of flexibility is a powerful advantage for attracting and retaining top performers who might otherwise leave for freelance life.
Host: So it's a strategic tool in the war for talent.
Expert: Yes, and it also opens up a global talent pool. A company is no longer limited to hiring people within commuting distance. They can hire the best software developer or marketing strategist, whether they live in Berlin, Bali, or Brazil.
Host: What advice does this give a manager who gets a request like this from a top employee?
Expert: The key is to see it as a negotiated process, not a simple yes-or-no policy decision. The study’s three-phase model provides a roadmap. Start with a trial period—the splintering phase. Then, collaboratively define the rules and trade-offs—the calibrating phase. Don't try to create a one-size-fits-all policy from the start.
Host: It sounds like it requires a real shift in managerial mindset.
Expert: It does. Success hinges on moving away from managing by presence to managing by trust and results. One person interviewed put it bluntly: if a manager doesn't trust their employees to work remotely, they're either a bad boss or they've hired the wrong people. It’s about focusing on the output, not the location.
Host: That's a powerful thought to end on. So, to recap: corporate nomads represent a new fusion of job stability and lifestyle freedom. Making it work is a three-phase process of splintering, calibrating, and harmonizing, built on mutual negotiation and enabled by technology. For businesses, this is a strategic opportunity to win and keep top talent, provided they are willing to embrace a culture of trust and flexibility.
Host: Alex, thank you so much for breaking down this insightful study for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping business and technology.
Corporate Nomads, Digital Nomads, Boundary Work, Digital Work, Information Systems
Journal of the Association for Information Systems (2025)
Capturing the “Social” in Social Networks: The Conceptualization and Empirical Application of Relational Quality
Christian Meske, Iris Junglas, Matthias Trier, Johannes Schneider, Roope Jaakonmäki, Jan vom Brocke
This study introduces and validates a concept called "relational quality" to better understand the social dynamics within online networks beyond just connection counts. By analyzing over 440,000 messages from two large corporate social networks, the researchers developed four measurable markers—being personal, curious, polite, and sharing—to capture the richness of online relationships.
Problem
Traditional analysis of social networks focuses heavily on structural aspects, such as who is connected to whom, but often overlooks the actual quality and nature of the interactions. This creates a research gap where the 'social' element of social networks is not fully understood, limiting our ability to see how online relationships create value. This study addresses this by developing a framework to conceptualize and measure the quality of these digital social interactions.
Outcome
- Relational quality is a distinct and relevant dimension that complements traditional structural social network analysis (SNA), which typically only focuses on network structure. - The study identifies and measures four key facets of relational quality: being personal, being curious, being polite, and sharing. - Different types of users exhibit distinct patterns of relational quality; for instance, 'connectors' (users with many connections but low activity) are the most personal, while 'broadcasters' (users with high activity but few connections) share the most resources. - As a user's activity (e.g., number of posts) increases, their interactions tend to become less personal, curious, and polite, while their sharing of resources increases. - In contrast, as a user's number of connections grows, their interactions become more personal and curious, but they tend to share fewer resources.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating study that rethinks how we measure the value of our professional networks. It’s titled "Capturing the “Social” in Social Networks: The Conceptualization and Empirical Application of Relational Quality".
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study introduces a concept called "relational quality". What's that all about?
Expert: It's about looking past the surface. The study argues that to truly understand online networks, we need to go beyond just counting connections or posts. The researchers developed four measurable markers—being personal, curious, polite, and sharing—to capture the actual richness of the relationships people build online.
Host: That brings us to the big problem. When businesses look at their internal social networks, say on platforms like Slack or Yammer, what are they usually measuring, and what are they missing?
Expert: Traditionally, they rely on what’s called Social Network Analysis, or SNA. It’s great at creating a structural map—it shows who is connected to whom and who the central hubs are. But it often overlooks the actual substance of those interactions.
Host: So it’s like seeing the roads on a map, but not the traffic?
Expert: Exactly. You see the connections, but you don't know the nature of the conversation. Is it a quick, transactional question, or is it a deep, trust-building exchange? Traditional analysis was missing the 'social' element of social networks, which limits our ability to see how these online relationships actually create value.
Host: So how did the researchers in this study try to measure that missing social element?
Expert: Their approach was to analyze the language itself. They looked at over 440,000 messages posted by more than 24,000 employees across two large corporate social networks. Using linguistic analysis, they measured the content of the messages against those four key markers I mentioned: how personal, how curious, how polite, and how much sharing was going on.
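As a rough illustration of how a message might be scored against those four markers, consider the toy lexicon-based sketch below. The word lists and proxies are invented for illustration; the study's validated linguistic measures are far more nuanced.

```python
# Toy lexicon-based scoring of a message on the four relational-quality
# markers. Word lists and proxies are illustrative inventions.
import re

PERSONAL = {"i", "me", "my", "you", "your", "we", "us", "our"}
POLITE = {"please", "thanks", "thank", "appreciate", "welcome", "sorry"}
QUESTION_WORDS = {"what", "why", "how", "when", "who", "where"}

def relational_quality(message: str) -> dict:
    words = re.findall(r"[a-z']+", message.lower())
    total = max(len(words), 1)
    return {
        "personal": sum(w in PERSONAL for w in words) / total,
        "curious": (message.count("?")
                    + sum(w in QUESTION_WORDS for w in words)) / total,
        "polite": sum(w in POLITE for w in words) / total,
        # Sharing is proxied here by link count; the study also considers
        # shared documents and other resources.
        "sharing": len(re.findall(r"https?://\S+", message)),
    }

print(relational_quality(
    "Thanks for the update! How did the pilot go? See https://example.com"))
```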
Host: And what did this new lens reveal? What were the key findings?
Expert: First, they confirmed that this "relational quality" is a totally distinct and relevant dimension that complements the traditional structural analysis. It adds a whole new layer of understanding.
Host: You mentioned it helps identify different types of users. Could you give us an example?
Expert: Absolutely. They identified some fascinating profiles. For instance, they found what they call 'Connectors'. These are people with many connections but relatively low posting activity. The study found that when they do interact, they are the most personal.
Host: So they’re quiet but effective relationship builders. Who else?
Expert: On the other end of the spectrum are 'Broadcasters'. These users are highly active, sending lots of messages, but to a more confined group of people. They excelled at sharing resources, like links and documents, but their messages ranked the lowest on being personal, curious, and polite.
Host: That implies a trade-off then. As your activity level changes, the quality of your interactions might change too?
Expert: Precisely. The study found that as a user's number of posts increases, their interactions tend to become less personal and less curious. They shift from dialogue to monologue. In contrast, as a user's number of connections grows, their interactions actually become more personal and curious. It shows building a wide network is different from just being a loud voice.
Host: This is where it gets really interesting. Alex, why does this matter for a business leader? What are the practical takeaways here?
Expert: The implications are significant. First, it shows that simply encouraging "more engagement" on your enterprise network might not be the right goal. You could just be creating more broadcasters, not better collaborators. It’s about fostering the right *kind* of interaction.
Host: It's about quality over quantity. What's another key takeaway?
Expert: It helps businesses identify their hidden influencers. A 'Connector' might be overlooked by traditional metrics that favor high activity. But these are the people quietly building trust and bridging silos between departments. They are cultivating the social capital that is crucial for innovation and collaboration.
Host: So you could use this kind of analysis to get a health check on your company’s internal network?
Expert: Absolutely. It provides a diagnostic tool. Is your network fostering transactional broadcasting, or is it building real, collaborative relationships? Are new hires being welcomed into curious, supportive conversations, or are they just being hit with a firehose of information? This framework helps you see and improve the true social fabric of your organization.
Host: So, to recap: looking beyond just who's connected to whom and measuring the *quality* of interactions—how personal, curious, polite, and sharing they are—paints a much richer, more actionable picture of our internal networks. It reveals different, important user roles like 'Connectors' and 'Broadcasters', proving that more activity doesn't always mean better collaboration.
Host: Alex, thank you so much for breaking down this insightful study for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
Enterprise Social Network, Social Capital, Relational Quality, Social Network Analysis, Linguistic Analysis, Computational Research
Journal of the Association for Information Systems (2025)
What Goals Drive Employees' Information Systems Security Behaviors? A Mixed Methods Study of Employees' Goals in the Workplace
Sebastian Schuetz, Heiko Gewald, Allen Johnston, Jason Bennett Thatcher
This study investigates the work-related goals that motivate employees' information systems security behaviors. It employs a mixed-methods approach, first using qualitative interviews to identify key employee goals and then using a large-scale quantitative survey to evaluate their importance in predicting security actions.
Problem
Prior research on information security behavior often relies on general theories from criminology or public health, which do not fully capture the specific goals employees have in a workplace context. This creates a gap in understanding the primary motivations for why employees choose to follow or ignore security protocols during their daily work.
Outcome
- Employees' security behaviors are primarily driven by the goals of achieving good work performance and avoiding blame for security incidents. - Career advancement acts as a higher-order goal, giving purpose to security behaviors by motivating the pursuit of subgoals like work performance and blame avoidance. - The belief that security behaviors help meet a supervisor's performance expectations (work performance alignment) is the single most important predictor of those behaviors. - Organizational citizenship (the desire to be a 'good employee') was not a significant predictor of security behavior when other goals were considered. - A strong security culture encourages secure behaviors by strengthening the link between these behaviors and the goals of work performance and blame avoidance.
Host: Hello and welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today we’re diving into a question that keeps executives up at night: Why do employees click on that phishing link or ignore security warnings? We’re looking at a study titled, "What Goals Drive Employees' Information Systems Security Behaviors? A Mixed Methods Study of Employees' Goals in the Workplace."
Host: It investigates the work-related goals that truly motivate employees to act securely. And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, companies invest fortunes in firewalls and security software, but we constantly hear that the ‘human factor’ is the weakest link. What’s the big problem this study wanted to solve?
Expert: The core problem is that for decades, we’ve been trying to understand employee security behavior using the wrong lens. Much of the previous research was based on general theories from fields like public health or even criminology.
Host: Criminology? How does that apply to an accountant in an office?
Expert: Exactly. Those theories focus on goals like avoiding punishment or avoiding physical harm. But an employee’s daily life isn’t about that. They're trying to meet deadlines, impress their boss, and get their work done. This study argues that we’ve been missing the actual, on-the-ground goals that drive people in a workplace context.
Host: So how did the researchers get closer to those real-world goals? What was their approach?
Expert: They used a really smart two-part method. First, instead of starting with a theory, they started with the employees. They conducted in-depth interviews across various industries to simply ask people about their career goals and how security fits in.
Host: So they were listening first, not testing a hypothesis.
Expert: Precisely. Then, they took all the goals that emerged from those conversations—things like performance, career advancement, and avoiding blame—and built a large-scale survey. They gave this to over 1,200 employees to measure which of those goals were the most powerful predictors of secure behaviors.
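A minimal sketch of that survey stage might look like the following: a regression of reported security behavior on the goal constructs that emerged from the interviews. The variable names, simulated data, and effect sizes are hypothetical; the study uses validated multi-item scales and a more careful model.

```python
# Simulated survey data mirroring the reported pattern: work performance
# strongest, blame avoidance second, citizenship near zero once the other
# goals are controlled. All values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1200  # matches the reported sample size
df = pd.DataFrame({
    "work_performance":   rng.normal(0, 1, n),  # security meets boss's expectations
    "blame_avoidance":    rng.normal(0, 1, n),
    "career_advancement": rng.normal(0, 1, n),
    "org_citizenship":    rng.normal(0, 1, n),
})
df["security_behavior"] = (0.50 * df.work_performance
                           + 0.30 * df.blame_avoidance
                           + 0.15 * df.career_advancement
                           + rng.normal(0, 1, n))

model = smf.ols("security_behavior ~ work_performance + blame_avoidance"
                " + career_advancement + org_citizenship", data=df).fit()
print(model.params)
```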
Host: A great way to ground the research in reality. So, after speaking to all these people, what did they find? What really makes an employee follow the rules?
Expert: The results were incredibly clear, and the number one driver was not what you might expect. It’s the goal of achieving good work performance.
Host: Not fear of being fired or protecting the company, but simply doing a good job?
Expert: Yes. The belief that secure behaviors help an employee meet their supervisor's performance expectations was the single most important factor. It boils down to a simple calculation in the employee's mind: "Is doing this security task part of what it means to be good at my job?"
Host: That’s a powerful insight. What was the second most important driver?
Expert: The second was avoiding blame. Employees are motivated to follow security rules because they don’t want to be singled out as the person responsible for a security incident, knowing it could have a negative impact on their reputation and career.
Host: So what about appealing to an employee's sense of loyalty or being a 'good corporate citizen'?
Expert: That’s one of the most surprising findings. The desire to be a ‘good employee’ for the company's sake, what the study calls organizational citizenship, was not a significant factor when you accounted for the other goals. It seems that abstract loyalty doesn't drive day-to-day security actions nearly as much as personal, tangible goals do.
Host: This brings us to the most important section for our audience. Alex, what does this all mean for business leaders? How can they use these insights?
Expert: It means we need to fundamentally shift our security messaging. First, managers must explicitly link security to job performance. Make it part of the conversation during performance reviews. Frame it as a core competency, not an IT chore. Success in your role includes being secure with company data.
Host: So it moves from the IT department's problem to a personal performance metric.
Expert: Exactly. Second, leverage the power of blame avoidance, but focus it on career impact. The message isn't just "you'll get in trouble," but "a preventable security incident can be a major roadblock to the promotion you're working toward." It connects security directly to their career advancement goals.
Host: And the third takeaway?
Expert: It's all held together by building a strong security culture. The study found that a good culture is what strengthens the connection between security and the goals of performance and blame avoidance. When being secure is just 'how we do things here,' it becomes a natural part of performing well and protecting one's career.
Host: So, if I can summarize: to really improve security, businesses need to stop relying on generic warnings and start connecting secure behaviors directly to what employees value most: succeeding in their job, protecting their reputation, and advancing their career.
Expert: You've got it. It’s about making security personal to their success.
Host: Fantastic insights, Alex. Thank you for making this so clear and actionable for our listeners.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping the future of business.
Security Behaviors, Goal Systems Theory (GST), Work Performance, Blame Avoidance, Organizational Citizenship, Career Advancement
Journal of the Association for Information Systems (2025)
Technocognitive Structuration: Modeling the Role of Cognitive Structures in Technology Adaptation
Rob Gleasure, Kieran Conboy, Qiqi Jiang
This study investigates how individuals' thought processes change when they adapt to using technology. The researchers propose and test a theory called 'technocognitive structuration', which posits that these mental changes (cognitive adaptations) are a crucial middle step that links changes in technology use to changes in task performance. The theory was tested through an online experiment where participants had to adapt their use of word processing software for a specific task.
Problem
Existing theories often explain how people adapt to technology by focusing on social and behavioral factors, but they largely ignore how these adaptations change our internal mental models. This is a significant gap in understanding, as modern digital tools like AI, social media, and wearables are known to influence how we process information and conceptualize problems. The study addresses this by creating a model that explicitly includes these cognitive changes to provide a more complete picture of technology adaptation.
Outcome
- The study's results confirmed that cognitive adaptation is a critical mediator between technology adaptation and task adaptation. In other words, changing how one thinks about a technology is a key step in translating new feature use into new ways of performing tasks. - Two types of cognitive changes were identified: exploitative adaptations (refining existing mental models) and exploratory adaptations (creating fundamentally new mental models), both of which were found to be significant. - These findings challenge existing research by suggesting that cognitive adaptation is not just a side effect but an essential mechanism to consider when explaining how and why people change their work practices in response to new technology.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating study that looks at what happens inside our brains when we learn to use new technology.
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, thanks for being here.
Expert: It's great to be here, Anna.
Host: The study we’re discussing is titled "Technocognitive Structuration: Modeling the Role of Cognitive Structures in Technology Adaptation". In essence, it explores how our thought processes change when we adapt to technology, and why that mental shift is a crucial middle step between using a new tool and actually getting better at our jobs.
Expert: That's right. It's about the "aha!" moments we have with technology and why they matter.
Host: So let’s start with the big picture. Why is it so important to understand this mental side of technology adoption? What’s the problem this study is trying to solve?
Expert: Well, for decades, theories have focused on social factors or user behavior when explaining how we adapt to new tech. But they’ve largely ignored the internal changes—how these tools literally reshape our mental models of a task.
Host: So, we know *that* people are using the new software, but not *why* they're using it in a particular way or how it's changing their thinking?
Expert: Exactly. And with modern tools like AI, collaboration platforms, and even wearables, this is a huge blind spot. These technologies are designed to influence how we process information. If we don't understand the cognitive component, we only have half the story of why a technology rollout succeeds or fails.
Host: That makes a lot of sense. So how did the researchers actually measure these internal thought processes? It sounds difficult to observe.
Expert: It is tricky, but they used a clever approach. They ran an online experiment in which they asked people to create a CV using standard word processing software. They then split participants into two groups and nudged each toward a different adaptation. One group was asked to make a simple change, like using a new font. The other was asked to do something more unusual: using the 'eye dropper' tool to match the CV's colors to the branding of their target company.
Host: So, two different levels of adapting the technology for the same task.
Expert: Precisely. After the task, they surveyed the participants to measure how their thinking about the task had changed, and how it affected their performance. This allowed them to connect the dots between using a tech feature, changing one's thinking, and adapting one's work.
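Statistically, that dot-connecting step is a mediation test: does cognitive adaptation carry the effect of technology adaptation through to task adaptation? Here is a minimal simulated sketch of that logic; the measures and effect sizes are hypothetical stand-ins for the study's survey constructs.

```python
# Toy mediation check: technology adaptation -> cognitive adaptation ->
# task adaptation. Data and effect sizes are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
tech = rng.normal(0, 1, n)                           # feature adaptation
cog = 0.6 * tech + rng.normal(0, 1, n)               # cognitive adaptation
task = 0.5 * cog + 0.1 * tech + rng.normal(0, 1, n)  # task adaptation

# Path a: technology adaptation -> cognitive adaptation.
a = sm.OLS(cog, sm.add_constant(tech)).fit().params[1]

# Path b: cognitive adaptation -> task adaptation, holding tech constant.
exog = sm.add_constant(np.column_stack([cog, tech]))
b = sm.OLS(task, exog).fit().params[1]

print("indirect (mediated) effect a*b =", round(a * b, 3))
```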
Host: A really interesting setup. So, Alex, what were the key findings? What did they learn?
Expert: The biggest finding confirmed their core theory: cognitive adaptation is the critical bridge. It’s the essential middle step that connects using a new feature to performing a task differently. Simply clicking a new button doesn't do much. The real change happens when that action triggers a new way of thinking about the work.
Host: It's that mental lightbulb moment that truly matters.
Expert: Exactly. And they also identified two distinct types of these mental shifts. The first is 'exploitative adaptation'—which is basically refining an existing mental model. Using a new font to make your CV look a bit sharper falls into this category. You’re still thinking of a CV in the traditional way, just improving it.
Host: Okay, so doing the same thing, but better. What’s the other type?
Expert: The other is 'exploratory adaptation'. This is about creating a fundamentally new mental model. Using the eye-dropper tool to align your CV with a company's brand identity isn't just an improvement; it reframes the CV as a personalized marketing document. It’s a whole new way of conceptualizing the task.
Host: That’s a powerful distinction. Now for the most important question for our audience: why does this matter for business? What are the practical takeaways?
Expert: This is where it gets really interesting for leaders. The first takeaway is about training. It tells us that just showing employees which buttons to press in a new software is not enough. To get real value from a new tool, you have to facilitate a change in their mindset.
Host: So, instead of a simple software tutorial, a manager should be running a workshop on new ways to think about the process that the software supports?
Expert: Precisely. You need to create space for those 'exploratory' aha moments. The goal isn't just user adoption; it's cognitive adaptation. The second key takeaway is for technology designers. The famous principle "Don't Make Me Think" might be incomplete. While tools should be easy to use, the ones that also prompt users to think differently and explore new approaches can lead to far greater performance gains.
Host: Can you give an example of that?
Expert: The study mentioned qualitative data from athletes using fitness wearables. Some athletes who just intuitively followed the app's logic ended up overtraining. The athletes who performed best were those who used the data to critique the tool's assumptions and invent their own, more creative training strategies. They engaged in that deeper, exploratory thinking.
Host: This has been incredibly insightful, Alex. So, to quickly recap for our listeners: when we adopt new technology, the real transformation doesn't happen on the screen, it happens in our minds.
Expert: That's the core message.
Host: This study shows that these mental shifts, or 'cognitive adaptations', are the essential link between new tech features and better work performance. For businesses, this means rethinking training to focus on changing mindsets, not just teaching clicks.
Expert: And for designers, it means creating tools that are not only intuitive but also inspiring.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge.
Technocognitive Structuration, Technology Adaptation, Cognitive Structures, Adaptive Structuration Theory for Individuals, Structuration, Experiment
Journal of the Association for Information Systems (2025)
Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures
Egil Øvrelid, Bendik Bygstad, Ole Hanseth
This study examines how public and professional discussions, known as discourses, shape major changes in large-scale digital systems like national e-health infrastructures. Using an 18-year in-depth case study of Norway's e-health development, the research analyzes how high-level strategic trends interact with on-the-ground practical challenges to drive fundamental shifts in technology programs.
Problem
Implementing complex digital infrastructures like national e-health systems is notoriously difficult, and leaders often struggle to understand why some initiatives succeed while others fail. Previous research focused heavily on the role of powerful individuals or groups, paying less attention to the underlying, systemic influence of how different conversations about technology and strategy converge over time. This gap makes it difficult for policymakers to make sensible, long-term decisions and navigate the evolution of these critical systems.
Outcome
- Major shifts in large digital infrastructure programs occur when high-level strategic discussions (macrodiscourses) and practical, operational-level discussions (microdiscourses) align and converge. - This convergence happens through three distinct processes: 'connection' (a shared recognition of a problem), 'matching' (evaluating potential solutions that fit both high-level goals and practical needs), and 'merging' (making a decision and reconciling the different perspectives). - The result of this convergence is a new "discursive formation"—a powerful, shared understanding that aligns stakeholders, technology, and strategy, effectively launching a new program and direction. - Policymakers and managers can use this framework to better analyze the alignment between broad technological trends and their organization's specific, internal needs, leading to more informed and realistic strategic planning.
Host: Welcome to A.I.S. Insights, the podcast where we connect big ideas with business reality, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today we're diving into a fascinating new study titled "Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures." In short, it explores how the conversations we have—both in the boardroom and on the front lines—end up shaping massive technological changes, like a national e-health system.
Host: To help us break it down, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: It's great to be here, Anna.
Host: So, Alex, let's start with the big picture. We've all seen headlines about huge, expensive government or corporate IT projects that go off the rails. What's the core problem this study is trying to solve?
Expert: The core problem is exactly that. Leaders of these massive digital infrastructure projects, whether in healthcare, finance, or logistics, often struggle to understand why some initiatives succeed and others fail spectacularly. For a long time, the thinking was that it all came down to a few powerful decision-makers.
Host: But this study suggests it's more complicated than that.
Expert: Exactly. It argues that we've been paying too little attention to the power of conversations themselves—and how different streams of discussion come together over time to create real, systemic change. It’s not just about what one CEO decides; it’s about the alignment of many different voices.
Host: How did the researchers even begin to study something as broad as "conversations"? What was their approach?
Expert: They took a very deep, long-term view. The research is built on an incredible 18-year case study of Norway's national e-health infrastructure development. They analyzed everything from high-level policy documents and media reports to interviews with the clinicians and IT staff actually using the systems day-to-day.
Host: Eighteen years. That's some serious dedication. After all that time, what did they find is the secret ingredient for making these major program shifts happen successfully?
Expert: The key finding is a concept they call "discourse convergence." It sounds academic, but the idea is simple. A major shift only happens when the high-level, strategic conversations, which they call 'macrodiscourses', finally align with the practical, on-the-ground conversations, the 'microdiscourses'.
Host: Can you give us an example of those two types of discourse?
Expert: Absolutely. A 'macrodiscourse' is the big-picture buzz. Think of consultants and politicians talking about exciting new trends like 'Service-Oriented Architecture' or 'Digital Ecosystems'. A 'microdiscourse', on the other hand, is the reality on the ground. It's the nurse complaining that the systems are so fragmented she has to tell a patient's history over and over again because the data doesn't connect.
Host: And a major program shift occurs when those two worlds meet?
Expert: Precisely. The study found this happens through a three-step process. First is 'connection', where everyone—from the C-suite to the front line—agrees that there's a significant problem. Second is 'matching', where potential solutions are evaluated to see if they fit both the high-level strategic goals and the practical, day-to-day needs.
Host: And the final step?
Expert: The final step is 'merging'. This is where a decision is made, and a new, shared understanding is formed that reconciles those different perspectives. That new shared understanding is powerful—it aligns the stakeholders, the technology, and the strategy, effectively launching a whole new direction for the program.
Host: This is the critical question, then. What does this mean for business leaders listening right now? How can they apply this framework to their own digital transformation projects?
Expert: This is where it gets really practical. The biggest takeaway is that leaders must listen to both conversations. It’s easy to get swept up in the latest tech trend—the macrodiscourse. But if that new strategy doesn't solve a real, tangible pain point for your employees or customers—the microdiscourse—it's destined to fail.
Host: So it's about bridging the gap between the executive suite and the people actually doing the work.
Expert: Yes, and leaders need to be proactive about it. Don't just wait for these conversations to align by chance. Create forums where your big-picture strategists and your on-the-ground operators can find that 'match' together. Use this as a diagnostic tool. Ask yourself: is the grand vision for our new platform completely disconnected from the daily struggles our teams are facing with the old one? If the answer is yes, you have a problem.
Host: A brilliant way to pressure-test a strategy. So, to sum up, these huge technology shifts aren't just top-down mandates. They succeed when high-level strategy converges with on-the-ground reality, through a process of connecting on a problem, matching a viable solution, and merging toward a new, shared goal.
Expert: That's the perfect summary, Anna.
Host: Alex Ian Sutherland, thank you so much for translating this complex research into such clear, actionable insights.
Expert: My pleasure.
Host: And thanks to all of you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another big idea for your business.
Discursive Formations, Discourse Convergence, Large-Scale Digital Infrastructures, E-Health Programs, Program Shifts, Sociotechnical Systems, IT Strategy
Journal of the Association for Information Systems (2025)
Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.
Problem
With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.
Outcome
- The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary. - The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities. - New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context. - The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled, "Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare."
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study sounds quite specific, but it has broad implications. In a nutshell, what is it about?
Expert: It’s about how smart, autonomous AI systems are fundamentally changing the traditional two-way relationship between a professional and their client—in this case, a doctor and a patient—by turning it into a three-way relationship.
Host: A three-way relationship? You mean Patient, Doctor, and... AI?
Expert: Exactly. The AI is no longer just a passive tool; it’s an active participant, an agent, in the process. This study looks at the new dynamics, roles, and interactions that emerge from this triad.
Host: That brings us to the big problem this research is tackling. Why is this shift from a two-way to a three-way relationship such a big deal?
Expert: Well, the classic patient-doctor dynamic is built on direct communication and trust. But as AI becomes more capable, it starts taking on tasks, making suggestions, and even acting on its own.
Host: It's doing more than just showing data on a screen.
Expert: Precisely. It's becoming an agent. The problem is, our existing models for how we work and interact don't account for this third, non-human agent in the room. This creates a gap in understanding how roles are redefined and where new conflicts might arise.
Host: How did the researchers actually study this? What was their approach?
Expert: They conducted a very detailed, in-depth case study. They focused on a specific piece of technology: an AI-powered health companion designed to help patients manage a complex bladder condition.
Host: So, a real-world application.
Expert: Yes. It involved a wearable sensor and a smartphone app that monitors the patient's condition and provides real-time guidance. The researchers closely observed the interactions between patients, their doctors, and this new AI agent to see how the relationship changed over time.
Host: Let’s get into those changes. What were the key findings from the study?
Expert: The first major finding is that the AI almost always becomes a central intermediary. Communication that was once directly between the patient and doctor now often flows through the AI.
Host: So the AI is like a new go-between?
Expert: In many ways, yes. The second finding, which is really interesting, is something they call 'attribute interference'.
Host: That sounds a bit technical. What does it mean for us?
Expert: It just means that the responsibilities and even the knowledge start to overlap. For instance, both the doctor and the AI can analyze patient data to spot a potential infection. This creates confusion: Who is responsible? Who should the patient listen to?
Host: I can see how that would get complicated. What else did they find?
Expert: They found that new 'triadic delegation choices' emerge. Patients and doctors now have to decide which tasks to give to the human and which to the AI.
Host: Can you give an example?
Expert: Absolutely. A routine task, like logging data 24/7, is perfect for the AI. But delivering a difficult diagnosis—a task with a high emotional context—is still delegated to the doctor. The choice depends on the task's complexity and emotional weight.
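To make that delegation heuristic concrete, here is a minimal Python sketch. The task attributes, thresholds, and names are illustrative assumptions for this example, not a model taken from the study.

```python
# Hypothetical sketch of a "triadic delegation choice" heuristic.
# All attribute names and thresholds are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Delegatee(Enum):
    AI = "agentic IS artifact"
    DOCTOR = "doctor"
    BOTH = "doctor supported by the AI"


@dataclass
class Task:
    name: str
    complexity: float        # 0.0 (routine) to 1.0 (highly complex)
    emotional_weight: float  # 0.0 (neutral) to 1.0 (highly sensitive)


def delegate(task: Task) -> Delegatee:
    """Route a task to the AI, the doctor, or both, following the
    complexity/emotional-context logic discussed in the episode."""
    if task.emotional_weight >= 0.7:
        # High emotional context (e.g., delivering a difficult diagnosis)
        # stays with the human doctor.
        return Delegatee.DOCTOR
    if task.complexity <= 0.3:
        # Routine, low-stakes work (e.g., 24/7 data logging) suits the AI.
        return Delegatee.AI
    # Everything in between is handled jointly.
    return Delegatee.BOTH


if __name__ == "__main__":
    examples = [
        Task("continuous data logging", 0.1, 0.1),
        Task("flagging a possible infection", 0.5, 0.4),
        Task("communicating a difficult diagnosis", 0.8, 0.9),
    ]
    for t in examples:
        print(f"{t.name} -> {delegate(t).value}")
```

The point of the sketch is the shape of the decision, not the numbers: emotionally loaded tasks route to the human, routine tasks to the AI, and the middle ground is shared.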
Host: And I imagine this new setup isn't without its challenges. Did the study identify any new conflicts?
Expert: It did. The most common were 'autonomy conflicts'—basically, a fear from both patients and doctors of losing control to the AI. There were also new information imbalances and a blurring of the lines around traditional medical roles.
Host: This is the crucial part for our listeners, Alex. Why does this matter for business leaders, even those outside of healthcare?
Expert: Because this isn't just a healthcare phenomenon. Anywhere you introduce an advanced AI to mediate between your employees and your customers, or even between different teams, you are creating this same triadic relationship.
Host: So a customer service chatbot that works with both a customer and a human agent would be an example.
Expert: A perfect example. The key business takeaway is that you can't design these systems as simple tools. You have to design them as teammates. This means clearly defining the AI's role, its responsibilities, and its boundaries.
Host: It's about proactive management of that new relationship.
Expert: Exactly. Businesses need to anticipate 'attribute interference'. If an AI sales assistant can draft proposals, you need to clarify how that affects the role of your human sales team. Who has the final say? How do they collaborate?
Host: So clarity is key.
Expert: Clarity and trust. The study showed that conflicts arise from ambiguity. For businesses, this means being transparent about what the AI does and how it makes decisions. You have to build trust not just between the human and the AI, but between all three agents in the new triad.
Host: Fascinating stuff. So, to summarize, as AI becomes more autonomous, it’s not just a tool, but a third agent in professional relationships.
Expert: That's the big idea. It turns a simple line into a triangle, creating new pathways for communication and delegation, but also new potential points of conflict.
Host: And for businesses, the challenge is to manage that triangle by designing for collaboration, clarifying roles, and intentionally building trust between all parties—human and machine.
Host: Alex, thank you so much for breaking this down for us. This gives us a lot to think about.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the future of business and technology.
Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
Journal of the Association for Information Systems (2025)
Digital Infrastructure Development Through Digital Infrastructuring Work: An Institutional Work Perspective
Adrian Yeow, Wee-Kiat Lim, Samer Faraj
This paper investigates the complexities of developing large-scale digital infrastructure through a case study of an electronic medical record (EMR) system implementation in a U.S. hospital. It introduces and analyzes the concept of 'digital infrastructuring work'—the combination of technical, social, and symbolic actions that organizational actors perform. The study provides a framework for understanding the tensions and actions that shape the outcomes of such projects.
Problem
Implementing new digital infrastructures in large organizations is challenging because it often disrupts established routines and power structures, leading to resistance and project stalls. Existing research frequently overlooks how the combination of technical tasks, social negotiations, and symbolic arguments by different groups influences the success or failure of these projects. This study addresses this gap by providing a more holistic view of the work involved in digital infrastructure development from an institutional perspective.
Outcome
- The study introduces 'digital infrastructuring work' to explain how actors shape digital infrastructure development, categorizing it into three forms: digital object work (technical tasks), DI relational work (social interactions), and DI symbolic work (discursive actions).
- It finds that project stakeholders strategically combine these forms of work to either support change or maintain existing systems, highlighting the contested nature of infrastructure projects.
- The success or failure of a digital infrastructure project is shown to depend on how effectively different groups navigate the tensions between change and stability by skillfully blending technical, relational, and symbolic efforts.
- The paper demonstrates that technical work itself carries institutional significance and is not merely a neutral backdrop for social interactions, but a key site of contestation.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the often-messy reality of large-scale technology projects. With me is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: We're discussing a study titled "Digital Infrastructure Development Through Digital Infrastructuring Work: An Institutional Work Perspective". In short, it looks at the complexities of implementing something like a new enterprise-wide software system, using a case study of an electronic medical record system in a hospital.
Expert: Exactly. It provides a fascinating framework for understanding all the moving parts—technical, social, and even political—that can make or break these massive projects.
Host: Let’s start with the big problem. Businesses spend millions on new digital infrastructure, but so many of these projects stall or fail. Why is that?
Expert: It’s because these new systems don’t just replace old software; they disrupt routines, workflows, and even power structures that have been in place for years. People and departments often resist, but that resistance isn’t always obvious.
Host: The study looked at a real-world example of this, right?
Expert: It did. The researchers followed a large U.S. hospital trying to implement a new, centralized electronic medical record system. The goal was to unify everything.
Expert: But they immediately ran into a wall. The hospital really consisted of two powerful groups: the central hospital administration and the semi-independent School of Medicine, which had its own way of doing things, its own processes, and its own IT systems.
Host: So it was a turf war disguised as a tech project.
Expert: Precisely. The new system threatened the autonomy and revenue of the medical school's clinics, and they pushed back hard. The project ground to a halt not because the technology was bad, but because of these deep-seated institutional tensions.
Host: So how did the researchers get such a detailed view of this conflict? What was their approach?
Expert: They essentially embedded themselves in the project for several years. They conducted over 50 interviews with everyone from senior management to the IT staff on the ground. They sat in on project meetings, observed the teams at work, and analyzed project documents. It was a true behind-the-scenes look at what was happening.
Host: And what were the key findings from that deep dive?
Expert: The central finding is a concept the study calls ‘digital infrastructuring work’. It’s a way of saying that to get a project like this done, you need to perform three different kinds of work at the same time.
Host: Okay, break those down for us. What’s the first one?
Expert: First is ‘digital object work’. This is what we traditionally think of as IT work: reprogramming databases, coding new interfaces, and connecting different systems. It's the hands-on technical stuff.
Host: Makes sense. What's the second?
Expert: The second is ‘relational work’. This is all about the social side: negotiating with other teams, building coalitions, escalating issues to senior leaders, or even strategically avoiding meetings and delaying tasks to slow things down.
Host: And the third?
Expert: The third is ‘symbolic work’. This is the battle of narratives. It’s the arguments and justifications people use. For example, one team might argue for change by highlighting future efficiencies, while another team resists by claiming the new system is incompatible with their "unique and essential" way of working.
Host: So the study found that these projects are a constant struggle between groups using all three of these tactics?
Expert: Exactly. In the hospital case, the team trying to implement the new system was doing technical work, but the opposing teams were using relational work, like delaying participation, and symbolic work, arguing their old systems were too complex to change.
Expert: A fascinating example was how one team timed a major upgrade to their own legacy system to coincide with the rollout of the new one. Technically, it was just an upgrade. But strategically, it was a brilliant move that made integration almost impossible and sabotaged the project's timeline. It shows that even technical work can be a political weapon.
Host: This is the crucial part for our audience, Alex. What are the key business takeaways? Why does this matter for a manager or a CEO?
Expert: The biggest takeaway is that you cannot treat a digital transformation as a purely technical project. It is fundamentally a social and political one. If your plan only has technical milestones, it’s incomplete.
Host: So leaders need to think beyond the technology itself?
Expert: Absolutely. They need to anticipate strategic resistance. Resistance won't always be a direct 'no'. It might look like a technical hurdle, a sudden resource constraint, or an argument about security protocols. This study gives leaders a vocabulary to recognize these moves for what they are—a blend of relational and symbolic work.
Host: So what’s the practical advice?
Expert: You need a political plan to go with your project plan. Before you start, map out the stakeholders. Ask yourself: Who benefits from this change? And more importantly, who perceives a loss of power, autonomy, or budget?
Expert: Then, you have to actively manage those three streams of work. You need your tech teams doing the digital object work, yes. But you also need leaders and managers building coalitions, negotiating, and constantly reinforcing the narrative—the symbolic work—of why this change is essential for the entire organization. Success depends on skillfully blending all three.
Host: So to wrap up, a major technology project is never just about the technology. It's a complex interplay of technical tasks, social negotiations, and competing arguments.
Host: And to succeed, leaders must orchestrate all three fronts at once, anticipating resistance, and building the momentum needed to overcome it.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights, powered by Living Knowledge. Join us next time for more actionable intelligence from the world of academic research.
Digital Infrastructure Development, Institutional Work, IT Infrastructure Management, Healthcare Information Systems, Digital Objects, Case Study
Communications of the Association for Information Systems (2025)
Unpacking Board-Level IT Competency
Jennifer Jewer, Kenneth N. McKay
This study investigates how to best measure IT competency on corporate boards of directors. Using a survey of 75 directors in Sri Lanka, the research compares the effectiveness of indirect 'proxy' measures (like prior work experience) against 'direct' measures (assessing specific IT knowledge and governance practices) in reflecting true board IT competency and its impact on IT governance.
Problem
Many companies struggle with poor IT governance, which is often blamed on a lack of IT competency at the board level. However, there is no clear consensus on what constitutes board IT competency or how to measure it effectively. Previous research has relied on various proxy measures, leading to inconsistent findings and uncertainty about how boards can genuinely improve their IT oversight.
Outcome
- Direct measures of IT competency are more accurate and reliable indicators than indirect proxy measures.
- Boards with higher directly-measured IT competency demonstrate stronger IT governance.
- Among proxy measures, having directors with work experience in IT roles or management is more strongly associated with good IT governance than having directors with formal IT training.
- The study validates a direct measurement approach that boards can use to assess their competency gaps and take targeted steps to improve their IT governance capabilities.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: In a world driven by digital transformation, a company's success often hinges on its technology strategy. But who oversees that strategy at the highest level? The board of directors. Today, we’re unpacking a fascinating study from the Communications of the Association for Information Systems titled, "Unpacking Board-Level IT Competency."
Host: It investigates a critical question: how do we actually measure IT competency on a corporate board? Is it enough to have a former CIO on the team, or is there a better way? Here to guide us is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So Alex, let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The problem is that many companies have surprisingly poor IT governance. We see the consequences everywhere—data breaches, failed digital projects, and missed opportunities. Often, the blame is pointed at the board for not having enough IT savvy.
Host: But "IT savvy" sounds a bit vague. How have companies traditionally tried to measure this?
Expert: Exactly. That's the core issue. For years, research and board recruitment have relied on what this study calls 'proxy' measures. Think of it as looking at a resume: does a director have a computer science degree? Did they once work in an IT role? The problem is, these proxies have led to inconsistent and often contradictory findings about what actually improves IT oversight.
Host: It sounds like looking at a resume isn't telling the whole story. So, how did the researchers approach this differently?
Expert: They took a more direct route. They surveyed 75 board directors in Sri Lanka and compared those traditional proxy measures with 'direct' measures. Instead of just asking *if* a director had IT experience, they asked questions to gauge the board's *actual* collective knowledge and practices.
Host: What do you mean by direct measures? Can you give an example?
Expert: Certainly. A direct measure would assess the board's knowledge of the company’s specific IT risks, its IT budget, and its overall IT strategy. It also looks at governance mechanisms—things like, is IT a regular item on the meeting agenda? Does the board get independent assurance on cybersecurity risks? It measures what the board actively knows and does, not just what’s on paper.
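To illustrate what such a direct assessment might look like in practice, here is a minimal Python sketch. The survey items, the 1-to-5 scale, and the simple averaging are assumptions made for this example, not the study's validated instrument.

```python
# Hypothetical "direct measure" self-assessment for board IT competency.
# Items and scoring scale are illustrative assumptions, not the study's.

DIRECT_MEASURE_ITEMS = [
    "The board understands the company's key IT risks.",
    "The board understands the IT budget and major IT investments.",
    "The board understands the overall IT strategy.",
    "IT is a regular item on the board meeting agenda.",
    "The board obtains independent assurance on cybersecurity risks.",
]


def board_it_competency_score(responses: list[int]) -> float:
    """Average the board's self-ratings (1 = strongly disagree,
    5 = strongly agree) across all items into a single 1-5 score."""
    if len(responses) != len(DIRECT_MEASURE_ITEMS):
        raise ValueError("expected one response per item")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(responses) / len(responses)


if __name__ == "__main__":
    # Example board: strong on risk awareness, weaker on governance process.
    print(board_it_competency_score([4, 4, 3, 2, 2]))  # prints 3.0
```

Even a toy version like this makes the contrast with proxy measures visible: it scores what the board currently knows and does, item by item, rather than what appears on individual directors' resumes.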
Host: That makes perfect sense. So, when they compared the two approaches—the resume proxies versus the direct assessment—what were the key findings?
Expert: The results were quite clear. First, the direct measures of IT competency were found to be far more accurate and reliable indicators of a board's capability than any of the proxy measures.
Host: And did that capability translate into better performance?
Expert: It did. The second key finding was that boards with higher *directly-measured* IT competency demonstrated significantly stronger IT governance. This creates a clear link: a board that truly understands and engages with technology governs it more effectively.
Host: What about those traditional proxy measures? Were any of them useful at all?
Expert: That was another interesting finding. When they looked only at the proxies, having directors with practical work experience in IT management was a much better predictor of good governance than just having directors with a formal IT degree. Hands-on experience seems to matter more than academic training from years ago.
Host: Alex, this is the most important question for our listeners. What does this all mean for business leaders? What are the key takeaways?
Expert: I think there are three critical takeaways. First, stop just 'checking the box'. Appointing a director who had a tech role a decade ago might look good, but it's not a silver bullet. You need to assess the board's *current* and *collective* knowledge.
Host: So, how should a board do that?
Expert: That's the second takeaway: use a direct assessment. This study validates a method for boards to honestly evaluate their competency gaps. As part of an annual review, a board can ask: Do we understand the risks and opportunities of AI? Are we confident in our cybersecurity oversight? This allows for targeted improvements, like director training or more focused recruitment.
Host: You mentioned that competency is also about what a board *does*.
Expert: Absolutely, and that’s the third takeaway: build strong IT governance mechanisms. True competency isn't just knowledge; it's process. Simple actions like ensuring the Chief Information Officer regularly participates in board meetings or making technology a standard agenda item can massively increase the board’s capacity to govern effectively. It turns individual knowledge into a collective, strategic asset.
Host: So, to summarize: It’s not just about who is on the board, but what the board collectively knows and, crucially, what it does. Relying on resumes is not enough; boards need to directly assess their IT skills and build the processes to use them.
Expert: You've got it. It’s about moving from a passive, resume-based approach to an active, continuous process of building and applying IT competency.
Host: Fantastic insights. That’s all the time we have for today. Alex Ian Sutherland, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping the future of business.
Board of Directors, Board IT Competency, IT Governance, Proxy Measures, Direct Measures, Corporate Governance