Responsible AI starts with the artifact: Challenging the concept of responsible AI in IS research (2025)

1. Introduction

EJIS has received a very large number of paper submissions in recent years on the topic of artificial intelligence (AI), many of which focus in some way on responsibility. This trend is evident across many IS journals and is accompanied by dedicated journal special issues, conference tracks, workshops, and panels across IS and other disciplines, each delving into various facets of responsible AI. Regardless of whether AI applications fall into the categories of predictive, generative, or agentic, the potential for unintended consequences stemming from their use has generated considerable debate on how they should be designed, implemented, and managed (Conradie & Nagel, 2024; Kenthapadi et al., 2023). Against this backdrop, responsible AI has become a central notion in research and practice.

A common struggle for researchers working on topics connected to responsible AI is to capture what the concept encompasses and what it entails for how we understand the AI artifact: the actual artificial and intelligent aspect of the technology that is under scrutiny. This stems in no small part from a lack of conceptual clarity about what responsibility means with regard to AI, which has affected both scholarly development and the potential implications of AI for practice (Mikalef et al., 2022). The past few years have seen a swathe of efforts, primarily from intergovernmental organisations and practice, to develop frameworks that capture key elements of what responsible AI should entail (Fjeld et al., 2020). While there are merits in following a practice-based approach, there has been little focus on conceptually defining the notion of responsible AI in the field of information systems (IS) (Lobschat et al., 2021). This has left IS researchers and practitioners with a limited sense of direction concerning what responsible AI effectively means, how it is developed, managed, and used, and what the consequences of its use are (Mäntymäki et al., 2023). New streams of research that focus on AI fairness, transparency, bias, and explainability, among others, are veering into distinct sub-fields, which makes it challenging to establish a cumulative tradition (Arrieta et al., 2020).

A prevailing issue with the current discourse on responsible AI frameworks and associated research is that it builds on a principles-first approach. Such a logic implies that AI applications must adhere to predefined and fixed social and ethical norms. For instance, responsible AI frameworks suggest AI applications should align with ethical norms and account for fairness, inclusiveness, and transparency (Dignum, 2019). Nevertheless, such notions are often vague, compound, and conflicting depending on which stakeholder’s standpoint one adopts (Mittelstadt, 2019). While a principles-first approach appears to be a logical way to design and develop AI applications, it becomes problematic when the principles conflict with key facets of the AI artifact. Contemporary AI applications are characterised by inscrutability, learning, and autonomy, aspects that are antithetical to some of the key principles of responsible AI (Berente et al., 2021). While, for example, responsible AI principles emphasise human control and oversight, contemporary AI applications build on notions of autonomy, self-learning, and agency (Königs, 2022).

Adopting a contrarian lens, we problematise the fact that most existing frameworks and definitions of responsible AI do not take the defining characteristics of the technological artifact as their starting point. A consequence is a misalignment between the characteristics of contemporary AI systems and how they are expected to be deployed responsibly. The purpose of this editorial is to critically reflect on some fundamental assumptions that become apparent when comparing what AI entails as a technology with the existing definitions and subsequent frameworks on how it should be designed, developed, and deployed responsibly. We draw on six points of contradiction to illustrate how the key assumptions of prevailing responsible AI frameworks clash with core characteristics of AI technologies.

This editorial aims to spark a debate on how we approach the notion of responsible AI and to highlight important ways in which some of the defining characteristics of current AI technologies are misaligned with the principles that underpin current approaches to AI responsibility. We argue that the concept of responsible AI needs to be fundamentally rethought for it to be effective in research and practice.

1.1. Intended audiences of this editorial

This editorial has three audiences. First, it should be of interest to all those directly involved with responsible AI research specifically, and perhaps research on AI generally, be they authors, reviewers, or editors who are conducting or evaluating such studies. We hope the concepts and issues discussed in this editorial will provide a guide for conducting and evaluating AI studies. Second, there may be IS researchers who are considering research on responsible AI but are unsure how to proceed, and one or more of the concepts proposed here may spark their interest. Our hope is that this editorial will lead to the nuances of responsible AI research being better understood and will stimulate further reflection, debate, and ultimately greater clarity and better execution of research. Third, we hope this paper contributes to responsible AI research in fields beyond IS, given that the topic transcends many disciplines.

We next discuss how placing the AI artifact at the centre of focus can enable a more concrete and impactful understanding of responsible AI. After elaborating on the tensions that currently exist, we proceed to discuss what such an approach could mean for different streams of research and practice.

2. Tensions between principles of responsible AI vs characteristics of the AI artifact

When examining the central definitions of responsible AI and the corresponding frameworks (Dignum, 2019; EC, 2019; Mikalef et al., 2022), there is an evident contradiction between how AI is treated and its actual defining characteristics. The core principles of responsible AI, as well as the underlying definitions, convey certain assumptions about the AI artifact. For instance, the principle of transparency implies that contemporary AI models can be scrutinised, understood, and explained (Von Eschenbach, 2021). The principle of auditability assumes perfect traceability of AI models and their training, disregarding the fact that they are in a state of continuous flux and adaptation as they learn and interact in their respective environments (Kroll, 2021). Even definitions of responsible AI that describe it as “… the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values” imply that there is uniformity and stability in human values (Vassilakopoulou et al., 2022). Our position is that several key defining characteristics of the AI artifact are not aligned with current principles-based approaches; these relate to AI being agentic, autonomous, dynamic, inscrutable, adaptable, and heterogeneous. In other words, frameworks that build on a principles-based approach tend to assume attributes of the AI artifact that are contrary to contemporary AI approaches.


2.1. Passive vs Agentic

A key aspect of many responsible AI frameworks is that they regard AI as a passive entity within its context of use. This assumption unduly limits the scope of AI’s interactions with the stakeholders and groups with which it is in direct or indirect contact. For example, the European Commission’s high-level expert group states that humans must be informed when interacting with AI agents and that there should be the ability to review decisions made by an AI (EC, 2019). Such principles overlook the fact that AI can be more than a passive entity within its context of use and can lead to potentially unintended consequences through various forms of agency. Agency colloquially refers to the ability to take actions and affect outcomes (Emirbayer & Mische, 1998). Contemporary AI applications embed algorithmic agency that manifests in many ways and is finding increasingly widespread adoption, whether in the form of decisions or other modalities (Chan et al., 2023). Goetze (2022) argues that there is a responsibility gap between the engineers of AI systems and the outcomes of their designed systems, whereby those designing agentic systems are removed from the consequences of their deployment. As AI applications increasingly exert agency in the environments in which they operate, it is important to anticipate the potential harm from such systems and to develop approaches to mitigate or minimise any adverse consequences. Doing so will require understanding the sociotechnical attributes of increasingly agentic systems and developing approaches and processes for simulating and understanding the effects they might have at different levels. It also necessitates a better understanding of how such agency alters the structures and arrangements of the environments in which it is situated (Bengio et al., 2024).

2.2. Controllable vs autonomous

A complementary but distinct facet of AI systems is that they are becoming progressively autonomous (Berente et al., 2021). This notion extends the previous point and concerns the degree to which AI applications are assumed to be programmable to produce precisely calibrated and predictable outputs. While agency highlights the capacity of AI systems to exert an active effect, autonomy concerns the (in)ability to control and direct such systems in a predefined way. A principal assumption of existing responsible AI frameworks is that, through specific actions and practices during design and development, AI systems can be fully controlled, and thus that the range of their outcomes is known and predictable. Aspects such as accuracy and reliability assume that through specified actions the decisions and outcomes of AI systems can be precisely foreseen, or at least that the decisional space is predictable (Raji et al., 2020; Ryan, 2020). However, contemporary AI applications, whether predictive or generative, are designed on the premise of continuous and dynamic learning through their interactions, as well as access to new data and rewards based on the approach they have chosen to complete a task (Yampolskiy, 2024). The autonomy such systems exhibit has also been highlighted as a significant source of unpredictability, a key factor contributing to stress that may have unforeseen impacts on users who interact with AI applications (Issa et al., 2024). It is therefore challenging to foresee how AI systems will operate within a given context, given their capacity for self-government, learning, and adaptation. Even though responsible AI frameworks strongly advocate control over AI applications and oversight of their possible outcomes, such control is in stark contrast with the design characteristics of AI applications and how they operate.

2.3. Static vs dynamic

A natural consequence of the previous facet is that AI systems are also inherently dynamic. This dynamism results from how they are architected and how they learn, interact, and evolve in a constantly changing environment. The dynamic nature of AI systems clashes with the static perception ingrained in existing responsible AI frameworks, which consider both the context of use and the IT artifact itself as invariant. Societal norms and values are in constant flux, particularly in relation to expectations around technology and its use (Papagiannidis et al., 2025). Similarly, the inner workings of many modern AI systems, such as recurrent neural networks, reflect this dynamism, as they evolve and adapt based on new and emerging input from the outside world (Han et al., 2021). Several types of AI systems are re-trained on the fly and change based on updated information and novel signals. As such, there is a dynamic interplay between changes in the context in which AI systems operate and the inner workings of the AI systems themselves. This dynamic nature of both systems and their contexts contrasts with existing principles of responsible AI that assume fixity and invariability. For example, definitions of responsible AI highlight the need for AI applications to align with ethical principles and societal values (Tripathi & Kumar, 2025). Nevertheless, neither ethical principles nor societal values remain unchanged. When considering the role that digital technologies play in society, there is an inherent dynamism in how technology alters societal norms and ethics, and how society continuously exerts a force in shaping the technology we use (Srinivasan, 2018).
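To make the notion of re-training on the fly concrete, the sketch below shows an online learner whose decision boundary shifts as new batches arrive from a drifting environment; an audit of any fixed snapshot of such a model quickly becomes outdated. The model choice, synthetic data, and drift pattern are illustrative assumptions rather than a reference implementation.

```python
# A minimal sketch of "on the fly" re-training, assuming scikit-learn is available.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

def batch(shift):
    """Simulate a data stream whose underlying distribution drifts over time."""
    X = rng.normal(loc=shift, scale=1.0, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X0, y0 = batch(shift=0.0)
model.partial_fit(X0, y0, classes=[0, 1])  # initial training on the original regime

for t, shift in enumerate([0.5, 1.0, 1.5], start=1):
    Xt, yt = batch(shift)
    print(f"batch {t}: accuracy before update = {model.score(Xt, yt):.2f}")
    model.partial_fit(Xt, yt)  # the deployed model adapts to the new regime
```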

2.4. Explainable vs inscrutable

As AI systems become increasingly sophisticated and complex, explaining their inner workings and outcomes becomes more challenging. Inscrutability has emerged as a key facet of modern AI systems, whereby making AI procedures and outputs intelligible to specific parties becomes a challenging endeavour (Berente et al., 2021). This characteristic of modern AI applications contrasts with the principles of transparency and accountability posited in responsible AI frameworks (Mikalef et al., 2022). Approaches to AI explainability can be divided into two broad categories: ante-hoc (self-interpretable) model explanations and post-hoc (after-training) model explanations (Retzlaff et al., 2024). A growing number of applications and modern approaches to AI build on models that are “black-box” by design and can therefore only be explained through approximative methods (Rai, 2020). However, an important aspect of decision-making concerns the extent to which outcomes need to be explainable to enable transparency and satisfy ethical concerns (Asatiani et al., 2021), which raises the question of what transparency means to different stakeholders and how to adequately address the requirement for explanations in largely inscrutable systems. As the complexity and opacity of contemporary AI models increase, the notion of explanations around AI outcomes becomes broader (Arrieta et al., 2020). To deal with the inscrutability of AI models, explanations now emphasise key data used to train models, or even features and facets that may be relevant to outcomes, customised to the respective stakeholders to whom they are addressed (Herrera, 2025). Indeed, leading AI companies have developed “glass box” transparency, which counters the technical inscrutability of AI applications by opening up the sociotechnical processes associated with their planning, design, development, and use, and by involving multiple stakeholders (domain experts, AI and data scientists, customers, and industry consortia) in such efforts (Tarafdar et al., 2025).
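The ante-hoc versus post-hoc distinction can be illustrated with a small sketch: a linear model whose coefficients are themselves the explanation, set against a black-box ensemble that can only be explained approximately after training. The dataset, models, and the use of permutation importance as the post-hoc method are illustrative assumptions, not the approach of any framework cited above.

```python
# A minimal sketch of ante-hoc vs post-hoc explanation, assuming scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc: a linear model whose coefficients are directly inspectable.
interpretable = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("largest coefficient:", X.columns[abs(interpretable.coef_[0]).argmax()])

# Post-hoc: a black-box ensemble, explained approximately after training by
# measuring how much shuffling each feature degrades held-out performance.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
print("most influential feature:", X.columns[result.importances_mean.argmax()])
```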

2.5. Invariable vs adaptable

One of the fundamental shortcomings of current ethical AI frameworks is that they assume a uniform set of ethical and societal values, and thus a fixed set of corresponding design choices for AI to adhere to. For instance, several responsible AI frameworks highlight the principle of societal well-being, whereby AI systems should be designed and developed from a societal perspective (EC, 2019). Nevertheless, differences and evolutionary changes in cultural, social, religious, or political beliefs, as well as in laws and regulations, require an adaptable approach to designing and implementing AI applications (Gabriel, 2020). Similarly, AI applications have been suggested to adversely impact the psychological well-being and behaviours of teenagers and young adults. Examples such as these highlight the need for flexibility and adaptability in how AI applications are designed and deployed based on the contingencies of end-user groups. Nevertheless, current approaches and frameworks for responsible AI do not provide insight into how such adaptability can be achieved and typically assume an invariable and uniform set of values to be aimed for. Effectively, a one-size-fits-all approach ignores the fact that different instantiations of AI will need to consider the contingencies and requirements of the sub-group or context in which they will be utilised. Such adaptation is already evident in the deployment of both predictive and generative AI applications, where foundation models are customised to their context of use and a range of techniques for doing so effectively has emerged (Schneider et al., 2024). Recognising that different norms, ethics, and codes of conduct exist within each context of use—whether that is a group, an organisation, or even an entire country or union of countries—highlights the need to consider the adaptability of AI models and to customise them appropriately (Díaz-Rodríguez et al., 2023). Effectively, contemporary AI applications facilitate customisation and adaptability, which stands in contrast to responsible AI frameworks that assume fixity and invariability.

2.6. Homogeneous vs heterogeneous

Many responsible AI frameworks assume a uniformity in AI systems that does not reflect their real-world complexity (EC, 2019). In practice, AI technologies emerge from a confluence of different models, techniques, and computational architectures, each exhibiting distinct behaviours, dependencies, and levels of interpretability. This heterogeneity challenges traditional notions of responsibility, as different AI configurations require tailored accountability mechanisms. While one system may allow for clear human oversight, another may involve distributed decision-making across multiple automated agents (Papagiannidis et al., 2025). Addressing the governance challenges of heterogeneous AI systems requires a more granular and context-sensitive approach that acknowledges the interplay between different AI applications, their degrees of autonomy, and the evolving nature of their interactions. Furthermore, as AI ecosystems grow increasingly modular and interconnected, responsibility becomes more diffused across various actors, making it difficult to attribute accountability to a single entity. A rigid regulatory framework that assumes uniformity risks either over-regulating low-risk applications or failing to sufficiently address the emergent risks of more complex AI deployments.

3. Thoughts on the way forward

Through the six points outlined previously, this editorial explores how IS researchers and practitioners can approach the notion of responsible AI through a more pragmatic and contemporary lens. The growing chasm between responsible AI principles and the characteristics of the actual AI artifact, as experienced in practice, serves as a point of reflection concerning how future IS research can think about responsible AI in a meaningful and relevant manner. The implications we draw from these tensions therefore act as a non-exhaustive list of suggestions for IS scholars and practitioners.

3.1. Agentic

In terms of the agentic nature of contemporary AI applications, there are several recent examples in which the agency exhibited by AI applications has resulted in negative or unintended consequences. Such effects are discernible in terms of physical harm (e.g., autonomous vehicles, cobots) or psychological and emotional harm (e.g., suicide, emotional trauma) to different user groups and in a range of contexts. IS researchers can explore the dynamics and types of interactions that develop between humans and various types of AI agents to understand how such agency gradually exerts its effect on individuals or situations. While certain agentic aspects may have an immediate impact, others may be discernible only over time, by altering social structures or the psychological states of users and groups. Ethnographic studies can help capture the evolution of the dynamics that unfold between AI agents and humans or groups. Similarly, sandbox or experimental approaches can illuminate situations or conditions that may result in adverse outcomes before widespread deployment. Such approaches can produce important insights about the effects that interactions with AI agents may produce, and how to design or configure applications to minimise their occurrence. Likewise, experiments testing different levels and forms of agency can reveal the impact they have on individuals and groups. Such approaches could also yield important insights concerning limits to affordances and agency in connection with the potential physical and cognitive effects they may have.

3.2. Autonomous

The extent to which the outcomes of AI applications can be controlled, though, is questionable. As AI applications are designed with increased autonomy, the ability to predict and prevent every possible outcome becomes progressively more challenging. Such characteristics highlight the importance of piloting emulations, or even simulations on digital twins of actual deployments, to understand potential adverse effects. A promising research direction, with rich practical implications, would be to develop approaches, metrics, and certification methods to capture levels of autonomy, probabilities of risk, deviating behaviours, and digression from goals. Doing so would require a combination of computational approaches as well as active observation and qualitative assessment of AI agents in their context of operation. An important takeaway from the autonomy that contemporary applications exhibit is that it is insufficient to assume that embedding certain design principles will eliminate adverse outcomes. The complexity of these systems and their interaction with the external environment necessitate a different set of approaches to observe their possible behaviour. In addition, autonomy forces a logic of continuous observation and live adaptation, where responsible AI follows the entire lifecycle of AI agents and is particularly important during deployment and widespread usage. In such an approach, timeliness and reactive speed are of paramount importance. Furthermore, as autonomy becomes a key facet, it is important to develop yardsticks to ensure that the function and value of such applications do not diverge significantly from what was intended.
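As one illustration of what such a yardstick might look like, the sketch below compares the distribution of an agent’s recent decisions against a reference window established during validation and raises an alert when the divergence crosses a tolerance. The metric (KL divergence), the action categories, and the threshold are illustrative assumptions, not a certification method.

```python
# A minimal sketch of continuous observation of an autonomous system's behaviour.
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Reference behaviour from validation: share of decisions falling into each action.
reference = np.array([0.70, 0.25, 0.05])

# Live behaviour after deployment, recomputed over sliding windows (assumed values).
live_windows = [np.array([0.68, 0.27, 0.05]),
                np.array([0.55, 0.30, 0.15]),
                np.array([0.30, 0.30, 0.40])]

THRESHOLD = 0.05  # assumed tolerance for digression from expected behaviour

for t, window in enumerate(live_windows, start=1):
    score = kl_divergence(window, reference)
    status = "ALERT: behaviour digressing" if score > THRESHOLD else "within tolerance"
    print(f"window {t}: KL = {score:.3f} -> {status}")
```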

3.3. Dynamic

Both AI systems and the environments in which they operate are in a continuous state of flux due to the dynamic nature of the interactions that develop between them. This dynamic interaction is often overlooked in responsible AI work, where effects are often assumed to be fixed and unchanging. The case of Microsoft’s Tay chatbot is a clear illustration of how AI applications dynamically interact, learn, and adapt based on the environment in which they are deployed (Wolf et al., 2017). Even if such applications are designed, developed, and trained according to responsible AI frameworks, their self-learning and dynamic nature entails that they remain malleable as they interact with and learn from their environment. An important insight from a dynamic view of AI systems is that responsibility does not end once a system is developed but must follow it through its entire lifecycle. This perspective highlights the need to develop practices for ensuring that such dynamic interactions do not result in adverse or unintended consequences. At the same time, such interactions expand the locus of responsibility to other parties and stakeholders that directly or indirectly interact with these systems. As users are also actively engaged in “shaping” AI systems through their interactions, it is likely that we will see concepts such as collective AI responsibility emerge, and even an etiquette of AI interaction. Simultaneously, as different forms of interaction unfold between humans and AI, social norms and values will evolve. Understanding these types of dynamic interactions at a macro level can reveal a lot about the two-way effects that develop, and how to implement strategies at different levels, whether in the design of AI systems, the education and training of individuals, or in regulations and policies, to mitigate any potential detrimental effects.

3.4. Inscrutable

One key aspect of responsible AI frameworks is their advocacy for transparency and explainability of AI systems. Yet, as AI systems become increasingly complex, opaque, and dynamic, explaining how they derive outcomes is impractical and may be unnecessary in many cases. One key assumption underpinning the idea of transparency and explainability is that the stakeholders interested in learning how an outcome was produced can comprehend the functioning of enormously complex and layered AI algorithms. In addition, the notion of explainability assumes that there is an invariability in how explanations are conveyed to different stakeholders and that explanations are provided after the event has occurred. As inscrutability becomes a core facet of contemporary AI algorithms, it is high time to rethink what explainability should entail, for whom, and when. Within the IS domain, it is crucial to understand what constitutes a satisfactory explanation for users of AI systems, depending on the specific application, its domain and context of use, user characteristics, the broader set of stakeholders, and the regulatory and legal circumstances, necessitating a sociotechnical approach to tackling transparency and explainability (Tarafdar et al., 2025). Furthermore, when considering the temporal dimension, certain types of explanations may be satisfactory during an event, such as a critical decision-making task, yet may require a different form and format after the event. Thus, the question of how we design satisfactory and appropriate explanations based on the degree of expertise and knowledge of different recipients and the context of use will likely be an important avenue for future research.

3.5. Adaptable

Responsible AI frameworks often treat ethical principles as fixed and universally applicable. However, AI systems are increasingly deployed across diverse and dynamic sociocultural, legal, and organisational contexts that demand adaptability. IS researchers can address this gap by developing context-sensitive models of responsible AI (Ghosh et al., 2025). For instance, comparative case studies or cross-cultural fieldwork can surface how ethical values diverge across regions and industries. Participatory design methods, such as co-creation workshops or stakeholder mapping, can help practitioners tailor AI governance to local needs and value systems. For practice, this implies that ethical audits should not rely solely on standardised checklists but instead incorporate reflexive processes that adapt to evolving societal norms. Practitioners might use feedback loops involving diverse user groups or ethics review boards that revisit AI policies over time. In education, scenario-based training can expose teams to the variability of ethical expectations across contexts. Researchers could also prototype adaptive governance mechanisms, such as modular policy templates or real-time value alignment tools. These allow organisations to update principles dynamically as AI systems and their environments change. Ultimately, embedding adaptability into responsible AI means treating values as living constructs and building systems, processes, and oversight mechanisms that can learn and adjust accordingly.
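As a concrete, if simplified, illustration of the modular policy templates mentioned above, the sketch below composes an effective governance configuration from a baseline policy plus per-context overrides, so that settings can be revised as norms or regulations change. All field names, contexts, and values are hypothetical and purely illustrative.

```python
# A minimal sketch of a modular, context-sensitive policy template (assumed fields).
BASE_POLICY = {
    "min_user_age": 18,
    "human_review_required": False,
    "explanation_style": "technical",
    "data_retention_days": 90,
}

# Per-context overrides, maintained and revisited as local norms and rules evolve.
CONTEXT_OVERRIDES = {
    "eu_healthcare": {"human_review_required": True,
                      "explanation_style": "plain_language"},
    "teen_social_app": {"min_user_age": 13,
                        "data_retention_days": 30,
                        "explanation_style": "plain_language"},
}

def policy_for(context: str) -> dict:
    """Compose the effective policy for a deployment context from the template."""
    return {**BASE_POLICY, **CONTEXT_OVERRIDES.get(context, {})}

print(policy_for("eu_healthcare"))
print(policy_for("teen_social_app"))
```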

3.6. Heterogeneous

In contrast to the homogeneous view implicit in many responsible AI frameworks, real-world AI systems are increasingly multilayered and heterogeneous, spanning various models, vendors, data sources, and operational and organisational contexts (Wieringa, 2020). IS researchers can support practitioners by developing methods to map and analyse this complexity. For example, system ethnographies and architecture mapping can help uncover the layered dependencies and distributed responsibilities across AI components. Sociotechnical system modelling and responsibility assignment matrices can support the design of accountability structures tailored to modular AI systems. Practitioners should be encouraged to conduct responsibility walkthroughs across the AI lifecycle, identifying who is accountable for which part of a system and under what conditions. Methods such as incident tracing and root-cause analysis can be extended beyond technical failures to include ethical breakdowns across heterogeneous components. In practice, organisations might implement layered governance strategies, with differentiated oversight based on the autonomy, interpretability, and criticality of each module. For example, low-risk modules could be monitored using lightweight compliance tools, while high-risk ones require continuous monitoring and third-party auditing. IS educators can integrate systems thinking into AI governance training to prepare practitioners to manage AI as part of interconnected infrastructures. By foregrounding heterogeneity, IS researchers and professionals can shift away from universalist assumptions and towards flexible, architecture-aware responsibility models that reflect the realities of complex AI deployments.
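One way to read the layered governance strategy described above is as a simple mapping from each module’s risk profile to an oversight regime, as in the sketch below. The module names, risk attributes, and assignment rules are hypothetical assumptions intended only to illustrate architecture-aware, differentiated accountability.

```python
# A minimal sketch of layered governance over a heterogeneous AI pipeline.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    owner: str            # party accountable for this component
    autonomy: str         # "low" | "medium" | "high"
    interpretable: bool
    criticality: str      # "low" | "medium" | "high"

def oversight(m: Module) -> str:
    """Assign differentiated oversight based on a module's risk profile (assumed rules)."""
    if m.criticality == "high" or (m.autonomy == "high" and not m.interpretable):
        return "continuous monitoring + third-party audit"
    if m.criticality == "medium":
        return "periodic internal review"
    return "lightweight compliance checks"

pipeline = [
    Module("document OCR", "vendor A", "low", True, "low"),
    Module("credit scoring model", "in-house data science", "medium", False, "high"),
    Module("LLM customer assistant", "vendor B", "high", False, "medium"),
]

for m in pipeline:
    print(f"{m.name} ({m.owner}): {oversight(m)}")
```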

4. Conclusion

This editorial is written in the spirit of offering fundamental ideas that may be helpful to researchers who wish to study the topic of responsible AI, or at least to take a more informed approach to understanding it. Our position is that several key defining characteristics of the AI artifact are not aligned with current principles-based approaches; these relate to AI being agentic, autonomous, dynamic, inscrutable, adaptable, and heterogeneous. We hope this editorial will be helpful as it summarises important insights which, taken as a whole, are not yet embedded in current practices of responsible AI research.

In delineating the objectives of this editorial, we wish to emphasise its guiding intent. We do not suggest that every submission to EJIS, or any IS journal, must address every characteristic explored herein. Such an expectation would contradict the emergent and evolving character of AI, where new challenges and ethical dilemmas will inevitably arise alongside technological progress and deeper domain knowledge. Rather, we believe that establishing a reference base is preferable to offering no guidance at all; without any criteria, there is a heightened risk that research on responsible AI will be assessed by arbitrary or inappropriate standards. We hope this editorial helps spark a lively debate about responsible AI and catalyses critical reflection on how AI artifact characteristics can guide the design, execution, development, and assessment of future responsible AI research.

Disclosure statement

No potential conflict of interest was reported by the author(s).
