Commentary - (2022) Volume 10, Issue 12

Public Sector Decision Making and Human-AI Interactions: "Automation Bias" and "Selective Adherence" to Algorithmic Advice
Luigi Celati*
 
Department of Public Administration, University of Modena and Reggio Emilia, Modena, Italy
 
*Correspondence: Luigi Celati, Department of Public Administration, University of Modena and Reggio Emilia, Modena, Italy, Email:

Received: 23-Nov-2022, Manuscript No. RPAM-22-19198; Editor assigned: 28-Nov-2022, Pre QC No. RPAM-22-19198 (PQ); Reviewed: 13-Dec-2022, QC No. RPAM-22-19198; Revised: 21-Dec-2022, Manuscript No. RPAM-22-19198 (R); Published: 29-Dec-2022, DOI: 10.35248/2315-7844.22.10.378

Description

Public entities are increasingly using artificial intelligence algorithms as decision-making tools in the hope that they will eliminate human decision-makers' biases. Yet these tools may also introduce new biases when people interact with algorithms, a concern that has grown alongside officials' heightened awareness of algorithmic bias and discrimination. We discuss the implications of our findings for decision-making in the public sector in the age of automation. Overall, our research highlights the potential drawbacks of automating the administrative state for already marginalised and vulnerable populations. Algorithms based on Artificial Intelligence (AI) are being adopted in the public sector across all spheres of government. AI algorithms are used in fields as diverse as policing, welfare, criminal justice, healthcare, immigration, and education, increasingly permeating non-routine and high-stakes aspects of bureaucratic work. In essence, they are tools that demonstrate human-level performance on tasks traditionally associated with human intelligence [1]. The public sector's growing and deepening reliance on AI and machine learning technology has been described as "transformative" for public administrations.

These changes are driven by the promise of better, cheaper, and more efficient policy solutions. Algorithms are also said to carry a "promise of impartiality", in contrast to human intuition-based decision-making, which is subject to biases and may lead to prejudice. In other words, the use of AI in decision-making is claimed to help us overcome our cognitive biases and limitations. This has been a key factor in the adoption of such technologies in high-stakes public sector fields such as criminal justice and law enforcement. At the same time, the use of AI algorithmic technologies in the public sector has raised significant concerns, chief among them algorithmic accountability and the supervision of algorithmic outputs [2].

It therefore becomes crucial to understand how these technologies may affect public sector decision-making, including the cognitive biases they may introduce. This is all the more relevant now that, with the emergence of algorithmic governance, human decision-makers are cast as crucial safeguards and decisional arbiters on issues of algorithmic bias. In an increasingly computerised administrative state, determining the extent to which our cognitive limitations allow us to serve as effective decisional mediators becomes essential. The first bias we examine, automation bias, refers to a well-known human tendency to instinctively defer to automated systems despite warning signs or contradictory information from other sources. In other words, human actors have been found to uncritically delegate their decision-making to machines [3]. Although robust, these effects have so far been documented mainly for precursors of AI algorithms, such as pilot navigation systems, and in settings outside the public sector. The second bias we propose and test concerns decision-makers' selective adherence to algorithmic recommendations and can be inferred from previous public administration studies on biased information processing.
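
As a purely illustrative aside, automation bias is often operationalised in the human-factors literature as the rate of "commission errors": cases where a decision-maker follows incorrect automated advice even though a contradictory cue is available. The minimal Python sketch below uses entirely hypothetical trial records and field names to show how such a rate could be computed; it is not drawn from the studies discussed here.

# Hypothetical sketch: automation bias measured as the share of trials in
# which incorrect automated advice was followed despite a contradictory cue.

def commission_error_rate(trials):
    """Rate of following incorrect advice when a contradictory cue was shown."""
    eligible = [t for t in trials
                if not t["advice_correct"] and t["contradictory_cue"]]
    if not eligible:
        return 0.0
    followed = sum(t["choice"] == t["advice"] for t in eligible)
    return followed / len(eligible)

# Hypothetical trial records: the automated advice, whether it was correct,
# whether a contradictory cue was available, and the final human choice.
trials = [
    {"advice": "deny",  "advice_correct": False, "contradictory_cue": True, "choice": "deny"},
    {"advice": "grant", "advice_correct": False, "contradictory_cue": True, "choice": "grant"},
    {"advice": "deny",  "advice_correct": False, "contradictory_cue": True, "choice": "grant"},
]

print(f"Commission error rate: {commission_error_rate(trials):.2f}")  # 0.67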

Selective adherence to algorithmic advice

Studies in law and computer science have recently examined the selective processing of algorithmic outputs, analysing criminal courts' use of algorithmic risk assessments to identify patterns consistent with motivated reasoning and selective adherence; public administration scholars, however, have not yet investigated this topic. A second concern is what follows if selective adherence biases persist as algorithms come to be used more frequently. Feelings of moral and ethical disengagement and diminished accountability may dull decision-makers' awareness of potential biases and implicit prejudice. Or worse: by giving decision-makers a plausible justification for biased choices, algorithmic advice may legitimise hidden beliefs and give them free rein. In other words, algorithms could "license" decision-makers to act on their biases: the seeming "neutrality" or "objectivity" of algorithms would deflect accusations of bias and/or lend legitimacy to biased or prejudiced conclusions [4]. An automated recommendation could thus amount to a strong endorsement of decision-makers' prejudices. We therefore expect biased adherence to be far more pronounced for algorithmic advice than for human advice.
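
To make the hypothesised pattern concrete: if selective adherence is at work, advice should be followed more often when it is congruent with the decision-maker's prior judgment than when it contradicts it. The sketch below, with hypothetical records and labels of our own invention, shows how that gap could be measured; it is not an analysis from the cited studies.

def adherence_rate(records, congruent):
    """Share of cases where advice was followed, restricted to cases where
    the advice matched (congruent=True) or contradicted (congruent=False)
    the decision-maker's prior judgment."""
    subset = [r for r in records if (r["advice"] == r["prior"]) == congruent]
    if not subset:
        return float("nan")
    return sum(r["choice"] == r["advice"] for r in subset) / len(subset)

# Hypothetical case records: prior judgment, algorithmic advice, final choice.
records = [
    {"prior": "high_risk", "advice": "high_risk", "choice": "high_risk"},
    {"prior": "high_risk", "advice": "low_risk",  "choice": "high_risk"},
    {"prior": "low_risk",  "advice": "low_risk",  "choice": "low_risk"},
    {"prior": "low_risk",  "advice": "high_risk", "choice": "high_risk"},
]

print("Adherence, congruent advice:  ", adherence_rate(records, congruent=True))   # 1.0
print("Adherence, incongruent advice:", adherence_rate(records, congruent=False))  # 0.5

A sizeable gap between the two rates is the signature of selective, rather than uniform, adherence.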

Little is yet known about the prevalence of cognitive biases in algorithm-assisted public sector decision-making. Scholars in law and computer science have already conducted peer-reviewed empirical studies on this subject in the context of algorithm use in pretrial criminal justice decisions. Their preliminary results, described here, are consistent with our hypothesised patterns of selective adherence. In these studies, participants are given information on arrests and asked to predict the likelihood of recidivism; their predictions are then compared with those made without the aid of an algorithmic risk assessment [5].
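
One common way to quantify how strongly an advisory signal shifts a human judgment is the weight-of-advice (WOA) statistic from the advice-taking literature; we do not claim the cited studies use this exact measure. A minimal sketch with hypothetical values:

def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial).
    0 means the advice was ignored; 1 means it was fully adopted."""
    if advice == initial:
        return float("nan")  # undefined when advice equals the initial estimate
    return (final - initial) / (advice - initial)

# Hypothetical recidivism-likelihood estimates (percent): the participant
# first estimates 30, the algorithmic risk score suggests 70, and the
# revised estimate is 50, i.e. the advice was weighted at 0.5.
print(weight_of_advice(initial=30, advice=70, final=50))  # 0.5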

References

Citation: Celati L (2022) Public Sector Decision Making and Human-AI Interactions: "Automation Bias" and "Selective Adherence" to Algorithmic Advice. Review Pub Administration Manag. 10:378.

Copyright: © 2022 Celati L. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.