Rapid technological development in robotics and Artificial Intelligence has given rise to a dilemma that is becoming harder to ignore. Should we continue to entrust all our decision-making to humans, fallible though they may be, or should we instead delegate decision-making to robots, relinquishing control to the machines for the greater good?
This chapter, written in collaboration with my good friend and colleague Jason Millar, engages this dilemma, exploring the notion of robots as ‘experts’ rather than tools. When robots are regarded as mere tools, their true capabilities may be obscured. Appealing to the normative pull of evidence, we argue that decision-making authority should, in some circumstances, be delegated to expert robots in cases where those robots consistently perform better than their human counterparts. This shift in decision-making responsibility is especially important in time-sensitive situations where humans lack the capacity to process vast amounts of information, an advantage held by fast-computing expert robots like IBM’s Watson.
Here, we explore four hypothetical co-robotic cases in which, we argue, expert robots ought to be granted decision-making authority even when they disagree with their human counterparts. We also address the responsibilities of robots placed in decision-making roles, and the challenges we are likely to face as a result. For example, unpredictable expert robots, acting under time pressure and without the ability to explain their reasoning, pose challenges for assessing liability. Overall, this chapter aims to offer a narrative of what delegating and relinquishing control to expert robots could look like; it does not assess the maintenance of human control or the trust and reliability factors required to make the decision to delegate.