Counterfactual Assessment and Valuation for Awareness Architecture (CAVAA)
Funding

European Commission, EIC 101071178; CO-PI: Julian Savulescu

Project Dates

October 2022 - October 2026

Project Manager

Alberto Giubilini

Researchers

Christopher Register, Maryam Ali Khan

Project Description

Robotics and artificial intelligence technologies are becoming increasingly advanced, and some researchers hope to build robots or AI systems that are aware of the world around them. The team at the Uehiro Oxford Institute will examine ethical issues raised by AI awareness, including questions of privacy and value alignment. Together with CAVAA partners at Uppsala University and Sorbonne University, the Oxford team will investigate human judgments about privacy and other values, which may in turn inform policy recommendations for the design, construction, and regulation of AI systems.

Examining the ethical issues raised by aware AI involves several components. In some of our work, we examine normative and philosophical questions: what it means for an AI to be aware, whether AI systems can infringe our privacy, and what it might take for AI to be aligned with our norms and values. In other work, we investigate the relational psychology of human interaction with AI, as well as human preferences about how AI systems should treat the information they learn about us. Finally, to make reasonable assessments of the risks posed by AI, and reasonable suggestions for designing ethical AI, we learn from researchers who are building current generations of AI architectures and social robots.

In the spring of 2025, we will host a workshop on Privacy, Awareness, and Alignment in AI, with participants from Oxford, Google DeepMind, Sorbonne, Uppsala, Cambridge, and Sheffield.