In a new article, researchers introduce the capabilities approach–contextual integrity (CA-CI) framework, which addresses privacy and dignity risks posed by modern artificial intelligence (AI) systems, especially foundation models whose capabilities evolve across contexts and purposes. In a case study, they demonstrate how CA-CI can operationalize the European Union (EU) AI Act’s fundamental rights impact assessments, harm thresholds, and anticipatory governance. The article, by researchers at Carnegie Mellon University and the University of Michigan, is published in IEEE Security & Privacy.

