Investigating AI and Surveillance Technology in Your Community
September 30, 2020
The use of algorithmic decision-making tools is on the rise across our institutions, from criminal justice and education to public benefits and health care. Take facial recognition as one example: Police can use a facial recognition app, built on a database of more than three billion images scraped from social media, to try to identify protesters in a crowd or a person accused of a crime. And while some cities have banned the use of facial recognition, others are installing multi-million-dollar systems aiming to facilitate real-time tracking of individuals, similar to the mass surveillance systems in China.
Meanwhile, communities most impacted by these technologies have little power over the algorithms that judge them. This panel will discuss how to investigate and report on the use of AI and surveillance technologies in your own community for greater transparency and accountability.
This session is designed for:
- Journalists looking for ways to cover the increasing prevalence of artificial intelligence systems in governance and policing, and their impacts on communities
- Newsroom leaders looking to develop new story ideas and angles in their criminal justice and government coverage
- Anyone who is interested in data journalism, systemic bias, and future trends