Photo: Coding Justice Panel Collage

DC Pro Bono Week 2021: Coding Justice

In recent decades, our society has seen a boom in technological advancement, seemingly with one goal in mind: to improve efficiency in every aspect of our lives, from connecting with friends and family and shopping online to political engagement around the causes we care most about. But should efficiency be the primary goal of technological advancement? Can we trust that tech companies have the best intentions with the data they collect about us? What happens when that data is misused or abused? These are some of the questions raised in the Coding Justice panel held on October 25, 2021, during DC Pro Bono Week 2021.

The focus of the panel was to highlight a growing area of the law around combating algorithmic injustice through enforcement of existing civil rights and civil liberties protections that have long protected our most treasured freedoms and supported every person’s meaningful participation in society. The panel was moderated by Vienna Thompkins, Data Analyst in the Science Program at the Center for Policing Equity, and featured the following notable practitioners and scholars:

  • Valerie Schneider, Associate Professor of Law and Director of the Clinical Law Center at Howard University School of Law;
  • Matthew Bruckner, Associate Professor of Law at Howard University School of Law;
  • Alan deLevie, Senior Software Engineer of Machine Learning at Casetext and Adjunct Associate Professor at American University Washington College of Law;
  • Clare Garvie, Senior Associate with the Center on Privacy and Technology and Adjunct Professor of Law at Georgetown University Law Center; and
  • Nassim Moshiree, Policy Director at the American Civil Liberties Union of the District of Columbia.

Professor Valerie Schneider opened the discussion by describing the intersection of algorithmic injustice and civil rights laws. In Valerie’s practice running Howard University’s Fair Housing Clinic, algorithmic injustice shows up when rental housing decisions are made not by people but by algorithms created by third parties that attempt to determine whether an applicant will be a suitable tenant. While using these companies may seem efficient, the systems they rely on have faults, such as inaccurate or incomplete data collected about a prospective tenant. One example is a tenant who was sued for eviction by a former landlord but succeeded in getting the case dismissed. Oftentimes, the third-party company will have only the eviction lawsuit on file and will not disclose the disposition of the case itself. The mere existence of an eviction case in the prospective tenant’s record leads to the incorrect assumption that the tenant is unlikely to be a suitable tenant at a given property.

Professor Matthew Bruckner continued the discussion in the context of student lending. Matthew pointed to a case where a particular company was initially praised for its use of artificial intelligence and machine learning to extend credit to individuals who likely would not otherwise have qualified. However, an independent study of the same company revealed that a loan-refinance applicant who had graduated from a historically Black college or university received less favorable terms than a nearly identical applicant (with the same salary, same savings, same job title, etc.) who graduated from a predominantly white institution. Matthew’s example highlights that although access to credit in this scenario is seemingly innocuous, the disparate way in which the algorithm was applied created a very different outcome for each student. Matthew offered ways to challenge these types of practices through enforcement of the laws prohibiting unfair and deceptive acts or practices (UDAP) that exist at the state, local, and federal levels. In addition, federal agencies like the Consumer Financial Protection Bureau and the Federal Trade Commission impose specific notice requirements and take other enforcement actions against wrongdoers under the Equal Credit Opportunity Act and the Fair Credit Reporting Act.

Alan deLevie provided a high-level understanding of how machine learning models work, specifically how they use examples as inputs to predict a certain outcome. Perhaps unsurprisingly, bias and other related issues can be baked into these models from the start, influencing the predictions the models make. Because so many models exist, and many of them are large and complicated, Alan suggests that advocates engaging in work around algorithmic injustice probe the metrics used to measure a model’s performance.
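
As a concrete illustration of what probing performance metrics can look like, here is a minimal sketch in Python. It uses synthetic data and illustrative feature names chosen for this example only (not any model or dataset discussed on the panel): a simple classifier is trained on past decisions that already reflect a group disparity, and its performance is then broken out by group rather than reported as a single overall number.

```python
# Minimal, hypothetical sketch: train a simple model on labeled examples,
# then compare its outcomes across demographic groups instead of relying
# on a single overall accuracy figure. Data and feature names are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "examples as inputs": two numeric features plus a group label (0 or 1).
n = 2000
group = rng.integers(0, 2, size=n)
income = rng.normal(50 + 10 * group, 15, size=n)   # feature correlated with group
history = rng.normal(0, 1, size=n)                 # feature unrelated to group

# Synthetic outcome the model learns to predict (e.g., past "approved" decisions).
# Because past decisions here depend on group, bias is baked into the training data.
y = (income + 5 * history + 8 * group + rng.normal(0, 10, size=n) > 60).astype(int)

X = np.column_stack([income, history])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# A single overall metric can hide group-level differences.
print("overall accuracy:", round((pred == y).mean(), 3))

# Per-group metrics surface disparate outcomes.
for g in (0, 1):
    mask = group == g
    accuracy = (pred[mask] == y[mask]).mean()
    approval_rate = pred[mask].mean()
    print(f"group {g}: accuracy={accuracy:.3f}, predicted approval rate={approval_rate:.3f}")
```

Even when the overall accuracy looks reasonable, the per-group breakdown can reveal that the model approves one group far more often than the other, which is the kind of performance question Alan encourages advocates to ask.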

Clare Garvie, whose work focuses on law enforcement’s use of facial recognition technology, discussed how these uses affect our rights, particularly the right to a fair trial and other due process protections. Clare’s presentation focused on debunking four assumptions we as a society hold about artificial intelligence:

  • the assumption that the technology is reliable, when in fact there are no baseline metrics for determining the reliability of facial recognition technology amid constant misidentification;
  • the assumption that the technology is neutral, when the fields in which it is deployed (e.g., the criminal justice system) are rife with institutionalized racism;
  • the assumption that adding a human reviewer somehow ensures reliability, when human error occurs all the time; and
  • the assumption that efficiency is a good goal, when in fact the use of facial recognition technology contributes to the mass incarceration of people for low-level crimes and creates lifelong disadvantages for those individuals in other areas (e.g., employment, housing, and access to public benefits).

Finally, Nassim Moshiree discussed ways to keep the communities most impacted by algorithmic injustice engaged and involved in determining how this technology is used. Currently, ordinary community members are not part of the deliberations and thus have no say in whether surveillance technology should be used at all by the government. Some solutions to this problem include educating ourselves as community members, raising public awareness, and working to remove the secrecy or “unknowability” of machine learning. ACLU-DC is seeking to do just that by convening a local coalition working to shift decisions about government surveillance to community members, requiring local D.C. agencies to disclose the surveillance technology they use or plan to acquire and to seek Council approval for each specific use of that technology. This coalition is also pushing for legislation to require the creation of an independent privacy advisory board with experts and advocates, as well as members of communities who have been impacted by government surveillance. Having this framework will allow community members to have these discussions in a more meaningful and in-depth way.

For more helpful resources on algorithmic injustice, visit the following link: https://guides.lib.uw.edu/racial-justice/algorithm
