Our January 20 workshop on procurement aimed to get participants comfortable and confident in developing questions and guidelines that could be used when purchasing data and technology. There is an increased focus on improving the processes government uses to buy data and technology, and people view procurement as an important place for preventing harm and a tool for injecting values like equity and justice into data systems. In our workshop, our primary activity focused on the pre-award component of the procurement process. At the bottom of this document, you’ll find links that provide a deeper look at how public agencies are approaching procurement reform before and after a contract is awarded.
In the workshop, attendees discussed several different scenarios related to purchases of data or technology. Here are the instructions they received:
The first-ever manager of the newly formed Department of Data and Technology in your city worries obsessively about making the wrong decisions when it comes to buying data and technology. You work in this department. While the organization hasn’t made major mistakes yet, the manager is always reading case studies of how other communities have made decisions that have put vulnerable residents at risk, or decisions that have made it more difficult for departments to operate transparently, efficiently, independently, and in line with their community’s values.
Your manager is hoping to build a culture where people in the department feel comfortable asking critical questions about technology. They have developed this workshop, which shares some of the most unfortunate use cases that could be ripped from the headlines.
In this workshop, you and your peers will develop questions that could have been asked before harmful data and technologies were purchased. Your manager drew these scenarios from some of the most problematic real-world cases, using them as a guide for what not to do. Through this training, your manager hopes that your organization will avoid many of the mistakes that others have already made.
One of the scenarios discussed involved the hidden, frequently exploited labor force behind many of the artificial intelligence (AI) systems in use. Several recent news articles on the topic of AI colonialism told the stories of some of the people who perform this important labor in Venezuela and Kenya, and they were the inspiration for the following scenario.
A neighboring city contracted with a vendor for AI technology that was used to create a dataset of public infrastructure conditions from imagery. A trusted local human rights organization points out that the company relies on frequently exploited workers in Venezuela to tag and process the images that are instrumental to the algorithm’s function. These workers typically act as independent contractors, are usually paid very low wages, and lack the power to object when unscrupulous contractors invent ways to avoid paying them for their labor. The human rights organization has asked your mayor to rescind the contract, but doing so may trigger a lawsuit from the company.
What questions could have been asked and what rules and practices could have been made before the contract was awarded that would have prevented this government from buying a product that exploits extremely vulnerable people?
In the workshop, participants thought the scenario could have been prevented had the following questions been asked prior to the purchase:
- Who developed the algorithm, and what do we know about them?
- How does the algorithm work?
- Who performs the labor to prepare data for the algorithm?
- What wages were paid to these workers (developers, data workers, etc.), and how do they compare to prevailing wages?
- What sorts of guarantees can the company provide that all people who contribute to the algorithm (employees, contractors, etc.) are fairly compensated and treated with dignity and respect?
- Does the company or do its contractors engage in union-busting activities?
- How can the procurement process be improved to require certifications from potential vendors?
Other scenarios in the workshop included:
- Social media platforms that were used to both engage and surveil residents;
- Programs that used potentially biased data to make program eligibility determinations;
- Data systems that couldn’t be easily modified to adopt more inclusive data standards;
- Technologies with origins in border enforcement; and
- Vendors that shared data with law enforcement agencies in violation of city policy.
Participants felt strongly that:
- Purchasing decisions will be more just and more effective if people from different backgrounds and people with different experiences are part of the process;
- It’s not enough for processes to provide an opportunity for public input. Residents should be directly involved in making purchasing decisions;
- People without technical backgrounds should have power to ask questions about data and technology;
- We shouldn’t view every problem as one that needs to be solved by technology.
Our first Data Literacy for Data Stewards cohort concludes on January 27 with activities related to data governance and group reflections on the series.
For a deeper look at procurement reform, see the following resources:
- A guide (and introductory blog post) to de-risking government technology projects, produced by the federal government’s 18F.
- The disconnect between software development and government contracting – blog post by Waldo Jaquith
- What Lies Beneath – blog post by Mark Headd
- Resources from the Harvard Kennedy School Government Performance Lab, including: