Want Accountable AI in Government? Start with Procurement
- Accountable AI in government starts with procurement practices.
- The New Orleans Police Department’s software deal raised ethical concerns.
- Palantir’s free software agreement bypassed traditional oversight.
- Procurement processes for AI vary dramatically across cities.
- Most current AI procurement lacks a structured solicitation process.
Understanding AI Procurement in Local Governments
The story of accountable AI in government often starts with how cities buy things—and that covers far more than school buses or office supplies. Since 2018, cases like the New Orleans Police Department’s use of Palantir’s predictive policing software have exposed troubling gaps in ethical oversight. The uproar wasn’t only about the technology itself; people asked who had approved such a deal, which remained shrouded in secrecy until civil rights advocates brought it to light and stirred public outcry. Contracts with private vendors in the U.S. generally operate under strict procurement rules, but this case showed how easily those rules can be sidestepped. Because Palantir offered the software for free, a purchasing decision became a donation, neatly avoiding city regulations that would normally require public debate and council approval. Key officials didn’t even know the partnership existed, let alone question its implications.

This case inspired our research team, made up of scientists from Carnegie Mellon University and the University of Pittsburgh, to dig into the opaque processes that shape how AI technologies are procured in the public sector. We interviewed 19 city employees across seven anonymized cities to find out what’s really going on. Our finding: procurement practices differ widely from city to city, and those differences significantly shape how AI is governed, raising big questions about transparency and ethics in public sector technology choices.
The Gaps in AI Governance Practices
It’s easy to think of procurement as a simple bidding process for government contracts—vendors submit proposals and city employees evaluate them. But this standard model doesn’t capture the full picture for AI technologies, which come with their own complexities. Departments can bypass the traditional bidding process entirely, making small-dollar purchases or using government-issued purchasing cards to acquire low-cost AI tools. Some AI systems even arrive as donations or are bundled into university partnerships, with little consideration of public accountability.
The Role of Oversight in AI Purchases
A striking observation from our investigation is that how a city organizes its procurement can either bolster or erode responsible AI governance. In cities with centralized oversight, IT staff—or a designated review group—must vet every technology acquisition. In decentralized models, by contrast, departments often make independent purchasing decisions without thorough review. This creates a real dilemma for local governments trying to ensure AI systems meet ethical standards while accommodating varied departmental needs.
As our research makes clear, understanding procurement’s crucial role in AI decision-making is essential for addressing accountability in the public sector. By pinpointing who is responsible for evaluating these technologies, and by examining how purchasing practices can adapt, local governments can reshape what responsible AI looks like in practice. The path forward is complex, but it offers a valuable opportunity for collaboration among stakeholders.