The bill would require enterprises using any form of automated systems to make “‘critical decisions’ that have significant effects on people’s lives relating to education, employment, financial planning, essential utilities, housing or legal services” to assess those systems and report on them to the Federal Trade Commission.
While Protocol reports that support for the bill is unclear at this time, its passage would impose significant overhead on the use of artificial intelligence (AI) and other automation technologies across various elements of the customer and employee experience.
Notably, the focus of this legislation is not on the companies providing the technologies, but rather the enterprises employing them.
From Protocol’s article:
Suppliers of algorithmic tools also would have to conduct assessments if they expect them to be used for a critical decision. However, Winters said it makes the most sense to focus on users rather than vendors.
“The bill focuses on the impact of the algorithmic systems, and the impact depends on the context in which it is used,” he said. Vendors selling algorithmic systems might only assess those tools according to perfect use cases rather than in relation to how they are used in more realistic circumstances, he said.
The new version of the bill is 50 pages long, with far more detail than its 15-page predecessor. One key distinction involves the language used to determine whether technologies are covered by the legislation. While the previous version would have required assessments of “high-risk” systems, the updated bill requires companies to evaluate the impact of algorithmic tech used in making “critical decisions.”
Image credit: USDA, public domain.