Coded Bias was great, thanks for the invite, Vicki.
In the movie, Cathy O’Neil argued for legislation requiring that algorithms that automate decision-making be audited, to make sure they do what they're supposed to do and aren't discriminatory.
"Algorithmic accountability".
I was excited to find out that there's already a federal-level policy requiring this for government departments: the Directive on Automated Decision-Making.
Some highlights:
- "The objective of this Directive is to ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law."
- Before an automated decision-making system can be used, an Algorithmic Impact Assessment has to be completed
- Affected individuals must be given a meaningful explanation of how and why a decision was made
- The system must allow for human intervention, when appropriate
Brilliant stuff. I'm beyond impressed to see the public sector ahead of the curve on this.
As for wider society (private industry, the RCMP, police), the situation is similar to what's described in the movie. I'm having trouble finding out exactly where the discussion stands. Most of what I find is about police use of Clearview AI and ignores other uses of the technology.
Do you know if there's anyone in MB who's working on this?
Some reports I found:
Some civil servants with an interest in this whom I managed to find: