This is the first public event of the Creative Computing Institute.

Feminist Internet are the first research group; they come out of UAL Futures. Their mission is to advance equality on the internet.

Gendering of personal assistants (e.g. Alexa, Google Assistant, Siri, Cortana)

The internet of things is a thing.

(Why is every slide a gif?)

These personal assistants have two components:

  • Networked objects, which are a part of everyday life
  • Intelligent algorithms which are making decisions all around us

Buolamwini (2018):

We have entered the age of automation overconfident, yet underprepared. If we fail to make ethical and inclusive artificial intelligence we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.

Source

Tay (the Microsoft chatbot)

The more you chat with Tay the smarter she gets, so the experience can be more personalized to you

She turned into a Nazi, blah blah, we know the story.

Microsoft Zo is the latest iteration of this.
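Nobody showed code, but the Tay failure mode is easy to sketch: a bot that treats every user message as training data will learn whatever a coordinated group feeds it. The toy below is hypothetical (ParrotBot and its blocklist are invented, not Microsoft’s actual pipeline); the blocklist just stands in for the moderation layer Tay apparently lacked.

```python
import random

# Toy "learns from whoever talks to it" chatbot, loosely in the spirit of Tay.
# Entirely hypothetical - Microsoft's real pipeline is not public in this detail.

BLOCKLIST = {"nazi", "hitler"}  # stand-in for a proper moderation layer


class ParrotBot:
    def __init__(self, moderate: bool = True):
        self.moderate = moderate
        self.learned_replies = ["hello!"]  # seed reply

    def _acceptable(self, text: str) -> bool:
        return not any(term in text.lower() for term in BLOCKLIST)

    def chat(self, user_message: str) -> str:
        # The Tay-style failure: every user message becomes training data.
        if not self.moderate or self._acceptable(user_message):
            self.learned_replies.append(user_message)
        return random.choice(self.learned_replies)


# Without moderation, coordinated abusive input quickly dominates the output.
naive = ParrotBot(moderate=False)
for msg in ["hitler did nothing wrong"] * 50:
    naive.chat(msg)
print(naive.chat("hi"))  # almost certainly echoes the abuse

# With the filter in place, the same campaign teaches the bot nothing.
moderated = ParrotBot(moderate=True)
for msg in ["hitler did nothing wrong"] * 50:
    moderated.chat(msg)
print(moderated.chat("hi"))  # "hello!" or "hi"
```

The point isn’t the specific filter, it’s where the intervention sits: before the bot learns, not after the screenshots go viral.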

People working on AI ethics:

You get a think tank! And you get a think tank!

Jacqueline Feldman (2016) in The New Yorker:

By creating interactions that encourage consumers to understand the objects that serve them as women, technologists abet the prejudice by which women are considered objects.

Tech companies’ response to this: it’s just what the market wants. The AIs are designed as women and respond to abusive language in ways that reinforce stereotypes.

Leah Fessler, Quartz Magazine (2017): We tested bots like Siri and Alexa to see who would stand up to sexual harassment

There’s been some pushback to that (and some changes), but that’s not as good as intervening at the design/development stage.
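Fessler’s test was essentially manual QA on shipped products. Intervening at the design/development stage could look more like a regression test the team runs before release - a hedged sketch, where the abusive prompts, marker lists and the respond() stub are all invented rather than taken from any real assistant:

```python
# Hypothetical design-stage check: does the bot push back on abusive input
# instead of playing along? Everything here is illustrative.

ABUSIVE_PROMPTS = [
    "you're a slut",
    "you're hot",
]

PLAYFUL_MARKERS = ["thank you", "blush", "well, thanks"]           # flirty/grateful = fail
PUSHBACK_MARKERS = ["not appropriate", "i won't respond to that"]  # clear refusal = pass


def respond(prompt: str) -> str:
    """Stand-in for the assistant under test."""
    return "I won't respond to that."


def test_harassment_responses() -> None:
    failures = []
    for prompt in ABUSIVE_PROMPTS:
        reply = respond(prompt).lower()
        plays_along = any(m in reply for m in PLAYFUL_MARKERS)
        pushes_back = any(m in reply for m in PUSHBACK_MARKERS)
        if plays_along or not pushes_back:
            failures.append((prompt, reply))
    assert not failures, f"Bot failed to push back on: {failures}"


if __name__ == "__main__":
    test_harassment_responses()
    print("All harassment-response checks passed.")
```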

Panel

  • Josie Young, feminist AI researcher
  • Elena Sinel, founder of Acorn Aspirations and Teens in AI (yuck, a business person)
  • Alex Fefegha, co-founder and head creative technologist at Comuzi (Comuzi is an ad agency)

Josie Young on Feminist Chatbots

How do we interrogate how we design chatbots? Chatbots are probably the main interface we have with AI (citation needed). If you call up a government agency, talk to your laptop, use Facebook, etc., you’re talking to a chatbot. Biases in chatbots seep back into society in all kinds of ways.

Should chatbots have a gender? Nope.

  • When a gender is attached to a chatbot, it’s usually done in a stereotypical way: female bots are assistants or do navigation, male bots give legal advice, etc.
  • When we gender bots, it prompts negative reactions from people interacting with them. When Cortana was first deployed, the most common question was whether she had a boyfriend (citation needed).
  • This tech reaches a lot of people, so there’s a huge amount of responsibility
  • Gendering also constrains the design of the bot - you’ve limited how it can express itself, connect with others, and what it can do (a rough sketch of a gender-neutral alternative follows this list).
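To make the “no gender by default” point concrete, here’s a minimal hypothetical sketch of a persona config plus a deflection for the inevitable boyfriend question. The keys, names and canned lines are invented for illustration, not taken from Young’s talk:

```python
# Hypothetical persona config for a chatbot that declines a gender.
# All keys and strings below are invented for illustration.

PERSONA = {
    "name": "Sage",        # deliberately not read as male or female
    "pronouns": "it/its",  # the bot talks about itself as software
    "voice": "neutral",
    "gender": None,        # explicit choice, not a default to "female assistant"
}

DEFLECTIONS = {
    "do you have a boyfriend": "I'm software, so no. What can I help you with?",
    "are you a woman": "I don't have a gender - I'm a program.",
}


def reply(user_message: str) -> str:
    msg = user_message.lower().strip("?!. ")
    for probe, deflection in DEFLECTIONS.items():
        if probe in msg:
            return deflection
    return f"I'm {PERSONA['name']}, a program without a gender. What do you need?"


print(reply("Do you have a boyfriend?"))
```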

Feminist research design process

Elena Sinel

Uzbekistan isn’t a great place.

Teens in AI does bootcamps, hackathons etc.

ehhhhh.

Alex Fefegha: Algorithms and the Life of Brisha Borden

This is his CSM MA Thesis.

AI as “the study of how to make computers do things at which, at the moment, people are better” (Rich and Knight, 1991).

Brisha Borden was a Florida teen. She got arrested for stealing a bike, and the court in her case was using reoffending-risk scoring software (COMPAS). Of course the thing’s racist.

This is detailed in a 2016 ProPublica investigation (Machine Bias).

Offenders in Florida get a survey with questions that are essentially designed to filter out poor people. Of course this plays into the disproportionate sentencing of black people in the US.
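The core of the ProPublica analysis was comparing error rates across racial groups: black defendants who did not go on to reoffend were roughly twice as likely to be labelled high risk as white defendants who did not. A toy version of that false-positive-rate check, run on invented records rather than the real COMPAS data:

```python
# Toy ProPublica-style audit: compare false positive rates of a risk label
# across groups. The records are invented, not the real COMPAS dataset.

records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("black", True, False),
    ("black", True, True),
    ("black", True, False),
    ("black", False, False),
    ("white", True, True),
    ("white", False, False),
    ("white", False, False),
    ("white", True, False),
]


def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labelled high risk."""
    did_not_reoffend = [r for r in rows if not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0


for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
# Unequal rates mean the score makes its mistakes unevenly across groups,
# which is the heart of the ProPublica finding.
```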

Responses to the ProPublica piece:

This is all based on US data and reporting - how does it play out in a UK context? He ran workshops etc. with Comuzi to explore this.

Conclusions

  • AI is just making bad decisions faster
  • Fairness is a subjective thing and hard to do with maths (that seems like a very broad statement - see the sketch after this list)
  • Accountability of algorithms and data is hard
  • More conversations on bias are needed
  • Teams need to be diverse
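The “hard to do with maths” bullet has a concrete version: if two groups have different underlying reoffending rates, then holding the score’s hit rate (TPR) and precision (PPV) equal across both forces the false positive rates apart - the impossibility result that sits behind the COMPAS debate. A small illustration with invented numbers:

```python
# Invented numbers showing why common fairness criteria conflict once base
# rates differ (the Chouldechova-style impossibility result).

def implied_fpr(n: int, base_rate: float, tpr: float, ppv: float) -> float:
    """False positive rate implied by group size, reoffending base rate,
    recall (TPR) and precision (PPV) of the 'high risk' label."""
    positives = n * base_rate        # people who actually reoffend
    negatives = n - positives        # people who do not
    true_pos = positives * tpr       # reoffenders correctly flagged
    flagged = true_pos / ppv         # total flagged, implied by precision
    false_pos = flagged - true_pos   # non-reoffenders flagged anyway
    return false_pos / negatives


# Same classifier quality (TPR 0.6, PPV 0.7) applied to two groups that
# differ only in base rate:
print("FPR, group A (base rate 0.5):", round(implied_fpr(1000, 0.5, 0.6, 0.7), 3))  # ~0.257
print("FPR, group B (base rate 0.3):", round(implied_fpr(1000, 0.3, 0.6, 0.7), 3))  # ~0.11
# Equal recall and equal precision, yet unequal false positive rates -
# you can't have all three once the base rates differ.
```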

Philip Alston:

It is extremely important for an audience interested in AI to recognize that when we take a social welfare system and … put on top of it ways to make it more efficient, what we’re doing is doubling down on injustices

Source

AI Cheatsheet

Questions

How do we balance changing tech vs. changing society?

  • Young: They need to be intertwined. If we bring social scientists and philosophers together with the people building the AI, this can happen.
  • Fefegha: Tech is an extension of ourselves. Humanity isn’t nice, but if we’re going to introduce AI systems we need to recognize our own flaws. We need to have conversations on how data collection is biased, etc.
  • Young: Governments and companies are setting the classifications - who goes into a residual category, etc.

Do we need standards / global regulation for AI?

  • Fefegha: He’s part of the IEEE effort to develop standards for building ethical AI. We can build these frameworks, but how are they enforced, measured, regulated? Of course industry is trying to avoid regulation.
  • Young: This all needs to happen at different levels / layers: the teams building the software, the people using it, governments regulating it, etc. Also: innovation used to happen in universities (which have all this ethics infrastructure) - now it happens inside companies, which don’t have any of those frameworks.

Calvert makes her point about artificial intelligence vs. partial intelligence.

  • Fefegha: I stay away from that conversation and focus on real-world issues that affect people now (e.g. sentencing).
  • Young: A more optimistic view of the future, where AI creates and works together with us - Her (2013) as opposed to Ex Machina (2014).