Responsibility

A letter from Morgan

It’s time for some real talk.

In this work we are estimating the sensitive identity dimensions of people we see and hear in media. These identity dimensions are deeply personal and often impact people’s experiences as they move through the world. We strongly believe that to build products and services that are fair, equitable, and inclusive, Responsibility needs to be foundational to our work, and I want to explain what that means.

First, let’s talk through some examples to ground this in real life. This might be a controversial hot take, but our customers aren’t worried about the amount of screen time white men between the ages of 19 and 55 get. In fact, looking at a number of studies, you might even come to the conclusion that these men are OVERrepresented in media. 

What our customers really want to know is how their media does when it comes to groups that have been historically UNDERrepresented. Often, those are also the people who have been historically marginalized and discriminated against.

To name a few examples: women, LGBTQ+ individuals, people with disabilities, people with darker skin tones, and people with larger body sizes.

Whether we measure representation using people or AI as annotators, an irresponsible approach can produce inaccurate results. Our customers are counting on us to give them accurate insights into representation. If we get things wrong, we could incorrectly tell them that the representation in their content is great! And then change would never happen, and our customers (and society) wouldn’t realize the benefit (financial and societal) of inclusive content.

So, with this context, here are a few of the ways we strive to build our products and business with a responsibility-first approach.

I. We partner with experts and advocacy groups

Our partners have research, resources, and groups of people that together form a representative perspective for the community they work with. These partners co-created the annotation guides with us, helped deliver training to our teams, and will work with us as we build more insights into our offerings in the future. They help us ensure our products are rooted in the context and needs of the community they advocate for. I encourage you to take a look at our partners on our Partners page, visit their sites, and see for yourself all the great resources they have available. 

[Photo: Asian family having breakfast together]

II. We strive to build inclusive scales

I don’t know how else to explain this than with an example. When I look around the industry, I see a lot of companies that have built AI models to assign a gender to a picture or video of a person. All of these products return results that are either “man” or “woman” (or “male” or “female”). There are a lot of problems with this, but the two biggest are:

1. These gender models are binary, but gender is not.

And, back to a point made above, the people who fall outside the man / woman gender identities are the ones who have historically been underrepresented and marginalized. If you don’t have an option for something other than “man” or “woman”, you won’t be able to measure and report on it.

2. You can’t tell someone’s gender by looking at them.

Gender is an internally held identity dimension. What we can do, as an observer, is say how we are perceiving someone to be expressing a gender identity.

What this means for us is that we won’t report on the representation of men vs women (gender identities), but instead the representation across gender expressions (feminine, masculine, and gender nonconforming). These are usually highly correlated with gender identities, but are a more accurate and fair way to measure. 
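To make that concrete, here’s a minimal sketch of what an annotation scale like this could look like in code. The names (GenderExpression, PersonAnnotation) are illustrative placeholders rather than our actual schema; the point is that the scale captures perceived expression, not identity, and includes more than two options.

```python
from dataclasses import dataclass
from enum import Enum


class GenderExpression(Enum):
    """What an observer can perceive: expression, not internally held identity."""
    FEMININE = "feminine"
    MASCULINE = "masculine"
    GENDER_NONCONFORMING = "gender nonconforming"


@dataclass
class PersonAnnotation:
    """One annotator's observation of one person on screen."""
    person_id: str
    gender_expression: GenderExpression
```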

[Photo: Girl with pink hair kissing another girl on the cheek during Pride]
[Photo: Man with white braids posing stylishly]

III. We are thoughtful about what can and cannot be estimated via people or technology

I want to illustrate this with another example. If a person can’t look at a picture of someone and tell what that person’s sexual orientation is, an AI model can’t do it either. In cases like this, we will not ask a human or AI annotator to look at a picture of a single person and tell us whether that person is gay or straight or bisexual or any other sexual orientation.

What we can do instead is notice when a person is shown in the context of a romantic, intimate, or partnered interaction. We can observe the sexual orientation of that interaction and make an annotation around that.

And even though we’re being thoughtful about how to make this type of annotation responsibly, we won’t always get it right! Are we seeing two female friends playing with one of their babies or a lesbian couple playing with their baby? This type of annotation is hard for both people and AI models. But even if we get some fraction of them wrong, they’re wrong because they’re hard, not because we’re using harmful stereotypes to make annotations.
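Purely as an illustrative sketch (these class and field names are hypothetical, not our production schema), the difference is whether the annotation is attached to a single person or to an observed interaction:

```python
from dataclasses import dataclass
from enum import Enum


class InteractionType(Enum):
    """Relationship contexts we can actually observe on screen."""
    ROMANTIC = "romantic"
    INTIMATE = "intimate"
    PARTNERED = "partnered"


@dataclass
class InteractionAnnotation:
    """The annotation describes the observed interaction,
    never the assumed sexual orientation of a single person."""
    interaction_type: InteractionType
    person_ids: list[str]  # everyone shown in the interaction
    description: str       # e.g. "same-gender couple" -- describes the interaction
    confidence: float      # hard cases (friends vs. couple) stay uncertain
```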

[Photo: Two women hugging each other in the pool]
[Photo: Older and younger man smiling]

IV. We partner with experts, advocacy groups, and vendors that share our values

We prioritize partners and vendors that have a demonstrated commitment to equity. Here are a few examples:

  • It goes without saying that our partners share in our mission to accelerate the shift to more inclusive and inspiring representation of all people in all media - again, please check them out on our Partners page
  • Our early design partner, ustwo, is a B Corp. They also helped us find our web designer via their partnership with and promotion of Where Are The Black Designers?
  • Our annotation vendor, CloudFactory, leads with their dedication to the professional and community development of their teams (ours is in Kenya), and pays 2-3x the hourly wage of other firms in the region
  • Even our primary lawyers, Venturous Counsel, are a specialist firm that puts DEI at the core of their work

We look for like-minded vendors and partners at every step.

[Photo: Family smiling, with mother and son who have Down syndrome hugging]

V. We are committed to building AI and ML models responsibly

The points above are really step 0 of building AI models responsibly. The next set of steps involves:

1. Building representative training datasets

2. Publishing data cards for the datasets

3. Choosing AI algorithms that promote equity and fairness

4. Measuring the performance of trained models

5. Publishing model cards for the models

6. Continuing to test and tweak performance over time

7. Being transparent about all of the above

As we begin incorporating our own AI and ML models, expect to see more from us on each of the steps above.
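As a rough illustration of what step 4 can look like in practice (the function and field names here are made up for the example, not a description of our actual pipeline), the key habit is breaking performance out by group instead of reporting one overall number:

```python
from collections import defaultdict


def per_group_accuracy(records):
    """Accuracy broken out by group, so underperformance on any one
    group can't hide inside a single overall number.

    Each record is a dict with 'group', 'label', and 'prediction'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        if r["prediction"] == r["label"]:
            correct[r["group"]] += 1
    return {group: correct[group] / total[group] for group in total}


# Example: a model that looks fine overall can still fail a smaller group.
records = [
    {"group": "masculine", "label": 1, "prediction": 1},
    {"group": "masculine", "label": 0, "prediction": 0},
    {"group": "feminine", "label": 1, "prediction": 1},
    {"group": "gender nonconforming", "label": 1, "prediction": 0},
]
print(per_group_accuracy(records))
# {'masculine': 1.0, 'feminine': 1.0, 'gender nonconforming': 0.0}
```

An overall accuracy of 75% here would look acceptable, while the per-group breakdown shows exactly who the model is failing.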

VI. We’re learning and growing every day and want to hear from you

We learn from our partners, we learn from our customers, we learn from so many different sources every day. When we make mistakes (and we do, and we will, we all do!), we seek out feedback and look for opportunities to grow. If you have any feedback on the work we do or the approaches we share - we would love to hear it. Please reach out!

[Photo: Albino man posing]
[Photo: Black woman with hijab smiling and posing]