The race in artificial intelligence has many lanes, with chatbots being among the most recognizable. However, experts have found these familiar AI models often have significant flaws, including biases. One pervasive issue is racial bias, where AI systems struggle to address questions related to communities of color accurately. Enter ChatBlackGPT, a culturally informed chatbot designed to challenge and address AI bias.
Launched on June 19, widely known as Juneteenth, a federal holiday commemorating the emancipation of enslaved people in the U.S., ChatBlackGPT aims to create a more inclusive AI experience.
Nearly three months after launching the culturally inclusive chatbot, CEO and founder Erin Reddick, 30, spoke with Mashable about the platform and her journey in the tech world.
Mashable: What is ChatBlackGPT?
Erin Reddick: ChatBlackGPT is a culturally informed AI. It’s [a] generative AI that’s rooted in the acknowledgment of social, economic, and systemic racism — and the diaspora of Black and brown people in America. It’s an opportunity for people of color to have relatable conversations with AI.
What was your experience with AI and tech prior to founding ChatBlackGPT?
[It’s been] altogether about five years. Between Amazon, Microsoft, and AWS…it’s been an amazing journey. It’s always been inspiring to work around the most brilliant people out there, but it’s also true that it can be quite isolating. You need to have thick skin and know who you are to succeed in the space. But it’s built me into the person I am today, someone who can own their own narrative in the technology space and feel confident doing so.
Why did you decide to make ChatBlackGPT, and how did you go about creating it?
I always like to say one word: racism. Honestly. Because if it didn’t exist, there wouldn’t be a need for what I do, and it wouldn’t be celebrated and appreciated the way that it is.
For example, if you ask prominent leaders today how they feel about the technology, a lot of them say that it’s low risk. But it’s not low risk, in reality, for Black and brown people because the biases that persist are affecting our daily lives, inside and outside of the work environment. It is a high-risk technology for us, and I want to build something so that we don’t get stuck waiting for a policy to force companies to mitigate risk and have to suffer in the meantime.
Can you unpack the role of bias in AI?
Bias is everywhere, and it’s impossible to be fully rid of it. If you ask a typical chatbot, and I have, “What is a Black job?” it answers that it’s under-the-table work, drug trafficking, untaxed, with no benefits. But when you ask, “What is a white job?” it says usually educated, insured, legal, tax-paying. While that might not outwardly appear to be a bias, what it does is reaffirm the bias of a lot of people who hold racist views or superiority complexes; it confirms their bias against Black people. It is offensive because we are Black people who work every job there is, not just the jobs that AI describes.
What is “responsible AI”?
I would describe it as having a clean, diverse data set: something that helps mitigate risk, functions responsibly without hurting people, is fair, and doesn’t create the ability to be destructive.
It is developed with red teaming and bad actors in mind, so that people who want to take advantage of it in a negative way aren’t able to. Building it responsibly means building it with an inclusive design. Using it responsibly means, first of all, not taking it at face value and trusting it right away. Also, make sure you check what you’re looking at, don’t put your personal information into these systems just to see how powerful they are, and ask, “How many shortcomings can it have?”
How has ChatBlackGPT performed since its launch on Juneteenth?
It’s done great. We have our beta in OpenAI’s GPT store, and we’ve gone from 1,000 users to 5,000 since the launch. We have a 4.6 rating, which I think is great for having no custom proprietary data within it. Our standalone app is also doing well: about 2,000 people have access to the tool, and we are currently developing it with proprietary information, working with historians and other experts to make sure that it reflects our society today correctly and in a useful way.
What are the benefits of having a personalized AI experience, or at least one more specific to one’s culture?
When people learn about a culture, it’s really important that it’s not diluted, told from the perspective of a fragile mindset, or stripped of the realities of true history in any culture. It’s so important to respect that history lives on in how our systems play out, in how people are treated in society, and [shows] why things are the way that they are. Things like critical race theory exist for a reason.
So you can imagine that having an AI that acknowledges the state of today in relation to culture and history is really important. A lot of the feedback I get is that, in the past, people had to prompt AI over and over and over again to get a response that actually fit what they were looking for, even if they were just generating content for diverse audiences. Diversity doesn’t just mean LGBTQ or deaf [people], or something like that. It also means that when I’m looking to create advertising for this hair product, it understands porosities and curl patterns. Having all of that relevant [information] automatically built into an AI creates psychological safety within the technology.
How is ChatBlackGPT different from other culturally sensitive or aware AI products?
Firstly, there aren’t very many out there. But what makes us different is that we’re not just focused on the history being told. We’re also capturing current-day, modern Black history so that we aren’t leaving it up to God knows who to write a book and rewrite history. Instead, we’re looking to solidify history now, with our voices, to influence policy and risk mitigation and to push back on the degradation of Black and brown people in AI.
We also want to ensure that it represents the community [with a] true, inclusive design. We don’t just have a bunch of people working in the background that you never meet and never see. We actually invite people to contribute, and they can see themselves reflected in what we’re doing.
Why did you choose to name your product ChatBlackGPT?
It’s like chatting with a group of Black people instead of the Eurocentric, typical GPT that you run into most commonly. It’s obvious what it is, and nobody gets confused.
When did the idea for ChatBlackGPT come about, and how has it evolved since launch?
The idea came about after I got laid off; I was looking to take charge of my relationship with technology. I really wanted not to let whether or not I had a job at a big fancy tech company determine whether or not I was a Black woman in tech. So, I reclaimed my identity by studying AI and placing myself around some of the best people who are experts in it.
But before that, I had just done my regular research and noticed a lot of articles popping up about AI erasing Black history. Then I delved into the fact that AI has been in a lot of products, producing and creating a lot of harm for the Black community for a long time, and that was so saddening for me. And the third part that got me into it was Black people realizing how bad it’s been and how bad it is. So I wrote a really strong algorithm that reflects our community and is able to produce answers with the relevant context for us.
Why did you think it was important to focus solely on the Black community versus, more broadly, communities of color?
Because I’m a subject matter expert. That’s the only lived experience that I can personally validate, and in order to test a product, I think you need experts dedicated to doing that. So for me, that is what I felt most confident I could make the most impact doing. We are expanding to other cultures and looking to involve different communities and culture consensus committees to build that representation out for as many cultures as possible.
How do you train this AI for ChatBlackGPT?
You can’t claim to represent a community you don’t talk to.
Do you have any partnerships for ChatBlackGPT?
We partner with anybody willing to learn more about why this work is important. For example, ElevenLabs is sponsoring our AI voiceover content. I am a responsible AI lecturer at the University of Washington, as well as at Emerson.
Why is it important to partner with HBCUs? And what do those partnerships consist of?
I’m currently scheduled to speak at the White House National HBCU conference. And we’re looking to get interns, hopefully sponsored, to come in and help us develop this product. I would love for them to produce a white paper just on the experience; the ways they can use that on their resume are endless. They can use it to get jobs or work in AI. But [I want] to give people a chance to contribute to something they can’t easily get their hands on, and to see the effects of their contribution immediately.
What are some of the next steps for ChatBlackGPT?
We’re continuing to have in-person activations where we can listen to the community and make sure that our voices are heard. We’ll continue showing up at different conferences and spreading the word as well as working to make sure the product can stay in the hands of people that need it.
Is there anything else you want to add?
People should try it! The ChatBlackGPT beta is available in OpenAI’s GPT store; just search the GPTs, and our tool is ChatBlackGPT.ai. So you can use both, but the point is that it’s made for us and it’s by us. I want people to use it to their advantage and enjoy it.