Intel's 20-year-old AI ethics prodigy on the future of artificial intelligence

Ria Cheruvu represents the next generation of AI innovators — and she's long been doing the work.
By Chase DiBenedetto

Ria Cheruvu has been ahead of the curve for most of her life. After graduating from her Arizona high school at just 11, the young prodigy became one of the youngest people ever to graduate from Harvard. Her collegiate record is a marvel to many.

After a period studying neurobiology, and while still completing her first computer science degree, Cheruvu was hired onto Intel's ethics team, preceding the AI boom that would soon hit mass markets and coming years before "AI" became a household term. At the time of her hiring, Cheruvu was just 14 years old. In the years since joining the tech giant and graduating from the Ivy League, she's become a go-to voice on responsible AI development, bolstering her resume with multiple AI patents, a master's degree in data science from her alma mater after a neuroscience internship at Yale, and multiple teaching credits for digital courses on AI ethics. She's working on a PhD, as well, because… why not?

Today, as one of Intel's AI architects and "evangelists" (yes, that's the real word), the 20-year-old is at the forefront of one of the world's hottest questions: How do we move forward with this technology, and how can it be done in a way that ensures real people remain at its core?

Her presence is a rare thing in an industry now steamrolled by capital investors, commercial interests, and self-proclaimed tech "disruptors." But her age is more of a benefit than a hindrance, as the future of AI will soon be placed in the hands of the next generation of technologists and users — her peers — and many of them are already embracing the complex integration of generative AI in their daily lives. 

Cheruvu, one of the few young voices with a seat at the table as the world reckons with accelerating change, spoke to Mashable about her now-established career in the realm of "AI for Good."

Mashable: Your accomplishments run the gamut of scientific fields: computer science, data science, neuroscience. Why did you turn your attention to AI, and to Intel specifically?

Cheruvu: After I graduated with my bachelor's in computer science, I was looking for the next step. It was a turning point: Do I go into neuroscience, or do I get into something that's purely computer and data science related? I had a brief interest in AI.

Both of my parents are software engineers by training and have their master's degrees in computer applications and technology. At the time, my dad was working at Intel Corporation. I had actually been on a number of field trips to our local campus in high school. I applied, and I interviewed with three different teams in different areas. One was pure math and AI, another was a little bit on the neuroscience side, and the last was deep learning and hardware. Eventually, I picked that third team and got accepted. It evolved from there into a six-year journey of different roles at Intel.

The industry has had so much turnover, especially in the last couple of years. What has kept you there?

I've been in so many different roles in different areas. Some of them have been on the pure business or technology side, others on the pure research side, and some bridging the two. I was a team lead, and now I am an evangelist and public speaker and architect. I'm gearing back toward more technical architect roles. So, lots of jumping around the map. But my network and the community have stayed true, which is what encourages me to continue working at Intel, and to continue working in the AI industry, too.

I find it really rare to hear of a person as young as yourself being so integrated into AI's ethical development, not just its use. Why this and not a different aspect?

I've been looking at ethical AI for about two to three years now, professionally and personally. From the technical angle, there's a lot to be done: technical tooling, analysis, metrics, quality assurance, all of that fun stuff. On the societal side, an incredible amount of work needs to be done on privacy, consent, bias, and algorithmic discrimination. It's been a whirlwind, learning about all of these topics, trying to understand which are practical versus which just seem to be talked about a lot, and doing honest reevaluations.

My mom did her PhD in metaphysics and philosophy, so we have very deep conversations around AI and humanity. What exactly is our idea of consciousness? How far can AI go in terms of being able to mimic humans? What is our framework for helping each other?

And have these reflections been fruitful? What does "AI for Good" actually look like, then? Right now, the phrase "human-centered" is very buzzy, but what does that mean for the future?

Folks who are exposed to digital technology are encountering AI at a faster and faster rate. The reason I gravitate toward "human-centered" frameworks is to focus on the fact that the infrastructure, the technology, should empower users.

According to regulations, and the communities that we're building around them, you should have the right to control the data that you generate. On the technical side, we should be empowering developers and creators to test for bias and to remove data from models. We're not training models with data that we don't have consent for. When you're a person in AI, it's assumed you're advocating for AI development. But there are a lot of areas, personally, where I feel that more AI development doesn't make sense. Maybe it's something that needs to be more streamlined or in the hands of creators and artists.
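
One minimal form of the bias testing Cheruvu describes is comparing a model's rate of positive predictions across demographic groups, sometimes called a demographic parity check. The sketch below is purely illustrative, not Intel's tooling: the predictions, group labels, and flagging threshold are all hypothetical.

```python
# Illustrative bias check: compare the rate of positive predictions
# (e.g., approvals) across groups. All data and the 0.2 tolerance
# below are hypothetical, chosen only to demonstrate the idea.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive (1) predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model outputs and the group each subject belongs to.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.75, 'B': 0.25}
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # hypothetical tolerance; real audits use richer criteria
    print("Warning: selection rates differ notably across groups.")
```

Real-world audits go well beyond a single number, layering metrics such as equalized odds and calibration and examining the training data itself, but the principle is the same: measure, compare, flag.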

When we see a lot of these technologies, like robots and self-driving vehicles, starting to pop up, how are they empowering user experiences? How are we building trust into these relationships?

There are a couple of leading researchers who are the subject matter experts in this field. I'm thinking of Fei-Fei Li and Yejin Choi. It's been really interesting to see how their research, and the research coming out of their labs and teams, has been connected to bigger advancements or leaps in AI. I have been using that research as a marker to demystify what's coming up next in [the AI industry].

Your title is "evangelist," which is an interesting term to use for scientific development, but essentially you're a public communicator. How do you navigate that role amid the onslaught of AI coverage?

There's a lot of pressure, and a lot of hype, placed on certain topics. It takes a pretty strong will and determination to push through that and say what is important for me, for my community, for the industry, right now; to focus on what is really driving the practical impact I want to communicate and share with folks, and on things I can inspire them to be optimistic about. I want to be honest about risks and challenges, too. Instead of buttering up the truth, be straightforward about it.

As an evangelist, someone who's passionate about public speaking just as much as coding, what does that balance look like?

There has been an emergence, or a boom, of AI experts and evangelists in this space. Not to say anything direct about credentials, but everybody has an opinion about AI. I personally have been listening to people who have been in the industry for longer. That wisdom that's getting passed down is something I like to tap into, as opposed to, maybe, some of the newer folks who are forming quick assumptions.

How do you envision your peers getting involved in these conversations?

I think that there is an increasing need for younger voices and opportunities for younger generations to be able to step up and to start contributing to these technologies. Through their usage of it, [the technologies are] getting mastered pretty quickly. 

And it's important to bring a fresh perspective to [AI design]. Not only consuming the technology, but contributing to its development, being able to shape it in ways that are different. Rather than seeing it as a kind of "disruptor" or a "bubble" that needs to be explored and pushed to the limit, we can bring it back to the applications where it can be most useful. 

There's a lot of opportunities to contribute. Not a lot of them are as recognized as other applications, in terms of priority, coverage in the media, or public interest, but they definitely lead to a much more meaningful impact. There's always bigger projects, and bigger themes — like large language models — but the smaller applications really make a difference, too.

Sorry to use a cliché, but it feels like AI is yet another "global inheritance" we'll be handing down to younger generations, much like we've done with the current climate crisis.

I was reading that quote recently about leaving the world behind a little bit better than you found it. In a generational context, we need to continue having conversations about this, especially with the AI algorithms that are closest to us, whether it's social media or apps that write content for you. You're exposed to them on a day-to-day basis.

In my opinion, many people are uncomfortable with the widespread pressure to use AI in our daily lives, when we don't fully understand what's at stake. They want things to slow down.

I feel like folks who are working on AI and machine learning know that very well, but, for some reason, it doesn't proliferate outside of that bubble. Folks who are working in AI know to be very, very cautious when they see a tool. Cautious in the sense of, "I'm not going to adopt it, or I'm not going to use it, unless I think it's useful." But when it comes to [AI stakeholders] externally, I think it's just a kind of hype. Ironically, that's not what you see in the inner circle. It just gets pushed on us. 

What do current stakeholders or developers owe to the next generation of technologists and users, including yourself?

Human labor disruption is a really big topic, and I'm thinking about talent and folks who want to enter the AI space. When we talk about AI and these technologies, it's always: fast, rapid innovation, moving forward. These kinds of words and other terminology keep getting added to a pile that makes it even more intimidating for folks to understand and truly grasp [AI]. "AI" itself is one of those words. The field started off with "deep learning" and "machine learning," and it's been a gradual transition; I've seen my own job title change from deep learning engineer to AI architect, so I'm part of that, too. I think there might be an opportunity to take AI as a buzzword and break it down, while still keeping the word and the general feeling around it.

But there's only so much responsibility that a user can take on. Providers and developers and creators of infrastructure also need to be able to shoulder that responsibility. Of course, regulations come in to help protect the rights of the individuals involved to a certain extent. 

A lot of folks may not have the time to sit down and read through the full compendium of what they need to know. I'm valuing content and people who are taking the time to break it down and say, "You've got this. This is something easy. This is how you contribute." It doesn't need to be a fearful topic. It's something you can voice your concerns on.

I've had so many conversations over the past few years with brilliant people on inclusive AI, democratizing AI, AI literacy. There are a lot of different ways to enable that empowerment. For example, there have been a lot of really great efforts on digital readiness programs that I'm honored to have been a part of, going to community colleges or creating AI curricula for free. Five million or so folks have been trained as part of Intel's digital readiness programs. We need more accessibility, more tutorials, more content, more one-on-one interaction, saying, "You know, this is easier than you think it is. You can be a professional in this space. It's not hard to get started."

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.

