Learning the Machine: An Interview with an AI Engineering Intern
“I really like the field of machine learning...because I’m so scared of it.”

Tessa Lili Augsberger is a 22-year-old writer from Los Angeles studying history at Dartmouth College and editor of The New Critic.
Rhea Madhogarhia is a 21-year-old computer science and cognitive science student at the University of Chicago.
Discussions about AI seemed to follow me around at every intergenerational gathering this summer. Having realized my perspective on the matter lacked technical fluency, I consulted the expertise of my friend Rhea Madhogarhia, who interned as a machine learning engineer at Reddit in New York this summer. I enlisted Rhea to unveil the many names and faces of AI, its powers of seduction, the shelter digital literacy can provide against the information storm, and the meaning of “intelligence” in the first place. Read on for our conversation, which has been edited for length and clarity.
TESSA AUGSBERGER Tell me a little bit about your job.
RHEA MADHOGARHIA I [was] a machine learning engineering intern at Reddit…[working on] the search recommendation and relevance team. So when you go to Reddit and you look at that little magnifying glass in the corner and tap it, you’re using our product.
AUGSBERGER Did you have any hesitations coming into the job before you accepted?
MADHOGARHIA Well, I’m not going to lie, I was just really happy to get the job. But I do remember [that] a few weeks before I started, I was crazily listening to so many AI podcasts, and reading a bunch of opinion pieces on AI. I was looking at my notes from previous classes that I’d taken and trying to get my morality straight. Because even though I am somebody who wants to study machine learning, I am very cautious of its impact. I think being a cognitive science major definitely exposed me, initially, to the study of AI and LLMs and machine learning and [the need to] look at this bias, or look at its impact on human behavior. What does [AI] tell us about human behavior, rather than it just being computation? So I definitely came in with that mindset, and I was very scared. I was like, “This is my first corporate job. Am I going to be making AI for, like, hoorah capitalism? They just went public as a company. What am I getting myself into? [Am I] going to be forced to build something that I personally find evil?”
And I was very pleasantly surprised. [The team said], “Reddit is a human space online, it has content that is provided by humans, and people come here for real opinions, not for AI generated content.” So, yes, to answer your question, TLDR: yeah, I thought about it a lot, but throughout my time there I think that those thoughts have been a little bit eased.
AUGSBERGER How do you see the role of AI within the corporate tech industry and how did you interpret that role coming in as a machine learning engineer?
MADHOGARHIA Computers are really good at writing code, and so that’s one thing when it comes to efficiency that I think a lot of tech companies are taking advantage of. I went to this conference two years ago called The Human Connection and AI Summit. It was run by this woman named Michelle Culver, who started The Rithm Project, which is all about, “How do you navigate AI and how does human connection erode or increase in the face of this new technology?” I remember the one thing I took away from that was this one diagram she made. It was a grid, a tool-to-replacement [diagram]. But I think a lot of places don’t call out [that] AI should be a tool. It should help you reach the goal that you set yourself — not set the goal itself. I do this in my research for the lab I work in at school, too. I use natural language processing techniques to analyze data, and I feel good about that, because there’s always a human eye on it, and at the end of the day, the analysis is human. But if [AI] can help me look through thousands of rows of data and turn that into numbers, and then turn it into a result, [which] I can [then] analyze and know that I did it responsibly, then I think it’s good progress in tech.
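The “human eye on it” workflow she describes can be sketched in miniature. This is an illustrative toy, not her lab’s actual pipeline: the data, categories, and keyword rules are all hypothetical, standing in for whatever NLP model a researcher would really use, with ambiguous rows deliberately routed to a human reviewer.

```python
import re

# Hypothetical free-text survey rows (a real dataset would have thousands).
rows = [
    "The new schedule made my commute so much easier",
    "Honestly this change ruined my whole week",
    "No strong feelings either way",
]

# Crude keyword rules stand in for a real NLP model.
POSITIVE = re.compile(r"\b(easier|great|love|better)\b", re.I)
NEGATIVE = re.compile(r"\b(ruined|worse|hate|awful)\b", re.I)

def pre_label(text):
    """Pre-label a row, deferring to a person when the rules don't apply."""
    if POSITIVE.search(text):
        return "positive"
    if NEGATIVE.search(text):
        return "negative"
    return "needs human review"  # the human eye stays in the loop

labels = [pre_label(r) for r in rows]
print(labels)  # ['positive', 'negative', 'needs human review']
```

The point of the sketch is the last label: the tool turns text into numbers at scale, but the judgment calls, and the final analysis, remain human.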
AUGSBERGER What is the difference between LLMs, AI, and machine learning? I think a lot of people tend to group those together, including myself.
MADHOGARHIA I guess the easiest one to start with is an LLM, which is a large language model. An LLM’s job is to parse whatever the user has given to it and give out a response through natural language: like English language or human language. So ChatGPT is an LLM, as are Claude and Gemini. All of those [products] are considered AI tools, and they all are built through machine learning techniques. So an LLM is machine learning and an LLM is an AI product — machine learning and AI are somewhat interchangeable.
[Regarding] the words “artificial intelligence,” there’s an argument there that [in calling it that] you’re claiming that the behaviors this program is expressing show that it’s intelligent. I use [the term] machine learning only. I don’t know why. I just have had this big question in my head of, “What does intelligence even mean?” I don’t ever say that I work on AI. I say that I’m a machine learning engineer.
Intelligence is a human-created concept, too. So, how is it even possible for us to fathom what a computer intelligence is without it being a human type of intelligence? That’s a question that I’ve thought about a lot ever since I took this class at school called “Philosophy of Mind,” where we read Thomas Nagel’s essay “What Is It Like to Be a Bat?” I think the reason cognitive science and psychology and neuroscience are so intertwined with this field of AI is because “machine learning” and the term “artificial intelligence” are very psychological things, very human things. We essentially entangled [AI] with the human brain at its creation, which I find very confusing, and something that is very hard to wrap my head around, but I think it demystifies it all and makes it easier to think about when I’m like, “Rhea, these are just computational models, and people have been using computational modeling for research forever.” When people think of AI, they often think of large language models, because it’s the thing that is closest to exhibiting human behavior, because it’s done through language. And so it’s easiest to compare it to a human intelligence because it does things that seem like reasoning and understanding and problem solving, which up until now, were very human brain things.
AUGSBERGER What do you see as your professional life in the next few years and your relationship with machine learning?
MADHOGARHIA I really like the field of machine learning and AI because I’m so scared of it. You know, I have a friend that probably considers himself an existential risk person. I remember having to explain to him [how] anyone can have my job. Like, there’s a million computer science students out there that probably want to work at Reddit. So I’m very grateful to be here, but I personally want to be a machine learning engineer because I don’t know that the other students that are studying this really care about these decisions, like the humanistic part of it, or how it’s impacting human behavior. Because there’s a lot of people out there that are just for technology for technology’s sake. I’m not one of those people. I really enjoy research, and I would always like research to be a part of my job. I’m really glad that my engineering job at Reddit has felt like research as well, but honestly, in the future, I’ve been thinking about going into AI safety or governance or alignment. I don’t have a public policy background, but I do think that engineers need to be more vocal in that space.
I think engineers need to be more present because we’re the ones actually building these systems. We know how it works. We know how these decisions get made. I’m not sure that being in AI governance and alignment right now is gaining traction at all. I don’t think it’s a good place to be if you want to make an impact in AI because AI is growing too fast for legislation, you know.
And so if you’re going to be having an impactful change on technology, you need to be the one making it and making it responsibly, rather than trying to dampen the creation of it. Because I’m sorry, but I’m very pessimistic about the idea that legislation is going to stop creation. It’s a really negative stance to take. I did a policy competition last year with my friends, and it was all about AI legislation. The goal of the competition was to make a bipartisan bill that could pass in the House, and we were like, “How do we make it bipartisan?” And we had to resort to trying to only legislate on sexually explicit content created by AI systems, which both parties agree is bad. Nobody’s gonna support online sexual exploitation. So, when we were doing that, we were like, “Okay, how do you even prevent those kinds of things?” You can maybe put a penalty on people who don’t have sexually explicit content checks in the output of their LLM stage or something, and maybe that solves the problem at the top level. But technology is open for all. A big reason people love tech is that it’s open source, it’s accessible. It levels the playing field. Things like GitHub and Hugging Face, which are places online where you can look at everyone’s code and see how they built it and build it for yourself, those are amazing communities and spaces. And — it sounds bad, even coming out of my mouth — to restrict that, in a way, is to, one, restrict accessibility to technology that’s already been created, and, two, dampen creativity. And again, those are just things that people don’t want. Technological progress is something that’s always sought after. Even though I personally am not a “technology for technology’s sake,” or “innovation for innovation’s sake,” person, it’s going to be a really tough battle to go against that. I mean, it’s not going to happen here in the US.
Maybe I’m very pessimistic about that, but I do think that in places like Europe, specifically the UK and Germany, [in] their legislation I’ve seen more criticism towards these systems. It seems like they’re focusing on the more humanistic aspect of this issue, which is not about dampening technology. It’s about conserving something human about how we live our lives.
AUGSBERGER On the user side, what do you think the uninformed college student needs to know about AI when they’re using it?
MADHOGARHIA I think people need to know that behind the words is math. I get so upset when I see these stories of a person committing suicide because they followed something an AI said, or somebody getting into a deep relationship with an AI agent. I mean, it’s so cliche, but that is the plot of the movie Her, you know, and it’s very understandable that those things happen, and a lot of people are like, “And what’s wrong with that? Let them find solace in what they want to find solace in.” But I think about the fact that I’m talking to something that is a statistical prediction. It’s probability. It’s telling you, in a way, what the consensus of all its data is. That makes me look at it more objectively. Take all of it with a grain of salt. This is something that doesn’t really know you. It knows what you’ve told it.
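Her “behind the words is math” point can be made concrete with a toy sketch. A real LLM is vastly more sophisticated, but the core idea below is the same: the model “speaks” by reporting the statistically most likely next word in its training data. The tiny corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# A made-up training corpus, pre-tokenized into words.
corpus = (
    "i love my dog . i love my cat . "
    "my dog loves me . my cat ignores me ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = predict_next("my")
print(word, p)  # the "consensus" of the data: dog 0.5
```

There is no understanding anywhere in this loop, only counting and division; the grain of salt she recommends is knowing that the fluent sentence you receive is produced, at bottom, the same way.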
I just think it should be used with caution. I have conversations with AI sometimes to parse through my thoughts or for it to poke holes in my arguments and stuff like that. But think about your personal growth. If you’re somebody that wants to continue to better yourself, then exercise the parts of your brain that are hard to exercise. Because yeah, it can have a short, temporary fix and ease. And I know that it’s such a privilege to be able to take on a new hardship every day in these little ways. But if challenging yourself is part of the human experience — we’ve always done things that are difficult, we’ve always done things that are hard — to live a life with [an] end goal of living it completely with ease like that, that doesn’t make sense to me. That’s not human to me.
AUGSBERGER Do you feel like there’s this huge difference between the generational understanding of machine learning?
MADHOGARHIA I think that the interesting thing about AI is that, in terms of its newness, I do think that it’s kind of an equal playing field. I do think that it’s probably more deceptive to older generations. The one thing I told [my friend] that I think helped her a little bit is that language itself is really persuasive. Just the fact that these predictions and probabilities and statistics are being presented to you in natural language means that it’s seductive, it’s attractive. You want to listen to it. You understand it practically immediately. And I think just knowing that you are being seduced into believing it in a certain way can maybe quell your inclination to believe it so much, or to be like, “Oh, that was an awesome answer.” I do think one thing that stuck with me is this idea that natural language is seductive, as opposed to just the math.
I always feel like I’m being tricked a little bit. Often, when I’m using AI, I’ll push back. Gemini has this little “Show thinking” button, and I’ll always read it to see how it formatted its answer differently than what it was going to originally because it’ll show you the process and the prompts that it gives itself to output its final answer. I think last week there was one where I was like, “Okay, and why did you rewrite your answer to be nicer?”
AUGSBERGER Would you challenge the average person to engage with it at that level?
MADHOGARHIA That’s a slippery slope. Because, in a way, that’s me endorsing people using it more and taking AI seriously. But, like, I do that. I think that’s because you have to take it seriously to be able to question it. It all comes down to whether or not the user believes that there is something special about human to human interaction. Because if they don’t believe that, then there’s no reason they should want to take AI with a grain of salt. I think that’s something that a lot of people today maybe don’t overtly value as much. They don’t care if humans are more special than computers. There [are] a lot of tech optimists [who] believe using AI and offloading all of these human things we do onto a computer will bring us into a new consciousness of being that we’ve never unlocked, which is, you know, all speculative, but in that sense, they believe that the only special thing about humans is the fact that we are human. And that’s scary to me, because that means those kinds of people think we can erode our current behaviors and patterns to any degree and we will still uniquely be human.
I don’t know. I’m already getting confused talking about it. It’s such a confusing thing to think about because you try and think about what to prevent, what are the possible downfalls of using AI and what are the consequences, [and] we have no idea what will happen.
AUGSBERGER You’ve said that’s the way the market is trending, and the fact is that people are engaging with AI on this level. But do you think they should be engaging with AI?
MADHOGARHIA I mean, it’s almost the same answer. It’s going to be showing up everywhere in our lives, you know. It’s already in your Instagram feed, it’s probably already in Google Search. When you go to a store, or when you’re filling out your bills, or looking at Rocket Money on your phone, all of it is probably using AI in some way. When it comes to using LLMs and dependence on these systems, I would just keep a critical eye toward where and when you’re interacting with AI and maybe, if you’re somebody that cares about any of the things I mentioned, when you realize you’re interacting with [AI], check on yourself. Are you becoming really dependent on it? Are there skills that you no longer have? Are you finding that you’re talking to your ChatGPT more than you are with your friends? Are you finding that you’re receiving therapy from ChatGPT and that it’s outweighing the amount of time you spend with your family? These are all real outcomes, and they all seem pretty extreme. I think that my fear is that it just creeps in: it happens, and you don’t even know that it happens. I mean, that’s how it becomes habitual, and that’s how addiction starts. Social media addiction is real and global, and it happened before any of us even knew it would happen. Now I think everybody I know is addicted to social media, regardless of whether or not they think they are, you know.
But to what end, right? If it’s all going to be that in the future, why do today’s youth need to even develop that awareness? And that goes back to what I said about [how] this is all predicated on whether or not you believe that the human, unassisted experience you have right now is valuable to you. If it’s not, then, I don’t know, there’s a lot of people that like cyborgs. Go ahead, you know. But if human interaction and connection with the outside world is something you value more deeply, something you have thought about or learned about academically and have learned to value, then there are ways to prevent yourself from being attached to it. But I think if it’s something that you’ve thought about and you realize you don’t care about, then don’t listen to anything I’ve said. There [are] a lot of people that are like that, and [that is] fair of them, that’s fine. I’m not going to call them ignorant. I’m not going to call them uneducated, because they’re probably not. They probably just think their life would be better that way. And I guess we’re not ones to judge. But my fear is that even the people that did value that in the past will forget what they originally valued, rather than consciously changing their values because they found something better, because, I mean, again, it’s seductive and it’s persuasive and it’s cool and interesting.
Yeah, I think that it’s something that you do have to decide whether or not it matters to you. Because, I’m sorry, I’m getting sad about all the people I know that [are] like that. I’ve heard them talk about it and [people] are like, “Wait, yeah, I wouldn’t mind. What’s so wrong with being in love with a chat bot or something?” I’ve read articles about people that are like, “What’s so bad about that?” And I’m like, “Oh, for me, [for] my life choices, I wouldn’t want that to happen.”
AUGSBERGER What can somebody who values human interaction do to protect themselves against that “seduction”?
MADHOGARHIA Just be conscious that it’s seduction. It is more tiring [to be conscious about AI] than it is to use [AI] without questioning it all the time. But if it’s something you really care about, question the answers. Still do your own research, still go with your gut and your intuition. For example, I’ve seen stories of people being accused of cheating because [their professors] thought that ChatGPT wrote their papers but [actually] they wrote them themselves. But where did you learn to write like that? You probably adopted the tone and the writing style of an LLM because you’ve been communicating with it so much and you’ve been liking the outputs and the responses and saying, “This is better than my own.”
If your own personal style and your own personal experience is more important to you than this aggregated, accumulating experience and data that an LLM has — if you are in love with your life — then maybe take a little bit more time out of your day to question what’s given to you when you use these tools.
AUGSBERGER How would you recommend people educate themselves about it? Are there resources you would point people to?
MADHOGARHIA I do think digital literacy is very important. Just because conversations around this started with this conference, I would recommend The Rithm Project’s Substack. They take a very middle-of-the-road approach. I think there’s a good balance of tech optimism and [the] kinds of AI futures we [can] imagine while also pointing out the potential harms of AI.
I think learning a little bit about what a computational model is [is important]. Even if you don’t understand any of it, it’s good to realize that you don’t understand it, and that is something you should keep in mind. Because when you talk to your friend, you can usually understand where they’re coming from — you can’t understand where an LLM is coming from. Maybe you should have a little bit more caution when believing what it says.
Go on Reddit. Look at the discussion already happening around [AI]. Everyone’s talking about it, so if you’re confused, there [are] definitely a lot of rabbit holes to go down.
AUGSBERGER Thank you, Rhea. This was so helpful.
As Rhea tells us, AI is a topic made much less frightening by constant conversation. The more we can have those conversations, the better. As another frustrated friend working in the AI startup world told me this summer, the key is to have more of the productive conversations and fewer of the despairing ones. The world is changing. Each of us must decide whether we are willing to change with it.



