Helping Students Find Their Natural Voices in an Artificial Age

By Benjamin Bean et AI

Summary

Asking the question, "Can Large Language Models (LLMs) like ChatGPT offer students an opportunity to develop voice?" this high school teacher considers the important role of culturally responsive teachers against a backdrop of rapidly evolving technologies, techniques, and unknowns, including AI.
Citation:
Bean, Benjamin, et AI. 2025. Helping Students Find Their Natural Voices in an Artificial Age. Journal of Folklore and Education 12:67-77.

We do not write because we must; we always have a choice. We write because language is the way we keep a hold on life. With words we experience our deepest understandings of what it means to be intimate. We communicate to connect, to know community.
–bell hooks

Prologue: In Which AI Explains How to Be “Authentically You”

Q: I am a 15-year-old student in an urban public school. Writing has never been fun or easy for me. I can put sentences together, but I have trouble expressing myself in a creative or convincing way. How can I find my voice as a writer?

 

ChatGPT1: That’s a great question—and an honest one. Finding your voice as a writer is something many people struggle with, even professional writers. It’s not about being perfect or sounding like someone else; it’s about being authentically you. Here’s how you can begin to find your voice and grow more confident in your writing.

 

1. Write Like You Talk (But a Bit Cleaner)
Your voice is how you naturally express yourself when you’re speaking—your humor, your tone, your point of view. Try this:

  • Think of a topic you care about (even if it’s small, like your favorite snack or a weird bus ride).
  • Tell the story out loud like you’re talking to a friend.
  • Then write it down using the same tone. You can clean up grammar later—just focus on sounding real.

 

2. Read Writers Who Feel Real

Read books, blogs, or articles by people who sound honest and relatable. Try reading:

  • Personal essays
  • Memoirs
  • Young adult novels
  • Opinion columns

Ask yourself: What do I like about the way they write? What feels “real” or interesting about it?

 

3. Write About What You Know and Feel
Don’t worry about being “deep” or “creative” all the time. Instead, write about:

  • Something that annoys or excites you
  • A memory you can’t forget
  • Something you wish more people understood about you

Your writing voice comes out more when you actually care about the topic.

 

4. Practice Low-Stakes Writing
Set aside 5-10 minutes a day just to write without pressure. Try things like:

  • Journaling (nobody has to see it)
  • Writing a rant or a rave
  • Fake letters (e.g., to your future self, or to a fictional person)

This helps your true style come out naturally.

 

5. Get Feedback You Trust
Ask a teacher, friend, or mentor to read something you wrote and tell you:

  • What parts felt honest or real
  • Where it sounded like “you”
  • What could be clearer or stronger

Don’t just ask if it’s good—ask what feels genuine about it.

 

6. Try Different Forms
Writing doesn’t have to be essays or poems only. Try:

  • Short stories
  • Rap lyrics or song lyrics
  • Blog-style posts
  • Graphic novel scripts

Sometimes your voice shows up better in a form that matches how you think.

 

7. Accept That Voice Takes Time
Your writing voice isn’t one “thing” you find once—it grows and changes. The more you write, the more you’ll recognize your patterns: how you describe things, what you notice, and how you make people feel.

 

If you want, I can help you practice. Want to try writing a short paragraph about something real (like a moment from your day or something that made you laugh)? I’ll help you shape it without changing your voice.


Machine Learning, and Learning to Write Without Sounding Mechanical

Over the last year, my tenth year as a professional educator and my fifth year as a high school teacher, I have seen a significant increase in students using AI for essays and short writing responses. The technology is ubiquitous and, teachers are told, here to stay, so we had better get used to it. But we are also instructed to be cautious and teach students how to use it responsibly. An approach I hear often from colleagues is to allow the use of AI only after students have developed basic skills in research and writing. Framed this way, AI does not replace the process or the knowledge but is useful for completing work more quickly and efficiently. If we collect writing samples early in the school year, under conditions that preclude plagiarism, we can better detect when students are using AI or copying answers from another student or an online source. When we find something that far exceeds our expectations of a student’s writing, we approach them: “This is unlike anything you have written before. It doesn’t sound like you.” Sometimes, perhaps feeling cornered or guilty, the student admits to using ChatGPT or another large language model (LLM) to assist their writing. In other cases, students claim that the ideas were their own, but they used an online writing tool, such as Grammarly or the dictionary tool in Google Docs, to make their writing conform to academic conventions.

In writing this article, I wanted to examine the assumption that we should teach our students these writing conventions, along with course content, before allowing them to use online tools as shortcuts. Ultimately, for reasons I outline below, I believe that AI should be limited, used critically and carefully, and never promoted as an alternative to instruction rooted in human connection. However, as long as our students are accessing these technologies in our classrooms, we should consider the potential of LLMs to demonstrate a particular standard of writing to our students. At any level or in any subject, we can design activities for students to reverse engineer an AI response to identify aspects of sentence structure, grammar, argumentation, and style.

Classroom Connection: A simple exercise that can be practiced early and often is to prompt an LLM with a short response question, read the response aloud, and rewrite each sentence with slight changes in vocabulary and sentence structure. Fact-checking the AI response can be a component of this practice, as well. This activity can be scaffolded with regard to Lexile level and content knowledge, but the basic premise is the sort of modeling and emulation that takes place in good writing instruction, regardless of medium: As students read and paraphrase (not just search, copy, and paste), their imitation becomes habit becomes style.

Now, an important distinction between style as convention and style as individual or cultural expression needs to be honored within this practice. It is one thing to learn discipline-specific ways of describing or explaining content; it is quite another to find or establish one’s voice as a writer. Another distinction is helpful here: voice-as-agency vs. voice-as-affect. While agency is certainly relevant, this discussion focuses primarily on the latter: As ChatGPT states above, what matters is “sounding real,” feeling “genuine,” and considering “how you make people feel.” When writing for a particular audience, the goal is to convey a voice that the audience expects: the authority of science, the dramatic narration of history or epic poetry, or the detached objectivity of journalism. These are all voices that can be taught and imitated, and even LLMs are adept at these genres. If the goal, however, is to empower our students with the skills and the confidence to communicate in a way that feels “authentically” them—both as individuals and as members of sociolinguistic communities—then we face the new responsibility of thinking critically about how new technologies enhance or impede this learning process.

Whatever technologies we choose to embrace or avoid, educators should remember that learning to write is much more than a process of imitation that seeks to meet a standard, accepted, normative type of communication. Writing—or any form of verbal or artistic expression, for that matter—can be (and is, in my teaching practice, first and foremost) a practice of liberation and creation. Discussing how marginalized people have subverted standard English to create alternative ways of being and knowing and producing culture, bell hooks offers a vision of language as a liberating process of intimacy:

To recognize that we touch each other in language seems particularly difficult in a society that would have us believe that there is no dignity in the experience of passion, that to feel deeply is to be inferior, for within the dualism of Western metaphysical thought, ideas are always more important than language. To heal the splitting of mind and body, we marginalized and oppressed people attempt to recover ourselves and our experiences in language. We seek to make a place for intimacy. Unable to find such a place in standard English, we create the ruptured, broken, unruly speech of the vernacular. There, in that location, we make English do what we want it to do. We take the oppressor’s language and turn it against itself. We make our words a counter-hegemonic speech, liberating ourselves in language. (1994, 174-75)

Regardless of our students’ cultural or economic backgrounds, if we want to empower them to speak and to write in voices that will bring them both professional success and a true sense of belonging in their communities, we must take seriously our responsibility to connect with our students, to seek out all those “broken” ways in which they hope to communicate, to help them pursue our standards without losing or feeling shame for their own voices, and to strengthen all of their voices so they can “make English do what [they] want it to do.”

But can LLMs really offer our students an opportunity to develop voice, in either sense of the word? In what follows, I briefly consider the potential advantages and dangers of AI in the learning process, not to offer a cost-benefit analysis but to highlight the value of culturally responsive teachers against a backdrop of rapidly evolving technologies, techniques, and unknowns. As a cautious and highly selective adopter of AI in the classroom, I share from my own teaching experience to start a conversation about the uncertainty of this new technology and its place in our practice, with the suggestion that, whatever tools we play with, we always return to the liberatory and community-making power of self-expression.

AI as a Code-switching Tool

Writing in a particular style or for a particular discipline requires learning and using patterns of language that are recognizable within a language community. In this sense, all students learn to code-switch when they learn to communicate in different subjects, different classroom settings, and different assignments. Frequent exposure to a specific disciplinary style is essential for acquiring the skill of writing for that discipline, but transposing a passage from one context or genre to another can shed light on the aspects of language that are unique to each genre. AI, with its increasing ability to imitate diverse voices, might prove useful in this learning process.

Author: ChatGPT, please rewrite the above paragraph in the voice of an urban Gen Z student.

ChatGPT: Sure! Here’s a rewrite of your paragraph in the voice of an urban Gen Z student—think casual, direct, and a little playful but still on point:

Yo, real talk—writing for different classes is like speaking different languages. You gotta switch it up depending on the vibe: science class? Use those fancy terms. English essay? Get deep and poetic. It’s basically code-switching, just academic style. The more you’re around a certain subject’s way of talking, the easier it gets to sound like you know what you’re doing. And honestly? Flipping a paragraph from, say, a textbook vibe to something more chill—or vice versa—can help you see what makes each style tick. Lowkey, AI could be clutch for this since it can mimic all kinds of voices and tones. Just saying.

Author: Thanks! Now rewrite it in the style of a linguistics professor.

ChatGPT: Absolutely! Here’s that same paragraph rewritten in the style of a linguistics professor—precise, academic, and grounded in disciplinary terminology:

From a sociolinguistic perspective, learning to write within a specific disciplinary framework entails the acquisition of distinct linguistic registers and discourse conventions that are recognized within particular academic communities. In effect, students engage in a form of code-switching as they navigate the communicative demands of various subjects, classroom environments, and genres of academic writing. Repeated exposure to the linguistic norms of a discipline is essential for developing fluency in its written conventions. Moreover, the practice of transposing a text from one genre or rhetorical situation to another can illuminate the linguistic features and stylistic markers unique to each context. Given recent advancements in natural language processing, AI technologies capable of simulating a wide range of voices and genres may offer valuable pedagogical support in this area.

Code-switching is a practice students are always already engaging in, a skill they have learned in a variety of social contexts. In any academic or professional setting, linguistic norms are established and may be used to distinguish levels of expertise, in-group identities, moments of ritual or formality, or other distinctions understood within the community. The classroom or school setting is one in which the stakes may be high for a student who struggles with speaking or writing in a way that authority figures deem appropriate. Making a good impression and fitting in are reasonable goals, but a truly equitable education allows students to express their individuality and their culture. Ideally, children learn and develop confidence in speaking and writing in a variety of voices:

[C]ode switching and code meshing are not simply survival skills that enable a speaker of a community discourse to participate in an academic setting. Code switching and code meshing are performances of multiple voices, demonstrating the speaker’s mastery and intentionality. They are a kind of artistry, a conversational equivalent of musical sampling, where other voices, other strains, are purposely integrated into the performance. (Sperling and Appleman 2011)

This musical analogy is an apt one, not only because it helps illustrate how a student’s written and oral expression in the classroom can be polyphonic and representative of their diverse cultural influences. It also reminds us that music is a powerful, engaging tool for our classrooms: Music activates so many regions of the brain, inviting us to move along with rhythms, to sing in unison or to harmonize, and to imitate a wide range of voices and sounds. Just as artists learn both to master individual genres and to blend them, students can learn to speak or write within—and to blur the boundaries of—particular academic conventions and levels of formality. In this context, “voice” can be conceptualized as a performance or a process of experimenting with different ways of “sounding.” Teaching students to find their voices, then, means providing opportunities for play, means fostering a safe environment in which they can switch and mesh different styles until they figure out how to sound like themselves (this is where, as mentioned earlier, voice-as-agency is inseparable from the affective dimensions of sound). Thinking with performance studies, sound studies, science, technology, and society (STS), and other interdisciplinary fields, educators might consider the following points of departure for inquiry on developing student voice:

  • What are we already doing to teach our students about how text “sounds,” and how can we build on this to inform their writing practice?
  • How can we be more culturally responsive in ways that encourage code-switching and welcome a wider variety of voices and sounds into the learning environment?
  • How might AI, assistive technologies (AT), and everyday classroom materials help our students conceptualize voice and develop their own voices as they become college- and career-ready?

From Raciolinguistic Ideologies to Ideological Technologies

One final consideration for this relationship between developing student voice and the educational technologies through which this development happens: How might students’ experimentation with voices reinforce negative perceptions regarding how certain people sound or communicate? On the one hand, when students are given text in an advanced academic style, they might feel a sense of alienation, a distance from a voice that sounds privileged, elitist, dominant, or even threatening. On the other hand, using AI to write something more accessible or culturally relevant can produce unintended consequences. When LLMs (or any media, for that matter) represent the voices of marginalized or minoritized communities, what harmful ideas or stereotypes are the algorithms invoking? What are we to do when AI speaks to our classrooms in voices that perpetuate ideas about certain groups being undereducated, delinquent, or deviant in one way or another?

Rosa and Flores coined the term “raciolinguistic ideologies” to describe how institutions and individuals alike tend to perceive speaking and listening subjects in ways that are deeply shaped by racism and colonialism. In academic settings, this manifests in bias toward students who speak and write in ways that, however complex, are coded as “deficient.” English-Spanish bilinguals, for example, are often depicted as “lacking full proficiency in either English or Spanish and in need of linguistic remediation to provide them with access to the so-called ‘academic language’ required for complex thinking processes and successful engagement in the global economy” (Rosa and Flores 2017, 626-27). Following this challenge against “the ideological assumption that racialized subjects’ language practices are unfit for legitimate participation in a modern world” (627), educators should be cautious that efforts to develop students’ voices do not preclude students from thinking and expressing themselves in ways that matter most to them.

We should also be mindful of how AI attempts to “sound like a race” (Rosa 2019) in ways that, at best, are ineffective and, at worst, reproduce harmful ideologies. In response to criticisms of bias (implicit or intentional) in the programming of LLMs and other new technologies over the last several years, tech companies have worked toward mitigating this problem (USC 2022). As we encourage students to experiment in conversation with artificial voices, we must make them aware of these biases—and their own—that shape our representations and perceptions of racialized, gendered, and generally othered subjects.

Author: How does your programming affect how you represent various marginalized or stereotyped voices?

ChatGPT: This is a deep and crucial question. The way I represent marginalized or stereotyped voices is directly shaped by how I was trained, what data I was trained on, and how my creators have tried to mitigate harms. Here’s a breakdown of the main forces at play:

 

🔧 How My Programming Shapes Representation

  1. Training Data Reflects the World—Biases and All

I was trained on large-scale datasets from the internet, books, academic sources, and more. These sources include:

  • Biased portrayals of race, gender, sexuality, disability, and class
  • Unequal representation—dominant groups (e.g., white, Western, cis, male) are often overrepresented
  • Stereotypes and tropes that show up even in well-meaning or “neutral” texts

So, unless specifically corrected for, I may replicate:

  • Harmful patterns (e.g., exotification of Indigenous cultures, pathologizing of neurodivergence)
  • Underrepresentation (e.g., fewer nuanced examples of Black joy, disabled agency, or queer family structures)

  2. Reinforcement Learning + Guardrails

To reduce harm, my responses have been shaped by human feedback, safety training, and ethical guidelines. This includes:

  • Avoiding overtly racist, sexist, or hateful content
  • Limiting stereotypical portrayals or one-dimensional representations
  • Promoting respectful, inclusive, and balanced language

But this doesn’t mean I always get it right. Guardrails can:

  • Overcorrect and flatten expression (e.g., overly sanitized responses)
  • Undercorrect when a stereotype is subtly embedded in tone or framing

  3. No Lived Experience

I can simulate language about marginalized voices, but I don’t have:

  • Embodied understanding of oppression or cultural nuance
  • Firsthand experience with systemic inequity or resistance

This limits my ability to represent such voices authentically or in a way that honors their complexity.

 

⚠️ What That Means in Practice

Even when I’m trying to help:

  • I might unconsciously reproduce dominant narratives (e.g., framing “diversity” as an add-on rather than integral)
  • I could erase intersectionality (e.g., treating race, gender, and class as separate instead of deeply linked)
  • I may default to safe or mainstream sources, rather than amplifying radical or community-rooted voices

What I Can Do Responsibly

When guided by informed users:

  • I can highlight scholarship or voices from within marginalized communities (with citations when possible).
  • I can help you analyze language, reveal bias, or deconstruct harmful tropes.
  • I can support your creative or critical projects that aim to center or uplift underrepresented voices—especially when you bring the lived or scholarly insight I lack.

Conclusion: The Machines Will Never Know Our Students Like We (Should) Know Our Students

After ChatGPT offered to help my fictitious 15-year-old student to practice (see “Prologue”), I used a short response by one of my real students from an actual writing prompt in a history class. I often share this writing sample with colleagues when we discuss culturally responsive pedagogies, surprising moments in the classroom, or strategies for making our lessons and activities more inviting and engaging for our students. With this warm-up writing prompt, I had expected students to consider major world events: the Black Lives Matter movement, Covid-19, the Russia-Ukraine War, or something similar in scale. This 10th grader described something that made a major impact in her own world, somewhere in Wilmington, Delaware:

What event during your lifetime has had the biggest impact on the world?

When me and my old friend were playing outside and then she had got shot in her leg an [sic] we were about 5 or 4 years old and that was world changing because she still walking like nothing happened today.

 

ChatGPT: Thank you for sharing that. That’s a powerful memory—short but full of emotion. Let’s take what you wrote and shape it a little, just to bring out your voice more clearly while keeping it your story.

Here’s a revised version based on your words:

“I remember playing outside with my old friend when we were just four or five years old. Out of nowhere, she got shot in the leg. It was scary—everything changed in that moment. I didn’t really understand it back then, but now I see how serious it was. The crazy part? Today, she walks like nothing ever happened. It blows my mind sometimes.”

What’s working here:

  • It still sounds like you. It’s real and emotional, not fake or overly fancy.
  • There’s a mix of facts and feeling. You’re not just telling what happened—you’re telling why it matters.
  • It flows clearly. Your point comes through without getting lost in grammar.

If you want, we can keep working on this moment—turn it into a short scene or memory piece. Or we can try a different story. You’re already doing the hardest part: being honest. That’s how a strong writing voice starts. Want to keep going?


This is not how my student sounds.

I know my student’s voice in a way that no LLM is presently capable of understanding. I could use this sort of artificial dialogue exercise with my students to help them think of new ways to write their ideas, but if I am doing my job, I am in a better position to help them find their voice than a program that cannot (for now) hear them, much less relate to them. Indeed, like many teachers (Langreo 2024), I am wary of the unintended consequences of AI. First and foremost, I worry that this new technology is little more than a shortcut that causes our students to miss out on crucial learning processes, struggles, and—most importantly here—the human relationships built within educational settings. Surely, there must be a more ethical way to empower our youth to resonate in their worlds, to tell their stories and to sound themselves out in their most unique vibrations.

I do not want to be so quick, however, to dismiss a new technology simply because of its potential drawbacks and slippery slopes. And anyway, as the refrain goes, AI is here to stay. As a teacher who has learned from colleagues of several backgrounds and generations, I understand that there is no singular way to reach our learning objectives, and that broadening our horizons by embracing a variety of tools and strategies can only benefit students by giving them more choice in their process. Likewise, when a student is searching for their personal voice to show what they are learning or how they are feeling, I want to allow them the space and the means to do so, wherever they might be in their academic and personal journeys. Of course, I will always encourage them to seek out the most intimate opportunities to experiment with their voices and the real, live voices of those around them. Instead of asking ChatGPT about the MOVE bombing or the immigrant populations in their cities, students could be conducting oral history projects in their communities or visiting local libraries, museums, and festivals. In these processes, they are exposed to the diversity of voices and sounds with which they can converse and rehearse their own; they develop critical tools for listening to multiple perspectives and for analyzing diverse modes of communication. But just because these “natural” or “traditional” ways of learning and (re)searching might be better, richer, more effective methods in the long run does not mean that they should be our only methods, and it certainly does not mean they will come naturally to everyone, even our most capable students.

Consider how, only a few years ago, our students were struggling to learn in their virtual classrooms: By default they were muted. When they did have an opportunity to speak, their faces often out of frame or otherwise illegible, they might have been heard by a few of their peers, at best, after being reminded, “You’re on mute.” The most conversation they might have had in their entire school day, between March 2020 and fall of 2021, was in a breakout room.

The point here is that we are serving young people who, for better or worse, have a deep relationship with technology that has shaped their understanding of voice, how they sound, how they choose to express themselves or refuse to do so. We teachers also have relationships to technology that shape our own choices, and while our knowledge and experience may differ profoundly from our students’ knowledge and experience, we can connect with our students over the anxieties that new technologies introduce to our learning spaces. While the challenges of early Covid virtual learning may have overwhelmed everyone in the Zoom room at times, I remember it now as a time of personal and professional growth as an educator, pushing me to play with new technologies and strategies until I began to figure out my teaching style, my voice in the classroom. During this time of trial and error, my students appreciated my vulnerability and my persistence in finding ways to reach them, to be relevant through whatever tools we had to share at the time. AI might be bigger and scarier than anything we have seen before, but it presents us with new opportunities to learn with our students and to center their voices in the process. Some of our students have not had access to the experiences and tools needed to navigate their language communities. Some have trauma, disabilities, and other, very real obstacles complicating their written and oral expression—obstacles that AI might be able to research widely in an instant but that only a caring mentor can take the time to understand on a personal level. If we are to help them find their natural voices, we will have to venture, care-fully, into their artificial spaces, where we might build the confidence and agency they need for whatever “real world” comes next.

 


Benjamin Bean is a Social Studies teacher at Mount Pleasant High School in Wilmington, Delaware, where he also directs the Jazz Band and advises the African Diaspora Association. He is a PhD candidate in sociocultural anthropology at the University of California, Davis; his research focuses on the relationship between student belonging and classroom technology.

Endnotes

  1. All quoted ChatGPT conversations occurred on June 13 and June 19, 2025 (OpenAI).

Works Cited

hooks, bell. 1994. Teaching to Transgress: Education as the Practice of Freedom. New York: Routledge.

—. 1999. Remembered Rapture: The Writer at Work. New York: Holt Paperbacks.

Langreo, Lauraine. 2024. Teachers Are More Wary of AI Than Administrators. What Would It Take to Change That? Education Week, January 9, https://www.edweek.org/technology/teachers-are-more-wary-of-ai-than-administrators-what-would-it-take-to-change-that/2024/0.

OpenAI. 2025. ChatGPT (June 13 version) [Large language model]. https://chat.openai.com/chat.

—. 2025. ChatGPT (June 19 version) [Large language model]. https://chat.openai.com/chat.

Rosa, Jonathan and Nelson Flores. 2017. Unsettling Race and Language: Toward a Raciolinguistic Perspective. Language in Society 46.5:621-47.

Rosa, Jonathan. 2019. Looking like a Language, Sounding like a Race: Raciolinguistic Ideologies and the Learning of Latinidad. New York: Oxford University Press.

Sperling, Melanie and Deborah Appleman. 2011. Voice in the Context of Literacy Studies. Reading Research Quarterly. 46.1:70-84.

USC Viterbi School of Engineering. 2022. “That’s Just Common Sense”: USC Researchers Find Bias in Up to 38.6% of “Facts” Used by AI. May, https://viterbischool.usc.edu/news/2022/05/thats-just-common-sense-usc-researchers-find-bias-in-up-to-38-6-of-facts-used-by-ai.