
Readers’ Note: In this piece, I use the word “voice” to describe more than the sounds we make. It’s about how we express who we are through our words, tone, values, and presence. While I reference research along the way, this is by no means a literature review. It’s a reflection (part observation, part lived experience) about how both humans and artificial intelligence learn to communicate.
Voice as more than sound
Not long ago, one of my friends told me that my daughter sounds like me — not just in tone, but in the way she speaks. I had to laugh because it’s true. We share a similar cadence and even some of the same expressions. I can hear myself in her voice (even though she may not like to hear this).
The funny thing is, I’ve also started to sound a bit like her and her friends. I spend a lot of time around teenagers, and I’ve found myself using their lingo. Sometimes it’s ironic, just to make them roll their eyes, but some of it actually sticks. The words we use and the ways we express them are contagious.
Our voices are shaped by the environments we spend time in, the conversations we have, and the media we consume. They evolve with every book we read, podcast we listen to, and person we engage with. Over time, those small influences accumulate and form something deeply personal: a voice that reflects who we are, what we value, and how we want to be understood.
When you think about it, we’re all learning from data — but our data are even less structured than the data used to train artificial intelligence (AI). Generative AI models are trained on massive amounts of data (Stryker & Scapicchio, n.d.; Zewe, 2023). We, on the other hand, learn from lived experience: raw, emotional, sensory, and often inconsistent. Our “training dataset” includes stories, relationships, memories, values, and more. That’s what makes human learning so unique.
That’s where this idea began for me: if AI learns to “speak” through the data it’s fed, aren’t we doing something similar? The difference is that we have a choice. We can decide what shapes us — and what doesn’t. Finding your voice isn’t just about learning how to speak; it’s about choosing what to listen to and what to carry forward.

We are pattern learners
From the time we’re born, we start collecting patterns. We mimic the voices around us long before we understand what the words mean. Babies copy sounds and expressions (Mampe et al., 2009; Sundara et al., 2020). Adults do it, too, though we may think we’ve outgrown it. We pick up phrases from family members, coworkers, friends, favorite authors, and even strangers online.
If you listen closely, you can hear the echoes of your surroundings in your own speech. Maybe you’ve caught yourself using a phrase from a podcast host or adopting the tone of someone you admire. It happens naturally. We are wired to learn by imitation — to internalize patterns of sound and behavior (Mampe et al., 2009).
Even newborn babies show this instinct. In one study, French babies tended to cry with rising pitch patterns while German babies cried with falling ones, reflecting the intonation of their native language (Mampe et al., 2009). And research has shown that just five hours of exposure to a second language can measurably alter an infant’s speech production (Sundara et al., 2020), demonstrating how quickly young brains adapt to new linguistic input.
Like us, AI systems learn through pattern recognition — finding structure even in unstructured data and adjusting their internal “weights” to reflect relationships between words, ideas, and tone. The key difference is that AI lacks the full human ability to interpret meaning or emotion; humans do this instinctively. Our data aren’t cleaner, but they are richer, more context-dependent, and more tied to purpose.
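To make the analogy concrete, here is a minimal sketch in Python (with invented numbers; no real model works on a single weight) of the basic move underneath machine learning: nudging a weight until a prediction matches the pattern in the data.

```python
# A minimal sketch of pattern learning: a single "weight" nudged toward a
# target. The numbers are invented for illustration; real models adjust
# billions of weights, but the basic move is the same.

weight = 0.0          # the model's initial "belief" about a pattern
target = 0.8          # the pattern actually present in the data
learning_rate = 0.1   # how strongly each example adjusts the belief

for step in range(50):
    prediction = weight
    error = target - prediction       # feedback: how far off are we?
    weight += learning_rate * error   # adjust the weight toward the pattern

print(round(weight, 3))  # ~0.796: the weight has absorbed the pattern
```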
But unlike AI, we don’t just process patterns; we interpret them. We assign meaning. When a friend sighs before saying, “I’m fine,” we hear more than the words. Our brains read tone, context, and emotion all at once. That’s what makes our learning (and our voice) distinctly human. It’s not just what we hear; it’s what we understand and choose to reflect.

From imitation to intention
Imitation is where we all start, but it’s not where we’re meant to stay. As we grow, we begin to notice the patterns we’ve absorbed and to decide (consciously or not) which ones feel like us. That’s where voice starts to take shape: in the choosing.
I notice this with my daughter’s teen slang. What starts as a joke to make her and her friends cringe has a way of slipping into my everyday speech. Language sneaks in through the back door; what begins as mimicry can turn into a habit.
That’s why awareness matters. We all inherit voices from somewhere — family, community, culture, even algorithms. But, at some point, we start curating them. We keep the words, tones, and rhythms that resonate with our values, and we let go of the ones that don’t. That process isn’t about constructing a brand or polishing a persona. It’s about noticing what feels authentic, and being brave enough to let that be heard.
Our voice, then, becomes more than a reflection of where we’ve been. It becomes an expression of who we are becoming.

Curating your data stream
Our voices are shaped as much by what we take in as by what we put out. The people we spend time with, the stories we consume, and the platforms we scroll — all of it becomes input. Over time, those inputs start influencing not just how we speak, but how we think.
If you’ve ever found yourself quoting a podcast host or echoing the tone of a favorite writer, you’ve experienced this firsthand. The voices we surround ourselves with can sharpen our thinking or dull it. They can expand our empathy or narrow our view.
Artificial intelligence faces a similar challenge: its voice depends on the quality of its training data. If the data are biased or shallow, the model reflects that. Humans aren’t much different. We may not call it “data,” but the information and attitudes we absorb become the foundation for how we communicate and make sense of the world.
Research even shows that as adults age, their voices continue to evolve — articulation slows, pitch changes, and subtle shifts in vocal stability occur over time (Fougeron et al., 2021). That suggests our “output” is never fixed. It adapts based on both biology and experience.
The influence of our environment begins early. Studies have found that when parents use digital devices during everyday routines with their children, those children tend to have smaller vocabularies and weaker grammar development (Sundqvist et al., 2021). In other words, the voices we hear and the habits surrounding us — even the silent ones, like screens — shape our capacity for connection and expression.
That’s why discernment matters. Developing a strong voice means paying attention to your “data diet” — choosing what deserves your attention and what doesn’t. It’s less about shutting out noise entirely and more about being mindful of how much space it takes up.
When we’re intentional about our inputs, our output — our voice — can become clearer, kinder, and more grounded in what we truly believe.

Mutual training: How we teach the tools that teach us
Artificial intelligence learns through exposure and feedback — similar to the way we do. Every prompt, correction, or clarification helps refine how it responds. In that sense, AI mirrors the tone and habits of the people who interact with it most often.
But that relationship goes both ways. The more time we spend with these tools, the more they have the power to shape our voice, too. We start adjusting how we ask questions, clarify ideas, and organize our thoughts. Over time, our communication becomes a blend of human instinct and machine influence, each shaping the other in subtle ways.
It reminds me of how my daughter has picked up my phrasing, and I’ve unwittingly picked up some of hers. We train each other through everyday conversation. The same thing happens when we use AI. If we approach it with curiosity and respect, it reflects that tone. If we’re dismissive or careless, it can mirror that as well.
When used thoughtfully, AI can even help us find our voice. It can surface patterns in our writing, clarify ideas we’ve been circling, or help us hear ourselves more clearly. Like any tool, it reflects how we use it, and it can amplify our creativity when guided by purpose.
That’s why how we interact matters. We aren’t just using technology; we’re helping form its voice. And that, in turn, invites us to think more carefully about our own.

Values as the algorithm
At the core of every voice — human or artificial — there’s an algorithm. For AI, it’s a set of mathematical weights and objectives that determine how it responds. For us, I believe it’s our values.
Our values can shape our tone, word choice, and even the courage to speak up. They determine when we soften our language to invite understanding and when we use it to stand firm. For me, that means assuming positive intent and calling out the good in the world whenever I can. It also means using my voice to stand for what’s right, even when it’s uncomfortable.
Values are what give our voices coherence. Without them, we’re just mimicking patterns. With them, we communicate meaning. They help us decide what deserves amplification and what’s better left unsaid.
Just as an algorithm learns through weighted feedback — giving more importance to what aligns with its goal — our voices evolve through the reinforcement of what we believe to be right. Every time we choose honesty over convenience or empathy over defensiveness, we strengthen that internal code.
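As a playful sketch (the weights, candidates, and scores below are all invented, not drawn from any real system), you can picture values as the weights in a scoring function that decides which possible response gets amplified:

```python
# A toy illustration: "values" as weights in a scoring function that decides
# which candidate response gets amplified. All numbers here are made up.

values = {"honesty": 0.9, "empathy": 0.8, "convenience": 0.2}

# Each candidate response, scored for how strongly it expresses each value.
candidates = {
    "blunt truth":       {"honesty": 1.0, "empathy": 0.2, "convenience": 0.9},
    "kind honesty":      {"honesty": 0.9, "empathy": 0.9, "convenience": 0.4},
    "comfortable dodge": {"honesty": 0.1, "empathy": 0.6, "convenience": 1.0},
}

def alignment(traits):
    # Weight each trait by how much we value it, then sum.
    return sum(values[v] * traits[v] for v in values)

# The response most aligned with our values is the one we "amplify."
best = max(candidates, key=lambda name: alignment(candidates[name]))
print(best)  # "kind honesty" wins under these weights
```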
In a world full of noise, our values act as a stabilizing layer, keeping our voice steady. Reflecting on our values can help remind us that how we say something can matter as much as what we say (or even more). Unlike AI, we have the ability and the responsibility to decide what kind of presence we want our words to leave behind.

Reflection as fine-tuning
Artificial intelligence improves through training loops — feedback, adjustment, and retraining. Humans do the same thing, just less formally. Every time we write, speak, or listen, we’re testing our voice against the world and learning from the results.
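Sketched as a toy loop (again with made-up numbers, not anyone’s actual training procedure), that cycle looks something like this: produce something, notice how it landed, adjust, and try again.

```python
# A toy feedback loop, invented for illustration: produce, measure how it
# landed, adjust, repeat. Roughly the shape of both model fine-tuning and
# human reflection.

def feedback(attempt):
    # Stand-in for the world's response: distance from "clear" (1.0).
    return 1.0 - attempt

clarity = 0.2  # how well our current voice lands, on a 0-1 scale

for iteration in range(10):
    signal = feedback(clarity)   # notice what landed and what didn't
    clarity += 0.3 * signal      # rework tone and phrasing accordingly

print(round(clarity, 2))  # ~0.98: closer to clear with each reflective pass
```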
Reflection is how we fine-tune. We notice when something we said landed well (or didn’t) and we adjust. We rework our phrasing to sound clearer or kinder. We learn when to speak and when to stay quiet. Each iteration shapes our tone, making it more intentional and aligned with who we are.
Writing plays a big part in that process for me. It’s where I can slow down and listen to my own thoughts: to check if what I’m saying still feels true. Reading and conversation help me with this too, especially with people whose experiences differ from my own. Those moments stretch my perspective and keep my voice grounded in empathy and growth. Research supports this process: voice perception itself continues to develop through childhood and adolescence (Harford et al., 2024), showing how we refine not only how we speak but how we listen to and interpret the voices of others.
In that way, reflection is more than correction; it’s calibration. It keeps our voice dynamic, human, and alive — never static, never finished.

The gift of being human
For all that AI can learn to mimic, it can’t feel. It can identify sentiment, but it doesn’t know empathy. It can generate convincing words, but it doesn’t truly understand meaning. That’s what makes the human voice so extraordinary: it carries intention, emotion, and moral weight.
When we speak, we draw not just on patterns, but on experience. We’ve lived the joy, the heartbreak, and the uncertainty behind our words. That lived experience gives depth to our tone and credibility to our convictions. AI can replicate rhythm and syntax, but not sincerity.
Research on voice development supports this distinction. Humans learn to perceive and interpret voice through years of lived interaction — evolving from simple sound recognition to an understanding of identity and emotional nuance (Harford et al., 2024). That development requires context, connection, and care, which are challenging to program or simulate.
Our ability to connect pattern with purpose — to join data with conscience — is what gives our voices life. It’s why a kind word can heal and a careless one can wound. It’s also why developing our voice matters so much. The goal isn’t to sound perfect; it’s to sound real.
At its best, the human voice reminds others — and ourselves — that we’re still here, still learning, still listening.

A call to conscious voice
When I think back to that moment — realizing my daughter sounds like me — I’m reminded how naturally voice spreads. We don’t set out to teach it; we just live near each other long enough for it to happen. The same thing is true in every part of life. The words we use, the tone we take, and the way we choose to show up all shape the world around us.
Artificial intelligence may be learning to speak, but so are we. Every interaction, every story, every bit of feedback refines our voices, too. The difference is that we get to choose what we learn from. We can decide what we want to absorb, what we want to reflect, and what kind of tone we want to set in the spaces we inhabit.
Our voices are living things: part inheritance, part influence, part intention. They grow stronger when we use them thoughtfully. And as the world gets louder, that kind of consciousness matters more than ever.
Because in the end, we’re all learning from the noise around us. The real work is turning that noise into something meaningful — allowing it to help us craft a voice that reflects not just what we’ve heard, but who we’ve chosen to be.
I’d love to hear how you think about your own voice, and how tools like AI might be helping (or challenging) it. Feel free to share your reflections in the comments below.
Author’s Note
I wrote this essay to think through what it really means to “find your voice” in a time when both humans and technology are learning to speak in new ways. Much of what we call learning (in people or in AI) comes down to recognizing patterns, deciding which ones matter, and choosing what to amplify. For me, this learning process is deeply personal. It’s about curiosity, empathy, and intention. My hope is that this piece encourages you to listen more closely: not only to the noise around you, but to the quiet shaping of your own voice.
References
Fougeron, C., Guitard-Ivent, F., & Delvaux, V. (2021, October 25). Multi-dimensional variation in adult speech as a function of age. Languages, 6(4), 176. https://doi.org/10.3390/languages6040176
Harford, E. E., Holt, L. L., & Abel, T. J. (2024, March 8). Unveiling the development of human voice perception: Neurobiological mechanisms and pathophysiology. Current Research in Neurobiology, 6, 100127. https://doi.org/10.1016/j.crneur.2024.100127
Mampe, B., Friederici, A. D., Christophe, A., & Wermke, K. (2009, November 5). Newborns’ cry melody is shaped by their native language. Current Biology, 19(23), 1994–1997. https://doi.org/10.1016/j.cub.2009.09.064
Stryker, C., & Scapicchio, M. (n.d.). What is generative AI? IBM. Retrieved October 28, 2025, from https://www.ibm.com/think/topics/generative-ai
Sundara, M., Ward, N., Conboy, B., & Kuhl, P. K. (2020, January 29). Exposure to a second language in infancy alters speech production. Bilingualism: Language and Cognition, 23(5), 978–991. https://doi.org/10.1017/s1366728919000853
Sundqvist, A., Koch, F., Thornberg, U. B., Barr, R., & Heimann, M. (2021, March 17). Growing up in a digital world: Digital media and the association with the child’s language development at two years of age. Frontiers in Psychology, 12, 569920. https://doi.org/10.3389/fpsyg.2021.569920
Zewe, A. (2023, November 9). Explained: Generative AI. Massachusetts Institute of Technology. Retrieved October 28, 2025, from https://news.mit.edu/2023/explained-generative-ai-1109

