Using AI Wisely: Lessons from Voice-to-Text


Artificial intelligence (AI) seems to be everywhere these days. I’ve been especially thinking about generative AI, the kind that creates text, images, and even videos in response to a prompt. That’s exciting, but also unsettling. Whenever new technology becomes part of daily life, I can’t help but think about the ethics behind it—and whether we’re using it wisely.

One of my biggest concerns with AI is the temptation to take what it gives us at face value. If we don’t pause to ask whether the information is accurate or reliable, we run the risk of spreading misinformation or making decisions based on shaky ground. That’s not just a tech problem. It’s an information literacy problem.

To explain what I mean, I want to share a story about my mom and her love of voice-to-text. It's a funny story (at least to me), and it also reveals something important about how we use AI in our lives. After that, I'll share some practical strategies you can use to make sure AI stays a helpful tool rather than a risky shortcut.

A lesson from voice-to-text (and my mom’s humor)

My mom was one of the funniest people I’ve ever known. She had a way of bringing humor into nearly any situation. Even in the hospital near the end of her life, she was making up silly songs. At one point, when she needed to use the restroom, she started singing “Let it go, let it go, can’t hold it in anymore!” to the tune of the Frozen song. It was such a simple thing, but it brought smiles to everyone in the room.

That playful spirit also showed up in how she used technology. Back when mobile voice-to-text was newer (and a lot less accurate), my mom fell in love with it. She’d send me messages so riddled with errors they barely made sense. But instead of being frustrated, she’d laugh about it—knowing I’d call her later, confused, and she’d get another laugh out of explaining what she had really meant.

Voice-to-text was never perfect, but she didn't mind. For her, the errors were part of the fun, and her way of "baiting" me into calling. For me, those moments became a reminder that technology, no matter how advanced, still needs a human touch. From my perspective, that's exactly the lesson we need to carry into how we use AI today.

What voice-to-text teaches us about AI

Voice-to-text has its place. If I’m driving, it’s great to be able to answer a message without taking my eyes off the road. If my hands are covered in bread dough, talking into my phone can keep it clean. In those types of moments, I’m willing to accept a few mistakes because the convenience outweighs the risk of confusion.

But if I’m sending something important—like a message with medical information, financial details, or anything time-sensitive or business-related—I’m far less likely to use voice-to-text. Or, if I do, I carefully proofread before hitting send. Accuracy matters more in those moments, and I don’t want a silly error to cause harm.

In my view, AI use is similar. It can be an amazing helper when the stakes are low: getting quick ideas, playing around, or general brainstorming. But when accuracy, reliability, or ethics are on the line, it’s risky to rely on AI without carefully checking the facts. Just like voice-to-text, AI still needs a human touch.

And here’s where the comparison breaks down a bit—because AI’s mistakes don’t always look like mistakes. With voice-to-text, it’s usually obvious when a sentence comes out jumbled. But with AI, the output often looks polished and convincing even when it’s wrong. That makes it even more important to pause, verify, and take responsibility for how we use the tool.

So how do we keep AI helpful without letting it take over?

6 tips for using AI responsibly

It doesn’t look like AI is going away, and when used thoughtfully, it can be a powerful support tool. The challenge is making sure we stay in control of the process instead of outsourcing our judgment to a machine. That means approaching AI with a mix of curiosity, caution, and responsibility.

Here are six ways to keep your generative AI use responsible:

  1. Verify the facts. Double-check names, dates, statistics, and sources. AI is known to “hallucinate” or make things up, and it often sounds confident even when it’s wrong.
  2. Go to the source. If AI summarizes an article or report, click through and read the original material. Don’t rely on the AI’s version alone.
  3. Match the tool to the task. For casual brainstorming, the stakes are low. But for business, academic, or personal decisions that carry weight, slow down and fact-check before using the output.
  4. Watch for bias. AI systems learn from human-created data, which means stereotypes and blind spots can show up in the results. Ask yourself: Whose perspective might be missing here?
  5. Be mindful of privacy. Avoid sharing sensitive personal or business information with AI tools unless you know exactly how the data will be stored and used.
  6. Keep your voice. AI can help with ideas and drafts, but your unique style, perspective, and ethics should always shape the final product.

In the end, responsible AI use is more about the choices we’re making as humans than it is about the technology. If we remember that AI still needs a human touch, we can use it in ways that add value without compromising accuracy, ethics, or our own integrity.

Why verifying before you trust matters

At its heart, this conversation really isn’t just about technology—it’s about voice. My mom’s voice came through even in the silliest, most error-filled messages. I always knew it was her behind those texts, no matter how garbled they looked. That’s the piece we can’t afford to lose with AI.

We each have a voice—our own style, our own perspective, our own values. AI can be a great tool, but it’s not a replacement for who we are. It can help us draft, brainstorm, or find information, but it can’t tell our story for us. If we’re not careful, it’s easy to let its polished answers drown out our own thinking.

Using AI responsibly means more than just fact-checking. It means making sure the final product still sounds like you. It means slowing down enough to question, verify, and decide for yourself what belongs. In other words, it’s about using AI as a partner—not a substitute—for your voice.

If we keep our voice at the center, the technology can enhance what we do instead of erasing what makes us unique. That’s the real opportunity—and the real responsibility—we have as AI becomes part of everyday life. It’s an exciting prospect. 

Of course, that raises another important question: how do we really know what our voice is in the first place? That’s a conversation for another day. But it’s one worth having, because clarity about our own voice is the first step toward making sure AI doesn’t take it away.

In the meantime, I’d love to hear your thoughts. How are you using AI in ways that help—or risk—shaping your voice? Share your ideas in the comments. Let’s talk about it!
