The Wrong vs. Right Way to Use AI

We’re all figuring out how to use AI in daily life. Some of us are enamored with it. Others are still ignoring it, almost three years after the launch of ChatGPT!

Both are mistakes in my eyes.

AI is a tool. As with any tool, the operator makes the difference. Give a carpenter a hammer and you’ll get a house. But if you give me a hammer, I ain’t building anything.

Since late 2022, though, I’ve used AI every single day. I’ve seen what works and what doesn’t.

I’ve also seen where people go off the rails, sometimes in ways that can hurt their careers or even their mental health.

That’s why I believe there’s a wrong way to use AI, and there’s a right way. Let’s get into it.

The wrong way

1. Treating AI like a person

AI is not human. It doesn’t have emotions. It doesn’t understand you. It predicts language patterns based on massive training data.

And yet, because it writes like a person, people start talking to it as if it were one. They vent, they confess, they rely on it for emotional comfort.

That’s not harmless. I’ve read too many stories of people who got addicted to late-night “conversations” with ChatGPT. Some even said they felt like the AI “knew them better than anyone.”

That’s a dangerous illusion. Remember that AI is predicting what a caring friend might say. But it doesn’t care. It doesn’t know you. Treating it like it does will only make you dependent and less connected to real people.

No matter how advanced AI gets, always use it as a tool, not a companion.

2. Taking advice from AI

During the early months of my wife’s pregnancy, she constantly asked ChatGPT about food, exercise, and health. And of course, the answers were over-the-top cautious.

“Don’t eat this. Don’t drink that. Don’t do that.” It was basically saying: avoid life.

That’s what AI often does. It plays it safe. Because it doesn’t know your unique situation. And because it’s been trained to avoid liability.

This is why you can’t let AI act as your doctor, therapist, or boss. It can research options. It can summarize guidelines. It can surface information. But you, the human who’s in charge, make the final call.

3. Believing everything it says

AI is confident, but it’s not always correct. In fact, sometimes it makes up entire answers. The industry calls this “hallucination.” You could just as well call it lying with confidence.

I’ve seen AI give me research papers that don’t exist, statistics that aren’t real, and quotes that were invented. If you just take it at face value, you’ll spread bad information without even knowing it.

That’s why I always double-check. ChatGPT is my main tool, but I also use Gemini and Perplexity. If two or three different AIs give me the same result, I trust it more. If they don’t, I dig deeper.

The rule is simple: be skeptical. Treat AI like an intern who’s eager and fast, but unreliable. You still want to check the work. And even as AI gets more accurate, I think it’s healthy to remain skeptical.

What’s the harm in checking the AI?

The right way

1. Letting it write for you

I’ve gone back and forth on this a lot over the years. I don’t want to outsource the process of thinking to AI. I still want to do my own thinking.

But at the same time, there’s also a lot of writing that I don’t particularly enjoy doing. I like writing articles like this one. But sometimes I’m just not in the right mood. In those cases, I use ChatGPT as follows.

I write a messy draft in the chat box: just raw ideas and my talking points in bullet form. No grammar. No polish.

But I do impose the structure I want, so I start with a beginning, middle, and end in mind. I don’t say, “Write an article on boosting your focus for me.” I don’t see any advantage in letting AI do all the work.

After I write a rough draft, I ask ChatGPT to refine it but keep my language and voice. It gives me a nice little draft. From there, I edit it fully line by line to make sure it’s exactly as I want.

Sure, it saves time. But more importantly, it helps me write the things I wouldn’t necessarily write on a particular day.

Overall, I like using AI for this kind of writing, especially formal emails, lesson descriptions for my courses, and other writing I don’t really enjoy doing.

2. Using it for brainstorming

This one is straightforward, and I think it’s one of the most common ways to use AI. But it’s still worth mentioning.

I really use it daily for this purpose.

I ask for new angles on articles, ways to explain complex ideas, or strategies for my business.

I’ve also used it for personal projects, like interior design. We just bought a new house, so AI is helping a lot with choosing furniture, wall colors, and more.

The point isn’t that AI gives you the answer. It’s that it gives you enough answers to spark something better in your own thinking.

3. Automating workflows

AI isn’t just for ideas. It’s starting to get good at whole workflows.

For example, when I updated my course AI Basics, I used ChatGPT as a project assistant. It researched the topics, outlined the lessons, drafted advice, suggested visuals, and even created the slide decks. All I had to do was edit and refine.

Let me tell you this: the end result was a BETTER slide deck than I would have designed by myself.

That’s where AI is heading. It’s not replacing you, but giving you leverage and improved output.

Enhance your capabilities

The wrong way of using AI makes you dependent, misinformed, and disconnected.

The right way makes you better, faster, and more creative.

The point of AI is to enhance human capabilities.

Use it to write, brainstorm, and automate. But never treat AI like a human. Don’t let it make your decisions. Don’t trust it blindly.