AI Privacy, Fake Reviews, and Outputs: Your Essential Check-In Guide

Published by Dan on

Film noir style illustration showing detective examining AI data with magnifying glass representing verification and privacy protection

I have little interest in drinking from the firehose of artificial intelligence news.

Instead, I keep my ears open for signals in the noise. What information feels so counterintuitive that I have to stop and read it two or three times to understand what may be at play, or what people aren’t talking about?

Today let’s dig into three “Check-ins” for some insight and learning about the state of AI away from the glare of venture capital-fueled headlines. No “A Rounds” here. Just three tales to help you more confidently navigate these turbulent times.

Plus, you can check out my new 10-Minute AI Privacy Audit.

Verify. Always Verify.

From Oregon we learn that some people still don’t check information generated by AI, and one lawyer will pay dearly for it.

For submitting false information to the court, a judge ordered Portland civil attorney Gabriel A. Watson to “pay $2,000 to the state judicial department, charging him $500 for each baloney citation and $1,000 for the bogus quote” according to an OregonLive article.

What did the attorney do? As outlined in an earlier article, the attorney “referenced a 2021 case called ‘Stell v. Cardenas,’ but Simon quickly identified it as a ‘hallucinated’ case, with no parties named Stell or Cardenas in any case in the ‘District of Oregon or elsewhere.’”

In other words, Attorney Watson failed to heed the words of warning displayed on every AI platform.

Claude AI interface showing warning message to double-check AI responses for accuracy

Many people use AI, and more use it every day. According to a study by the Federal Reserve Bank of St. Louis, more than half of U.S. adults between the ages of 18 and 64 use generative AI. That usage grew by 10% in one year!

You must always check the outputs. There is no substitute for this oversight because the technology continues to make things up, as the Oregon attorney learned the hard way.

The next time you start to copy and paste something AI produces for you, ask yourself if you would bet $2,000 it is accurate.

Don’t gamble your reputation on blind trust. Check the facts.

Maybe Don’t Believe That Five-Star Review

You know those five-star Amazon reviews you’re reading right now for last-minute holiday gifts? Up to a third of them might be fake.

Although AI is designed to prioritize reliable sources over sketchier information, we still prefer to trust our guts when deciding what to purchase. And many of us are really bad at detecting fake reviews: according to one Cornell University study, humans were able to detect a fake product review only about 50.8% of the time.

They may as well have flipped a coin.

You’d do well to use AI to research that gift. It will certainly search better than you can, and it may do a better job of surfacing a higher-quality product.

Let AI’s strengths cover your human weaknesses.

What You Pay in Privacy by Using AI

When you do search, protect your data, especially if you use Gemini.

Until preparing for a client meeting last week, I didn’t realize there was a key distinction in how major AI companies handle data. What I found made me want to tell you right away.

Google Gemini does not let a user opt out of letting the model be trained with their data while also retaining memory. If you opt out of training, you lose access to past chats.

Anthropic’s Claude and OpenAI’s ChatGPT both allow a free user to opt out of letting the model be trained on their data while also remembering their conversations. That memory is important because it enriches later conversations.

These differences carry through to each model’s paid versions as well.

For ChatGPT Plus and Claude Pro, model training is turned off by default. Gemini Advanced turns model training on by default and, as with its free version, does not allow a user to opt out of model training and retain conversation history.

Why the difference? Google’s business model depends on data more than OpenAI’s or Anthropic’s. It’s just Google being Google, as I’ve written about before.

These may not be deal breakers for you. You may not even have opted out of model training, but you should, because when you lose control of your data, there are real risks at play, including:

  • LLMs can unintentionally expose sensitive data: Models can memorize or reproduce parts of their training data, creating risks of privacy leakage especially for personal, proprietary, or regulated information.
  • Once data is used for training, control is effectively lost: Training data cannot be selectively removed or “forgotten.”

I am concerned enough about what could be done with my personal information that I am extremely cautious about what I do and do not share with AI. That extends to using masked email addresses for my accounts and opting out of allowing my data to be used for training whenever possible.

Check In to Safeguard Your Data or Go Deeper

Notice the pattern?

The attorney didn’t check his output. We don’t check where our product info comes from. Most of us never check our privacy settings.

We trust AI blindly or avoid it entirely out of fear. Both approaches are wrong.

The right move is to check in. Every time. Check your outputs, your sources, and your settings.

To help you, I put together this free 10-minute AI Privacy Audit checklist. It will help you navigate the tricky permissions landscape.

Click through here, provide an email address, and download the checklist.

Why an email address? I want to notify you when the checklist changes. That’s it.

If you do want a hand with this, though, and are ready to start using AI with confidence, my AI Trainer service is for you. It will help you build these check-in habits while unlocking AI’s real potential for your work and life without the overwhelm or privacy risks.

Book your free orientation call now and let’s figure out your next steps together.

Categories: AIPrivacy
