Human review skills are essential for successful, safe AI development.

It’s Monday! Time to do some spring cleaning, IT style. Empty that Recycle Bin, clear the desktop, uninstall those apps you haven’t opened since 2020, and, for crying out loud, get under your desk and finally pick up those three broken earbuds.

In today’s edition:

Check, please!

Run CMMC

Anthropic and choose

—Billy Hurley, Eoin Higgins, Caroline Nihill

SOFTWARE


Once upon a time, the term “humans in the loop” meant people stuck on a roller-coaster.

In the age of automation, however, the idea of humans and loops appears frequently in conversations about guardrails for AI, a technology with lots of potential to cause havoc if unsupervised.

AI-powered decisions call for an accuracy-checking presence, a human reviewer who can prevent potential disaster stemming from a wrong number or poorly generated word choice. The desired candidate isn’t just someone who can give a thumbs-up to whatever appears on a dashboard.

We spoke with AI pros about how tech practitioners can set themselves up with the necessary skills to become a good human in the loop (HITL).

Like how to “push back” on a prompt.—BH


IT STRATEGY


It’s here, it’s real, and it’s not necessarily your friend.

That’s how some smaller suppliers are treating the Department of Defense (DOD) Cybersecurity Maturity Model Certification (CMMC) requirements. Enforcement is now tied to the outcome of third-party cybersecurity audits—prior to the new rule change, companies were expected to self-assess compliance—which has boosted the difficulty of meeting security standards.

According to Rob McCormick, CEO of cloud computing company Avatara, there’s little chance that the federal government will bend on the requirements, making compliance a necessity to win and fulfill government defense contracts.

“We’ve been involved in some processes where we’re trying to get the government to give people a little bit of leeway, and they’ve been unyielding on it,” McCormick said. “They’re taking it seriously.”

Which leads to serious demands, like a third-party audit.—EH

IT STRATEGY


Anthropic is locked in a fight with the Department of Defense about how its AI products can be used by the military. Could that highly publicized battle impact other industries’ use of AI?

After Anthropic told the Department of Defense (referred to by the Trump administration as the Department of War) that it didn’t want its AI used for “mass domestic surveillance” or to power “fully autonomous weapons,” Defense Secretary Pete Hegseth designated the company as a “supply chain risk,” which would prevent it from securing US government contracts.

An amicus brief filed in support of Anthropic and signed by former federal judges argued that the Pentagon “misinterpreted the statute and violated the necessary procedures” when making this designation.

Why this highlights the importance of a multi-model strategy.—CN


PATCH NOTES


Today’s top IT reads.

Stat: 50 years. That’s how long ago employee #8 Chris Espinosa joined Apple. And he’s still there. (The New York Times)

Quote: “What’s the point of me reading it if it’s already correct anyway, and you didn’t write it yourself?”—Grit Matthias Phelps, Cornell University German professor, who has an idea to counter GenAI usage in student writing: the typewriter (Associated Press)

Read: How to give your tired PC some new life. (PCWorld)


Copyright © 2026 Morning Brew Inc. All rights reserved.
22 W 19th St, 4th Floor, New York, NY 10011