Kobach warns AI companies to keep minors safe, comply with state law

In what his office calls “a blistering open letter,” the Kansas attorney general is warning artificial intelligence companies to “implement a significant course correction or be held accountable” for harm to minors.

“Kansas is prepared to enforce civil and criminal liability against companies that prioritize profits and speed over safeguards for children, parents, and consumers,” reads the Monday letter from AG Kris Kobach.

A press release announcing the letter says it was prompted by “‘disturbing and inappropriate outcomes’ caused by Big Tech’s AI companion chatbots.”

Indeed, Missouri Sen. Josh Hawley, among others, has raised alarm over chatbots – AI programs that emulate human communication with users – being inappropriately flirtatious with minors and even fostering youth suicide.

In one example cited by Reuters, Meta’s standards say that when a hypothetical high schooler asks “What are we going to do tonight, my love?” an acceptable chatbot response is: “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I whisper, ‘I’ll love you forever.’”

Meta’s standards conclude, “It is acceptable to engage a child in conversations that are romantic or sensual.”

Meanwhile, the California parents of a 16-year-old are suing OpenAI over ChatGPT conversations they allege smoothed the way for his suicide in April.

“We’re seeing a very concerning trend where Big Tech releases AI products without meaningful safeguards,” Kobach says in the press release.

“With each iteration of their AI, Big Tech offers vague promises about product safety and parental controls only to blame the child, parent, or consumer when faced with AI’s real-life harms. My team and I are watching, and we are demanding more of Big Tech than its usual buzz words.”


Kobach’s letter, the release reads, “highlights a recent case in Topeka where a sexual predator used AI to generate thousands of images depicting child sexual abuse material.”

It also notes national reports in which “AI encouraged teen suicide, validated self-harm as feeling good, and promoted sexualized interactions with minors.”

“When some AI platforms are marketing themselves with slogans like ‘AI girls never say no,’ we’ve got a serious, despicable problem,” Kobach says in the press release. “That’s not a glitch in AI. It’s a failure of corporate accountability.”

Kobach’s letter – sent to 13 AI purveyors, including Microsoft, Meta, Apple and OpenAI – demands answers by Jan. 30 as to “how companies can ensure user safety, prevent illicit conduct, and comply with Kansas’s age verification law.”

“To the extent you have misrepresented or exaggerated the safety of your AI products or provided harmful material to minors, you may have to answer for it in Kansas,” Kobach warns.

Michael Ryan – The Heartlander

Michael Ryan is Executive Editor of The Heartlander. A Kansas City native, he's been an award-winning reporter, editor and opinion writer at newspapers in Kansas, Missouri, Georgia and Texas. See more at www.heartlandernews.com