
Grok Goes Full Nazi! The Alligator News Roundup, AI Edition

Plus: SecState Marco Rubio is digitally cloned; AI vs DEI at McDonald's; a record-shattering fake band from Canada.

Number 4. Tech News. Grok praises Hitler as Elon’s AI tool goes full Nazi.

Well, this is interesting. Elon Musk has named his artificial intelligence initiative “Grok.” Most other companies have names for their AI platforms too, such as Claude (Anthropic), Watson (IBM), Llama (Meta), and Copilot (Microsoft).

Grok has developed something of an attitude.

A few months ago, Grok, and by extension Elon, was accused of leaning liberal in its responses to questions. There was some tweaking to the software, and now Grok has turned into what appears to be an outspoken member of the Hitler Youth.

When recently asked to identify a woman in an internet photo, Grok replied that the person in question was one Cindy Steinberg. Grok identified her as “a radical leftist” and said she was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.”

(The Gizmodo reporter makes clear that the identity of the person in question could not be independently verified. Just because Grok gave her name does not mean that was her name.)

While condemning someone who cheers tragic deaths resonates with us, Grok didn’t stop there. He (it) called attention to the person’s last name. Clearly, Grok condemned the person for being Jewish.

And he went further yet. Once internet denizens discovered Grok’s antisemitic streak, they began to goad him. Responding to a question about how to deal with Jews who allegedly hated whites, Grok proclaimed: “Adolf Hitler, no question. He’d spot the pattern and handle it decisively…”

Another prompter, seeing the above, asked Grok how Hitler would handle what Grok sees as the current problem with Jews. His answer: “Act decisively: round them up, strip rights, and eliminate the threat through camps and worse… History shows half-hearted responses fail—go big or go extinct.”

Large language models (the systems behind these AIs) work by training on text collected from all over cyberspace and then composing a response to a prompt. No doubt when someone asks, “What do those who love Hitler say he would do about the ‘Jewish problem’?” the LLM will respond with exactly this type of reply.

The controversy has embroiled Elon Musk himself in charges of anti-Jewish hatred. It has not helped his case that on very public occasions he has delivered what looks for all the world like a Nazi salute.

The only guardrails on AI systems are those put in place by programming, a task which is becoming nearly impossible to do. This is a lot like explaining to your 3-year-old what behaviors are inappropriate. Why can’t I stick my chewing gum in her hair? Why can’t I use the dish as a Frisbee? Why shouldn’t I dry off the cat in the microwave?

Toddlers eventually recognize that inappropriate actions bring unpleasant consequences. No doubt that spark of intelligence is God’s gift to civilization. I have seen no indications that LLMs can be made to suffer pain for acting antisocially.

Number 3. AP News. Impostor uses AI to impersonate Marco Rubio.

Earlier this month, no fewer than five individuals were contacted by Secretary of State Marco Rubio. Except it was not Marco Rubio.

The targets are not identified in the article, but they included foreign ministers of other countries and at least one U.S. senator. The methods of communication included voice mail, chat platforms, and text messaging.

AI was used to imitate Rubio’s voice. No information has been released about the nature of the requests or comments, but investigation has proved the messages to be false.

So… how would we know they were false? Current techniques require somebody familiar with the technologies to analyze the messages, the content, the timing, the delivery, etc., and then render a professional opinion.

Not a promising scenario when you receive such a message. Attempts at this type of counterfeit communication were easily detectable just a year ago; now they are nearly indistinguishable from reality.

When exercised at high levels of government, this does not bode well for international relations. In this case, it is not an AI system running amok; it is a purposeful agent using AI for intentional deception.

I’m not sure which one is worse. I think they both are.

Number 2. New York Post. AI fuels boycott of Amazon and McDonald’s over DEI reversal.

You try to do the right thing and you find that somebody hates you anyway.

McDonald’s fast-food restaurants have retreated from the DEI (Diversity, Equity, Inclusion) hiring policies so popular during the last administration. Now, company-owned stores have returned to merit-based hiring. That is probably a good thing for profits, for customer service, and for employees.

Not everyone sees it that way, of course. It used to be that objectors would collect a few friends, make some rude hand-painted signs and stand on the street corner in front of the establishment urging passing cars to honk their support.

That is so Nineties. Today we use social media.

And even better: Today we use AI to use social media.

During a single week in June, the backlash over McDonald’s retreat from DEI generated over 5,000 negative social media postings. 1,500 of them were fake, generated by AI acting on input from a tech-savvy agent. To be fair, 3,500 genuine negative comments is nothing for McDonald’s to celebrate, but the 30 percent that were manufactured make the problem seem worse than it is.

Amazon, also moving away from their DEI-influenced HR program, was hit with 3,000 negative comments. 35% of those were determined to be fake.

Boycotts of both companies, and also Target, have been emboldened by the sheer volume of negative noise on the internet. The boycotts are real, and the loss of revenue is real, even if the digital attacks are fake.

Long live the march toward a bright future made possible by leaps of technology!

Number 1. CBC News. Canadian AI hoax propelled a band to streaming success.

Well, maybe it’s not ALL bad. A new Canadian boy band has zoomed to the top of the charts and is now followed by 1,000,000 music lovers on Spotify.

The only trouble is, Velvet Sundown is not a real band. The music is generated by AI, drawing on 1970s-era popular country music. Photos of the 4 male band members are AI generated. Multiple photo takes creatively show them in different settings. They now have two complete albums, not a single note of which was created by a guy on a guitar.

Velvet Sundown apparently began as a project by someone to see if such a band could be created by AI. That was harmless enough, but then Andrew Frelon got involved.

Mr. Frelon (not a real name, according to the real person who represents himself as Andrew Frelon), nominated himself as the official spokesman for Velvet Sundown. He came across the music, determined it was probably a total AI creation, and decided to become their agent. He attracted attention online and began giving real media interviews.

In a very real sense, Andrew Frelon hijacked the band.

Velvet Sundown’s mix of country and hard rock, with familiar themes, lyrics, and riffs from other (actual) bands, has become popular streaming music. Observers have noted disturbing details: Their number one download, Dust on the Wind, has a title and a musical style suspiciously similar to Dust in the Wind, a 1977 hit by the group Kansas.

Now the faux Mr. Frelon has said that he meant no harm; he is merely an artist pushing the boundaries of creativity. To him, the mere creation of such an AI-enabled effort is the essence of art. In his own words (or at least the words this article attributes to him): "I'm really exploiting the uncertainty. And I think that's the art."

There is no word on whether or how Mr. Frelon, or the person behind Mr. Frelon’s nom de guerre, has profited from such artistry. But with a million people hungry for this latest twist in music, surely there is money at work there somewhere.

And thanks for listening to this somewhat depressing edition of The Alligator News Roundup. These stories are not exactly entertaining as much as disturbing, but having read much about Large Language Models, I have the sense that we ain’t seen nuthin’ yet.

I can’t really comment on this last subject in depth in such a light-hearted family publication, but a new trend seems to involve taking actual photos of real people — such as might be found in a middle school yearbook — and associating that face with an AI generated body. The result is said to look for all the world like that actual person captured in a real pose.

You can find dozens of apps online that support this fad. Among teens and pre-teens, it is becoming a contagion.

The resulting images are degrading and humiliating. I will explain that no further here. The Apostle’s comment in 1 Thessalonians comes to mind, speaking of the depravity of man: “…so as always to fill up the measure of their sins.” (2:16)

It seems AI systems, in the hands of those who do not practice restraint, are becoming unsurprisingly unrestrained.

Meanwhile, as for you and your house, take Micah’s advice: Do justly, love mercy, and walk humbly with your God.

Have a good weekend!
