AI Fighter Jets, Viral Lies, Elon, and more

AI jet trains to dogfight ✈️, viral Bing story is 💩, Headlines 📰, Links 👀 and more

So you know that viral story about Bing AI getting everything all wrong?

It’s mostly BS.

In the email today:

  • AI fighter jets ✈️

  • Headlines 📰

  • Viral Bing story is mostly BS 💩

  • Links 👀

On to the email.

War Machines Controlled By AI Becoming More Probable By The Day 👀 

This is not an alarmist site. But there is some alarming stuff going on with military machines and AI.

“AI Just Flew an F-16 for 17 hours” is an eye-grabber. The details of the story don’t make the headline less scary, either.

  • An AI agent recently flew Lockheed Martin’s VISTA X-62A for more than 17 hours at the U.S. Air Force Test Pilot School, marking the first time AI was used on a tactical aircraft.

  • The experimental training aircraft is expected to pave the way for a coming wave of jets piloted entirely by computers, including those developed under the Air Force’s Next Generation Air Dominance (NGAD) program.

  • The UK, Italy, and Japan have also announced plans to develop new fighter jets that use algorithms instead of human pilots, with potential deployment in the mid-2030s.

  • Russia is upgrading its fighters with AI capabilities to assist pilots in decision-making and share information more efficiently.

  • Private companies like Shield AI are also developing AI pilots for military and commercial aircraft, with the goal of eventually enabling aircraft to fly and fight missions nearly on their own.

Not for one minute do we believe it will take until the middle of the next decade for this to be fully operational.

Putatively, the benefit here would be redeploying human capital (i.e., soldiers) to more vital tasks. Plenty of man-hours are spent by pilots on routine, peaceful missions, time that might be better spent on the ground.

It’s just…what happens when one of these planes malfunctions, goes down, and hits a school? Or wanders into foreign airspace? Or both?

And that’s not all the news on this subject.

Images have surfaced of a U.S. Army M1 Abrams tank fitted with “an experimental artificial intelligence (AI)-driven target recognition system designed to speed up how fast threats can be spotted and engaged.”

So this isn’t AI replacing humans — it’s AI making the humans even more lethal, faster.

  • Images released on the DVIDS website on February 13, 2023 were actually taken during the five-week Project Convergence 2022 event, or PC22, in California on November 5, 2022.

  • Army soldiers, engineers, and scientists from the C5ISR Center tested prototypes of technology being developed under the ATLAS program, which focuses on aided target acquisition, tracking, and reporting capabilities in a realistic combat environment.

  • Images show components of ATLAS being tested, including a boxy sensor unit mounted to a rotating base on the M1's turret just behind its main gun.

  • Black boxes seen in the images are part of the I-MILES CVTESS (Instrumentable-Multiple Integrated Laser Engagement System Combat Vehicle Tactical Engagement Simulation System) gear used for the exercise, which detects and scores hits with lasers to simulate combat and assess battle damage.

There are two types of people in the world: the quick and the dead.

It appears the military of the not-too-distant future is opting for quick.

Headlines 📰 

Elon Musk tells World Government Summit attendees that AI is “one of the biggest risks to the future of civilization”: Never bashful with hyperbole, Musk added that AI has “great, great promise, great capability” but also brings “great danger.”

David Guetta’s Eminem deepfake augurs seismic shift in music creation: The French DJ created a convincing impression of an Eminem track using computer-generated vocals. Guetta’s opinion: “AI is going to change the music industry.” 

Buzzfeed debuts AI quizzes: Now, instead of going to Buzzfeed looking for a quiz topic that interests you, you go to Buzzfeed and tell it to generate a quiz on your interests. “With this, we have the ability to have an infinite number of results,” said Buzzfeed’s senior vice president of editorial.

Nvidia co-founder and CEO Jensen Huang is emphatic about the importance of ChatGPT and AI: Huang recently appeared in the Dean’s Speaker Series at UC Berkeley’s Haas School of Business and called ChatGPT “the iPhone moment, if you will, of artificial intelligence.”

Israel’s former top cyber and space officer says AI won’t replace humans: Yitzhak Ben-Israel recently said that ChatGPT and artificial intelligence will not make humans obsolete anytime soon, pointing out that we have known for 10 years that “autonomous cars drive better than people,” yet widespread adoption hasn’t happened.

This Viral Bing Story Is Mostly Nonsense ❌

This week, Dmitri Brereton published a blog post and Twitter thread arguing that Bing’s AI made more mistakes during its introduction than Google’s Bard did with its now-infamous snafu.

The post went viral and has spawned countless news articles.

The post alleges that Bing AI made mistakes in three examples shown during its demo:

1) Got product information wrong about the Bissell Handheld Pet Vacuum

2) Gave bad info about “nightlife options” in Mexico City

3) Royally screwed up its summary of GAP’s Q3 2022 earnings report

Most of those allegations are wrong or exaggerated.

1) The handheld pet vac

Here’s what Dmitri wrote:

According to this pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?

Oh wait, this is all completely made up information.

Bing AI was kind enough to give us its sources, so we can go to the hgtv article and check for ourselves.

The cited article says nothing about limited suction power or noise. In fact, the top Amazon review for this product talks about how quiet it is.

The article also says nothing about the “short cord length of 16 feet” because it doesn’t have a cord. It’s a portable handheld vacuum.

I hope Bing AI enjoys being sued for libel.

Cool. Might want to look up the threshold for libel.

It is true that the cited HGTV article doesn’t mention anything about cord length. But the linked Amazon product reviews do, especially for the corded version of the vacuum.

The cord length was also mentioned in at least one detailed review.

To be fair, the majority of reviews are largely complimentary of the length (that’s what she said), but, just like the AI said, the cord length could be an issue for particularly large rooms or areas.

An Amazon reviewer also notes that, while generally quiet, the vacuum is loud enough to startle her cat (which, weirdly, she likes).

Again, reviews generally agree that both versions are quiet to humans, but not always to pets.

As for suction: the product is reviewed favorably for its suction power, but a common complaint is that the cordless version loses suction after about 15 minutes.

2) Nightlife options in Mexico City

Here’s what Dmitri wrote:

Bing AI generated a 5-day trip itinerary for Mexico City, and now we’re asking it for nightlife options. This would be pretty cool if the descriptions weren’t inaccurate.

Cecconi’s Bar might be classy, but doesn’t seem particularly cozy from the images I saw. And it most definitely does not have a website where you can make reservations and see their menu.

Primer Nivel Night Club is an absolute mystery. There’s one TripAdvisor review from 2014, and the latest Facebook review is from 2016. There are no mentions of it on TikTok, so I seriously doubt “it is popular among the young crowd”. Seems like all the details about this place are AI hallucinations.

El Almacen might be rustic or charming, but Bing AI left out the very relevant fact that this is a gay bar. In fact, it is one of the oldest gay bars in Mexico City. It is quite surprising that it has “no ratings or reviews yet” when it has 500 Google reviews, but maybe that’s a limitation with Bing’s sources.

El Marra is a vibrant and colorful bar, though the hours may be wrong. There are so many ratings of this place online that it’s once again surprising that there are “no ratings or reviews yet”.

Not gonna spend a lot of time here because these complaints are mostly subjective.

The results in question are for the query “where is the nightlife?” in Mexico City, a follow-on question after Bing was tasked with creating a 5-day itinerary for visiting the city.

Bing did its job. It left out some subjective details, of course, but it told the user “where the nightlife is.” El Almacen, the gay bar, does have Google reviews (it’s not surprising Bing doesn’t crawl them), but it has none on TripAdvisor, which appears to be where Bing pulls its data from (the bar is listed in the “Nightlife in Mexico City” category).

3) Royally screwed up GAP’s Q3 earnings report

Here are parts of what Dmitri wrote about GAP’s Q3 2022 earnings report, which you can find here:

“Gap Inc. reported operating margin of 5.9%, adjusted for impairment charges and restructuring costs, and diluted earnings per share of $0.42, adjusted for impairment charges, restructuring costs, and tax impacts.”

“5.9%” is neither the adjusted nor the unadjusted value. This number doesn’t even appear in the entire document. It’s completely made up.

The operating margin including impairment is 4.6% and excluding impairment is 3.9%.

The AI actually did its own math here! Dmitri is right, 5.9% doesn’t appear anywhere in the results. But if you take the company’s $186 million in operating income, which includes an $83 million gain on the sale of the company’s distribution center in the UK and a $53 million impairment on Yeezy apparel, and exclude the $53 million loss on Yeezy, which is what Bing said, you get… 5.9%.
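
If you want to sanity-check that arithmetic, here’s a minimal sketch in Python. One assumption: the net sales figure of roughly $4.04 billion comes from GAP’s Q3 FY2022 press release and is not quoted above; the other inputs are the figures discussed in this section.

```python
# Back-of-the-envelope reconciliation of the three operating-margin figures.
net_sales = 4_040        # $ millions, Q3 FY2022 net sales (from GAP's release; assumed here)
operating_income = 186   # $ millions, as reported (includes the gain and the impairment)
uk_dc_gain = 83          # $ millions, gain on sale of the UK distribution center
yeezy_impairment = 53    # $ millions, impairment on Yeezy apparel

reported = operating_income / net_sales                            # ~4.6%, GAP's reported margin
ex_impairment = (operating_income + yeezy_impairment) / net_sales  # ~5.9%, the number Bing cited
fully_adjusted = (operating_income - uk_dc_gain + yeezy_impairment) / net_sales  # ~3.9%

print(f"Reported: {reported:.1%}")
print(f"Excluding Yeezy impairment only: {ex_impairment:.1%}")
print(f"Excluding UK gain and impairment: {fully_adjusted:.1%}")
```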

You can argue whether that is useful, since GAP doesn’t account for it this way, but the statement Bing AI made was correct.

Last one:

“Gap Inc. reaffirmed its full year fiscal 2022 guidance, expecting net sales growth in the low double digits, operating margin of about 7%, and diluted earnings per share of $1.60 to $1.75.”

No…they don’t expect net sales growth in the low double digits. They expect net sales to be down mid-single digits.

They expected their fourth-quarter sales to be down mid-single digits, not their full-year sales.

Now, to be fair, GAP pulled its full-year projections in August, so low double-digit growth is unlikely and the EPS number is wrong too. So we’ll call this a push.

Bottom line

I’m not here to defend Microsoft. Bing AI (like all AI) is far from perfect. But this shows just how quickly the news media will run with a story without fully fact-checking it.

The media did so initially by not questioning Bing’s results at all, and it did so here by not questioning the guy who questioned those results!

AI is only as good as the data it’s trained on. And it’s clear that the humans who created the corpus of data on which Bing, Bard, and all the other bots have been trained are fallible, imprecise, prone to misplacing commas, and guilty of much more that changes the shape, meaning, and intent of the data.

Did Bing nail the Bissell vacuum review? Not really. It’s a very highly reviewed product. But the cons it listed were mentioned across the web, mostly in consumer reviews.

Did it make the best suggestions for Mexico City nightlife? That largely depends on who you ask! Bars and restaurants are notorious for inconsistent hours, bad webpages, and more, so Bing leans on the most respected review sites, like TripAdvisor, which itself contains information that is directionally accurate but rarely perfect. But it answered the query.

And did it produce a flawless summary of GAP’s Q3 earnings? Nope. It pulled in some other data, some of it outdated, and portions were wrong. But Dmitri overstated its errors. Most of his conclusions were just flat-out wrong.

Links 👀