- Smoking Robot
AI Fighter Jets, Viral Lies, Elon, and more
AI jet trains to dogfight ✈️, viral Bing story is 💩, Headlines 📰, Links 👀 and more
So you know that viral story about Bing AI getting everything all wrong?
It’s mostly BS.
In the email today:
AI fighter jets ✈️
Viral Bing story is mostly BS 💩
On to the email.
War Machines Controlled By AI Becoming More Probable By The Day 👀
This is not an alarmist site. But there is some alarming stuff going on with military machines and AI.
“AI Just Flew an F-16 for 17 hours” is an eye-grabber. The details of the story don’t make the headline less scary, either.
An AI agent recently flew Lockheed Martin’s VISTA X-62A for more than 17 hours at the U.S. Air Force Test Pilot School, marking the first time AI was used on a tactical aircraft.
The experimental training aircraft is expected to pave the way for a coming wave of jets piloted entirely by computers, including the Air Force’s Next Generation Air Dominance (NGAD) program.
The UK, Italy, and Japan have also announced plans to develop new fighter jets that use algorithms instead of human pilots, with potential deployment in the mid-2030s.
Russia is upgrading its fighters with AI capabilities to assist pilots in decision-making and share information more efficiently.
Private companies like Shield AI are also developing AI pilots for military and commercial aircraft, with the goal of eventually enabling aircraft to fly and fight missions nearly on their own.
Not for one minute do we believe it will take until the middle of the next decade for this to be fully operational.
Putatively, the benefit here would be redeploying human capital (i.e., soldiers) to more vital tasks. Plenty of man-hours are spent by pilots on routine, peaceful missions where the time might be better spent on the ground.
It’s just…what happens when one of these planes malfunctions, goes down and hits a school? Or wanders into foreign air space? Or both?
And that’s not all the news on this subject.
Images have surfaced of a U.S. Army M1 Abrams tank which was armed with “an experimental artificial intelligence (AI)-driven target recognition system designed to speed up how fast threats can be spotted and engaged.”
So this isn’t AI replacing humans — it’s AI making the humans even more lethal, faster.
Images released on the DVIDS website on February 13, 2023 were actually taken during the five-week Project Convergence 2022 event, or PC22, in California on November 5, 2022.
Army soldiers, engineers, and scientists from the C5ISR Center tested prototypes of technology being developed under the ATLAS program, which focuses on aided target acquisition, tracking, and reporting capabilities in a realistic combat environment.
Images show components of ATLAS being tested, including a boxy sensor unit mounted to a rotating base on the M1's turret just behind its main gun.
Black boxes seen in the images are part of the I-MILES CVTESS for the exercise, used to detect and score hits using lasers to simulate combat and assess battle damage.
There are two types of people in the world: the quick and the dead.
It appears the military of the not-too-distant future is opting for quick.
Elon Musk tells World Government Summit attendees that AI is “one of the biggest risks to the future of civilization”: Never bashful with hyperbole, Musk added that AI has “great, great promise, great capability” but also brings “great danger.”
David Guetta’s Eminem deepfake augurs seismic shift in music creation: The French DJ created a convincing impression of an Eminem track using computer-generated vocals. Guetta’s opinion: “AI is going to change the music industry.”
Buzzfeed debuts AI quizzes: Now, instead of you going to Buzzfeed looking for a quiz topic that interests you, you go to Buzzfeed and tell it to generate a quiz on your interests. “With this, we have the ability to have an infinite number of results,” said Buzzfeed’s senior Vice President of editorial.
Nvidia co-founder and CEO Jensen Huang is emphatic about the importance of ChatGPT and AI: Huang recently appeared at the Dean's Speaker Series at UC Berkeley's Haas School of Business and called ChatGPT “the iPhone moment, if you will, of artificial intelligence.”
Israel’s former top cyber and space officer says AI won’t replace humans: Yitzhak Ben-Israel recently said that ChatGPT and artificial intelligence will not make humans obsolete anytime soon, pointing out we have known for 10 years that “autonomous cars drive better than people,” yet widespread adoption hasn’t happened.
This Viral Bing Story Is Mostly Nonsense ❌
The post, by engineer Dmitri Brereton, went viral and has spawned countless news articles. Just look:
It alleges that Bing AI made mistakes in three examples shown during its demo:
1) Got product information wrong about the Bissell Handheld Pet Vacuum
2) Gave bad info about “nightlife options” in Mexico City
3) Royally screwed up its summary of GAP’s Q3 2022 earnings report
Most of those allegations are wrong, or exaggerated.
1) The handheld pet vac
Here’s what Dmitri wrote:
Cool. Might want to look up the threshold for libel.
It is true that the cited HGTV article doesn’t mention anything about cord length. But the linked Amazon product reviews do, especially for the corded version of the vacuum.
It was also mentioned in this detailed review:
To be fair, the majority of reviews are largely complimentary of the length (that’s what she said), but, just like the AI said, the cord length could be an issue for particularly large rooms or areas.
An Amazon reviewer also notes that, while generally quiet, the vacuum is loud enough to startle her cat, which she, weirdly, likes?
So does this review for the cordless version:
Again, reviews generally agree that both versions are quiet to humans, but not always to pets.
As for the part about suction: the product is reviewed favorably for its ability to suck, but a common complaint is that the cordless version loses suction power after about 15 minutes.
2) Nightlife options in Mexico City
Here’s what Dmitri wrote:
Not gonna spend a lot of time here because these are mostly subjective.
The results shown here are for the query “where is the nightlife?” in Mexico City, a follow-on question after Bing was tasked with creating a 5-day itinerary to visit the city.
Bing did its job. It of course left out very subjective details, but it told the user “where the nightlife is.” El Almacen, the gay bar, does have Google reviews (unsurprisingly, Bing doesn’t crawl those), but it has none on Trip Advisor, which is where Bing seems to be pulling its data from (the bar is listed in the “Nightlife in Mexico City” category).
3) Royally screwed up GAP’s Q3 earnings report
Here are parts of what Dmitri wrote about GAP’s Q3 2022 earnings report, which you can find here:
The AI actually did its own math here! Dmitri is right that 5.9% doesn’t appear anywhere in the results. But take the company’s $186 million in operating income, which includes an $83 million gain on the sale of the company’s distribution center in the UK and a $53 million impairment charge on Yeezy apparel; add back that $53 million Yeezy loss, which is what Bing said it did, and divide by the quarter’s net sales, and you get… 5.9%.
You can argue whether that is useful, since GAP doesn’t account for it this way, but the statement Bing AI made was correct.
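The margin arithmetic is easy to check. Note that the net-sales denominator below is not stated in this post; roughly $4.04 billion is the figure from GAP’s Q3 FY2022 report, so treat it as an assumption:

```python
# Back-of-the-envelope check of Bing's 5.9% figure.
net_sales = 4_040          # $M, Q3 FY2022 net sales (assumption, from GAP's report)
operating_income = 186     # $M, as reported, includes the impairment charge
yeezy_impairment = 53      # $M Yeezy impairment charge

# Margin as reported, the way Dmitri read it
reported_margin = operating_income / net_sales

# Margin with the impairment added back, the way Bing apparently computed it
adjusted_margin = (operating_income + yeezy_impairment) / net_sales

print(f"reported: {reported_margin:.1%}, adjusted: {adjusted_margin:.1%}")
# → reported: 4.6%, adjusted: 5.9%
```

So both readings are internally consistent; the disagreement is over which denominator adjustment counts as “correct,” not over the arithmetic.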
GAP expected its fourth-quarter sales, not its full-year sales, to be down in the mid-single digits.
Now, to be fair, GAP pulled its full-year projections in August, so low double-digit growth is unlikely and the EPS number is therefore wrong. So we’ll call this a push.
I’m not here to defend Microsoft. Bing AI (all AI) is far from perfect. But this shows just how quickly the news media will run with a story without fully fact-checking it.
It did so initially by not questioning Bing’s results at all, and it did so here by not questioning the guy who questioned its results!
AI is only as good as the data it’s trained on. And it’s clear that the humans who created the corpus of data upon which Bing, Bard and all the other bots have been trained are fallible, imprecise, prone to misplacing commas, and apt to do much more that changes the shape, meaning, and intent of the data.
Did Bing nail the Bissell vacuum review? Not really. It’s a very highly reviewed product. But the cons it listed were mentioned across the web, mostly by consumer reviews.
Did it make the best suggestions for Mexico City nightlife? That largely depends on who you ask! Bars and restaurants are also notorious for having inconsistent hours, bad webpages, and more, so Bing is leaning on the most respected review sites, like Trip Advisor, which itself contains information that is directionally accurate but rarely perfect. But it answered the query.
And did it produce a flawless summary of GAP’s Q3 earnings? Nope. It pulled in some other data, some of it outdated, and portions were wrong. But Dmitri overstated its errors. Most of his conclusions were just flat-out wrong.
More slightly disconcerting military/AI news: The Navy is partnering with Qualcomm “to explore 5G, artificial intelligence and cloud computing” ⚓️
Maryland researchers have joined the battle against bias and discrimination in AI 🤝
Dating, for some of us anyway, was never easy. AI is poised to make it easier, but also duller 💘
We don’t generally trade in listicles, but some of the observations about what pop culture is missing with AI resonated with us 👍️
Some teachers, seeing that they cannot beat ChatGPT, want ChatGPT to join them in the classroom 🖥️