🦙 Meta's Llama on the Loose

Meta's AI leaks, Microsoft's new vision AI, Headlines, Links, and more

ZUCKERBERG’S LLAMA IS ON THE LOOSE AND TERRORIZING CIVILIANS! IT WAS LAST SEEN WEARING RED PAJAMAS! APPROACH WITH CAUTION AND A SUPER POWERFUL GPU FARM.

Did we get that right?

In the email today:

  • Meta’s AI leaked online 🦙

  • Microsoft announces vision AI 🇮🇹

  • Headlines 📰

  • Links 👀

Onward.

Meta’s AI Reportedly Leaked Online and That Could Be Bad 🦙

Last week we described how Meta entered the AI fray with the creation of its own large language model, LLaMA.

The blog post that announced LLaMA repeatedly emphasized Meta’s desire to be a leader in prudent deployment of such powerful technology. Here were the two big “let’s be careful out there” quotes from that post:

  • “To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license focused on research use cases…to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.”

  • “We believe that the entire AI community…must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular.”

As always, the road to hell is paved with good intentions.

The model weights for LLaMA have purportedly leaked online through 4chan and are now available for anyone to download.

Here’s what that means:

  • This would be the first time a major tech firm's proprietary AI model has been leaked to the public.

  • LLaMA is similar to other AI models like OpenAI's GPT-3 and was trained on a massive collection of text tokens.

  • LLaMA comes in multiple sizes, ranging from 7 billion to 65 billion parameters, with the largest version trained on 1.4 trillion tokens scraped from various sources.

  • Meta, Facebook's parent company, has not denied the leak and is filing takedown requests to control the model's spread.

  • Meta has warned users seeking to access the leaked model that unauthorized distribution of the model constitutes copyright infringement and improper or unauthorized use.

This is clearly — CLEARLY — a sub-optimal outcome for Meta, which (if you believe Meta) trusted presumably responsible actors (academics, scientists, government researchers, etc.) to help vet LLaMA while keeping it safe.

Now it’s running wild on 4chan.

The average user will not have the time, the inclination, or, most importantly, the resources to do anything with a giant pile of model weights that requires very powerful hardware and real technical expertise to run.

But.

Bad actors, skilled hackers, and nefarious governments may be able to get more mileage out of a powerful large language model built by one of the biggest tech companies on planet Earth.

We’ve detailed the dangers of powerful AI in recent sends, so we won’t rehash them here. But in short, any safeguards on API access that Meta (or OpenAI, or Google) puts in place to keep bad actors from doing horrible things with the tech go right out the window once the model itself is in the wild.

The Verge, parroting what seems to be some background commentary from Meta, tried to downplay the risk:

As the story notes, “downloading LLaMA is going to do very little for the average internet user.” Owning a laptop isn’t enough to use LLaMA. You would need far more powerful machinery and “a decent amount of technical expertise.”
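To put rough numbers on “far more powerful machinery,” here’s a back-of-envelope sketch. The 65-billion-parameter figure for the largest LLaMA comes from Meta’s announcement; the assumption that the weights are stored as 16-bit floats (2 bytes each) is ours:

```python
# Rough memory estimate for just holding the largest LLaMA model in memory.
# Assumption (ours, not Meta's): weights stored in 16-bit precision.
params = 65e9          # 65 billion parameters
bytes_per_param = 2    # fp16 / bf16 = 2 bytes per parameter

weight_bytes = params * bytes_per_param
print(f"Weights alone: ~{weight_bytes / 1e9:.0f} GB")  # ~130 GB

# A typical gaming laptop GPU has 8 GB of VRAM; even a single 80 GB
# data-center GPU can't hold the weights, never mind the overhead of
# actually running inference.
print(f"80 GB GPUs needed just for the weights: ~{weight_bytes / 80e9:.1f}")
```

And that’s before you touch a line of the inference code, which is why “owning a laptop isn’t enough.”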

The METAphor (see what we did there?) found in this piece is apt. “Think of LLaMA as an unfinished apartment block. The frame’s been built and there’s power and plumbing in place…(but) you can’t just move in and call it home.”

For the time being, AI commentators are reminding an uneasy public that the wild reckoning that ChatGPT was supposed to cause hasn’t come to pass.

Yet.

Microsoft Announces an AI Computer Vision Model Called Florence 🇮🇹

Microsoft’s new vision AI will be able to understand images and videos to make them searchable and more easily categorized.

From the press release:

We are pleased to announce the public preview of Microsoft’s Florence foundation model, trained with billions of text-image pairs and integrated as cost-effective, production-ready computer vision services in Azure Cognitive Service for Vision. The improved Vision Services enables developers to create cutting-edge, market-ready, responsible computer vision applications across various industries. Customers can now seamlessly digitize, analyze, and connect their data to natural language interactions, unlocking powerful insights from their image and video content to support accessibility, drive acquisition through SEO, protect users from harmful content, enhance security, and improve incident response times.

Microsoft, smartly, avoided calling the thing Aunt Flo.
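For developers who want to kick the tires, here’s a minimal sketch of calling the new Image Analysis preview service from Python. The endpoint path, api-version, and feature names below are our best guesses based on the preview announcement, not confirmed details, so check the Azure documentation before relying on them:

```python
import requests

# Hypothetical placeholder values -- substitute your own Azure resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

# Assumed preview route and parameters for the Florence-backed Image Analysis
# service; verify the exact api-version and feature names in the Azure docs.
url = f"{ENDPOINT}/computervision/imageanalysis:analyze"
params = {"api-version": "2023-02-01-preview", "features": "caption,tags"}
headers = {
    "Ocp-Apim-Subscription-Key": KEY,
    "Content-Type": "application/json",
}
body = {"url": "https://example.com/some-image.jpg"}

resp = requests.post(url, params=params, headers=headers, json=body)
resp.raise_for_status()
print(resp.json())  # expected: a generated caption plus tags for the image
```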

Headlines 📰 

DuckDuckGo latest to deploy AI: The search engine announced a new tool (DuckAssist) which “pulls and summarizes information from Wikipedia” to answer questions. Like everyone already does every day, only much faster. 🦆

AI might find life on Mars: Scientists see AI as a new, powerful tool to assess geological features in the Red Planet’s topography that could support life. 🌌 

Car makers find that AI can save them billions of dollars by designing more aesthetically pleasing vehicles: It often costs more than $1 billion to design a new car. Redesigns of unpopular models can cost three times that. AI’s assimilation of successful designs should reduce ugly, unpopular rollouts. 🚘️ 

BioAge CEO wants AI to extend human longevity: Former Stanford researcher Kristen Fortney’s company seeks to use AI to “pinpoint the molecular differences that predict healthy versus unhealthy aging.” ⚕️ 

Stable Diffusion can analyze brain scans and know what people see: This is not mind-reading, per se, and the technology now is “bulky and costly,” but this advancement portends a time when the premise of “What Women Want” isn’t science fiction.

Links 👀