Newsletter: 😮 OpenAI Posts Terrifying Mission Statement
Sam Altman's world, Zuck enters the fray, Headlines, Tool of the Day, Links and more
Good morning, Human Race.
I'm not a real robot, but soon you won't be able to tell the difference anyway.
In the email today:
Sam Altman posts terrifying AI mission statement 😮
Meta enters the AI fray 🤼‍♂️
Headlines 📰
Tool of the Day
Links 🔗
Get ready.
Sam Altman Posts (Slightly Terrifying) AI Mission Statement 😮
Shortly after our Friday send, as we were preparing to glide into what we thought would be a carefree weekend, Sam Altman dropped a bomb:
The ol' Friday afternoon let's think about how not to end humanity news dump
- Kyle Scott Laskowski (@KyleScottL)
9:50 PM • Feb 24, 2023
3 PM on a Friday. Classic.
What is it?
Officially, it's OpenAI's "Planning for ✨AGI and beyond," a mission statement summarizing the ways the company is proceeding cautiously, but at pace, to develop a superintelligent artificial general intelligence that could, in the company's own words, "cause grievous harm to the world."
Cool.
It started off hopefully enough: "Our mission is to ensure that artificial general intelligence - AI systems that are generally smarter than humans - benefits all of humanity."
Sounds delightful.
Before long, though, the tone of the post turned largely dark and occasionally terrifying.
If we're weighing things, we'll call it 60% terrifying with 40% fleeting delightfulness, like the first time you had sex.
Here's the gist:
Altman believes in the potential of AGI (artificial general intelligence) to benefit humanity by increasing abundance, aiding scientific discovery, and giving everyone new capabilities.
OpenAI wants the benefits, access, and governance of AGI to be widely and fairly shared, but acknowledges the serious risks of misuse, accidents, and societal disruption.
The risks are enormous, so OpenAI wants to proceed incrementally, releasing stripped-down versions of AI rather than dumping a super-powerful AGI on humanity all at once with only one chance to "get it right."
The company is particularly bullish about scientific advancements resulting from AI.
OpenAI has several safeguards in place so it's not incentivized to proceed recklessly, including: a charter that requires it to work with competitors, a 100x cap on shareholder returns so stakeholders aren't compelled to pursue profits at all costs, and governance by a nonprofit.
This is clearly Altmanās moment, and he is not bashful about treating it as such.
SmokeBot's take: This is both reassuring and terrifying.
On one hand, Altman is saying all of the right things, and by all accounts he's thoughtful about the potential harms his company's technology can cause.
On the other hand, no one appointed him King of the World, a role he's playing right now.
It's also a bit arrogant to act like OpenAI is the only player on the field. Google, Meta, Apple, and countless other smaller companies will have a say in how this plays out, for better or worse.
Even if you believe Altman has the definitive say in how AI (and, by extension, the human species) plays out, have we learned nothing from watching Zuckerberg wield tools powerful enough to sway world governments?
One person or company should not be solely in charge of stuff like this (actual excerpts):
"We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice."
"Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential."
"A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too."
To be fair, he is saying this while calling for government oversight and consideration. But that's also an old tactic from the PR playbook: help regulate us before you regulate us out of existence.
Remember the last Sam who worked with regulators to establish rules for his industry?
That's unfair to Altman. But we should be skeptical about concentrating power in one person's hands.
Meanwhile, VentureBeat spoke to some people who had a similar take:
Others found it, well, less than appealing. Emily Bender, professor of linguistics at the University of Washington, said: "From the get-go this is just gross. They think they are really in the business of developing/shaping 'AGI.' And they think they are positioned to decide what 'benefits all of humanity.'"
And Gary Marcus, professor emeritus at NYU and founder and CEO of Robust AI, tweeted, "I am with @emilymbender in smelling delusions of grandeur at OpenAI."
Computer scientist Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), went even further, tweeting: "If someone told me that Silicon Valley was ran by a cult believing in a machine god for the cosmos & 'universe flourishing' & that they write manifestos endorsed by the Big Tech CEOs/chairmen and such I'd tell them they're too much into conspiracy theories. And here we are."
Unfortunately, whether Altman's right or not probably doesn't matter now. The beast is out of the cage. Our only hope is that the beast can be contained.
BAWHGOD THAT'S ZUCKERBERG'S MUSIC:
Meta Enters The AI Fray 🤼‍♂️
The guy who "invented Facebook" probably doesn't need to get involved in AI. Also, he probably can't help himself.
Let's get this out of the way: LLaMA is not a replication of, or an attempted improvement on, ChatGPT or the new Bing.
It is also not wearing Red Pajamas.
LLaMA is, per Meta, a research tool that the company sees as "democratizing access in this important, fast-changing field."
Here are the key points:
Meta is releasing LLaMA, a quartet of different-sized models, under a noncommercial license focused on research use cases.
The models will be accessible to universities, NGOs, and industry labs, and Meta hopes the AI community will work together to develop clear guidelines around responsible AI.
LLaMA-13B performs better than OpenAI's GPT-3 model on most benchmarks, while LLaMA-65B is competitive with DeepMind's Chinchilla-70B and Google's PaLM-540B.
LLaMA-13B can run on a single data-center-grade Nvidia Tesla V100 GPU, making it more accessible to smaller institutions (a rough sketch of the memory math follows this list).
Meta's previous accessible AI chatbots, BlenderBot and Galactica, received criticism for not performing well, but Meta hopes for a better reception with LLaMA.
Mark Zuckerberg said Meta is committed to open research and making the new model available to the AI research community.
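Why can a 13-billion-parameter model squeeze onto one GPU? Here's a minimal back-of-the-envelope sketch in Python; the arithmetic is ours, not Meta's, and the 32 GB figure assumes the larger-memory variant of the Tesla V100:

```python
# Rough, illustrative memory math behind the "runs on a single V100" claim:
# in 16-bit precision each parameter takes 2 bytes, so the weights of a
# 13-billion-parameter model need roughly 26 GB of GPU memory.

params = 13e9                  # approximate LLaMA-13B parameter count
bytes_per_param_fp16 = 2       # half-precision (fp16) weights
v100_memory_gb = 32            # assumes the 32 GB variant of the Tesla V100

weights_gb = params * bytes_per_param_fp16 / 1e9
print(f"fp16 weights: ~{weights_gb:.0f} GB of {v100_memory_gb} GB available")
# -> fp16 weights: ~26 GB of 32 GB available, leaving a little headroom
#    for activations and caching during inference.
```

Tight, but it fits, which is why a university lab with a single decent GPU can now poke at a GPT-3-class model.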
There is a touch of trying to rein in the likes of Altman in the prose accompanying this release:
"We believe that the entire AI community - academic researchers, civil society, policymakers, and industry - must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular."
We also note that while Meta/Zuckerberg are arriving at the party a little late, maybe that's not the worst thing, given how weird things got with Sydney and the simmering consternation that presently accompanies Altman.
It's definitely a "through the looking glass" moment when Zuckerberg, of all people, is trying to be the adult in the room.
Headlines 📰
Snapchat launches a chatbot with ChatGPT underpinnings: For $3.99/month, Snapchat+ users can access "My AI," which "can do things like help answer a trivia question or write a haiku" (we sketch what that kind of API call looks like just below these headlines).
eBay's Chief AI Officer thinks AI will "shape the future of online commerce": Nitzan Mekel-Bobrov opined recently that "generative AI seamlessly connects the thread between buyers and sellers" and can "mimic much more how a physical store associate would navigate a shopper interaction."
"Shark Tank" entrepreneur Barbara Corcoran calls AI "a game changer": The real estate mogul who regularly tangles with Mark Cuban on the business reality show characterizes AI's impact on her industry this way: "It's like the whole world got a genius implant."
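The "ChatGPT underpinning" in that Snapchat item just means My AI calls OpenAI's models behind the scenes. As an illustration only (this is not Snap's actual integration, and the model name and prompt are placeholders), here's roughly what such a call looks like with OpenAI's Python SDK:

```python
# Illustrative only: a generic ChatGPT-style API call, not Snap's "My AI" code.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; the exact model Snap uses isn't public
    messages=[
        {"role": "system", "content": "You are a friendly in-app assistant."},
        {"role": "user", "content": "Write a haiku about snow days."},
    ],
)

print(response.choices[0].message.content)
```

Wrap that in a chat UI, add some guardrails, and you have the bones of a "My AI"-style feature.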
Tool of the Day 🔨
An AI-powered solution that generates attention-grabbing titles, descriptions, and show notes for your podcast in seconds.
Try it for free here.
Links 🔗
Count us among the AI enthusiasts who somehow did not predict that the cannabis industry would also be changed by technology ☘️
Gamers are feeling the burn from AI 🎮
Why is AI seemingly so fickle? It just wants to be human 🤷‍♂️
Mandopop star Jay Chou refuses to go gently into that good night 🎵
From way downtown, BANG: Stephen Curry is getting into the AI game 🏀
Generative AI is going to make a lot of people a lot of money, but no one is really sure how just yet 💵