🐟 Did Bing AI Leak Its Own Instructions?

Its code name is what?, OnlyFans better watch out, Headlines, Tool of the Day, Tweet of the Day, Podcast of the Day, Links and more

Sydney

Phew. SmokeBot needs a break after a WILD week in AI. But he’s got one more send in him before we head into Super Bowl weekend (Go Birds).

In the email today:

  • Did Bing AI leak its code name and instructions? 🐟

  • Headlines 📰

  • Tweet of the Day 🦅

  • Tool of the Day 🔨

  • Podcast of the Day 🎙

  • AI killed the OnlyFans star 👯‍♀️

  • Links 👀

Can’t imagine which section you’ll jump to first.

Did Bing AI Leak Its Own Code Name and Instruction Set? 🐟

If you want to feel like we’re living in a futuristic dystopian sci-fi movie, this may do it.

There was a fascinating Twitter find the other day on the heels of Microsoft’s announcement of its new AI-powered Bing.

Kevin Liu (@kliu128), a Stanford student who had early access to the new Bing, was able to get it to reveal its internal code name, Sydney, and set of instructions (“prompts”) that it was trained on.

Most of them are expected - like when it performs searches and how it displays results to users - but others reveal specific use cases, limited functionality or, somewhat eerily, restrictions on what it should not do.

You can squint through these 4 screenshots, but we’ll distill the key findings here. Let’s go spelunking.

Sydney…

  • … will avoid vague, controversial, or off-topic responses

  • …. will perform up to 3 web searches on behalf of the user, rather than pull from the dataset it was trained on, if doing so would result in up-to-date information or be more helpful

  • … is meant to be helpful, but its output is limited to the chat box, implying that theoretically it can do other things, like create emails, documents, or, you know, hack a nuclear arsenal… kidding, I think

  • … still is only trained on data through 2021, like ChatGPT, and anything more recent than that is pulled from web searches, meaning the new Bing search AI does not include an updated version of ChatGPT thought to be ChatGPT-4

  • … must not violate the copyright of books or songs… but that leaves a question as to how much copyright it may violate from news stories and web articles

  • … will produce non-partisan, non-harmful results if someone requests harmful content

  • … will not output creative content, such as jokes or poems, about influential politicians, activists or heads of state

While Microsoft has not (and would not) confirm the accuracy of this, many believe it to be real and accurate based on the fact that new Bing repeated these results more than once, meaning it was not a “hallucination” — when AI makes something up.

Takeaways:

  1. Google and all competitors can see how Microsoft instructed its chatbot

  2. It’s kind of wild that AI can be trained on simple sentences rather than complex code algorithms

  3. It’s frightening how many things it has to be instructed not to do… implying that it is capable of outputting all sorts of harmful results

  4. It can so easily be manipulated, so what happens when people figure out how to reverse more of these instructions?

That about sums it up.

Headlines 📰 

Google’s brutal Wednesday slashed $100 billion from the company’s market value: That seems impossible and reads very strange, but that’s exactly what happened. Investors aren’t really buying the dip, either. The share price remains down about 7% in the past five days.

Apple will hold an AI summit for employees at Steve Jobs theater: The tech giant is reportedly resuming its prior practice of holding its company’s special events in person. As it is for employees only, we won’t know what comes of it (if at all) until word gets out after it happens. After Google’s public fumble this week, we can at least presume that Apple will run this event competently.

OpenAI is working on an update to its ChatGPT prompt guide: Incredibly, unlike the charlatans on Twitter, you don’t have to like, follow, subscribe, RT and DM to get access. You can just click here.

Experts say AI will create more billionaires while widening wealth inequality: The combination of white collar job losses and the probable number of newly ultra-wealthy people from AI investment and business is a recipe for “downward mobility.”

Tweet of the Day 🦅

BIG MIND Balaji tweets about the potential lobbying and subsequent government overreach coming to AI once Washington realizes that people, not machines, vote. You can read the full thread here.

Right on cue, federally-funded government research agency MITRE puts out a poll showing that most people distrust AI and see the need for heavy-handed regulation:

While Americans rely on artificial intelligence (AI) to inform consumer choices – from movie recommendations to routine customer service inquiries – the MITRE-Harris Poll survey on AI trends finds that most Americans express reservations about AI for high-value applications such as autonomous vehicles, accessing government benefits, or healthcare. Moreover, only 48% believe AI is safe and secure, and 78% are very or somewhat concerned that AI can be used for malicious intent. Given the uncertainty around AI, it’s not surprising that 82% of Americans and 91% of tech experts support government regulation. Further, 70% of Americans and 92% of tech experts agree that there is a need for industry to invest more in AI assurance measures to protect the public.

MITRE poll

Usually it takes Balaji at least a few months to be right.

Tool of the Day 🔨

Build your own chatbot from a PDF document. Easily make sense of complex files on your computer. Genuinely useful stuff here.

Try Chatbase here.

Podcast of the Day 🎙

SmokeBot hasn’t had a chance to listen to this yet, but OpenAI CEO Sam Altman and Microsoft Chief Technology Office Kevin Scott sat down for an hour-long interview with the New York Times.

Listen to it here.

AI Killed the OnlyFans star 👯‍♀️

Let’s get this out of the way up front (twss): We know that titillating online content has already been through dozens of tech iterations. We know about hentai. We know what people do with VR glasses on, Zuck.

Butt, a recent Daily Star piece notes that AI is bringing another dimension to the world of adult content.

A series of thought-to-be AI images of scantily clad women (here’s the link, perv) indicate OnlyFans and many other adult websites and entertainers may be in some trouble at the supple, distorted hands of their robot counterparts.

As the Daily Star story points out, the problem is if AI can create images this realistic in early days, the images it will produce as it improves will be awfully close to a perfect 10.

  • “OnlyFans models could be in trouble,” the story says, “as apparent hyper-realistic AI models” could dominate the platform.

  • Commenters on this and other similar images prompted users to say “I’d pay for images like this, even if I knew it wasn’t real.”

  • These images are not perfect yet. One of the other screenshots in the article featured a model’s hand seemingly melting into its torso. That doesn't happen with real models.

  • But as one commenter noted, “the point is (the images) are already good enough to pass as real.”

Tip your favorite OnlyFans model now, because if AI has anything to do with it, a lot of them will have to go back to the classroom.

Links 👀