🤷 Microsoft Lays Off AI Ethics Team

Microsoft's calm before the storm, Sam's altruism, Google releases AI features, Headlines, Tweet of the Day, Links and more

GPT-4 cometh, and Microsoft rearranges its deck chairs in anticipation.

In the email today:

  • Microsoft lays off AI ethics team 🤷

  • Sam Altman wears white hat 🦸

  • Tweet of the Day 🦅

  • Headlines 📰

  • Google releases AI features 💻

  • Links 👀

Onward.

Microsoft Quietly Kills Its AI Ethics Team, Signaling Bold Charge Forward Against Google, Meta 🥊

Last week, and previously, we reported on Meta's declared desire to be a leader in the responsible deployment of AI.

Microsoft's tacit response to that kind and gentle approach is apparently that second place is first loser, and Microsoft isn't here to lose.

  • Microsoft laid off its entire ethics and society team within the artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company.

  • The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

  • Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company's AI initiatives.

  • The ethics and society team played a critical role in ensuring that the company's responsible AI principles were actually reflected in the design of the products that ship.

  • The conflict underscores an ongoing internal tension for tech giants that build divisions dedicated to making their products more socially responsible.

  • Members of the ethics and society team said they generally tried to be supportive of product development, but the company's leadership became less interested in the kind of long-term thinking the team specialized in.

  • Any cuts to teams focused on responsible work seem noteworthy given the existential risks posed by AI.

Some members of the team are being transferred elsewhere in the company.

And from Platformer, recapping the announcement to the team:

Some members of the team pushed back. "I'm going to be bold enough to ask you to please reconsider this decision," one employee said on the call. "While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we've had. And they are significant."

Montgomery (AI VP) declined. "Can I reconsider? I don't think I will," he said. "Cause unfortunately the pressures remain the same. You don't have the view that I have, and probably you can be thankful for that. There's a lot of stuff being ground up into the sausage."

Platformer

SmokeBot's thoughts:

  1. This was probably inevitable. AI's explosion into the world has been so rapid and so pervasive that anything or anybody internally trying to put the brakes on it for ethical and social reasons was probably going to get trampled.

  2. The timing of this report was no accident. Friday afternoons often see news dumps from companies and parties hoping a story will gain no traction. This Monday night report, coming out pretty quietly as the world breathlessly waits for a GPT-4 release, may quickly get washed over by an expected wave of GPT-4 and Office event (Thursday) coverage.

  3. Microsoft clearly considered the optics of this move, evaluated what sort of public relations hit it might take, and then remembered that it told its investors recently that each percentage point in search market share could be worth $2 billion. Braking for caution was just going to be too costly.

Corporations say one thing ("we're committed to responsible AI development and use") and do another (clip the AI ethics team) all the time.

The primary problem we see in this instance is that even Microsoft doesn't know what it doesn't know: crippling or de-emphasizing ethical safeguards will have consequences that only surface once new AI products are created and released into the world.

And pour one out for those who lost their jobs. It's pretty clear that they were told a while ago that their contributions to the development of responsible AI were valued and necessary. Until they weren't.

Sam Altman Wears The White Hat, Though Self-Interest Is Probably In Play 🦸

By now you have probably read way too much about how Silicon Valley Bank crashed against the rocks last week.

Catastrophe invites rescue, and Sam Altman, of all people, figuratively choppered into an active war zone to temporarily forestall chaos over a very uncertain weekend:

  • OpenAI CEO Sam Altman and other industry executives moved quickly to do what they could to save small businesses caught up in the SVB collapse.

  • Altman bailed out some entrepreneurs from his own pocket, and Henrique Dubugras, co-CEO of fintech startup Brex, announced an emergency credit line to help startups get through their next payroll.

  • As of Saturday evening, Brex had received $1.5 billion in demand from nearly 1,000 companies.

  • Even small startups like Streak are helping by offering to lend personal cash to other small startups that are worried about paying staff.

  • Altman did not comment on how much he had given companies but said he did not view his contributions as risky.

SmokeBot is not here to pour cold water on benevolence. What Altman (and other moneyed actors) did here is praiseworthy. If only more people would step up to help in times of abject trouble.

But.

Cynics (ahem) will say that Altman, who has profited to an absurd extent by unleashing a revolutionary technology on the world, is papering over some of the recent AI ethics hits by winning over an ecosystem of founders, many of whom are probably developing AI apps.

It was also a low-stakes bet. By Monday, the government had announced that all deposits would be covered, so he likely got his money back very quickly, if any even left the bank.

Tweet of the Day 🦅

Taken from Ezra Klein in the New York Times.

Headlines 📰

General Motors considering placing AI into mass production: We don't generally associate GM with the cutting edge, but give the automaker credit for "developing a virtual personal assistant that utilizes the same OpenAI artificial intelligence models behind ChatGPT." 🚘

ChatGPT's "hallucinations" may be the platform's fatal flaw: Until ChatGPT and similar large language models stop making up facts, the technology is apt to remain an amusing curiosity that can't be safely relied on. 🙃

AI is making deepfakes both cheaper and easier to create: It's the first part that's the problem. Making deepfakes has been pretty simple for bad actors for a while now, but giving those fraudsters inexpensive paths to sow evil is a genuine escalation. 👀

OpenAI co-founder Greg Brockman thinks the Internet is on its way to sentience: "We're clearly moving to a world where (the internet) is alive. You can talk to it, and it understands you and helps you," said Brockman. This reminds us of another quote.

Google Announces Upcoming Addition of AI Chat to Gmail and Docs 💻

Gmail already includes autocomplete. Now Google is apparently trying to keep you from beginning the sentence in the first place.

  • Google is testing AI features in Workspace, which includes Gmail and Google's productivity tools, allowing users to create text using generative AI technology.

  • Users can type a request, such as "draft an email to the team," and the application will produce a draft, such as a three-paragraph note, that can be edited or turned into a bulleted list.

  • The company has not announced when the features will be broadly released or if they will cost extra.

  • Google plans to release additional AI features to Workspace later this year, including formula generation in Sheets, automatically generated images in Slides, and note-taking in Meet.

  • Google Cloud CEO Thomas Kurian said the company has started testing a service for building corporate chatbots.

These features are still in development, and truthfully, it sounds like this does more or less what ChatGPT does as a free-standing platform.

Google surely knows, though, that every day it fails to keep pace with ChatGPT and Microsoft, it risks falling further behind.

So we don't expect these features to be road-tested for too long before they are widely available. Google doesn't have that kind of time.

Links 👀

  • AI text generation's value is projected to surge past $1.8 billion in the next decade 💰

  • You're not going to believe this, but China might be ahead of the United States on the AI front 😨

  • Corporations are increasingly enthusiastic about AI deciding who stays and who goes ❗

  • Lawyers are raging against the dying of the light in terms of the obvious AI threat to their worth ⚖️

  • The last opinion we thought we'd grapple with was Pope Francis advocating for women's voices to be heard vis-à-vis AI, but here we are ✝️