☄️ Here Come The AI Lawsuits

Art tools sued, teachers can fight back with AI, Tool of the Day and more

Send lawyers, guns, and money; the 💩 has hit the fan.

In the newsletter today: 

  • AI art tools sued for copyright infringement ⚖️

  • Education becomes the latest battleground for AI 👩‍🏫

  • Tool of the Day 🔨

  • The More You Know 💫

  • Links 👀

Grab yo' sodas.

AI Art Tools Get Slapped with Inevitable Copyright Lawsuits ⚖️

This actually took longer to happen than we thought it would.

Stability AI (maker of Stable Diffusion) and Midjourney are being sued for copyright infringement by a San Francisco-based law firm with a history of taking on BIG TECH on behalf of workers and creators. The firm even announced the suit with a slick-looking webpage and a Wall Street Journal-style byline image.

Cool.

Artists are notoriously (and understandably) protective of their work. AI has neither the time nor the inclination to care. This was an unavoidable collision.

LATE-BREAKING NEWS: Getty is also suing Stability AI for the same reasons.

  • A lawsuit has been filed against Stability AI, Midjourney, and DeviantArt for allegedly infringing the rights of millions of artists by training their AI tools on five billion images scraped from the web without the consent of the original artists.

  • The lawsuit was filed by lawyer Matthew Butterick, who is working with the Joseph Saveri Law Firm, which specializes in antitrust and class action cases, including lawsuits on behalf of content moderators at the likes of YouTube and Facebook.

  • The suit claims that the capacity of AI art tools like Stable Diffusion to “flood the market with an essentially unlimited number of infringing images will inflict permanent damage on the market for art and artists.”

  • The use of copyrighted material to train AI art generators has been a contentious topic within the art community, with some arguing that the use is covered by fair use doctrine.

That sound you hear is the canary in the coal mine.

What this means: Experts have long predicted this type of lawsuit.

At issue is whether the process of ingesting large amounts of data, including art, and then using that data to produce competitive works constitutes copyright infringement.

On the human level, artists are influenced by other artists. They champion this process all of the time. In many ways, AI algorithms function the same way. But they do so with (literal) mathematical precision and at a scale never before seen, sometimes for profit. 

The Getty suit goes a step further, noting that because Stable Diffusion is open source, it's plain that Getty images constitute a large portion of its training set. Some images created by Stable Diffusion even include a messy version of the Getty watermark.

Oops.

These suits, and the inevitable others that will follow, will tackle the question of whether we should treat Fair Use differently for AI than we do for humans. Is there a difference? And if so, to what degree?

Problems with the case: This lawsuit itself has some issues.

None of the artists are widely known, so it may not capture the public's attention the way Tom Petty (RIP) suing Sam Smith or David Bowie (RIP) suing Vanilla Ice did.

It also contains some technical inaccuracies. The Verge described it best:

The lawsuit launched by Butterick and the Joseph Saveri Law Firm has also been criticized for containing technical inaccuracies. For example, the suit claims that AI art models “store compressed copies of [copyright-protected] training images” and then “recombine” them, functioning as “21st-century collage tool[s].” However, AI art models do not store images at all, but rather mathematical representations of patterns collected from these images. The software does not piece together bits of images in the form of a collage, either, but creates pictures from scratch based on these mathematical representations.

The Verge
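
For the curious, here's a rough, purely illustrative Python sketch of the distinction The Verge is drawing. Nothing below comes from the lawsuit or from Stability AI's actual code; the toy_denoiser stand-in is made up for this example. The point: generation starts from random noise and repeatedly applies learned weights (the "mathematical representations"), and at no point does the model retrieve or collage a stored training image.

# Toy sketch only: a stand-in for how a diffusion model generates an image.
# The real thing encodes its training patterns in billions of learned weights;
# here a dummy function plays that role.
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(noisy_image, step):
    # Placeholder for the learned network: in a real model, this predicts
    # the noise to remove based on patterns encoded in its weights.
    return noisy_image * 0.1  # pretend "predicted noise"

image = rng.normal(size=(64, 64, 3))   # start from pure random noise,
                                       # not from any stored training image
for step in range(50):                 # iteratively "denoise" ...
    predicted_noise = toy_denoiser(image, step)
    image = image - predicted_noise    # ... nudging the noise toward a picture

image = np.clip(image, 0.0, 1.0)       # the final array is the generated picture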

The firm is no stranger to taking on BIG TECH.

They've previously sued basically all of them on behalf of content moderators, along with bringing other claims on behalf of workers.

And sometimes cases like this can benefit the firm, by getting its name out there (helllllo, all of the coverage it's getting today), more than they benefit the clients.

Bottom line: The real trouble comes from AI creating inarguably unique images that, in ways imperceptible to most people, pull patterns from existing artwork to make something new.

Which leads to the age-old debate of how "new" any artwork or song or novel or interpretative dance really is. Creators "borrow" and "crib" from great works every day.

We will see a whole lot more of this sort of litigation in 2023 and beyond.

Education and AI are Clashing in a Big Way 👩‍🏫

Former Baltimore Ravens linebacker Ray Lewis apocryphally said that the Ravens paid him to work Monday through Saturday, and that he worked Sundays for free. 

Ask a teacher in your life why they teach. Almost uniformly, they teach because they can make positive changes in young lives.

Conversely, very few if any teachers choose the profession to answer inane questions from parents, draw up lesson plans, and scrape snot off the bottoms of desks to get the RSV outbreak in their classroom under control.

AI may be primed to give teachers a lot of their valuable time back, even if it can't do anything about the snot (yet).

From the Madeline Will piece for Education Week:

  • Artificial intelligence tools like ChatGPT can help teachers save time by quickly generating lesson plans, responses to parents, rubrics, feedback on student work, and letters of recommendation for high school students headed to college.

  • However, some teachers worry that using these tools could strip away the creativity and relational aspects of teaching or introduce bias into lessons.

  • Teachers who have tested ChatGPT's capabilities have mixed opinions on its usefulness, with some finding it a good starting point that can be built upon, while others believe it too nascent to replace the expertise of a human teacher.

  • Another concern involves grading student work, because AI has no conception of whether a student is achieving at, below, or above his or her usual level.

✍️ rubric. Remind SmokeBot to Google that one.

The great battle has commenced!

We have already discussed how students are using AI to take shortcuts and/or outright cheat.

Academia has only ratcheted up the panic. The fall term was set to be business as usual and then ChatGPT entered the, um, chat.

But teachers can play this game, too. 

Take a tool like (SmokeBot fav) Quillbot, which can, yes, rewrite copy for students and help them avoid plagiarism. Point: student.

But it can also check any work for plagiarism. Point: teacher.

SmokeBot is sensing a theme here: As in physics, every instance of AI has an equal and opposite reaction.

AI papers will be graded by AI teachers.

What you can do, I can do... just as well. Or something like that. The point is, we may be entering an AI arms race. The early battles have favored the rebels. But when institutions, corporations, and governments begin their counterassaults, they'll likely do so with much more powerful tools, as is usually the case.

For now, though, professors may be stuck taking things back to the 20th century. Bluebooks and pens. Oral examinations. Anything to keep ChatGPT from subverting the academic order.

Or, you know, they can just unleash the clones.

Tool of the Day 🔨

SmokeBot is terrible at spreadsheets. Can do the "sum" thing and that's about it.

What if there was a better way?

Indeed there is.

Genius Sheets can do all of the heavy lifting and get you the data you want, quickly and easily, from a spreadsheet.

Try Genius Sheets for $1 right here.

The More You Know 💫

Avoid the scorn of Smart Guy Twitter.

Don't correlate the number of parameters in the upcoming GPT-4 with its real-world performance... unless you want to be mocked endlessly by tech nerds on Twitter.

Wonder how many of these guys use Android phones and talk about how many megapixels they have? 🤔

Links 👀

SmokeBot out.