🧂 #needsmoresalt

Please taste your AI before you serve it.

Workslop problems?

Everyone's using generative AI at work now. Most of the time that's cool, and generally we shouldn't feel bad about it.

But unreviewed AI output (slop!) creates work for other people. You've seen it in tickets, Slack threads, emails, documentation, code comments. Nobody reviewed it before hitting send, and it shows. It's confident about things it shouldn't be. It answers questions nobody asked. It throws a thousand ideas at the wall that all seem like they might matter, until you roll up your sleeves and take a close look. Ugh.


Would you cook for your friends like that?


How hot is this chili crisp? I guess we'll see. Enough lemon? Who knows! Too much garlic? No such thing! Needs salt? If only there was some way to find out in advance...


You probably wouldn't cook like this. We taste as we go, and tweak and adjust till we get it right.


Think of it this way: linking a Stack Overflow answer and saying "I dunno, maybe this?" is totally fine. But it's different from presenting that answer as your own considered take. AI is a powerful tool. But if you're putting your name on the output, you should be ready to stand behind it.

How it works

If you're dealing with AI output that clearly hasn't been reviewed, and you're debating whether you have the energy to dig in, or feeling stressed about pushing back, maybe reply with a link here.

needsmoresalt.org

It's not intended as a callout, just a friendly reminder that we're all responsible for making sense.

It's a nudge to take another pass: make sure it says what we mean, hedge where appropriate, and be clear about what's speculative and what's a strong view.

The recipe

  1. Use AI to generate something. No prob.
  2. Read it. Understand it. Revise it. Feel comfortable standing behind it.
  3. Send it. Be prepared to answer questions about it and explain it.
  4. If someone says #needsmoresalt, take another pass. No big deal.

The norms

This only works if the social contract goes both ways.

Nobody feels bad about using AI. For many of us, in our fields, that's just how it is now, and it's OK.

Nobody feels bad about flagging workslop. You're helping.

Nobody feels bad when asked to revise. It's like someone found a bug in your code, it's normal, we fix it, we all benefit.

It goes in every direction. We've all probably sent slop at some point.


What this is and isn't

Not this:
  • Shaming people for using AI
  • Policing how people write
  • Playing gotcha
This:
  • A nudge to take accountability for what you send
  • A friendly norm, not a rule
  • A low-friction way to say "this reads like the first thing, and we need the second thing"