
Nightshade

If you scrape me, you poison your model.

Nightshade, from the same University of Chicago lab as Glaze, goes on offence. It modifies an image so that, to a human, it still looks like (say) a dog, but to a model being trained on it, the image teaches that "dog" means "cat".

Who it's for
Artists willing to escalate: to make unconsented scraping expensive, not just unrewarding.
Stance
Offensive / technical.
Cost
Free.
Requires cooperation?
No.

How it works, in plain English

Diffusion models are surprisingly fragile at the concept level. A small number of carefully crafted images, seeded into a training set, can shift what the model thinks a concept means. The Nightshade paper reports noticeable impact with as few as 50–100 poisoned samples per concept, and over 84% attack success with around 200.
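To see why so few samples can matter, here is a deliberately simplified simulation. Diffusion training is far more complex than averaging, but the intuition survives: treat a concept as the mean of its training features, and watch 100 poisoned samples drag the learned "dog" prototype toward "cat". Every size and vector below is made up for illustration; nothing here is Nightshade's actual code or data.

```python
import numpy as np

# Toy model: a "concept" is the mean feature vector of its training images.
# Poisoned samples look like dogs to a human but embed near "cat".
rng = np.random.default_rng(1)
dim = 64

dog_center = rng.normal(size=dim)   # stand-in for where "dog" images embed
cat_center = rng.normal(size=dim)   # stand-in for where "cat" images embed

clean = dog_center + 0.1 * rng.normal(size=(400, dim))   # genuine dog images
poison = cat_center + 0.1 * rng.normal(size=(100, dim))  # poisoned "dogs"

prototype_clean = clean.mean(axis=0)
prototype_poisoned = np.vstack([clean, poison]).mean(axis=0)

def dist_to_cat(v):
    return float(np.linalg.norm(v - cat_center))

# 100 poisons among 400 clean images pull the prototype a long way toward "cat".
print("distance to 'cat', clean training:   ", dist_to_cat(prototype_clean))
print("distance to 'cat', poisoned training:", dist_to_cat(prototype_poisoned))
```

The point of the toy is proportion, not realism: 20% of a concept's training data is enough to move its learned centre substantially, and for rare concepts 20% is only a handful of images.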

The image you release still looks, to human eyes, like whatever it always was: a dog, a hat, a painting of a meadow. But its latent representation has been steered so that the model, during training, updates in the wrong direction. After enough poisoned samples, prompts containing the targeted concept start returning incoherent or simply wrong images.
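The steering itself can be sketched in miniature. The code below is a toy under stated assumptions, not Nightshade's implementation: a fixed random linear map stands in for the model's image encoder, and projected gradient descent nudges a "dog" image toward a "cat" anchor in feature space while clipping every pixel change to a small budget (a crude stand-in for the paper's perceptual LPIPS constraint).

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_PIXELS, DIM_FEATURES = 256, 32

# Stand-in "feature extractor": a fixed random linear map (the real tool
# works against a diffusion model's image encoder, which is nonlinear).
W = rng.normal(size=(DIM_FEATURES, DIM_PIXELS)) / np.sqrt(DIM_PIXELS)

def features(x):
    return W @ x

dog = rng.uniform(0, 1, DIM_PIXELS)   # the image we publish (a "dog")
cat = rng.uniform(0, 1, DIM_PIXELS)   # an image of the anchor concept ("cat")

budget = 0.05    # max per-pixel change; crude stand-in for an LPIPS budget
delta = np.zeros(DIM_PIXELS)
lr = 0.1

for _ in range(500):
    # Gradient of || f(dog + delta) - f(cat) ||^2 with respect to delta.
    grad = 2 * W.T @ (features(dog + delta) - features(cat))
    delta -= lr * grad
    delta = np.clip(delta, -budget, budget)   # stay visually imperceptible

poisoned = dog + delta

def dist(a, b):
    return float(np.linalg.norm(features(a) - features(b)))

print("largest pixel change:", float(np.abs(delta).max()))
print("feature distance to 'cat', before:", dist(dog, cat))
print("feature distance to 'cat', after: ", dist(poisoned, cat))
```

The released pixels barely move, but the feature-space distance to the anchor concept shrinks, which is exactly the wrong-direction update the training process then absorbs.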

What it does for you

  • Raises the cost of scraping unconsented data. If a company can't trust its training set, it has to filter carefully (expensive) or license data honestly (the point).
  • Survives real defences: high-loss filtering, frequency analysis, and basic poison-detection methods all fail to remove Nightshade samples reliably.
  • Transfers across models: poisoned samples generated against one feature extractor still degrade others.
  • Free and open: same SAND Lab infrastructure as Glaze.

What it doesn't do

  • It won't take down a production model. It's a statistical nudge that gets stronger as more people use it.
  • It's ethically contested: see the callout below.
  • It requires commitment: a single artist glazing and nightshading changes little. Collective adoption is where the leverage lives.
  • Once a concept has enough clean training data, poisoning it gets harder. Nightshade works best for an artist's own name and rare stylistic concepts.

The concept-sparsity insight

The Nightshade paper makes a quiet but important point: rare concepts are easier to poison than common ones. "Dog" has millions of training images and is hard to meaningfully shift. "Greg Rutkowski", a specific artist's name, has a few hundred, and can be effectively corrupted with fewer than 100 poison samples. This means the tool's leverage falls exactly where the harm falls hardest: on specific artists whose names and styles become prompts.
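The arithmetic behind concept sparsity fits in a few lines. The counts below are illustrative assumptions (the source says "millions" for common concepts and "a few hundred" for a specific artist's name), not measurements from any real training set.

```python
# Back-of-envelope arithmetic behind concept sparsity (illustrative counts only).
def poison_fraction(clean_images, poison_images):
    """Share of a concept's training data that is poisoned."""
    return poison_images / (clean_images + poison_images)

# A common concept: assume millions of clean images. 200 poisons vanish.
print(f"'dog' with 200 poisons: {poison_fraction(5_000_000, 200):.4%}")

# A rare concept, a specific artist's name: assume a few hundred clean images.
print(f"artist name with 100 poisons: {poison_fraction(300, 100):.0%}")
```

A hundred poisons are a rounding error against "dog" but a full quarter of the training signal for a sparsely represented artist's name.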

Ethics: read this section carefully

Nightshade is the most contested tool on this site. Some call it vandalism of someone else's model. The authors frame it as asymmetric leverage for creators who have no other leverage: a civil-disobedience tool against scraping that already violates consent. You can hold either view. What you shouldn't do is reach for it without knowing which you hold.

How to use it today

  1. Download Nightshade from the SAND Lab. The tool is separate from Glaze but shares infrastructure. A reasonably modern GPU speeds things up substantially.
  2. Open the app, add your image, and name the concept you want to poison (usually your own name, or a style you are known for).
  3. Choose a perturbation budget. The paper's defaults (LPIPS p = 0.07) balance stealth with effectiveness.
  4. Export. Upload to public platforms where scrapers will find it. You can, and the authors recommend you do, combine Glaze and Nightshade on the same image.

Honest framing

Nightshade does not fix the system. It does not hold companies accountable. It does not restore consent retroactively. What it does is invert the usual asymmetry: scraping is no longer free. For individual artists facing a system they cannot sue their way out of, that shift in incentive is, for now, one of the few forms of leverage available.

If the law catches up, and artists gain a clear, enforceable right to opt out of AI training in India and elsewhere, the case for Nightshade weakens. That would be a good thing. In the meantime, the case for informed use remains.
