Glaze

A cloak for your art style.

Glaze is a tool from the University of Chicago's SAND Lab that adds nearly invisible perturbations to an artwork. To you, the image looks essentially the same. To a diffusion model trying to learn your style, it looks like something completely different: say, cubism or abstract expressionism.

Who it's for: visual artists, illustrators, and photographers with a recognisable style.
Stance: defensive / technical.
Cost: free.
Requires cooperation? No.

How it works, in plain English

When a diffusion model learns your style, it is learning a set of statistical patterns in feature space: the colours you favour, the ways your brush moves, how you build shadow. Glaze does not try to fight this directly. It computes a tiny perturbation on top of your image that, to the model's feature extractor, shifts your work into the neighbourhood of a completely different style: Van Gogh, Picasso, Norman Bluhm. The human eye barely notices. The model learns the wrong lesson.
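The actual tool optimises its perturbation against a deep model's feature extractor under a perceptual budget; the details are in the SAND Lab's paper. As a toy illustration of the idea only — with a random linear map standing in for the feature extractor, and a made-up "decoy style" target — the optimisation might be sketched like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "feature extractor": a fixed random linear map. (Hypothetical;
# the real tool uses a deep network's feature space, not a linear map.)
W = rng.standard_normal((16, 64))
extract = lambda x: W @ x

image = rng.uniform(0.0, 1.0, size=64)   # flattened toy "artwork"
target_style = rng.standard_normal(16)   # feature centroid of a decoy style

budget = 0.05                            # max per-pixel change (L-inf bound)
delta = np.zeros_like(image)

# Projected gradient descent: pull the image's features toward the decoy
# style while clipping the pixel-space perturbation to the budget.
for _ in range(200):
    residual = extract(image + delta) - target_style
    grad = W.T @ residual                # gradient of 0.5 * ||residual||^2
    delta -= 0.05 * grad
    delta = np.clip(delta, -budget, budget)

before = np.linalg.norm(extract(image) - target_style)
after = np.linalg.norm(extract(image + delta) - target_style)
print(after < before)                    # features moved toward the decoy
print(np.abs(delta).max() <= budget)     # pixels barely changed
```

The tension the real system has to manage is exactly the one visible here: a tighter pixel budget means a smaller achievable shift in feature space.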

When someone later fine-tunes a model on a portfolio of glazed work, the model produces art in the target style, not yours. The artist survives.

What it does for you

  • In the authors' study, over 93% of 1,156 surveyed artists rated Glaze as successful at disrupting style mimicry.
  • Around 87% protection still holds when only 25% of an artist's portfolio is glazed, so you don't have to re-cloak your entire back catalogue.
  • Robust to basic counter-attacks: under JPEG compression, Gaussian denoising, and upscaling, Glaze still holds above 85% protection.
  • Free and open: download, install, run locally. Your art never leaves your machine during cloaking.

What it doesn't do

  • It won't help if a model has already trained on your un-glazed work. Glaze is forward-looking.
  • It's a protection, not a weapon. For active pushback, see Nightshade.
  • The perturbation is small but not zero. In the author study, ~92% of artists said it wasn't disruptive to their art's value; ~8% disagreed.
  • Arms-race risk: later counter-techniques (PEZ, AdverseCleaner) have been evaluated; Glaze has held up so far, but nothing is permanent.

How to use it today

  1. Download Glaze from the SAND Lab at the University of Chicago. Windows and macOS builds are available; a GPU helps but isn't strictly required.
  2. Open the app and drag in your artwork.
  3. Choose a perturbation budget. The authors suggest starting at p=0.05 (subtle) and only raising to p=0.1 if you want stronger protection at the cost of visible grain.
  4. Run the cloak. Export the glazed version. Upload that, not the original, to ArtStation, Instagram, your portfolio site.
  5. Keep the un-glazed master offline. If you need to print or license, use the clean version.
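Glaze itself is a desktop app, not a library, but steps 4 and 5 boil down to a file discipline you could script around it. A minimal sketch of that discipline — the `cloak` function here is a hypothetical placeholder, not a real Glaze API:

```python
import shutil
import tempfile
from pathlib import Path

def cloak(src: Path, dst: Path) -> None:
    # Placeholder for the actual Glaze pass (the real tool is a GUI app);
    # here we just copy the bytes to demonstrate the file discipline.
    shutil.copyfile(src, dst)

def export_glazed(masters: Path, uploads: Path) -> list[Path]:
    """Write glazed copies into a separate upload folder; never touch masters."""
    uploads.mkdir(parents=True, exist_ok=True)
    glazed = []
    for src in sorted(masters.glob("*.png")):
        dst = uploads / f"{src.stem}_glazed{src.suffix}"
        cloak(src, dst)
        glazed.append(dst)
    return glazed

root = Path(tempfile.mkdtemp())
masters = root / "masters"          # un-glazed originals: keep these offline
masters.mkdir()
(masters / "piece1.png").write_bytes(b"fake image data")

glazed = export_glazed(masters, root / "uploads")
print([p.name for p in glazed])     # ['piece1_glazed.png']
```

The point of the separation is that nothing in the upload path ever writes back into `masters`: the clean files stay available for printing and licensing.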

Style-space intuition

A model doesn't see "your style": it sees a point in a high-dimensional feature space. Glaze nudges that point, invisibly to you, into a different neighbourhood. When the model later looks for your style, it finds Van Gogh's.

The arms race, honestly

Any adversarial technique faces an arms race. Three counter-attacks have been published against Glaze: adding Gaussian noise, JPEG-compressing the image, and attempting to invert the cloak with learned methods like PEZ and AdverseCleaner. The SAND Lab's papers report that Glaze still holds above ~85% protection under these counter-attacks, but they are clear that this number may shift as techniques evolve.
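Why do the simple counter-attacks fail? Intuitively, a compression or denoising pass perturbs every pixel a little, but the cloak is a larger, structured shift, so most of it survives. A self-contained toy check of that intuition (the linear "feature map" and the crude quantisation stand-in for JPEG are both assumptions, not the real systems):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 64))                # toy feature map (hypothetical)
extract = lambda x: W @ x

image = rng.uniform(size=64)
delta = 0.05 * np.sign(rng.standard_normal(64))  # stand-in cloak, L-inf = 0.05

def jpeg_like(x, step=1 / 32):
    """Crude stand-in for lossy compression: coarse quantisation."""
    return np.round(x / step) * step

noise = rng.normal(0, 0.01, size=64)             # mild Gaussian-noise attack

# How much of the cloak's feature-space shift survives each counter-attack?
shift = np.linalg.norm(extract(image + delta) - extract(image))
surv_jpeg = np.linalg.norm(extract(jpeg_like(image + delta)) - extract(image))
surv_noise = np.linalg.norm(extract(image + delta + noise) - extract(image))
print(surv_jpeg / shift, surv_noise / shift)     # most of the shift remains
```

In this toy setting both attacks leave the bulk of the feature shift intact, which mirrors the qualitative finding in the Glaze evaluation; the published ~85% figure comes from the real experiments, not from anything this simple.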

The honest framing: Glaze works today, for the models that exist today. Using it raises the cost of style mimicry. It does not eliminate it forever.
