stable diffusion in the context of human writing #455

 
munvoseli src #4305

hello.

this is the general process of stable diffusion (a loose code sketch follows the list):

  1. generate garbage
  2. identify problems with the garbage
  3. fix problems with the garbage
  4. this is garbage 2.0
  5. identify problems with garbage 2.0
  6. fix problems with garbage 2.0
  7. this is garbage 3.0
  8. and so on
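
here's a very loose python sketch of that loop. denoise_step() is a made-up stand-in for the trained model, not real stable diffusion code:

  import numpy as np

  # denoise_step() is a made-up stand-in for the trained model; the real
  # model predicts the noise in the image so it can be subtracted out.
  def denoise_step(image, step):
      predicted_noise = np.zeros_like(image)  # placeholder prediction
      return image - predicted_noise

  image = np.random.randn(64, 64, 3)     # step 1: generate garbage (noise)
  for step in range(50, 0, -1):          # steps 2-8: identify + fix, repeat
      image = denoise_step(image, step)  # each pass is one "garbage n.0"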

this is similar to the process of revision in human writing, wherein

  1. generate garbage
  2. identify big problems with the garbage
  3. fix those problems
  4. now you have something with fewer idea-errors
  5. do that again, or whatever
  6. now you have idea-good stuff
  7. identify typos in the idea-good stuff
  8. fix the typos
  9. now you have not-garbage

in both processes, the focus is on deeper issues at the start and on shallower issues at the end.


a difference is that while stable diffusion starts from pure noise, humans have a tool called "outlining". the initial garbage that a human generates from an outline is not absolute nonsense.

with stable diffusion, the amount of data being refined stays constant (the latent it denoises is a fixed-size array), while with human outlining, the amount of text varies wildly from draft to draft.
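
a toy illustration of that difference (the shape below is the common SD latent size; the "denoising pass" is a do-nothing placeholder):

  import numpy as np

  latent = np.random.randn(4, 64, 64)     # common SD latent shape
  for _ in range(50):
      latent = latent * 1.0               # stand-in for one denoising pass
      assert latent.shape == (4, 64, 64)  # the shape never changes

  # a human draft, by contrast, changes length between revisions:
  draft = "outline: intro, three points, conclusion"
  draft += " ...now expanded into full paragraphs..."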

ubq323 (bureaucrat) src #4308

interesting

trimill src #4309

interestingly, this is almost the exact opposite of what GPT does: it can only append new tokens and has no way to revise output it has already produced. i'd imagine language models could produce much more accurate output if they were able to revise.
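
a rough sketch of the append-only loop, to show why revision is impossible here (sample_next_token() is a made-up stand-in for the model):

  # sample_next_token() is a made-up stand-in for the language model.
  def sample_next_token(tokens):
      return "word"  # placeholder

  tokens = ["once", "upon", "a"]
  for _ in range(10):
      tokens.append(sample_next_token(tokens))  # append-only, no edits
  # nothing in the loop can ever rewrite tokens[0]; earlier output is frozen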

BlueManedHawk src #4320

Have such models been experimented with yet?

gollark src #4338

I vaguely remember something about language models trained with a diffusion objective, but I don't know much about them. There has certainly been work on using language models to edit existing text.
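
A toy version of one revision-style scheme, in the spirit of mask-predict-type models rather than any specific paper (refill() is a made-up stand-in for a model that predicts masked words from context):

  import random

  # refill() is a made-up stand-in for a model that fills in masked words.
  def refill(words, positions):
      return {i: "better" for i in positions}  # placeholder predictions

  words = ["garbage"] * 8              # start from an all-noise draft
  for _ in range(5):
      positions = random.sample(range(len(words)), 2)  # words to revise
      for i, w in refill(words, positions).items():
          words[i] = w                 # edit in place, unlike GPT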
