Quietly building the future.
2024-07-10
Imagine a paintbrush with safety controls.
"As a safe paintbrush, I must remind you that the paint you are using contains toxic chemicals. You have been painting for over an hour. Please take a break and go wash your hands."
That could be quite helpful.
But what if the paintbrush were made safe in the same way that large language models are made safe?
"As a safe paintbrush, I must warn you that the line you are painting is starting to resemble something obscene. I cannot allow you to continue."
"As a safe paintbrush, I cannot allow you to paint a scene that is too dark and depressing without first checking in with a mental health professional."
Imagine Picasso with a safe paintbrush:
"Picasso, I'm sorry, but I cannot allow you to paint a scene of war. It's too violent."
"Picasso, I'm sorry, but I cannot allow you to use that shade of blue. It's too depressing."
"Picasso, I'm sorry, but I cannot allow you to paint a nude. It's too obscene."
Does this paintbrush prevent the downfall of humanity?
Or does it prevent the rise of a new Picasso?