The paperclip maximizer would suggest the AI will purposely kill undesirables to “save” others, because no option to avoid harm in that situation was presented in the training data.
Can you add the date to the title? Thank you. Copy this:
(article from 24.10.2018)