China could offer a model for deepfake regulation
Governments have been reluctant to regulate deepfakes over fears that such efforts may curtail free speech. The Chinese government, which isn’t so troubled by that risk, thinks it has a solution. The country has adopted rules that require deepfakes to have the subject’s consent and bear watermarks, for example. Other countries will be watching and taking notes. (The New York Times)

How OpenAI used low-paid Kenyan workers to make ChatGPT less toxic
OpenAI used a Kenyan company called Sama to train its popular AI system, ChatGPT, to generate safer content. Low-paid workers sifted through endless amounts of graphic and violent content on topics such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. This story is a good reminder of all the deeply unpleasant work humans have to do behind the scenes to make AI systems safe. (Time)