Man Jailed 18 Years For Crimes Against Digital Files !!!
-
2024-10-29 at 4:28 PM UTC
man accused of hurting children where no children were hurt.
A man who used AI to create child abuse images using photographs of real children has been sentenced to 18 years in prison.
In the first prosecution of its kind in the UK, Hugh Nelson, 27, from Bolton, was convicted of 16 child sexual abuse offences in August, after an investigation by Greater Manchester police (GMP).
https://www.theguardian.com/uk-news/2024/oct/28/man-who-used-ai-to-create-child-abuse-images-jailed-for-18-years
it was never really about protecting the children. it was about control.
of them. over you.
-
2024-10-29 at 4:30 PM UTC
Misuse of AI technology breaks the consumer agreement and should be punishable by death. I would enforce this decentralized policy myself for free, and everyone would support me and give me money
-
2024-10-29 at 4:32 PM UTC
I guess the AI wasn't so intelligent as to know what is and isn't appropriate to render.
...back to the AI drawing board (no pun intended).
-
2024-10-29 at 4:41 PM UTC
I think whoever made the gayI software should also go to chomo prison
-
2024-10-29 at 5:09 PM UTC
https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).
The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.
Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express thoughts and feelings that was equivalent to a human child.
https://www.theverge.com/2024/10/24/24278694/openai-agi-readiness-miles-brundage-ai-safety
Miles Brundage, OpenAI’s senior adviser for the readiness of AGI (aka human-level artificial intelligence), delivered a stark warning as he announced his departure on Wednesday: no one is prepared for artificial general intelligence, including OpenAI itself.
fully realistic AI-generated porn is right around the corner. It's true that this will destroy the morality of modernist secular humanity.
“Neither OpenAI nor any other frontier lab is ready [for AGI], and the world is also not ready,” wrote Brundage, who spent six years helping to shape the company’s AI safety initiatives. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time.”
His exit marks the latest in a series of high-profile departures from OpenAI’s safety teams. Jan Leike, a prominent researcher, left after claiming that “safety culture and processes have taken a backseat to shiny products.” Cofounder Ilya Sutskever also departed to launch his own AI startup focused on safe AGI development.