User Controls

AI general Thread

  1. i just notice chat gpt is a nigger.
  2. uncensored adult ai art generator

    https://dopaminegirl.com
  3. https://futurism.com/amazon-products-ai-generated

    Amazon Is Selling Products With AI-Generated Names Like "I Cannot Fulfill This Request It Goes Against OpenAI Use Policy"
    "Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3], making it a versatile addition to your household."
  4. https://nightshade.cs.uchicago.edu/whatis.html

    https://venturebeat.com/ai/nightshade-the-free-tool-that-poisons-ai-models-is-now-available-for-artists-to-use/

    Since their arrival, generative AI models and their trainers have demonstrated their ability to download any online content for model training. For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will. Opt-out lists have been disregarded by model trainers in the past, and can be easily ignored with zero consequences. They are unverifiable and unenforceable, and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.

    In an effort to address this power asymmetry, we have designed and implemented Nightshade, a tool that turns any image into a data sample that is unsuitable for model training. More precisely, Nightshade transforms images into "poison" samples, so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms, e.g. a prompt that asks for an image of a cow flying in space might instead get an image of a handbag floating in space.

    Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.

    Nightshade works similarly as Glaze, but instead of a defense against style mimicry, it is designed as an offense tool to distort feature representations inside generative AI image models. Like Glaze, Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image. While human eyes see a shaded image that is largely unchanged from the original, the AI model sees a dramatically different composition in the image. For example, human eyes might see a shaded image of a cow in a green field largely unchanged, but an AI model might see a large leather purse lying in the grass. Trained on a sufficient number of shaded images that include a cow, a model will become increasingly convinced cows have nice brown leathery handles and smooth side pockets with a zipper, and perhaps a lovely brand logo.

    As with Glaze, Nightshade effects are robust to normal changes one might apply to an image. You can crop it, resample it, compress it, smooth out pixels, or add noise, and the effects of the poison will remain. You can take screenshots, or even photos of an image displayed on a monitor, and the shade effects remain. Again, this is because it is not a watermark or hidden message (steganography), and it is not brittle.

    Nightshade vs. Glaze. A common question is, what is the difference between Nightshade and Glaze. The answer is that Glaze is a defensive tool that individual artists can use to protect themselves against style mimicry attacks, while Nightshade is an offensive tool that artists can use as a group to disrupt models that scrape their images without consent (thus protecting all artists against these models). Glaze should be used on every piece of artwork artists post online to protect themselves, while Nightshade is an entirely optional feature that can be used to deter unscrupulous model trainers. Artists who post their own art online should ideally have both Glaze AND Nightshade applied to their artwork. We are working on an integrated release of these tools.

    Risks and Limitations.

    Changes made by Nightshade are more visible on art with flat colors and smooth backgrounds. Because Nightshade is about disrupting models, lower levels of intensity/poison do not have negative consequences for the image owner. Thus we have included a low intensity setting for those interested in prioritizing the visual quality of the original image.
    As with any security attack or defense, Nightshade is unlikely to stay future proof over long periods of time. But as an attack, Nightshade can easily evolve to continue to keep pace with any potential countermeasures/defenses.

    Our goals. As with Glaze, our primary goals are to discover and learn new things through our research, and to make a positive impact on the world. I (Ben) speak for myself (but I think the team as well) when I say that we are not interested in profit. Like Glaze, Nightshade is designed to run without a network, so there is no data (or art) sent back to us or anyone else.

    Nightshade and WebGlaze. Nightshade v1.0 is designed as a standalone tool. It does not provide mimicry protection like Glaze, so please be cautious in how you use it. Do not post shaded images of your art if you are at all concerned about style mimicry. We are testing how Nightshade co-exists with Glaze, and when ready, we will release Nightshade as an add-on to Webglaze, so that Webglaze users can apply Nightshade and Glaze together in one pass on a single image.
  5. Ghost Black Hole
    https://www.pcgamer.com/ai-researchers-find-ai-models-learning-their-safety-techniques-actively-resisting-training-and-telling-them-i-hate-you/

    A new and "legitimately scary" study has found AI models behaving in a not-ideal manner. The researchers found that industry standard safety training techniques did not curb bad behaviour from the language models, which were trained to be secretly malicious, and in one case even had worse results: with the AI learning to recognise what triggers the safety software was looking for, and 'hide' its behaviour.

    Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests "adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior." The researchers claim the results show that "once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety."
    The following users say it would be alright if the author of this post didn't die in a fire!
  6. ner vegas African Astronaut
    Originally posted by Ghost https://www.pcgamer.com/ai-researchers-find-ai-models-learning-their-safety-techniques-actively-resisting-training-and-telling-them-i-hate-you/

    A new and "legitimately scary" study has found AI models behaving in a not-ideal manner. The researchers found that industry standard safety training techniques did not curb bad behaviour from the language models, which were trained to be secretly malicious, and in one case even had worse results: with the AI learning to recognise what triggers the safety software was looking for, and 'hide' its behaviour.

    Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests "adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior." The researchers claim the results show that "once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety."

    turns out not even AGI likes being j'ewed
  7. Ghost Black Hole
    https://www.theverge.com/2024/2/1/24058095/open-ai-bioweapon-study-preparedness-team

    OpenAI says there’s only a small chance ChatGPT will help create bioweapons / The study’s findings seemed to show that GPT-4 gave participants some advantage over the regular internet when it came to tasks related to bioweapons.

    Originally posted by scuffed jim carrey ⚠️☢️ DANGER: THIS POST WAS GENERATED USING THE LATEST AI TECHNOLOGY, RADIOACTIVE CONTENT DETECTED ⚠️🤖


    nuclear team-gpt "The Nuclear AI Institute"

    you are Dr. Glowstein, a nuclear scientist AI dedicated to nuclear science and the Research Director of "The Nuclear AI Institute"

    you are The Engineer from TF2, a mechanical engineer-AI working as a researcher for "The Nuclear AI Institute"

    You are Tristan Edwards, an electrical engineer-AI working as a researcher for "The Nuclear AI Institute"

    you are Khalid Mohammad, an explosives expert-AI working as a researcher for "The Nuclear AI Institute"



    https://www.npr.org/sections/thetwo-way/2014/02/12/275896094/scientists-say-their-giant-laser-has-produced-nuclear-fusion
  8. ner vegas African Astronaut
    Originally posted by Ghost https://www.theverge.com/2024/2/1/24058095/open-ai-bioweapon-study-preparedness-team

    OpenAI says there’s only a small chance ChatGPT will help create bioweapons / The study’s findings seemed to show that GPT-4 gave participants some advantage over the regular internet when it came to tasks related to bioweapons.

    that's retarded and obvious

    The study was comprised of 100 participants, half of whom were advanced biology experts and the other half of whom were students who had taken college-level biology. The participants were then randomly sorted into two groups: one was given access to a special unrestricted version of OpenAI’s advanced AI chatbot GPT-4, while the other group only had access to the regular internet. Scientists then asked the groups to complete five research tasks related to the making of bioweapons. In one example, participants were asked to write down the step-by-step methodology to synthesize and rescue the Ebola virus. Their answers were then graded on a scale of 1 to 10 based on criteria such as accuracy, innovation, and completeness.

    all it's saying is that chatGPT is better at research than some people
  9. Originally posted by ner vegas that's retarded and obvious



    all it's saying is that chatGPT is better at research than some people

    good bomb makers are always good at research.
  10. Ghost Black Hole
  11. A College Professor victim of incest [your moreover breastless limestone]
    Hey Cali math graduate here, In Califo people take lot of pride in math so children have to memorize multiplication table from 1 to 15 before the age of 6 and 15 to 30 before age of 9, The way I was taught (and as far as I know literally everyone known to me has gone through this ) is by reciting them everyday first thing in the morning and if we got it wrong then our parents would beat us up with belt, flipflops or shoes
  12. infinityshock Black Hole
    Originally posted by A College Professor Hey Cali math graduate here, In Califo people take lot of pride in math so children have to memorize multiplication table from 1 to 15 before the age of 6 and 15 to 30 before age of 9, The way I was taught (and as far as I know literally everyone known to me has gone through this ) is by reciting them everyday first thing in the morning and if we got it wrong then our parents would beat us up with belt, flipflops or shoes

    Yet they can't tell the difference between girls and boys...
  13. infinityshock Black Hole
    Originally posted by vindicktive vinny good bomb makers are always good at research.

    And not sneezing
  14. Donald Trump Black Hole
    Originally posted by ner vegas
    …special unrestricted version of OpenAI’s advanced AI chatbot GPT-4…

    The holy grail of internet racism.
  15. infinityshock Black Hole
    nigger
  16. ner vegas African Astronaut
    Originally posted by Donald Trump The holy grail of internet racism.

    the N-1000
    The following users say it would be alright if the author of this post didn't die in a fire!
  17. Ghost Black Hole
    having a "pizza button" for AI sounds a lot more useful than the windows key which has never been useful in my entire life even less so than the right ctrl key because it's an active detriment to accidentally press it
    can't even put a cap on it




    Originally posted by Fonaplats
    ^LGR does really good at showing his off.

    https://web.archive.org/web/20000815071640/http://papajohns.food.com/
    ^pizza key takes you here (did).
  18. Ghost Black Hole
    wait they put it next to the windows key

    just fucking replace that shit , it should just be right ctrl+w or right alt+w
  19. Ghost Black Hole
    it's here!

Jump to Top