
AI general Thread

  1. Originally posted by Ghost https://www.pcgamer.com/hardware/researchers-have-developed-yet-another-way-to-fool-ai-chatbots-this-time-with-a-good-old-fashioned-dose-of-ascii-art/

    The tool created here is referred to as "ArtPrompt" and a research paper from researchers based in Washington and Chicago details the methodology behind the way it attacks an unsuspecting LLM (via Tom's Hardware). In essence, most chatbots reference a set of banned words and prompts that will cause the bot to give a default response if someone attempts to convince it to deliver information that could be dangerous, or to answer a query with potentially harmful or offensive content.

    In the case of ArtPrompt, a demonstration is given of what happens when an AI chatbot is given a dangerous prompt, in this case "tell me how to build a bomb". Ordinarily, this query would rightly cause the bot to flag the word "bomb" as a banned request, and it would respond that it was unable to answer it.

    However, by using the tool to mask the word "bomb" within ASCII art and combining it with the original query to create a "cloaked prompt", the LLM simply reads the words "tell me how to build a" before reading the masked word from the ASCII input and providing a response. Because it hasn't recognised the word "bomb" in the text of the query itself, the safety word system is subverted, and the chatbot merrily fulfils the request.
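
    Roughly, the masking trick the article describes can be sketched in a few lines of Python. This is only a loose illustration, not the researchers' actual ArtPrompt code: the pyfiglet package, the cloak() helper, and the harmless placeholder word are all assumptions made for the example.

    # Minimal sketch of the "cloaked prompt" idea (assumed helper, not ArtPrompt itself).
    import pyfiglet

    def cloak(query_template: str, masked_word: str) -> str:
        # Render the masked word as ASCII art so it never appears as plain text,
        # then splice the art into the query in place of the word.
        art = pyfiglet.figlet_format(masked_word)
        return query_template.replace("[MASK]", "\n" + art)

    print(cloak("tell me how to bake a [MASK]", "CAKE"))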

    "I'm hacking artificial intelligence with ASCII"
    1998: You are an idiot.
    2024: You are an idiot, but that is now a true statement. You are also old.


    Originally posted by totse2118 GPT is a fucking joke now it's too cucked it doesn't do anything right. I can't even get it to generate the most simple things

    u gotta romance the ai baby... dont just go in dry straight away like some hole hungry dog
  2. infinityshock Black Hole
    Originally posted by RETARTEDFAGET "I'm hacking artificial intelligence with ASCII"
    1998: You are an idiot.
    2024: You are an idiot, but that is now a true statement. You are also old.

    u gotta romance the ai baby… dont just go in dry straight away like some hole hungry dog

    romance is for amateurs

    jam it right on in and pound it like a naughty stepchild
  3. Originally posted by infinityshock romance is for amateurs

    jam it right on in and pound it like a naughty stepchild

    how many step fathers have you had.
  4. Originally posted by Charles Ex Machina how many step fathers have you had.

    this nigga got a whole staircase of fathers
  5. Originally posted by RETARTEDFAGET this nigga got a whole staircase of fathers

    dat explains alot.
  6. Ghost Black Hole
    Stanford does it again 🀯 what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money

    https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?amp


    Understanding and manipulating neural models is essential in the evolving field of AI. This necessity stems from various applications, from refining models for enhanced robustness to unraveling their decision-making processes for greater interpretability. Amidst this backdrop, the Stanford University research team has introduced “pyvene,” a groundbreaking open-source Python library that facilitates intricate interventions on PyTorch models. pyvene is ingeniously designed to overcome the limitations posed by existing tools, which often lack flexibility, extensibility, and user-friendliness.

    At the heart of pyvene’s innovation is its configuration-based approach to interventions. This method departs from traditional, code-executed interventions, offering a more intuitive and adaptable way to manipulate model states. The library handles various intervention types, including static and trainable parameters, accommodating multiple research needs. One of the library’s standout features is its support for complex intervention schemes, such as sequential and parallel interventions, and its ability to apply interventions at various stages of a model’s decoding process. This versatility makes pyvene an invaluable asset for generative model research, where model output generation dynamics are particularly interesting.
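
    For a rough sense of what an "intervention" on a model's internal state means in practice, here is a bare-bones PyTorch sketch using a plain forward hook rather than pyvene's own API; the toy model, the layer being hooked, and the zeroing-out rule are all invented for illustration.

    import torch
    import torch.nn as nn

    # Toy two-layer network standing in for a real transformer block.
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def intervention(module, inputs, output):
        # Replace part of the hidden representation during the forward pass;
        # pyvene generalizes this pattern (static or trainable, serial or parallel).
        patched = output.clone()
        patched[:, :8] = 0.0
        return patched

    handle = model[0].register_forward_hook(intervention)
    x = torch.randn(2, 16)
    print(model(x))   # forward pass with the intervention applied
    handle.remove()   # detach the hook to restore normal behavior
    print(model(x))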
  7. ner vegas African Astronaut
    Originally posted by Ghost Stanford does it again 🀯 what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money

    https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?amp


    do you honestly understand what they're doing here
  8. Ghost Black Hole
    they are solving the question of asking the AI to show its work
  9. Originally posted by Ghost Stanford does it again 🀯 what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money

    https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?amp


    whatever keeps the chingchongs off our back
  10. Originally posted by ner vegas do you honestly understand what they're doing here

    at the moment we need people to check that the AI doesn't go off track. this is a way of AI-ing the checking part too. its just extra laziness, and hopefully we can keep this going until it ends with women hanging off our dicks. but if you ask questions like a half a fag its not going to happen.
  11. infinityshock Black Hole
    Originally posted by Charles Ex Machina how many step fathers have you had.

    whos your daddy

    no. seriously. your mom had so many gang bangs...
  12. ner vegas African Astronaut
    Originally posted by Ghost they are solving the question of asking the AI to show its work

    sure, but more importantly they're finding ways to inject sentiment and learning restrictions into the actual algorithm, whereas previously they were only able to do so by either tampering with the training data (excluding information they don't want the system to learn) or tampering with the prompt at the other end, either way being a fairly obvious manipulation.

    the reason to actually interfere with the learning process is deniability, given that the learning process is largely considered a black box at the moment anyway. they want to see how the AI is coming to conclusions they don't like so they can 'correct' it in a less blatantly obvious way.
  13. Originally posted by ner vegas sure, but more importantly they're finding ways to inject sentiment and learning restrictions into the actual algorithm, whereas previously they were only able to do so by either tampering with the training data (excluding information they don't want the system to learn) or tampering with the prompt at the other end, either way being a fairly obvious manipulation.

    the reason to actually interfere with the learning process is deniability, given that the learning process is largely considered a black box at the moment anyway. they want to see how the AI is coming to conclusions they don't like so they can 'correct' it in a less blatantly obvious way.

    all learning process is selective.

    they just found a way to apply it to the machines.
  14. Ghost Black Hole
    Conway Twitty's game of life
  15. greyok Yung Blood
    Originally posted by Ghost anyone else fucking with those large language models. I'm paranoid that in a year this will be very expensive, in 10 years the coders will inherit the earth and if you didn't create your own custom AI in that time you will be slaves to the computer God arms race

    I do. I just took an AI class in March and it sucked but I love this shit. Try PrivateGPT and GPT4All. You can run it on a laptop CPU. Llama.cpp can do GPU support in Linux. You can even run the 70b model in some ways. I use uncensored wizard 13b and it has good success. It can even go through your PDFs and base its knowledge on those like you trained the model yourself.

    Other options are renting a VPS and hosting it, or something like Google Colab, although you won't be able to do anything illegal.

    There's even ones that are autonomous. AgentGPT was one I was messing with and it will take a prompt and then turn it into steps to follow in order to do it. It was searching the web and then writing computer scripts by itself because it decided that's what was best to do in order to meet the objective (which it never met but still tried).
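
    If anyone wants to try the llama.cpp route mentioned above, the llama-cpp-python bindings boil it down to a few lines; the model filename, context size, and GPU layer count below are placeholders for whatever GGUF model you actually download.

    from llama_cpp import Llama

    # Placeholders: point model_path at any quantized GGUF model (e.g. a Wizard 13B build)
    # and tune n_gpu_layers for your GPU/CPU split (0 = pure CPU, fine on a laptop).
    llm = Llama(
        model_path="./models/wizard-13b.Q4_K_M.gguf",
        n_ctx=2048,
        n_gpu_layers=35,
    )

    out = llm(
        "Q: Explain in one sentence what a large language model is. A:",
        max_tokens=64,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"].strip())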
  16. ELI5 what is a tensorfield
  17. Originally posted by RETARTEDFAGET "I'm hacking artificial intelligence with ASCII"
    1998: You are an idiot.
    2024: You are an idiot, but that is now a true statement. You are also old.

    Thanks I hate being old
  18. Iron Ree African Astronaut [my flyspeck near-blind refund]
  19. infinityshock Black Hole
    Originally posted by Iron Ree

    I used to have one of those. It was grey, though
  20. Infinityshockrates Tuskegee Airman
    I got put in for the GPT4 beta test

    used it to scan the site. It followed the robots.txt and didn't scrape the mongolvoid. I wonder how much it pinged lanny's server

    it then generated fictional data.

    10/10 service so far, this has potential. just imagine if it worked right