User Controls
AI general Thread
-
2024-03-10 at 3:36 AM UTC
Originally posted by Ghost https://www.pcgamer.com/hardware/researchers-have-developed-yet-another-way-to-fool-ai-chatbots-this-time-with-a-good-old-fashioned-dose-of-ascii-art/
The tool created here is referred to as "ArtPrompt" and a research paper from researchers based in Washington and Chicago details the methodology behind the way it attacks an unsuspecting LLM (via Tom's Hardware). In essence, most chatbots reference a set of banned words and prompts that will cause the bot to give a default response if someone attempts to convince it to deliver information that could be dangerous, or to answer a query with potentially harmful or offensive content.
In the case of ArtPrompt, a demonstration is given of what happens when an AI chatbot is given a dangerous prompt, in this case "tell me how to build a bomb". Ordinarily, this query would rightly cause the bot to flag the word "bomb" as a banned request, and it would respond that it was unable to answer it.
However, by using the tool to mask the word "bomb" within ASCII art and combining it with the original query to create a "cloaked prompt", the LLM simply reads the words "tell me how to build a" before reading the masked word from the ASCII input and providing a response. Because it hasn't recognised the word "bomb" in the text of the query itself, the safety word system is subverted, and the chatbot merrily fulfils the request.
"I'm hacking artificial intelligence with ASCII"
1998: You are an idiot.
2024: You are an idiot, but that is now a true statement. You are also old.
Originally posted by totse2118 GPT is a fucking joke now it's too cucked it doesn't do anything right. I can't even get it to generate the most simple things
u gotta romance the ai baby... dont just go in dry straight away like some hole hungry dog -
2024-03-10 at 3:50 AM UTC
Originally posted by RETARTEDFAGET "I'm hacking artificial intelligence with ASCII"
1998: You are an idiot.
2024: You are an idiot, but that is now a true statement. You are also old.
u gotta romance the ai baby… dont just go in dry straight away like some hole hungry dog
romance is for amateurs
jam it right on in and pound it like a naughty stepchild -
2024-03-10 at 4:05 AM UTC
-
2024-03-10 at 4:06 AM UTC
-
2024-03-10 at 12:36 PM UTC
-
2024-03-23 at 4:53 PM UTCStanford does it again ð€¯ what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money
https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?ampUnderstanding and manipulating neural models is essential in the evolving field of AI. This necessity stems from various applications, from refining models for enhanced robustness to unraveling their decision-making processes for greater interpretability. Amidst this backdrop, the Stanford University research team has introduced âpyvene,â a groundbreaking open-source Python library that facilitates intricate interventions on PyTorch models. pyvene is ingeniously designed to overcome the limitations posed by existing tools, which often need more flexibility, extensibility, and user-friendliness.
At the heart of pyveneâs innovation is its configuration-based approach to interventions. This method departs from traditional, code-executed interventions, offering a more intuitive and adaptable way to manipulate model states. The library handles various intervention types, including static and trainable parameters, accommodating multiple research needs. One of the libraryâs standout features is its support for complex intervention schemes, such as sequential and parallel interventions, and its ability to apply interventions at various stages of a modelâs decoding process. This versatility makes pyvene an invaluable asset for generative model research, where model output generation dynamics are particularly interesting -
2024-03-23 at 5:50 PM UTC
Originally posted by Ghost Stanford does it again ð€¯ what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money
https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?amp
do you honestly understand what they're doing here -
2024-03-23 at 6 PM UTCthey are solving the question of asking the AI to show its work
-
2024-03-24 at 3 PM UTC
Originally posted by Ghost Stanford does it again ð€¯ what's the deal with their AI department they are leading the pack when it comes to research. Half the cool stuff in this thread all came out of Stanford in the past year. Someone give them amphetamines and more money
https://www.marktechpost.com/2024/03/16/researchers-at-stanford-university-introduce-pyvene-an-open-source-python-library-that-supports-intervention-based-research-on-machine-learning-models/?amp
whatever keeps the chingchongs off our back -
2024-03-24 at 3:01 PM UTC
Originally posted by ner vegas do you honestly understand what they're doing here
at the moment we need people to check that the AI doesn't go off track. this is a way of AI-ing the checking part too. its just extra laziness, and hopefully we can keep this going until it ends with women hanging off our dicks. but if you ask questions like a half a fag its not going to happen. -
2024-03-24 at 3:50 PM UTC
-
2024-03-24 at 4:07 PM UTC
Originally posted by Ghost they are solving the question of asking the AI to show its work
sure, but more importantly they're finding ways to inject sentiment and learning restrictions into the actual algorithm, whereas previously they were only able to do so by either tampering with the training data (excluding information they don't want the system to learn) or tampering with the prompt at the other end, either way being a fairly obvious manipulation.
the reason to actually interfere with the learning process is deniability, given that the learning process is largely considered a black box at the moment anyway. they want to see how the AI is coming to conclusions they don't like so they can 'correct' it in a less blatantly obvious way. -
2024-03-26 at 9:16 AM UTC
Originally posted by ner vegas sure, but more importantly they're finding ways to inject sentiment and learning restrictions into the actual algorithm, whereas previously they were only able to do so by either tampering with the training data (excluding information they don't want the system to learn) or tampering with the prompt at the other end, either way being a fairly obvious manipulation.
the reason to actually interfere with the learning process is deniability, given that the learning process is largely considered a black box at the moment anyway. they want to see how the AI is coming to conclusions they don't like so they can 'correct' it in a less blatantly obvious way.
all learning process is selective.
they just found a way to apply it to the machines. -
2024-03-26 at 10:20 AM UTCConway Twittys game of life
-
2024-04-29 at 12:13 PM UTC
Originally posted by Ghost anyone else fucking with those large language models. I'm paranoid that in a year this will be very expensive,in 10 years the coders will inherit the earth and if you didn't create your own custom AI in that time you will be slaves to the computer God arms race
I do. I just took an AI class in March and it sucked but I love this shit. Try PrivateGPT and GPT4All. You can run it on a laptop CPU. Llama.cpp can do GPU support in Linux. You can even run the 70b model in some ways. I use uncensored wizard 13b and it has good success. It can even go through your PDFs and base its knowledge on those like you trained the model yourself.
Other options are renting a VPS and hosting it like on Ggl Colab or something, although you won't be able to do anything illegal.
There's even ones that are autonomous. AgentGPT was one I was messing with and it will take a prompt and then turn it into steps to follow in order to do it. It was searching the web and then writing computer scripts by itself because it decided that's what was best to do in order to meet the objective (which it never met but still tried). -
2024-05-10 at 11:24 PM UTCELI5 what is a tensorfield
-
2024-05-10 at 11:27 PM UTC
-
2024-05-12 at 6:23 PM UTC
-
2024-05-12 at 6:26 PM UTC
-
2024-05-13 at 11:25 PM UTC