Help me improve my spambot.
-
2016-02-26 at 11 PM UTC
So, as some of you know, Suicidal Fish recently got into a bit of trouble over at computerforums.org, which, by the way, is a website so cancerous it should not be allowed to exist. Which is why I am joining the effort to inflict some cyber terrorism upon them. I already did a quick crawling and fuzzing for vulns, but there were no obvious ones, so I decided I'd make a spambot. Now bear in mind I never make bots, and I'm also unfamiliar with the several web-based modules Python has, but that's OK; I read some documentation, but sadly there are still a couple of things that don't quite work as I'd like them to.
[SIZE=48px][SIZE=28px]Post is deprecated. Please use this post from this point[/SIZE][/SIZE]
This part of the code works and does its job swimmingly.
import os
import random
from BeautifulSoup import BeautifulSoup
from urlparse import urlparse
import requests
import mechanize
import time
# Mechanize browser for form input
self = mechanize.Browser()
self.set_handle_robots(False)
self.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]
def login(self):
    self.open("http://www.computerforums.org/forums/login.php?do=login")
    self.form = list(self.forms())[0]
    self["vb_login_username"] = "Space Negro"
    self["vb_login_password"] = "prettyflypassword"
    response = self.submit()
    time.sleep(2)
    print response
So we've got logging in covered, but to be a proper spambot it needs to be able to spider or crawl for links. I decided to use the BeautifulSoup module for this, because I needed some C and C++ things for Scrapy to work, so whatever.
def spider():
    global links
    sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"
    sitemap_response = requests.get(sitemap_xml)
    soup = BeautifulSoup(sitemap_response.content)
    for links in soup.findAll('li'):
        for url in links.findAll('a', 'href'):
            for result in url.requests.get(url):
                if result:
                    results.append(result)
    print links
So this works, kinda. I can't just do soup.findAll("url") because it doesn't know how to do that, apparently. So I had my code look for 'li' tags, because that's the kind of tag all the links are within, but those contain some other things too, so I had to sort for "a href" as well to get the actual links. But when I print links I get maybe one or two back, and I know there are more, because when I print sitemap_response I see a shitload. Also, I still don't know how to let it look for pages below the ones it stores.
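For what it's worth, the usual way to flatten that into one list (a minimal sketch, assuming the BeautifulSoup 3 API imported above and the <li>/<a href> layout described here) is to collect every href before doing anything with them:

# Sketch: gather every href from the sitemap into one flat list
# before fetching anything, instead of nesting requests in the loop.
links = []
for li in soup.findAll('li'):
    for a in li.findAll('a', href=True):
        links.append(a['href'])
print len(links), "links found"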
Also, I needed a way to automatically go to a random page and post my intended message.
def post(self):
    global links
    while True:
        single_url = (random.choice(links))
        self.open(single_url)
        self.form = list(self.forms())[0]
        self["vbform"] = "You guys think you're computer experts? Check out [URL=\"http://niggasin.space/forum/technophiliacs-technophiles\"]this[/URL] forum."
        response = self.submit()
        time.sleep(1)
        print response

def main():
    login(self)
    spider()
    post(self)

main()
As it stands, I am getting this error when the line that has single_url on it gets executed.
Traceback (most recent call last):
  File "C:\bot.py", line 66, in <module>
    main()
  File "C:\bot.py", line 64, in main
    post(self)
  File "C:\bot.py", line 52, in post
    single_url = (random.choice(links))
  File "C:\Python27\lib\random.py", line 275, in choice
    return seq[int(self.random() * len(seq))] # raises IndexError if seq is empty
  File "C:\Python27\lib\site-packages\BeautifulSoup.py", line 613, in __getitem__
    return self._getAttrMap()[key]
KeyError: 0
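That KeyError: 0 tells the whole story, by the way: after the loop, links is a single BeautifulSoup Tag (the last <li>), not a list, so random.choice computes an integer index and does tag[0], which BeautifulSoup 3 treats as an attribute lookup. A tiny repro of the failure (same BeautifulSoup 3 module as above):

from BeautifulSoup import BeautifulSoup

# Sketch: indexing a BS3 Tag with an integer hits __getitem__,
# which looks the key up in the attribute map and raises KeyError.
tag = BeautifulSoup("<li><a href='x'>y</a></li>").li
print tag[0]   # KeyError: 0, same as the traceback above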
Any help would be appreciated, thanks guys.
-
2016-02-27 at 3:54 AM UTC
I would help but I don't know shit about Python. I did notice that that forum lets users post as much as they want with no flood limit.
-
2016-02-27 at 5:06 AM UTC
I would help but I don't know shit about Python. I did notice that that forum lets users post as much as they want with no flood limit.
This is going to be good when it works as it should, then.
-
2016-02-27 at 11:10 AM UTC
I have identified the problem to be one of iterating through a list. So when I store all <li> tags I need to iterate over every instance of li, and from there filter on <a href> tags to find the actual links; then I need to parse the links and store them in a variable from which I can select one URL at random, or something similar.
-
2016-02-27 at 1:11 PM UTC
OK, so I've tried it like this now.
def spider():
    global links
    links = []
    # Get URLs from sitemap and spider
    sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"
    sitemap_response = requests.get(sitemap_xml)
    soup = BeautifulSoup(sitemap_response.content)
    # print soup
    tag = soup.findAll('li')
    tag.append(links)
    for x in list(links):
        url = filter("<a href>", links)
        result = requests.get(url)
        result.append(links)
    print links
So now when I print links I get an empty value for it, like so: [ ]
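Which makes sense, actually: tag.append(links) pushes the empty links list into the findAll() results instead of putting the tags into links, so links never receives anything and the loop body never runs. The append needs to go the other way around, something like this (a sketch, same BeautifulSoup 3 session):

# Sketch: append into the list, not the list into the results.
links = []
for tag in soup.findAll('li'):
    links.append(tag)
print links   # no longer []
-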
2016-02-27 at 7:25 PM UTC
You are a bad, bad man.
-
2016-02-28 at 8:06 AM UTC
You are a bad, bad man.
And you're useless. But fear not, I am useful for the both of us, for success is close. You probably won't know what I'm talking about, but I've had an epiphany. First off, I was using the wrong version of the BeautifulSoup module, which stores the shit it finds with the findAll() function as a Soup object. That gives me a type error if I want to iterate over it, since it's not a list object; and if I put a Soup object inside a list object, I still can't iterate over it the way I want, because if I want to search for a regular expression, for example, I can't: the only thing inside the list object is a Soup object, not a string. Secondly, I highly underestimated the power of the Mechanize module, and here's the kicker. If I do this:
self = mechanize.Browser()
Python is essentially emulating a browser without a GUI, which has the ability to identify and parse links on its own, so fuck making a list of URLs with BeautifulSoup if I can do it automatically with Mechanize. Like so.
self.open("http://www.computerforums.org/forums/sitemap/sitemap.xml")
depth_one = list(self.links())
for link in depth_one:
if "html" in link.url:
self.open(link.url)
depth_two = list(self.links())
Well, that was easy. Kind of hard to tell if I got all the links, though.
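One thing worth noting about the loop above: depth_two gets reassigned on every pass, so it only ever holds the links from the last sitemap page visited. Accumulating and deduping makes it easier to tell what actually came back (a sketch under the same mechanize session as above):

# Sketch: accumulate links from every sitemap page instead of
# overwriting depth_two each pass, then dedupe to count uniques.
depth_two = []
for link in depth_one:
    if "html" in link.url:
        self.open(link.url)
        depth_two.extend(self.links())
print len(set(l.url for l in depth_two)), "unique URLs crawled"
-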
2016-02-28 at 11:31 AM UTC
Wew, nearly there.
import os
import random
import requests
import mechanize
import time
import re
from mechanize import ParseResponse

# Mechanize browser
self = mechanize.Browser()
self.set_handle_robots(False)
self.addheaders = [("User-agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]

def login(self):
    global depth_two
    print "[+]Logging in."
    self.open("http://www.computerforums.org/forums/login.php?do=login")
    self.form = list(self.forms())[0]
    self["vb_login_username"] = "Space Negro"
    self["vb_login_password"] = "prettyflypassword"
    response = self.submit()
    time.sleep(1)
    print
    print "[+]Response."
    print response
    print
    print "[+]Crawling for links."
    self.open("http://www.computerforums.org/forums/sitemap/sitemap.xml")
    depth_one = list(self.links())
    for link in depth_one:
        if "html" in link.url:
            self.open(link.url)
            depth_two = list(self.links())
    print
    print "[+]Done."

def loop():
    post(self)

def post(self):
    global depth_two
    while True:
        for link in depth_two:
            if not "sitemap" in link.url:
                single_url = (random.choice(link.url))
                try:
                    response = self.open(single_url)
                except:
                    print "[!]HTTP Error while trying to open page, trying new page..."
                    loop()
                forms = ParseResponse(response, backwards_compat=False) #DEBUG
                print forms #DEBUG
                break #DEBUG
                # Error
                self.form = list(self.forms())[2] # List index out of range?
                self["message"] = "You guys think you're computer experts? Check out [URL=\"http://niggasin.space/forum/technophiliacs-technophiles\"]this[/URL] forum."
                final = self.submit()
                time.sleep(1)
                print "[+]Message posted successfully"
                print
                print final

def main():
    login(self)
    post(self)

main()
The only problem left is selecting the proper form; see the comments in the last chunk of code for details.
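A quick way to chase that list index error down (a sketch against the same mechanize browser; whether index 2 is right depends on what vBulletin actually serves on that page) is to print every form before picking one, then select by name instead of by position:

# Sketch: list each form's index and name on the current page,
# then select the quick reply form by name instead of position.
for index, form in enumerate(self.forms()):
    print index, form.name
self.select_form("vbform")
-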
2016-02-28 at 12:15 PM UTC
The irony is strong with this thread; if anything, I've totally learned a shitload. I can feel my 1337ness expanding.
-
2016-02-28 at 6:48 PM UTC
I don't understand why you're crawling for links though... You can just change the number of the post in a URL and it'll redirect to that specific thread.
Example...
http://www.computerforums.org/forums/social-lounge-off-topic/do-any-you-like-cats-227779.html
...change the "227779" to "227778"...
http://www.computerforums.org/forums/social-lounge-off-topic/do-any-you-like-cats-227778.html
...and I am redirected to...
http://www.computerforums.org/forums/social-lounge-off-topic/downloading-youtube-videos-ss-trick-227778.html
...change the "227778" to "227777"...
http://www.computerforums.org/forums/social-lounge-off-topic/downloading-youtube-videos-ss-trick-227777.html
...and I am redirected to...
http://www.computerforums.org/forums/networking-dns/nic-not-working-please-help-me-227777.html
Maybe we should start bumping a shitload of old threads at the same time. Starting in this range...
http://www.computerforums.org/forums/software-operating-systems/connecting-linux-internet-107777.html
107778
107779
107780
107781
107782
...
...
...
-
2016-02-28 at 7:14 PM UTC
I don't understand why you're crawling for links though… You can just change the number of the post in a URL and it'll redirect to that specific thread.
Example…
http://www.computerforums.org/forums...ts-227779.html
…change the "227779" to "227778"…
http://www.computerforums.org/forums...ts-227778.html
…and I am redirected to…
http://www.computerforums.org/forums...ck-227778.html
…change the "227778" to "227777"…
http://www.computerforums.org/forums...ck-227777.html
…and I am redirected to…
http://www.computerforums.org/forums...me-227777.html
Maybe we should start bumping a shitload of old threads at the same time. Starting in this range…
http://www.computerforums.org/forums...et-107777.html
107778
107779
107780
107781
107782
…
…
…
Yeah, C niggas keep telling me 'bout this nifty string replace you've got, apparently; I'm sure there's an equivalent in Python (sketch below). Also, crawling all the forum links literally takes two seconds, since I'm using the sitemap just as a search engine would; the only thing I have to do then is iterate over the links and filter them down to the kind of link I need, then just pick a random link from that selection.
Furthermore, I'm not changing my program nao, you faygit, not after all the work I put into getting the list comprehension right, unless I absolutely have to. I took a break, btw; I still need to debug the last piece of code where it selects the proper form for posting.
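As for the string replace, the Python version of that ID walk is straightforward (a minimal sketch; the base URL is just the cats thread quoted above):

# Sketch: swap the numeric thread ID in a vBulletin-style URL.
base = "http://www.computerforums.org/forums/social-lounge-off-topic/do-any-you-like-cats-227779.html"
for thread_id in range(227775, 227780):
    print base.replace("227779", str(thread_id))
-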
2016-02-28 at 9:20 PM UTC
Just seems like a lot of work when we could just post a reply to a URL that has an increasing integer within it.
-
2016-02-28 at 10:41 PM UTC
Just seems like a lot of work when we could just post a reply to a url that has an increasing integer within it.
You're right, redundancy is bad. Besides, I'm still failing and I know why, so perhaps I'll look into what you're proposing. I'll get back to you in a semi-expedient manner.
-
2016-03-01 at 1:40 AM UTC
Oy gevalt you goys!
Debugged, tested, and refactored. Works like a charm, and look at that fancy digit generator function. String replace was weird in Python, so I just made a function that takes an amount and a range as args and used it to generate a string of digits randomly.
import os
import random
import mechanize
import time
import string

# Mechanize browser and set user agent
br = mechanize.Browser()
br.set_handle_robots(False)
br.addheaders = [("User-agent", "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]

def login():
    print "[+]Logging in."
    # Open login page
    br.open("http://www.computerforums.org/forums/login.php?do=login")
    # Select first form (login form) and set values to the credentials -
    # of the account made in advance for spamming purposes
    br.form = list(br.forms())[0]
    br["vb_login_username"] = "username"
    br["vb_login_password"] = "prettyflypassword"
    # Submit values for username and password fields
    response = br.submit()
    print "\n[+]Response:"
    print
    print response
    print
    print "[+]Selecting random URL by page/thread ID"
    # Call function to start posting spam
    post()

# Function to generate a random string of digits to replace the original page/thread ID
def digit_generator(size=5, chars=string.digits):
    return ''.join(random.choice(chars) for _ in range(size))

def post():
    try:
        while True:
            random_url = "http://www.computerforums.org/forums/software-operating-systems/connecting-linux-internet-1" + digit_generator(5, "0987654321") + ".html"
            print
            print "[+]Selected URL:"
            print
            print random_url
            br.open(random_url)
            # Select 'vbform' which is the name of the quick reply form -
            # if not present we've either been banned or are otherwise -
            # unable to post in this thread
            try:
                br.select_form("vbform")
            except:
                print "\n[!]Could not find quick reply form. Unable to post on page"
                print "\n[+]Consider inspecting selected URL manually in your browser"
                choice = raw_input("Retry? Y/n")
                if "y" in choice.lower():
                    print "\nRetrying"
                    login()
                elif "n" in choice.lower():
                    print "\nQuitting"
                    break
                else:
                    print "\nUnhandled option, quitting"
                    break
            print "\nPosting message"
            # Message to spam
            br["message"] = "Spam goes here"
            # Set values for checkbox controls where needed
            try:
                br["quickreply"] = 1
                br["forcepost"] = 1
            except:
                pass
            response = br.submit()
            print "\n[+]Response: "
            print
            print response
            print
            print "[+]Message was posted successfully"
    # Handle CTRL+C
    except KeyboardInterrupt:
        print "CTRL+C Caught, quitting"

login()
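For reference, digit_generator(5, "0987654321") just returns five random digit characters, so the URL above always lands on a thread ID in the 100000-199999 range, i.e. right around the block of old threads proposed earlier:

# Sketch: each call yields a fresh random string of digits.
print digit_generator(5)          # e.g. "40217"
print "1" + digit_generator(5)    # e.g. "140217", a six-digit thread ID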
Ready when you are.
-
2016-03-01 at 1:43 AM UTC
My body is ready. What do we do?
-
2016-03-01 at 2:19 AM UTC
What is this bot going to spam? Kek
-
2016-03-01 at 3:11 AM UTC
What is this bot going to spam? Kek
Whatever I tell it to.
-
2016-03-01 at 3:13 AM UTC
[SIZE=72px]I said, MY BODY IS READY. WHAT DO WE DO??????[/SIZE]
-
2016-03-01 at 3:21 AM UTC
[SIZE=72px]I said, MY BODY IS READY. WHAT DO WE DO??????[/SIZE]
Get the Python interpreter and the mechanize module, and follow the instructions on my GitHub.
https://github.com/NullArray/vBulBot
To spam computerforums.org for the lulz.
-
2016-03-02 at 10:55 AM UTC
Nice work; you seem to be getting pretty good with Python.