Posts by Sophie

  1. Sophie Pedophile Tech Support
    Whatever you do, kill the homeless person anyway.
  2. Sophie Pedophile Tech Support
    That's incest.

    Incest is hawt, but homosexuality is nawt.
  3. Sophie Pedophile Tech Support
    lol you're cringey

    You're doing it wrong.
  4. Sophie Pedophile Tech Support
    LOL nah

    Your allcaps 'lol' seems to contradict that statement.
  5. Sophie Pedophile Tech Support
    sorry m8 was busy coding and doing drugs yesterday
  6. Sophie Pedophile Tech Support
    The irony is strong with this thread, if anything i've totally learned a shitload. i can feel my 1337ness expanding.
  7. Sophie Pedophile Tech Support
    Turns out combining energy manipulation with physical combat doesn't help you when you're trying to run a website.

    Who'd have thunk it?
  8. Sophie Pedophile Tech Support
    using tor requires completing a captcha every 5 mins. fix this shit.

    I was going to suggest a solution, but i reconsidered because i'd rather see you suffer.
  9. Sophie Pedophile Tech Support
    Wew, nearly there.


    import os
    import random
    import requests
    import mechanize
    import time
    import re

    # Mechanize browser
    self = mechanize.Browser()
    self.set_handle_robots(False)
    self.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]


    def login(self):
        global depth_two

        print "[+]Logging in."

        self.open("http://www.computerforums.org/forums/login.php?do=login")

        self.form = list(self.forms())[0]
        self["vb_login_username"] = "Space Negro"
        self["vb_login_password"] = "prettyflypassword"


        response = self.submit()
        time.sleep(1)

        print
        print "[+]Response."
        print response
        print
        print "[+]Crawling for links."

        self.open("http://www.computerforums.org/forums/sitemap/sitemap.xml")

        depth_one = list(self.links())

        for link in depth_one:
            if "html" in link.url:
                self.open(link.url)
                depth_two = list(self.links())

        print
        print "[+]Done."

    def loop():
        post(self)

    def post(self):
        global depth_two

        while True:
            for link in depth_two:
                if not "sitemap" in link.url:
                    single_url = (random.choice(link.url))

                    try:
                        response = self.open(single_url)
                    except:
                        print "[!]HTTP Error while trying to open page, trying new page..."
                        loop()

                    forms = ParseResponse(response, backwards_compat=False) #DEBUG
                    print forms #DEBUG
                    break #DEBUG

                    # Error
                    self.form = list(self.forms())[2] # List index out of range?
                    self["message"] = "You guys think you're computer experts? Check out [URL="http://niggasin.space/forum/technophiliacs-technophiles"]this[/URL] forum."

                    final = self.submit()
                    time.sleep(1)

                    print "[+]Message posted successfully"
                    print
                    print final

    def main():
        login(self)
        post(self)

    main()


    Only problem left is selecting the proper form. See comments in last chunk of code for details.
  10. Sophie Pedophile Tech Support
    You are a bad, bad man.

    And you're useless. But fear not, i am useful for the both of us, for success is close. You probably won't know what i'm talking about, but i've had an epiphany. First off, i was using the wrong version of the BeautifulSoup module, which stores whatever it finds with the findAll() function as a Soup object. That gives me a type error if i want to iterate over it, since it's not a list object, and if i put a Soup object inside a list object i still can't iterate over it, because if i want to search for a regular expression, for example, i can't: the only thing inside the list object is a Soup object, not a string. Secondly, i highly underestimated the power of the Mechanize module, and here's the kicker, if i do this:

    self = mechanize.Browser()


    Python is essentially emulating a browser without a GUI, which has the ability to identify and parse links on its own. So fuck making a list of URLs with BeautifulSoup if i can do it automatically with Mechanize. Like so.


    self.open("http://www.computerforums.org/forums/sitemap/sitemap.xml")

    depth_one = list(self.links())

    for link in depth_one:
        if "html" in link.url:
            self.open(link.url)
            depth_two = list(self.links())


    Well that was easy, kind of hard to tell if i got all the links though.
  11. Sophie Pedophile Tech Support
    Ok so i've tried it like this now.


    def spider():
        global links
        links = []
        # Get URLs from sitemap and spider
        sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"

        sitemap_response = requests.get(sitemap_xml)
        soup = BeautifulSoup(sitemap_response.content)

        # print soup

        tag = soup.findAll('li')
        tag.append(links)

        for x in list(links):
            url = enhancement("<a href>", links)
            result = requests.get(url)
            result.append(links)

        print links


    So now when i print links i get an empty value for it, like so: []
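    (Editor's aside: the empty list comes from the direction of the append. `tag.append(links)` pushes the empty `links` list *into* `tag`, so `links` itself never receives anything. A minimal stand-alone sketch of the difference, using plain strings in place of the Soup tags:

```python
# Stand-in for soup.findAll('li') -- plain strings instead of Soup tags
tag = ["<li>a</li>", "<li>b</li>"]

# What the post does: append the empty results list INTO the found tags.
# links itself stays empty.
links = []
tag.append(links)
print(links)        # []
tag.remove(links)   # undo the accidental append

# What was meant: collect each found item into links.
links = []
for t in tag:
    links.append(t)
print(links)        # ['<li>a</li>', '<li>b</li>']
```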
  12. Sophie Pedophile Tech Support
    I have identified the problem to be one of iterating through a list. So when i store all <li> tags i need to iterate over every instance of li and from there enhancement on <a href> tags to find the actual links, then i need to parse the links store them to a variable from which i can select one url at random, or something similar.
  13. Sophie Pedophile Tech Support
    I would help but I don't know shit about Python. I did notice that that forum lets users post as much as they want with no flood limit.

    This is going to be good when it works as it should then.
  14. Sophie Pedophile Tech Support
    So as some of you know Suicidal Fish recently got into a bit of trouble over at computerforums.com which, by the way is a website so cancerous it should not be allowed to exist. Which is why i am joining the effort to inflict some cyber terrorism upon them. I already did a quick crawling and fuzzing for vulns, but there were no obvious ones so i decided i'd make a spambot. Now bear in mind i never make bots and i'm also unfamiliar with the several web based modules python has, but that's ok, i read some documentation but sadly there are still a couple things that don't quite work as i'd like them to.

    [SIZE=48px][SIZE=28px]Post is deprecated. Please use this post from this point[/SIZE][/SIZE]


    This part of the code works and does its job swimmingly.

    import os
    import random

    from BeautifulSoup import BeautifulSoup
    from urlparse import urlparse

    import requests
    import mechanize
    import time

    # Mechanize browser for form input
    self = mechanize.Browser()
    self.set_handle_robots(False)
    self.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]


    def login(self):
        self.open("http://www.computerforums.org/forums/login.php?do=login")

        self.form = list(self.forms())[0]
        self["vb_login_username"] = "Space Negro"
        self["vb_login_password"] = "prettyflypassword"


        response = self.submit()
        time.sleep(2)

        print response


    So we got logging in covered, but to be a proper spambot you need to be able to spider or crawl for links. I decided to use the BeautifulSoup module for this, because i needed some C and C++ things for scrapy to work, so whatever.


    def spider():
        global links

        sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"

        sitemap_response = requests.get(sitemap_xml)
        soup = BeautifulSoup(sitemap_response.content)

        for links in soup.findAll('li'):
            for url in links.findAll('a', 'href'):
                for result in url.requests.get(url):
                    if result:
                        results.append(result)

        print links


    So this works, kinda'. I can't just do soup.findAll("url") because it doesn't know how to do that, apparently. So i had my code look for 'li' tags, because that's the kinda tag all the links are within, but also some other things, so i had to sort for "a href" as well to get the actual links. But when i print links i get maybe one or two back, and i know there are more, because when i print sitemap_response i see a shitload. Also i still don't know how to let it look for pages below the ones it stores.
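    (Editor's aside: a sitemap is XML, not HTML, which is why tag searches behave oddly when the document is parsed as HTML. A minimal sketch of pulling the <loc> URLs out of a sitemap-shaped document with the standard library instead of BeautifulSoup; the URLs here are made-up placeholders, and real sitemaps additionally carry an XML namespace that would have to be included in the tag names:

```python
import xml.etree.ElementTree as ET

# Toy sitemap fragment; real sitemaps use the same <url><loc>...</loc></url> shape
sitemap = (
    "<urlset>"
    "<url><loc>http://example.com/page1.html</loc></url>"
    "<url><loc>http://example.com/page2.html</loc></url>"
    "</urlset>"
)

root = ET.fromstring(sitemap)
# One entry per <url> element, taking the text of its <loc> child
urls = [url.find("loc").text for url in root.iter("url")]
print(urls)  # ['http://example.com/page1.html', 'http://example.com/page2.html']
```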

    Also i needed a way to automatically go to a random page and post my intended message.


    def post(self):
        global links
        while True:
            single_url = (random.choice(links))

            self.open(single_url)
            self.form = list(self.forms())[0]
            self["vbform"] = "You guys think you're computer experts? Check out [URL="http://niggasin.space/forum/technophiliacs-technophiles"]this[/URL] forum."

            response = self.submit()
            time.sleep(1)

            print response

    def main():
        login(self)
        spider()
        post(self)

    main()


    As it stands i am getting this error when the line that has single_url on it gets executed.


    Traceback (most recent call last):
      File "C:\bot.py", line 66, in <module>
        main()
      File "C:\bot.py", line 64, in main
        post(self)
      File "C:\bot.py", line 52, in post
        single_url = (random.choice(links))
      File "C:\Python27\lib\random.py", line 275, in choice
        return seq[int(self.random() * len(seq))]  # raises IndexError if seq is empty
      File "C:\Python27\lib\site-packages\BeautifulSoup.py", line 613, in __getitem__
        return self._getAttrMap()[key]
    KeyError: 0


    Any help would be appreciated, thanks guys.
  15. Sophie Pedophile Tech Support
    I think he has a pretty good chance, i'd like to see him win.
  16. Sophie Pedophile Tech Support
    Never built a bot but this is what i could come up with, have not tested it yet and it quite possibly contains several errors. I'll have to test its functionality tomorrow, unless Lanny or someone else who knows python decides to do so and improve my concept.


    import os
    import random

    from BeautifulSoup import BeautifulSoup
    from urlparse import urlparse

    import mechanize

    # Mechanize browser for form input
    self = mechanize.Browser()
    self.set_handle_robots(False)
    self.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]


    def login(self):
        self.open("http://www.computerforums.org/forums/login.php?do=login")

        self.form = list(self.forms())[0]
        self["vb_login_username"] = "Space Negro"
        self["vb_login_password"] = "prettyflypassword"


        response = self.submit()
        # print response


    def spider():
        global parsed
        # Get URLs from sitemap and spider
        sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"

        sitemap_response = requests.get(sitemap_xml)
        soup = BeautifulSoup(sitemap_response.content)

        elements = soup.findAll("url")
        urls = [elem.find("loc").string for elem in elements]
        # Save parsed URLs to parsed var.
        for url in urls:
            parsed = urlparse(url)


    def post(self):
        while True:
            # Select one of the URLs we found randomly and post spam, repeat ad infinitum
            single_url = (random.choice(parsed))

            self.open(single_url)
            self.select_form(name = "vbform")
            self["vbform"] = "You guys think you're computer experts? Check out [URL="http://niggasin.space/forum/technophiliacs-technophiles"]this[/URL] forum."

            self.submit()


    def main():
        login(self)
        spider()
        post(self)

    main()


    Updated code, still buggy, since for some reason the Mechanize module can't find the forms by name, and i know for sure that these forms are named thusly. I'll have to identify them by another attribute i suppose.
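    (Editor's aside: one stdlib detail that matters for spider() above: urlparse splits a single URL into a ParseResult tuple of components, it does not accumulate URLs, so assigning `parsed = urlparse(url)` inside a loop keeps only the last result. A quick sketch in Python 3 spelling (`urllib.parse` instead of Python 2's `urlparse` module), with a made-up URL:

```python
from urllib.parse import urlparse

# urlparse splits ONE url into components; the result is not a list of urls
p = urlparse("http://example.com/forums/showthread.html")
print(p.scheme)   # http
print(p.netloc)   # example.com
print(p.path)     # /forums/showthread.html

# Since ParseResult is a tuple of component strings, random.choice(p)
# would pick a component like 'http', not one of the spidered page URLs.
```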

    Ok function to login works, spidering as well, but there's a bug in the last function which i am fixing now.

    It seemed as if the original spider didn't return any results, but the alternative as i have it is not much better, meh.


    def spider():
        global link
        # Get URLs from sitemap and spider
        sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"

        sitemap_response = requests.get(sitemap_xml)
        soup = BeautifulSoup(sitemap_response.content)

        for link in soup.findAll('li'):
            for url in link.findAll('a', 'href'):
                for result in url.requests.get(url):
                    if result:
                        results.append(result)


        print link
  17. Sophie Pedophile Tech Support
    That looks pretty damn good, I might just go ahead and try it as I have some btc lying around. Thanks broslaf.

    edit: I actually checked baphomet and also saw that they recommend cryptostorm - it offers a free tier for me to try. I have an unused openwrt router lying around, so I'll set it up as a dedicated VPN access point. It's been fucking years since I was able to torrent stuff (books, manuals).

    If I could I would thank you 5x times.

    Oh, stop it you. It's my pleasure.
  18. Sophie Pedophile Tech Support
    I'm thinking about trying out that cyberghost you mentioned in your thread. Is it fairly straightforward? I'll troll these guys and drop some links to niggasin.space if I can figure it out. But how do you shut down their site?

    Yes try cyberghost. As to how you can shutdown the site.

    1. Find out somehow, their real IP(They are currently behind cloudflare), and hit it with a medium sized DDoS.
    2. Post lots of cp, then screenshot your posts and send them to their host so that they may terminate the site for violation of ToS.
    3. Find a vulnerability in the web application, in this case vBul, and drop their database, then dehash the admin credentials, log in and delete everything. (Already tried this btw could not find an SQL Injection vuln) However, if they have upload functionality like attachements you might be able to trick the system into accepting a PHP shell, then you just connect to it and you pretty much own the server. Or maybe, they have a subdomain like mail.computerforums.com that is much less secure because it's not usually available to the public, but i have a program that will bruteforce all sobdomains in a matter of minutes so that's no problem.

    If we seriously want to damage them we could but it will take some effort.

    This is all i can think of right now, also with enough determination and people we might be able to bully the admin into giving up, maybe try to dox him and we'll configure the spambots to post his dox in every single thread.
  19. Sophie Pedophile Tech Support
    Also VPNs are not blocked like TOR.
  20. Sophie Pedophile Tech Support
    Get Sophie or someone to build a bot that spams the site with links like that under various different usernames at the same time. DDOS their site after a while. There is a slight chance some people will come here looking for answers. It's easier to steal their members than troll them on a site with such strict rules.

    Never built a bot but this is what i could come up with, have not tested it yet and it quite possibly contains several errors. I'll have to test its functionality tomorrow, unless Lanny or someone else who knows python decides to do so and improve my concept.


    import os
    import random

    from BeautifulSoup import BeautifulSoup
    from urlparse import urlparse

    import mechanize

    # Mechanize browser for form input
    self = mechanize.Browser()
    self.set_handle_robots(False)
    self.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]


    def login(self):
        self.open("http://www.computerforums.org/forums/login.php?do=login")

        self.form = list(self.forms())[0]
        self["vb_login_username"] = "Space Negro"
        self["vb_login_password"] = "prettyflypassword"


        response = self.submit()
        # print response


    def spider():
        global parsed
        # Get URLs from sitemap and spider
        sitemap_xml = "http://www.computerforums.org/forums/sitemap/sitemap.xml"

        sitemap_response = requests.get(sitemap_xml)
        soup = BeautifulSoup(sitemap_response.content)

        elements = soup.findAll("url")
        urls = [elem.find("loc").string for elem in elements]
        # parse urls
        for url in urls:
            parsed = urlparse(url)


    def post(self):
        while True:
            # Select one of the URLs we found randomly and post spam, repeat ad infinitum
            parsed = spider
            single_url = (random.choice(parsed))

            self.open(single_url)
            self.select_form(name = "vbform")
            self["vbform"] = "You guys think you're computer experts? Check out [URL="http://niggasin.space/forum/technophiliacs-technophiles"]this[/URL] forum."

            self.submit()


    def main():
        login(self)
        spider()
        post(self)

    main()


    Updated code, still buggy, since for some reason the Mechanize module can't find the forms by name, and i know for sure that these forms are named thusly. I'll have to identify them by another attribute i suppose.

    Ok function to login works, spidering as well, but there's a bug in the last function which i am fixing now.