
Lanny, your site is cramping my coding style.

  1. #1
    Sophie Pedophile Tech Support

    import urllib2

    myurl = "http://niggasin.space"

    try:
        response = urllib2.urlopen(myurl)
        response_headers = response.info()
        print response_headers
    except urllib2.HTTPError, e:
        print("Error code: %s" % e.code)



    Well, it's probably just me, but this prints 403 Forbidden. Why's that? :/
  2. #2
    Sophie Pedophile Tech Support
    Ok Lan you're a huge faggot because this code:


    import urllib2

    # Derive from Request class and override get_method to allow a HEAD request.
    class HeadRequest(urllib2.Request):
        def get_method(self):
            return "HEAD"

    myurl = 'http://www.google.com'
    request = HeadRequest(myurl)

    try:
        response = urllib2.urlopen(request)
        response_headers = response.info()
        print response_headers
    except urllib2.HTTPError, e:
        print("Error code: %s" % e.code)




    Prints this:


    Date: Thu, 31 Mar 2016 19:13:00 GMT
    Expires: -1
    Cache-Control: private, max-age=0
    Content-Type: text/html; charset=ISO-8859-1
    P3P: CP="This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info."
    Server: gws
    X-XSS-Protection: 1; mode=block
    X-Frame-Options: SAMEORIGIN
    Set-Cookie: NID=78=ky3O8wfyx8xrhFO-79CCIrL1e2xeQy2cXUlsPSK1BYtXE6hFkXuCB6XmJZlDemCK5lPiMYf3zIAGC5QTeIc3pYTnyF_Vizz7xnL6p7BxaMgdjmpz6X59ki21uoOO7OZS; expires=Fri, 30-Sep-2016 19:13:00 GMT; path=/; domain=.google.nl; HttpOnly
    Set-Cookie: CONSENT=CG.251da3; expires=Sun, 01-May-2016 19:13:00 GMT; path=/; domain=.google.ru
    Accept-Ranges: none
    Vary: Accept-Encoding
    Connection: close


    But if i use niggasin.space as the URL, Python cucks me and gives me a 403 Forbidden.
  3. #3
    Sophie Pedophile Tech Support
    Nevermind, Python user-agent is VERBOTEN. I'll spoof one instead. Turns out urllib2 is a proper bitch if you want to spoof user agent and only retrieve response headers.
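(For anyone hitting the same wall: the agent that's VERBOTEN here is the stock `Python-urllib/x.y` string every default opener sends. In modern Python 3 (`urllib.request`) you can inspect that default and bolt a spoofed one onto the Request directly — a sketch only, nothing below actually touches the network, and the Firefox string is just an example:)

```python
import urllib.request

# The default opener announces itself as "Python-urllib/x.y" --
# exactly the agent that strict servers answer with a 403.
opener = urllib.request.build_opener()
print(dict(opener.addheaders)["User-agent"])

# The spoof: hand the Request a browser User-Agent instead.
req = urllib.request.Request(
    "http://niggasin.space",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux i686; rv:1.9.2.13) Firefox/3.6.13"},
)
print(req.get_method())      # still a plain GET by default
print(req.get_header("User-agent"))
```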
  4. #4
    Sophie Pedophile Tech Support
    urllib and friends are gay as fuck, just like beautiful soup. All i need is mechanize.


    import mechanize

    br = mechanize.Browser()
    br.set_handle_robots(False)
    br.addheaders = [("User-agent","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13")]

    target = br.open("http://niggasin.space")

    print target.info()


    Six lines of code. Fuck you, urllib 1/2, httplib and all that bullshit.


    Date: Thu, 31 Mar 2016 22:12:48 GMT
    Content-Type: text/html
    Transfer-Encoding: chunked
    Connection: close
    Set-Cookie: __cfduid=d2e0dba5e8dc813ca8d712d9e573d00b31459462367; expires=F
    1-Mar-17 22:12:47 GMT; path=/; domain=.niggasin.space; HttpOnly
    X-Powered-By: PHP/5.3.3-7+squeeze19
    Set-Cookie: bbsessionhash=e0d89d6eaf061467f9121d3ec84c6062; path=/; httponl
    Set-Cookie: bblastvisit=1459462367; path=/; httponly
    Set-Cookie: bblastactivity=1459462367; path=/; httponly
    X-UA-Compatible: IE=edge,chrome=1
    Vary: Accept-Encoding
    Server: cloudflare-nginx
    CF-RAY: 28c725164a072b34-AMS
    content-type: text/html; charset=UTF-8



  5. #5
    Sophie Pedophile Tech Support
    Yes T&T is my blog, deal with it.
  6. #6
    SBTlauien African Astronaut
    The hell are you talkin bout nigga?
  7. #7
    Sophie Pedophile Tech Support
    The hell are you talkin bout nigga?

    I needed to programmatically retrieve the response headers of niggasin.space, but urllib2's user agent won't do; I get a 403 Forbidden when i try. So i used mechanize to spoof a user-agent and got the response headers anyway.
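(Postscript: the two halves of this thread — the HEAD-only trick from post #2 and the user-agent spoof — fit in a single object in Python 3, where `urllib.request.Request` accepts a `method=` argument (3.3+), so no subclass is needed. A sketch only, nothing sent:)

```python
import urllib.request

# HEAD request plus spoofed agent in one Request; Python 3.3+
# takes method= directly, so the HeadRequest subclass goes away.
req = urllib.request.Request(
    "http://niggasin.space",
    headers={"User-Agent": "Mozilla/5.0"},
    method="HEAD",
)
print(req.get_method())
# urllib.request.urlopen(req).info() would then return just the headers.
```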
  8. #8
    mmQ Lisa Turtle
    Well I could'a' told you THAT! Silly goose.