Originally posted by Sudo
Do you have anything from nopebeer? I bet you won't get that reference but basically do you have a ton of synths and chemistry info?
I never downloaded the files or PDF'd them back then. I took print screens of shit that happened back then and even videotaped on September 11th. I put a backup of those files on my kid's computer because I didn't have anything other than a Zip drive (100 MB cartridges), unlike other people who had multiple slave drives.
It got deleted, so I depended on the Wayback Machine. They're saying it started up in October 2001; I don't think that's correct, or else the Archive took it over.
And another thing: I've talked many times about being "wildcard IP banned", a term for blocking anyone whose IP shares the first set of numbers (or the second set, along with sets 3 and 4). When I logged in, I would get robots.txt, meaning "DO NOT CAPTURE THIS PAGE". This was done after 9/11, so how did the Wayback Machine capture the early threads? Did it go back retroactively and record all the earlier pages? And why are they scrubbed NOW?
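To make the wildcard-ban idea concrete, here's a rough sketch in Python (the 203.0.0.0/8 range is just an example I picked, not anyone's actual block): banning by the first octet(s) means banning an entire network range, not one address.

```python
# Rough sketch of a "wildcard IP ban": block every visitor whose address
# shares the leading octet(s). The 203.0.0.0/8 range below is a made-up example.
import ipaddress

# Banning "203.*.*.*" is the same as banning the whole 203.0.0.0/8 network;
# "203.0.*.*" would be 203.0.0.0/16, and so on down the octets.
banned_network = ipaddress.ip_network("203.0.0.0/8")

def is_banned(addr: str) -> bool:
    """True if the visitor's IP falls anywhere inside the banned block."""
    return ipaddress.ip_address(addr) in banned_network

print(is_banned("203.0.113.7"))   # True  - same first octet, caught by the wildcard
print(is_banned("198.51.100.7"))  # False - different first octet, unaffected
```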
Is this a policy made after the fact, after Totse and Zoklet requested their shit be scrubbed, not before? Then how do I have copies of pages?
This makes zero fucking sense if Totse had robots.txt (which I admit I saw: he pulled it on me and sent me to a simple HTML page that just said, in basic text with no link or HTML code, the word robots.txt). So how did it get crawled? He put that in shortly after 9/11/01.
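For anyone unfamiliar with what robots.txt actually does, here's a rough sketch of how a compliant crawler treats a site-wide exclusion (the Totse URL and the "ia_archiver" user agent, historically the name the Internet Archive's crawls went by, are just illustrative):

```python
# Rough sketch of a site-wide robots.txt exclusion as a compliant crawler sees it,
# using Python's standard urllib.robotparser. URL and user agent are illustrative.
from urllib.robotparser import RobotFileParser

# A blanket "do not capture" robots.txt: every path disallowed for every crawler.
robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler that honors robots.txt calls can_fetch() before saving a page;
# with the rules above it gets False for everything and skips the site.
print(parser.can_fetch("ia_archiver", "http://www.totse.com/en/bad_ideas/"))  # False
```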
The Wayback Machine then goes on to say this:
Oakland Archive Policy
Wayback's retroactive exclusion policy is based in part upon Recommendations for Managing Removal Requests and Preserving Archival Integrity published by the School of Information Management and Systems at University of California, Berkeley in 2002, which gives a website owner the right to block access to the site's archives.[45] Wayback has complied with this policy to help avoid expensive litigation.[46]
The Wayback retroactive exclusion policy began to relax in 2017, when it stopped honoring robots.txt on U.S. government and military web sites for both crawling and displaying web pages. As of April 2017, Wayback is ignoring robots.txt more broadly, not just for U.S. government websites.[47][48][49][50]