Posts That Were Thanked by SBTlauien
-
2017-06-06 at 2:44 AM UTC in What would you do if a waitress obviously spilled a whole pot of hot coffee on you?
Start dancing from the thought of the payout I'd get from suing them.
-
2017-06-01 at 8:48 AM UTC in Sophie Just Passed Post Count 12345
-
2017-06-01 at 1:04 AM UTC in Sophie Just Passed Post Count 12345
-
2017-06-01 at 12:17 AM UTC in If you slice open a major artery in your leg...
Originally posted by Sophie How long until you bleed out? Today i was just sitting on a bench smoking a cig looking down at my feet like a sperg and since i am so white i am almost transparent i could see one big ass vein/artery in the calf area of my leg. I thought, dang, that is one big ass vein. And you know i would slice it open myself just to find out, but i am not sure how i feel about bleeding to death.
inb4doitanyway
assuming no one attempts to stop the bleeding...the length of time between actually being 'opened' and death depends on so many different factors that it is almost impossible to say based on what you described. exsanguination is more complicated than the limited info you posted.
even if the blood vessel in the calf is completely severed, through and through, the time until death from cardiac arrest due to the heart running out of 'juice' is almost impossible to predict because of the different physiologies between people. get a group of people with the exact same wound, as described, and at one end of the spectrum could be cardiac failure in a minute; at the other end could be someone who can run half a marathon before expiring. -
2017-05-29 at 6:34 AM UTC in The greatest revenge.
-
2017-05-29 at 5:10 AM UTC in The greatest revenge.
Here's a more effective, very realistic, method of extreme revenge:
1. Cut off someone's limbs bit by bit (from knuckles, to wrists, to elbows, etc.) until they are without arms and legs entirely.
2. Cut out their tongue.
3. Gouge out their eyes.
4. Pop their eardrums.
5. Keep them alive indefinitely.
They are then destined to spend the rest of their miserable lives unable to go anywhere, do anything, or even communicate with anyone around them. They are trapped forever with their own thoughts, and nothing more.
Done. -
2017-05-26 at 3:14 AM UTC in Comcast Userdata Used to Create Support for Anti-Net Neutrality Laws
Apparently someone who has full access to Comcast's userdata has been sending feedback to the FCC in support of plans to roll back 'Net Neutrality' using customer details...
As in, Comcast or someone with access to their userdata has been impersonating Comcast customers to build support for Comcast business interests.
https://www.comcastroturf.com/ - if you're a Comcast customer this site will check to see if your account details have been used to send feedback to the FCC.
http://bgr.com/2017/05/23/comcast-fcc-net-neutrality-legal-analysis/ - summary as to what's going on. Of course Comcast has tried to get an injunction against the above site, hasn't been successful so far though. -
2017-05-26 at 2:21 AM UTC in Lanny Brand CI Server.
Lol, nah, Jenkins isn't my project. They picked the name because it's like a "butler name", similar to Jeeves. We use it at work; I kinda chuckle at the emails of people being like "jenkins is fucking slow" and "jenkins isn't working again".
Originally posted by SBTlauien Looks interesting but I personally don't think I would need it for anything I do.
So what exactly is an automation server? I looked it up and it appears to be something that makes it easier for a programmer to use his/her program with other applications. Is that correct?
So it's basically just a build server, but it's expanded to do a lot of stuff other than builds, so they call this kind of thing an "automation server". A really common workflow is to periodically grab the latest code on a project, build it, run a test suite, deploy it to some testing environment, and notify people if any step in that process fails. That may sound pretty trivial if you haven't worked on massive projects before, but deployment-ready builds can take a long time to run, on the order of hours isn't uncommon, involving dozens of discrete steps. Test suites can be comparably slow. Deployments can likewise have a lot of steps and be pretty complex if you're doing things like signing/verification or DNS sorcery or any of a dozen other things that fall under the heading of "deployment".
There's an argument to be made that things being complex enough to justify this kind of software existing is a pathology, but it does make life easier. I do a similar kind of thing for ISS, automatic build/test you can check out here if you're interested for some reason. Deployment is still an SSH script though. -
2017-05-25 at 12:11 PM UTC in I'm quitting alcohol for at least 3 months.
I recently saved a person who was near death on alcohol and benzos. This person had bruises from head to toe, their eyes were like two windows into hell, they were to the point they wouldn't eat for five or six days at a time, hair was falling out (they had been wearing hair extensions to hide it), couldn't even stand up for more than a few minutes, lost their vision almost completely in the mornings, lost their job, their family, had no money at all, couldn't even go near anyone or even look at anyone, because of an overwhelming sense of paranoia and anger. Basically, just hours or days away from death. And suicidal as well. They were up to two liters of hard liquor a day.
The only way for me to get them out of it was to literally go to their home, physically pick them up off their bed, throw them in a taxi, and take them personally to the concurrent disorders stabilization unit, where I had to spend five hours answering the same questions over and over and over again, finally having to demand an immediate meeting with the unit's head administrator, and then another two hours convincing that person that there was no time to wait, that I would personally hold the hospital accountable if this person did not receive immediate care. Finally, after spending the entire day at the unit, I managed to get the patient admitted and into a bed, where they were administered Valium for the alcohol addiction, under observation, over a period of 10 days, and then weaned off the benzos for the next 10 days. I went to the hospital each day for the 20 days and offered support and encouragement, financial support and guidance.
Today, that person is in perfect health, has regained their relationships, is working, and doesn't touch either booze or pills anymore. It was a huge undertaking, but it ended up being well worth the effort. I did my part. -
2017-05-24 at 9:26 PM UTC in I'm quitting alcohol for at least 3 months.
-
2017-05-24 at 12:19 AM UTC in ATTN: Bill KrozbyI'll just leave this here. Post more in my threads and get more threads wirth your dox. Also, i am posting this to /baphomet/.
Douglass E. Monks
DOB: 6-26-87
4410 Avenue F, Appartment 309
Austin, TX 78751
IP: 72.182.102.192
Phone: (+1)512-945-6786
Arrest Record
http://www.bustedmugshots.com/search?first_name=Douglas&last_name=Monks&state=TX
Possible Relatives
Paula Berry Monks, Facebook - https://www.facebook.com/paula.monks.77 and https://www.facebook.com/pbmonks
Larry Monks, Facebook - https://www.facebook.com/larry.monks.902
Steven Monks, Facebook - https://www.facebook.com/steven.monks.79
Riley Monks, Facebook - https://www.facebook.com/riley.monks
What's you dad's email adress? Oh wait, i can probably figure that out on my own. -
2017-05-22 at 9:13 PM UTC in My life has really been horrible, but...
-
2017-05-22 at 8:02 PM UTC in People that lack a conscience make me sick
Originally posted by Lanny I don't mean to be a jerk, but just based on your posts in the past it seems like you're willing to do what's necessary to get ahead as well no? I think I remember you posting about stealing from employers and gaming unemployment benefits. Do those not fall under morally unacceptable for you or am I remembering wrong?
IDK about SBT but my problem is with pettiness, causing needless harm, and thoughtlessness. I know it sounds a little odd, but if you go through life not thinking about other people as you do something "wrong", you are a piece of shit. All I want is for someone, when they're wronging someone, to look at it and think about it, and see if they could change their plan to be less of a cuntass. I can respect drawing the shortest line from point A to point B. But you've got to navigate the maze, not draw over it. If your position is "fuck you, I got mine", you are a piece of shit. -
2017-05-22 at 1:47 PM UTC in Watercooling Setup Plans
Originally posted by SBTlauien Do you think all of that is necessary though?
LOL no
--snip for massive photo see next page--
I've got some more parts coming but after how long it took me to get it running nicely, I will not be taking it apart again for a while. The only thing I plan on adding for the moment is an LED light colour controller; all of the LEDs in the case are RGB but my controller is a piece of shit so they're all permanently switched to green.
The green is growing on me though.
Post last edited by aldra at 2017-05-23T04:22:59.667353+00:00 -
2017-05-21 at 10:35 PM UTC in Lanny's Quick and Dirty Web Scraping Tutorial
Somebody asked me for some help with a scraper for mangareader.net recently and I ended up writing a fair amount on it; most of it is applicable to web scraping in general. Thought it might have some use to a wider audience, so here's a lightly edited version:
For web scraping kind of things you want to use node, which can run outside of a browser context; the browser security model makes running scraping code in-browser a significant hurdle. You'll need node and npm installed.
On a very high level, web scraping consists of two parts: enumeration and retrieval. Enumeration is finding the list of all the things you want to scrape. Retrieval is fetching the actual content you want (pages of manga in this case) once it's been identified.
If you want to do a full site scrape, enumeration looks kind of hierarchical: first you'll want to collect all the series hosted on a site, then all the chapters belonging to each series, then each page belonging to each chapter. For the sake of simplicity I suggest starting with scraping all the pages from just one chapter. Once you have code that works on one chapter you can start to generalize: turn it into a parameterized process that fetches all the pages of some chapter, and move on to enumerating chapters of a series. You can work bottom-up in this fashion. In general this is a pretty good strategy for scraping.
So to look at something a little more concrete let's take a look at scraping all the pages from a particular chapter on mangareader. Let's look at the markup for a page like this one: http://www.mangareader.net/bitter-virgin/1
We're looking for something that will list all the pages in the chapter. The jump-to-page thing looks promising:
<select id="pageMenu" name="pageMenu"><option value="/bitter-virgin/1" selected="selected">1</option>
<option value="/bitter-virgin/1/2">2</option>
<option value="/bitter-virgin/1/3">3</option>
...
</select>
Awesome, it looks like there's an element that has a bunch of sub-elements with `value` attributes that point to each page in the chapter. Now we need to write some code to grab those. Here's what I came up with on the fly, no error handling or anything but it's simple. It depends on two libraries, "request" and "jsdom", for making requests and parsing responses respectively; you'll need to install these on your system using npm if you haven't before:
#!/usr/bin/env node
var jsdom = require('jsdom');
var request = require('request');
var firstPage = 'http://www.mangareader.net/bitter-virgin/1';
// Make a request to the firstPage url which we know contains urls to each
// page of the chapter.
request(firstPage, function(err, response, body) {
  // `body` is just a string; here we parse the content into something we
  // can work with.
  var content = new jsdom.JSDOM(body);
  // Use a CSS selector to get the elements we're interested in. The selector is
  // the '#pageMenu option' part. It says "return the list of all option elements
  // which are a descendant of the element with pageMenu as its id". We know
  // each of those elements has a value attribute we're interested in.
  var opts = content.window.document.querySelectorAll('#pageMenu option');
  // Iterate over all the option elements we just collected and print their
  // value attribute.
  opts.forEach(function(opt) {
    console.log(opt.value);
  });
});
This works for me; it outputs a list of page urls. So that's enumeration done, now we need to write the fetching logic. Each page has an element with "img" as its id, which points to the image of the page. So we need to fetch the viewer page to get that, and then fetch the image itself and save it. Here's what that looks like:
#!/usr/bin/env node
var jsdom = require('jsdom');
var request = require('request');
var fs = require('fs');
// Visit a page url and download and save the page image
function fetchPage(pageUrl, filename, done) {
  // Grab the page url. Note there's a difference between the asset at the page
  // url (which contains html for ads and navigation and such) and the actual
  // image which we want to save.
  request(pageUrl, function(err, response, body) {
    // Parse content
    var content = new jsdom.JSDOM(body);
    // Identify the img tag pointing to the actual manga page image
    var img = content.window.document.querySelector('#img');
    // Make another request to get the actual image data
    request({
      url: img.src,
      // Must specify null encoding so the response doesn't get interpreted
      // as text
      encoding: null
    }, function(err, response, body) {
      // Write the image data to the disk.
      fs.writeFileSync(filename, body);
      // Call done so we can either fetch another page or terminate the program.
      done();
    });
  });
}
// Fetch each image from a list of pages and save them to disk
function fetchPageList(pageList, timeout) {
  var idx = 0;
  // This recursive style may look strange; you don't _really_ need to worry
  // about it, but it's important because of the async nature of HTTP requests.
  // If we did a simple for loop here every request would be fired in parallel,
  // so instead we'll process one page at a time, starting the next page's fetch
  // slightly after the previous one finishes.
  function fetchNext() {
    // Wait some amount of time between making each request. We could make
    // all these requests in parallel or one right after the other, but aside
    // from being unkind to the host, many websites will refuse to serve requests
    // if we make too many at once or too close to each other.
    setTimeout(function() {
      // Fetch the actual page
      fetchPage(pageList[idx], 'bitter-virgin-' + idx + '.jpg', function() {
        // Fetch complete. Move onto the next item if there is one.
        idx++;
        if (idx < pageList.length) {
          fetchNext();
        } else {
          console.log('Done!');
        }
      });
    }, timeout);
  }
  fetchNext();
}
var firstPage = 'http://www.mangareader.net/bitter-virgin/1';
// Make a request to the firstPage url which we know contains urls to each
// page of the chapter.
request(firstPage, function(err, response, body) {
  // `body` is just a string; here we parse the content into something we
  // can work with.
  var content = new jsdom.JSDOM(body);
  // Use a CSS selector to get the elements we're interested in. The selector is
  // the '#pageMenu option' part. It says "return the list of all option elements
  // which are a descendant of the element with pageMenu as its id". We know
  // each of those elements has a value attribute we're interested in.
  var opts = content.window.document.querySelectorAll('#pageMenu option');
  var pageList = [];
  opts.forEach(function(opt, index) {
    pageList.push('http://www.mangareader.net' + opt.value);
  });
  fetchPageList(pageList, 200);
});
As it stands, fetchPage is parameterized; that is, it's equipped to fetch any page from any manga on mangareader. `fetchPageList` and the logic for fetching a chapter assume one particular series, however. To do a full scrape of the site, they need to be generalized so that another process can enumerate the series and chapters and execute the generalized version of this process for each. That generalization is left as an exercise for the reader. -
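A possible first step toward the generalization the tutorial above leaves as an exercise: derive the series, chapter and page from a page url so the filename no longer hardcodes one series. This is a sketch; the url shapes ('/<series>/<chapter>' and '/<series>/<chapter>/<page>') are assumptions inferred from the '/bitter-virgin/1' and '/bitter-virgin/1/2' examples above, not from any documentation:

```javascript
// Sketch: parse a mangareader-style page url into its parts. The url
// format is an assumption based on the examples in the tutorial above.
function parsePageUrl(url) {
  // Strip the scheme and host if present, leaving just the path.
  var path = url.replace(/^https?:\/\/[^\/]+/, '');
  var parts = path.split('/').filter(function(p) { return p.length > 0; });
  return {
    series: parts[0],
    chapter: parseInt(parts[1], 10),
    // The first page of a chapter has no trailing page number.
    page: parts.length > 2 ? parseInt(parts[2], 10) : 1
  };
}

// Build a filename like 'bitter-virgin-1-2.jpg' for any series/chapter/page,
// which could replace the hardcoded 'bitter-virgin-' prefix in fetchPageList.
function pageFilename(url) {
  var p = parsePageUrl(url);
  return p.series + '-' + p.chapter + '-' + p.page + '.jpg';
}

console.log(pageFilename('http://www.mangareader.net/bitter-virgin/1/2'));
// prints "bitter-virgin-1-2.jpg"
```

From there, fetchPageList can take a filename function as a parameter, and a chapter enumerator can call the generalized version once per chapter.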
2017-05-20 at 6:31 PM UTC in Just found this place
-
2017-05-19 at 11:52 PM UTC in keeping light and heat out of a bedroom
air conditioning
-
2017-05-19 at 5:17 PM UTC in FBI currently uses mind reading and brain to brain communication; Remote Neural Monitoring
-
2017-05-19 at 5:43 AM UTC in Intel's Management Engine
SHORT VERSION:
Intel's Management Engine, or Active Management Technology depending on where you look, is a low-level subsystem that's attached to every Intel chip produced after 2008 (I believe). It runs whenever the chip is powered (even if the computer itself is switched off), and its purpose is to 'provide trust' that the processor isn't compromised. It's completely invisible to the user, but has complete access to the processor as well as the ability to power the machine up or down, interfere with the boot process, send/receive TCP network traffic through its own independent MAC forwarded by the network adapter, and run arbitrary code locally. Efforts to dump its code and understand its workings, potentially leading to an exploit, are underway, but the way the core firmware is compressed and obfuscated, as well as peripheral functions being stored on ROM chips, makes progress very difficult.
When it's exploited, if it hasn't been already, every machine running a recent Intel chip will be outfitted with a rootkit that can't be disabled (breaking or disabling the ME coprocessor forces the computer to shut down on a timer). Don't think switching to AMD will make much of a difference either... They have a very similar system (TrustZone) that's implemented via an on-chip ARM coprocessor.
TECHNICAL:
The AMT unit itself is a separate on-chip coprocessor that has several supporting components, such as ROM and RAM for firmware and temporary data storage, as well as a 'DMA engine' that allows it unfettered access to memory in use by the user-installed operating system, meaning it can potentially subvert the program flow of Windows, Linux or whatever OS you're using without any warning or indication. It also has its own simple TCP stack, which has been demonstrated to be insecure in the past; it has a hardcoded MAC address different to the standard NIC's and is essentially able to relay through the NIC to forward requests to the internet or LAN. The ME itself is composed of the core firmware, which is compressed, encrypted and obfuscated, only decoded on the fly to run commands, plus modules and components stored on ROM chips, which cannot be dumped or accessed directly.
The original purpose of the AMT was to provide trust for the CPU itself; you may compile applications from source because you want to be able to see what they do before you 'trust' them enough to compile, but you then also need to be able to trust the compiler that builds them, any dependencies that get linked in, and anything that runs below the application, ie. the operating system, drivers used and the like. You can continually move down the chain, checking source or watching applications' behaviour to verify they're working as advertised, but once you get to hardware, specifically the processor in this case, it's a black box - there's no way to directly view the source, so the only way to 'trust' that it's not compromised is through a third party that can verify such. This raises the question of how you can verify that the third party is trustworthy - you can't. One of its popular uses now is to facilitate remote installation and administration functionality on behalf of sysadmins.
OTHER:
It would surprise me if some of the alphabet agencies don't already have access to this - it may even have been among the exploits stolen from the NSA's archives, but hasn't been released because whoever released them publicly knew its value. Much of the system's code is stored on ROM chips and untouchable; it cannot be reflashed or updated, meaning that if an exploit exists in it, there is nothing users can do to protect themselves - they'd literally need to buy a new processor once Intel gets around to patching or rewriting the AMT codebase.
Manufacturers have ostensibly worked with NSA contractors in the past, specifically in the case of hard drive firmware exploits used to cache and transmit data - without co-operation from the major hard drive manufacturers, such an exploit would take years to develop per manufacturer, and there are around 10 of them.
COUNTERMEASURES:
At the moment, there's very little that can be done to mitigate your risk of being exploited, because even if no exploit exists today, it will. Disabling the AMT platform causes the computer to shut down after a countdown, but it's been observed that if chunks of the AMT's firmware are erased or overwritten, it stays in the 'running' state but stops responding.
You may be able to sniff TCP data to/from the platform by filtering by MAC address, but I'm not sure how possible it is to mask those requests. -
2017-05-18 at 10:17 PM UTC in Hats Off To Lanny
Dicks up for spectral's middle school gym teacher.