
Using my graphics card in my PC for general processing power

  1. #1
    SBTlauien African Astronaut
    I built my PC some years ago and put a decent graphics card in it but I don't really play video games much. I feel like it's wasted power. Can I use it to assist my CPU's processing?
  2. #2
    Merlin Houston
    Only for highly specialized applications like crypto coin mining, data processing with TensorFlow, rendering fractals, etc. You can write computer programs that specifically use the GPU, but I imagine the learning curve is very steep, and there's not much value without a specific goal.

    TensorFlow is a machine learning Python library made by Google that will automatically take advantage of the GPU if you have one. It's a lot easier to get started with than you'd think: https://www.tensorflow.org/tutorials/mnist/beginners/
    You can basically ignore the math and just know that you're shaping a function/line (or many functions/lines) to represent data points.
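    As a rough sketch (assuming TensorFlow 2.x is installed; the tiny network below is only illustrative, not the exact model from that tutorial), you can see it pick up the GPU automatically:

    # Minimal sketch, assuming TensorFlow 2.x; the small model is illustrative only.
    import tensorflow as tf

    # Shows whether TensorFlow can see a GPU at all; ops land there automatically.
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

    # The MNIST digit data the tutorial uses, with pixels scaled to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small dense classifier for the ten digit classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=1)
    print("test accuracy:", model.evaluate(x_test, y_test)[1])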
  3. #3
  4. #4
    SBTlauien African Astronaut
    Originally posted by Merlin: Only for highly specialized applications like crypto coin mining, data processing with TensorFlow, rendering fractals, etc. You can write computer programs that specifically use the GPU, but I imagine the learning curve is very steep, and there's not much value without a specific goal.

    TensorFlow is a machine learning Python library made by Google that will automatically take advantage of the GPU if you have one. It's a lot easier to get started with than you'd think: https://www.tensorflow.org/tutorials/mnist/beginners/
    You can basically ignore the math and just know that you're shaping a function/line (or many functions/lines) to represent data points.

    Interesting stuff. I wonder if that would work well for bypassing captchas; the tutorial only covered digits, though...

    I was thinking more of something small for Linux that I could just install and benefit from.
  5. #5
    aldra JIDF Controlled Opposition
    mostly just raw low-level math
  6. #6
    Lanny Bird of Courage
    Nope. SIMD processors (like GPUs) usually have much lower clock speeds than CPUs in the same general price range. They're fast because they're highly parallel in a very constrained way, but unless the problem you're trying to solve can be expressed as a uniform transformation of variable data, it won't benefit from GPU acceleration. Your computation also has to be of a certain size: GPU communication involves non-negligible transfer time, so it's easy to end up spending more time shuffling bytes back and forth between the GPU and the main memory/CPU cache hierarchy than it would take to use a vector extension or just iterate. Graphics just happens to be a domain that's highly amenable to SIMD computation.
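    To make the transfer-overhead point concrete, here's a rough sketch assuming the CuPy library (a NumPy-like array library for NVIDIA GPUs) is installed; the array sizes are arbitrary, but for the small one the host/device copies typically swamp any speedup:

    # Rough sketch, assuming CuPy is installed; array sizes are arbitrary examples.
    import time
    import numpy as np
    import cupy as cp

    def cpu_scale(a):
        # Plain NumPy: data stays in main memory, no transfer cost.
        return a * 2.0

    def gpu_scale(a):
        # Copy to the GPU, apply the same uniform transformation, copy back.
        d = cp.asarray(a)      # host -> device transfer
        r = d * 2.0            # runs in parallel across the GPU
        return cp.asnumpy(r)   # device -> host transfer (also synchronizes)

    for n in (1_000, 10_000_000):
        a = np.random.rand(n)
        t0 = time.perf_counter(); cpu_scale(a); t1 = time.perf_counter()
        gpu_scale(a)           # warm-up so CUDA initialization isn't timed
        t2 = time.perf_counter(); gpu_scale(a); t3 = time.perf_counter()
        print(f"n={n}: cpu {t1 - t0:.6f}s, gpu incl. transfers {t3 - t2:.6f}s")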
  7. #7
    -SpectraL coward [the spuriously bluish-lilac bushman]
    Some CMOS BIOS settings have an option to transfer the RAM on the video card to system memory. Try F1 when posting and look for the setting.