CUDA vs OpenCL

Discussion in 'Tech Discussion' started by Fi, May 16, 2018.

Poll: CUDA vs OpenCL

  1. CUDA: 2 vote(s), 50.0%
  2. OpenCL: 0 vote(s), 0.0%
  3. IDK...: 2 vote(s), 50.0%
  1. Fi

    Fi Chosen One

    Joined:
    Mar 28, 2018
    Messages:
    225
    Likes Received:
    125
    Well... I want to broaden my horizons. The intended use so far is computational statistics and machine learning.

    So,
    CUDA:
    1. NVIDIA only.
    2. Proprietary (so continued support from NVIDIA is almost guaranteed, but there's vendor lock-in).
    3. Has better performance than OpenCL (from what I've read).

    OpenCL:
    1. Both ATI/AMD and NVIDIA graphics cards can use it.
    2. Since ATI is an option: they say ATI cards give more "bang for the buck" than NVIDIA, though I haven't researched this yet.
    3. An open standard (royalty-free, maintained by the Khronos Group).

    To be honest, my heart leans toward OpenCL rather than CUDA for my first time learning. Which one should I learn first?
     
  2. prongsjiisan

    prongsjiisan Apostle of Violence

    Joined:
    Dec 8, 2015
    Messages:
    3,071
    Likes Received:
    4,366
    Okay, what are you learning it for?
     
  3. ashah1212

    ashah1212 Active Member

    Joined:
    Jan 11, 2017
    Messages:
    4
    Likes Received:
    3
    If you are starting fresh, I would recommend CUDA. There are already many pre-existing libraries. The "bang for the buck" argument is no longer valid due to the mining craze.
     
    Fi likes this.
  4. Fi

    Fi Chosen One

    For fun right now. Not exactly fun, but I have some interest in this and want to see if it's worth pursuing. No work pressure or anything like that.
     
  5. prongsjiisan

    prongsjiisan Apostle of Violence

    OpenCL then, since it isn't confined to Nvidia. It works on both AMD and Nvidia.
     
    Fi likes this.
  6. gggo

    gggo Well-Known Member

    Joined:
    Aug 9, 2017
    Messages:
    513
    Likes Received:
    333
    It's a question of practicality. If you're just starting to learn, consider what's available to you: the courses/books you're going to use, the language you're using, the hardware you currently have, and how active the community is. It's not just about "performance" or being "proprietary". If most of your prospective companies use one of these technologies, you're only going to screw yourself over by using a divergent library. Anyway, in the end you can always relearn the corresponding modules/libraries; what matters is your foundation in the theory and your ability to apply the knowledge.
     
    Fi likes this.
  7. Truerror

    Truerror Well-Known Member

    Joined:
    Jan 10, 2016
    Messages:
    546
    Likes Received:
    292
    I'd say CUDA (more resources for learning), but unfortunately I'm on a Radeon now...
     
    Fi likes this.