
Bug 2619

Summary: OpenGL Context creation on integrated GPU creates buggy experience for end users
Product: SDL
Reporter: Anders Davallius <anders.davallius>
Component: *don't know*
Assignee: Ryan C. Gordon <icculus>
Status: ASSIGNED
QA Contact: Sam Lantinga <slouken>
Severity: major
Priority: P2
CC: amaranth72, anders.davallius, gabomdq
Version: 2.0.3
Hardware: x86_64
OS: Windows (All)

Description Anders Davallius 2014-07-01 16:59:09 UTC
We have encountered a problem where some of our consumers are using a PC with multiple GPUs: one main dedicated GPU and one integrated Intel GPU.

Overview:
A user has a computer with an integrated Intel GPU and a dedicated GPU with up-to-date drivers capable of compiling OpenGL 3.3 shaders, but runs into problems because the OpenGL context often seems to get created on the Intel GPU (so far we may only have seen this when the dedicated GPU is an AMD GPU), which for many users either has outdated drivers or is incapable of compiling the 3.3 shaders, and in all cases so far is much slower than the dedicated GPU. This often results in crashes, and at other times in a sub-par experience, as the application runs much slower than it should.

This has happened many times on different hardware combinations so far, and it is still happening and causing problems for a lot of users. Some are able to figure it out and force the application to run on the dedicated GPU, but this is not an acceptable solution, as most users won't be able to figure that out.

We know that it is possible to influence which GPU is used when creating an OpenGL context in different ways, at least on Windows and Linux. I'm not entirely sure about Mac, but I would be surprised if it was not possible there.

What we need to see in SDL in order to continue using SDL to create our OpenGL context is the ability either to set a preference for what kind of GPU to use, or perhaps to directly control which GPU to use via some kind of index or identifier. I would greatly prefer the first alternative, as I figure you either want the most powerful GPU, or the most power-efficient one, or don't really care (three alternatives and one setting).

We are currently looking into ways to influence this, and we know it should be possible with the NV_gpu_affinity and AMD_gpu_association extensions, which would probably be the best option. Another alternative could be to use NvAPI and ADL_SDK to insert settings into the drivers, which might influence whether the GL context gets created on the dedicated GPU instead of the integrated one. But as that would require external libraries, it might not be a good alternative.

We can't wait on this and need to get it working as soon as we can, so we will most likely hack something together that works only on Windows for now, as that is the only platform our game currently supports. But if we can, we are willing to work together to some extent with others here to make sure that SDL can fully support this in the future. As we are only two programmers and both of us are working almost day and night just to fix bugs and support users of our game, we might not currently have much time to spare.

Any insights that would help us get a fix out to our users as soon as possible would be greatly appreciated!

We are using version 2.0.1 of SDL, but we haven't found anything to suggest it would make a difference if we used 2.0.3 or any other version.
Comment 1 Anders Davallius 2014-07-01 17:02:45 UTC
Seems like the first paragraph there is a leftover from some editing I did on the text... please disregard it and start reading from the "Overview" section... I also noticed that I don't have any other "headers"... so I'm sorry if it turned out a little messier than it should have...
Comment 2 Sam Lantinga 2014-07-08 04:49:02 UTC
It seems entirely reasonable to add an SDL hint to specify the kind of GPU, or the exact GPU, that the OpenGL context is created on. You should even be able to dynamically load the vendor-specific DLL to get the API you need.

I'm not familiar with these extensions, nor do I have access to this configuration for testing. Feel free to submit a patch for review though!

Ryan, do you have a preference between an SDL hint and extending the OpenGL attributes?
Comment 3 Gabriel Jacobo 2014-07-08 13:18:32 UTC
Excuse my ignorance :) Isn't the GPU selection dependent on the window placement? I mean, if you have a multi-GPU arrangement, each GPU gets a monitor, and the GPU that gets selected when you create the GL context depends on which monitor you place the window on.
Comment 4 Ryan C. Gordon 2015-02-19 06:04:37 UTC
This is hard and nasty to solve on Windows, fwiw:

https://www.opengl.org/discussion_boards/showthread.php/173030-How-to-use-OpenGL-with-a-device-chosen-by-you?p=1212623#post1212623

Things like GL_NV_gpu_affinity only help you pick the right NVIDIA GPU in a multi-GPU system, and it's not useful if Windows didn't pick NVIDIA's GL implementation in the first place.

Linux is a wasteland for this right now (the attitude is that the user probably set this up for you the way they wanted it, though).

Mac OS X actually has an API that lets you say "I want the integrated GPU if possible because my app isn't doing hard work and/or I'd like to reduce battery usage and heat output" or "I want the fastest GPU you have," fwiw.

--ryan.
Comment 5 Alex Szpakowski 2015-02-19 07:05:16 UTC
For Windows systems that use NVIDIA Optimus, you can export a specific variable from your executable (and only the executable, so SDL2.dll can't do it even when linked to your executable) to tell NVIDIA to prefer the higher-performance GPU.

http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf