| Summary: | OpenGL Context creation on integrated GPU creates buggy experience for end users | ||
|---|---|---|---|
| Product: | SDL | Reporter: | Anders Davallius <anders.davallius> |
| Component: | *don't know* | Assignee: | Ryan C. Gordon <icculus> |
| Status: | ASSIGNED | QA Contact: | Sam Lantinga <slouken> |
| Severity: | major | ||
| Priority: | P2 | CC: | amaranth72, anders.davallius, gabomdq |
| Version: | 2.0.3 | ||
| Hardware: | x86_64 | ||
| OS: | Windows (All) | ||
|
Description
Anders Davallius
2014-07-01 16:59:09 UTC
Seems like the first paragraph there is a leftover after some editing I did on the text... please disregard it and start reading from the "overview" section... I also noticed that I don't have any other "headers"... so, I'm sorry if it turned out a little bit messier than it should have...

It seems entirely reasonable to add an SDL hint to specify the kind, or the exact GPU, that the OpenGL context is created on. You should even be able to dynamically load the vendor-specific DLL to get the API you need. I'm not familiar with these extensions, nor do I have access to this configuration for testing. Feel free to submit a patch for review, though! Ryan, do you have a preference between an SDL hint and extending the OpenGL attributes?

Excuse my ignorance :) Isn't the GPU selection dependent on the window placement? I mean, in a multi-GPU arrangement where each GPU drives a monitor, doesn't the monitor you place the window on determine which GPU gets selected when you create the GL context?

This is hard and nasty to solve on Windows, fwiw: https://www.opengl.org/discussion_boards/showthread.php/173030-How-to-use-OpenGL-with-a-device-chosen-by-you?p=1212623#post1212623 Things like GL_NV_gpu_affinity only help you pick the right Nvidia GPU from a multi-GPU system, and it's not useful if Windows didn't give you Nvidia's GL implementation in the first place. Linux is a wasteland for this right now (though the attitude there is that the user probably set this up the way they wanted it). Mac OS X actually has an API that lets you say "I want the integrated GPU if possible, because my app isn't doing hard work and/or I'd like to reduce battery usage and heat output," or "I want the fastest GPU you have," fwiw.

--ryan.

For Windows systems that use Nvidia Optimus, you can export a specific variable from your executable (and only the executable, so SDL2.dll can't do it even when linked to your executable) to tell Nvidia to prefer the higher-performance GPU:
http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf