
SDL_CreateTextureFromSurface fails for OpenGL + Win XP 64 NVidia 285.58 with glTexImage2D(): GL_INVALID_ENUM #503

Closed
SDLBugzilla opened this issue Feb 10, 2021 · 0 comments


This bug report was migrated from our old Bugzilla tracker.

Reported in version: HG 2.0
Reported for operating system, platform: Windows 7, x86

Comments on the original bug report:

On 2011-11-15 16:38:41 +0000, Ellie wrote:

SDL_CreateTextureFromSurface/SDL_CreateTexture fails for OpenGL + Win XP 64 NVidia 285.58 with glTexImage2D(): GL_INVALID_ENUM.

Win Vista 64 bit NVidia/OpenGL with recent drivers was also reported to me to be affected.

On the same configurations, picking a Direct3D renderer to create the textures on works perfectly fine. The OpenGL path also works fine on other non-NVidia machines (Intel 945GME chipset on x86 Fedora 16, Intel i5 on-board graphics on Win 7 64-bit); it fails only on those NVidia configurations, and only with OpenGL, while Direct3D/Software works fine.

Since NVidia should most likely support the required OpenGL foundations in their recent drivers, I guess SDL devs should look into this and check out why this breaks on those configurations.

Also, to be more explicit: I am talking about the SDL 1.3 SDL_Texture interface with the new accelerated 2D renderers, not about manual OpenGL usage with SDL.

On 2011-11-25 01:49:24 +0000, Ellie wrote:

Sorry for putting the wrong OS, it should be Windows of course.

On 2011-12-21 12:43:53 +0000, Arnaud Geltzenlichter wrote:

Same problem with SDL_CreateTextureFromSurface() on an SDL_Surface* created from a PNG image, on Windows 7 x86_64 SP1 with NVidia driver 285.62.

D3D and Software renderer work fine.

Works fine on Ubuntu 10.10 2.6.38 with OpenGL 3.3 too.

On 2011-12-29 14:08:21 +0000, Sam Lantinga wrote:

Do you get this with the testsprite2 sample program? If not, can you attach a minimal test case that shows the issue?

I don't have an NVidia card here, so if anybody else wants to take a crack at this, please do!

On 2011-12-29 15:15:28 +0000, Arnaud Geltzenlichter wrote:

(In reply to comment # 3)

Do you get this with the testsprite2 sample program? If not, can you attach a minimal test case that shows the issue?

I don't have an NVidia card here, so if anybody else wants to take a crack at this, please do!

Yes, same problem with testsprite2 on Windows 7 with driver NVidia 285.58.

On 2011-12-29 15:37:17 +0000, Ellie wrote:

I attempted to make a minimal test case while Arnaud Geltzenlichter apparently checked out the testsprite2 program. It boils down to this:

#include "SDL.h"
#include <stdio.h>
#include <unistd.h>
#include <windows.h>
#include <string.h>

static SDL_Window* mainwindow;
static SDL_Renderer* mainrenderer;

int main(int argc, char** argv) {
    //initialisation stuff
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        return -1;
    }
    mainwindow = SDL_CreateWindow("SDL bug", 0, 0, 512, 512, 0);
    if (!mainwindow) {
        SDL_Quit();
        return -1;
    }

    //find the opengl renderer
    int rendererindex = -1;
    int count = SDL_GetNumRenderDrivers();
    int r = 0;
    while (r < count) {
        SDL_RendererInfo info;
        SDL_GetRenderDriverInfo(r, &info);
        if (strcasecmp(info.name, "opengl") == 0) {
            rendererindex = r;
            break;
        }
        r++;
    }

    //create renderer
    mainrenderer = SDL_CreateRenderer(mainwindow, rendererindex, SDL_RENDERER_ACCELERATED|SDL_RENDERER_PRESENTVSYNC);
    if (!mainrenderer) {
        SDL_Quit();
        return -1;
    }

    //create texture
    SDL_Texture* t = SDL_CreateTexture(mainrenderer, SDL_PIXELFORMAT_ABGR8888, SDL_TEXTUREACCESS_STREAMING, 512, 512);
    if (!t) {
        printf("OpenGL NVIDIA issue appears to be triggered: %s\n", SDL_GetError());
        return 0;
    } else {
        printf("OpenGL NVIDIA issue is apparently not triggered\n");
    }
    printf("Please press return to close\n");
    fflush(stdout);

    //wait for return key
    int c = fgetc(stdin);
    while (c != '\n') {
        c = fgetc(stdin);
    }

    SDL_Quit();
    return 0;
}

The output is:

OpenGL NVIDIA issue appears to be triggered: glTexImage2D(): GL_INVALID_ENUM
Please press return to close

Tested on Windows XP 64-bit, driver version 285.58, as well.

On 2012-01-08 04:06:13 +0000, Arnaud Geltzenlichter wrote:

Sam, if you can point me in the right direction, I can try to find the problem inside the SDL 1.3 OpenGL code.

Do you have any idea about this issue?

On 2012-01-08 10:23:37 +0000, Sam Lantinga wrote:

SDL's OpenGL renderer internally only supports one RGB format, and that's SDL_PIXELFORMAT_ARGB8888.

It maps this into the following set of OpenGL constants:
*internalFormat = GL_RGBA8;
*format = GL_BGRA;
*type = GL_UNSIGNED_INT_8_8_8_8_REV;

I'm guessing that the latest NVidia drivers don't support GL_UNSIGNED_INT_8_8_8_8_REV?

You can confirm this by changing convert_format() in src/render/opengl/SDL_render_gl.c
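
For anyone wanting to try this, here is a rough sketch of the mapping described above; the real convert_format() in src/render/opengl/SDL_render_gl.c has a different signature and handles more cases, so treat this as illustrative only:

/* Illustrative sketch only, not the SDL source: shows where the three GL
 * constants above come from and where to experiment. */
static SDL_bool
convert_format(Uint32 pixel_format,
               GLint *internalFormat, GLenum *format, GLenum *type)
{
    switch (pixel_format) {
    case SDL_PIXELFORMAT_ARGB8888:
        *internalFormat = GL_RGBA8;
        *format = GL_BGRA;
        *type = GL_UNSIGNED_INT_8_8_8_8_REV;
        /* Experiment suggested above: replace the format/type pair with
         *   *format = GL_RGBA;  *type = GL_UNSIGNED_BYTE;
         * to test whether the driver rejects 8_8_8_8_REV. */
        return SDL_TRUE;
    default:
        return SDL_FALSE;
    }
}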

On 2012-01-09 13:20:55 +0000, Arnaud Geltzenlichter wrote:

Well, it works with GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV.

GL_UNSIGNED_BYTE should work with format GL_RGBA, giving 8 bits per component.

I'm currently testing the opengl renderer blit speed with this type, will give results asap.
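
For reference, the two uploads being compared look roughly like this (w, h and pixels are placeholder names, not taken from the SDL source). Note that GL_RGBA with GL_UNSIGNED_BYTE reads pixel bytes in R,G,B,A memory order, so it is not a byte-for-byte drop-in for the BGRA path and the pixel data layout has to match:

/* Mapping SDL currently uses, which errors on the affected NVidia drivers: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

/* Alternative reported to work here: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);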

On 2012-01-09 13:54:18 +0000, Arnaud Geltzenlichter wrote:

Some additional information:

I confirm my previous comment but there is more.

The glTexImage2D() call seems to always return a GL_INVALID_ENUM error, even with the type set to GL_UNSIGNED_BYTE.

But if I ignore the error check just after it, all the rest of the code works well and the texture is created without problem.

In src/render/opengl/SDL_render_gl.c, in the function GL_CreateTexture(), around line 547:

...

#endif
    {
        renderdata->glTexImage2D(data->type, 0, internalFormat, texture_w,
                                 texture_h, 0, format, type, NULL);

        // Always returns GL_INVALID_ENUM ... ??
    }
    renderdata->glDisable(data->type);
    result = renderdata->glGetError();

    /*
    if (result != GL_NO_ERROR) {
        GL_SetError("glTexImage2D()", result);
        return -1;
    }
    */

...
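
A side note on debugging this: GL errors are queued and only reported by the next glGetError() call, so an error raised by an earlier call can end up being blamed on glTexImage2D(). A minimal sketch of how to isolate the call that really fails, written against the excerpt above (not part of the SDL code):

/* drain any error left over from earlier GL calls */
while (renderdata->glGetError() != GL_NO_ERROR) {
}
renderdata->glTexImage2D(data->type, 0, internalFormat, texture_w,
                         texture_h, 0, format, type, NULL);
result = renderdata->glGetError();
if (result != GL_NO_ERROR) {
    /* now the error was definitely raised by glTexImage2D() itself */
}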

On 2012-01-09 18:14:56 +0000, Sam Lantinga wrote:

That's crazy!

On 2012-01-10 12:51:15 +0000, Arnaud Geltzenlichter wrote:

The FPS I get with the OpenGL renderer is only one-tenth of the D3D one!

On 2012-01-10 14:54:49 +0000, Arnaud Geltzenlichter wrote:

Well, in reality, the GL_INVALID_ENUM error comes from each of the following calls:

renderdata->glTexParameteri(data->type, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
renderdata->glTexParameteri(data->type, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

This GL error stays stored until these lines:

result = renderdata->glGetError();
if (result != GL_NO_ERROR) {
    GL_SetError("glTexImage2D()", result);
    return -1;
}

So the if condition is true, and the error gets wrongly attributed to glTexImage2D().

With GL_CLAMP, I got no GL error ...

But I still get poor performance when drawing textures with the OpenGL renderer.
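
For background: GL_CLAMP_TO_EDGE clamps sampling to the edge texels and was introduced in OpenGL 1.2, while GL_CLAMP is the older constant that can pull in the border color when filtering (and was removed from later core profiles). A rough sketch of the general workaround being discussed, choosing the wrap mode at runtime from the reported GL version; this is for illustration only and is not the actual SDL patch:

/* Illustration only: pick the wrap mode from the driver-reported version. */
int gl_major = 0, gl_minor = 0;
const char *gl_version = (const char *) glGetString(GL_VERSION);
if (gl_version) {
    sscanf(gl_version, "%d.%d", &gl_major, &gl_minor);
}
GLenum wrap_mode = (gl_major > 1 || (gl_major == 1 && gl_minor >= 2))
                       ? GL_CLAMP_TO_EDGE : GL_CLAMP;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrap_mode);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrap_mode);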

On 2012-01-10 18:01:35 +0000, Sam Lantinga wrote:

Good find. Can you try this fix out?
http://hg.libsdl.org/SDL/rev/15ff38383cb7

On 2012-01-10 18:02:25 +0000, Sam Lantinga wrote:

Out of curiosity, is your framerate 60? :)

On 2012-01-11 12:51:08 +0000, Arnaud Geltzenlichter wrote:

Sam,

Your patch doesn't work: you're still using GL_CLAMP_TO_EDGE. Is that a mistake on your part? We still fall into the true branch of the if condition, so the same code as before is executed.

For the FPS numbers, I do not limit the framerate. First, I measure the maximum FPS for different texture sizes, then I check the CPU consumption at a fixed, given FPS.

My results:

Texture size    D3D FPS    OpenGL FPS
64x64           2808       1520
128x128         2650       1072
256x256         2383       710
512x512         2051       231
1024x1024       1478       108

I can't believe that OpenGL has such poor performance ...?

For a fixed 60 FPS, the Windows task manager shows 0% CPU consumption, but 3% for OpenGL ... 300% more.

On 2012-01-11 12:52:07 +0000, Arnaud Geltzenlichter wrote:

0 CPU for D3D renderer of course ... :)

On 2012-02-01 08:19:38 +0000, Arnaud Geltzenlichter wrote:

Sam,

Is there a specific reason to use GL_CLAMP_TO_EDGE and not GL_CLAMP?

Is it about texture borders?

On 2012-02-21 12:33:33 +0000, Arnaud Geltzenlichter wrote:

Problem still occurs with the new Nvidia driver 295.73.

On 2012-02-21 14:26:25 +0000, Arnaud Geltzenlichter wrote:

Well, in fact, it seems to be OK with the latest SDL 2.0 snapshot 6302 and driver 295.73!

On 2014-05-12 21:46:19 +0000, Adam M. wrote:

Should this be closed now?

On 2014-06-25 09:19:29 +0000, Sam Lantinga wrote:

It sounds like this is now fixed.
