| Summary: | SDL_CreateTextureFromSurface fails for OpenGL + Win XP 64 NVidia 285.58 with glTexImage2D(): GL_INVALID_ENUM | ||
|---|---|---|---|
| Product: | SDL | Reporter: | Ellie <etc0de> |
| Component: | video | Assignee: | Sam Lantinga <slouken> |
| Status: | RESOLVED FIXED | QA Contact: | Sam Lantinga <slouken> |
| Severity: | major | ||
| Priority: | P2 | CC: | adam, arnaud.gel |
| Version: | HG 2.0 | ||
| Hardware: | x86 | ||
| OS: | Windows 7 | ||
|
Description
Ellie
2011-11-15 16:38:41 UTC
Sorry for putting the wrong OS; it should be Windows, of course. Same problem with SDL_CreateTextureFromSurface() with an SDL_Surface* coming from a PNG image, on Windows 7 x86_64 SP1 with NVidia driver 285.62. The D3D and software renderers work fine, and it also works fine on Ubuntu 10.10 (kernel 2.6.38) with OpenGL 3.3.

---

Do you get this with the testsprite2 sample program? If not, can you attach a minimal test case that shows the issue? I don't have an NVidia card here, so if anybody else wants to take a crack at this, please do!

---

(In reply to comment #3)
> Do you get this with the testsprite2 sample program? If not, can you attach a
> minimal test case that shows the issue?
>
> I don't have an NVidia card here, so if anybody else wants to take a crack at
> this, please do!

Yes, same problem with testsprite2 on Windows 7 with driver NVidia 285.58. I attempted to make a minimal test case while Arnaud Geltzenlichter apparently checked out the testsprite2 program. It boils down to this:
```c
#include "SDL.h"
#include <stdio.h>
#include <string.h>

static SDL_Window* mainwindow;
static SDL_Renderer* mainrenderer;

int main(int argc, char** argv) {
    // initialisation stuff
    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        return -1;
    }
    mainwindow = SDL_CreateWindow("SDL bug", 0, 0, 512, 512, 0);
    if (!mainwindow) {
        SDL_Quit();
        return -1;
    }

    // find the opengl renderer
    int rendererindex = -1;
    int count = SDL_GetNumRenderDrivers();
    int r = 0;
    while (r < count) {
        SDL_RendererInfo info;
        SDL_GetRenderDriverInfo(r, &info);
        if (strcasecmp(info.name, "opengl") == 0) {
            rendererindex = r;
            break;
        }
        r++;
    }

    // create renderer
    mainrenderer = SDL_CreateRenderer(mainwindow, rendererindex,
                                      SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
    if (!mainrenderer) {
        SDL_Quit();
        return -1;
    }

    // create texture
    SDL_Texture* t = SDL_CreateTexture(mainrenderer, SDL_PIXELFORMAT_ABGR8888,
                                       SDL_TEXTUREACCESS_STREAMING, 512, 512);
    if (!t) {
        printf("OpenGL NVIDIA issue appears to be triggered: %s\n", SDL_GetError());
        return 0;
    } else {
        printf("OpenGL NVIDIA issue is apparently not triggered\n");
    }
    printf("Please press return to close\n");
    fflush(stdout);

    // wait for return key
    int c = fgetc(stdin);
    while (c != '\n') {
        c = fgetc(stdin);
    }
    SDL_Quit();
    return 0;
}
```
The output is:

```
OpenGL NVIDIA issue appears to be triggered: glTexImage2D(): GL_INVALID_ENUM
Please press return to close
```

Tested on Windows XP 64-bit with driver version 285.58 as well.
---

Sam, if you can lead me, I can try to find the problem inside the SDL 1.3 OpenGL code. Do you have any idea about this issue?

---

SDL's OpenGL renderer internally only supports one RGB format, and that's SDL_PIXELFORMAT_ARGB8888.
It maps this into the following set of OpenGL constants:
```c
*internalFormat = GL_RGBA8;
*format = GL_BGRA;
*type = GL_UNSIGNED_INT_8_8_8_8_REV;
```
I'm guessing that the latest NVidia drivers don't support GL_UNSIGNED_INT_8_8_8_8_REV?
You can confirm this by changing convert_format() in src/render/opengl/SDL_render_gl.c
Well, it works with GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV. GL_UNSIGNED_BYTE should work with format GL_RGBA, giving 8 bits per component. I'm currently testing the OpenGL renderer blit speed with this type and will give results as soon as possible.

---

Some additional information:
I confirm my previous comment, but there is more.
The glTexImage2D() call seems to always return a GL_INVALID_ENUM error, even with the type set to GL_UNSIGNED_BYTE.
But if I ignore the error check just after it, all the rest of the code works well and the texture is created without problems.
In src/render/opengl/SDL_render_gl.c, in the function GL_CreateTexture() (line 547):
```c
...
#endif
    {
        renderdata->glTexImage2D(data->type, 0, internalFormat, texture_w,
                                 texture_h, 0, format, type, NULL);
        // Always returns GL_INVALID_ENUM ... ??
    }
    renderdata->glDisable(data->type);
    result = renderdata->glGetError();
/*
    if (result != GL_NO_ERROR) {
        GL_SetError("glTexImage2D()", result);
        return -1;
    }
*/
...
```
That's crazy! The FPS I get with the OpenGL renderer is only one-tenth of the D3D one!

---

Well, in reality, the GL_INVALID_ENUM error comes from each of the following calls:
```c
renderdata->glTexParameteri(data->type, GL_TEXTURE_WRAP_S,
                            GL_CLAMP_TO_EDGE);
renderdata->glTexParameteri(data->type, GL_TEXTURE_WRAP_T,
                            GL_CLAMP_TO_EDGE);
```
This GL error was stored until the lines:
```c
result = renderdata->glGetError();
if (result != GL_NO_ERROR) {
    GL_SetError("glTexImage2D()", result);
    return -1;
}
```
And so the if condition is true.
With GL_CLAMP, I get no GL error at all.
But texture drawing with the OpenGL renderer still performs poorly.
---

Good find. Can you try this fix out?

http://hg.libsdl.org/SDL/rev/15ff38383cb7

Out of curiosity, is your framerate 60? :)

---

Sam, your patch doesn't work; you're still using GL_CLAMP_TO_EDGE. Is that a mistake on your part? We fall into the true branch of the if condition, so the same code as before is executed.

For the FPS, I do not limit the framerate. First I try to get the maximum FPS for different texture sizes, then I check the CPU consumption for a fixed, given FPS. My results:

| Texture size | D3D FPS | OpenGL FPS |
|---|---|---|
| 64x64 | 2808 | 1520 |
| 128x128 | 2650 | 1072 |
| 256x256 | 2383 | 710 |
| 512x512 | 2051 | 231 |
| 1024x1024 | 1478 | 108 |

I can't believe that OpenGL performs so poorly..?? For a fixed 60 FPS, the Windows task manager shows 0 CPU consumption for D3D but 3 for OpenGL... 300% more. 0 CPU for the D3D renderer, of course... :)

---

Sam, is there a specific reason to use GL_CLAMP_TO_EDGE and not GL_CLAMP? Is it about texture borders?

---

The problem still occurs with the new NVidia driver 295.73.

---

Well, in fact, it seems to be OK with the latest SDL 2.0 snapshot 6302 and driver 295.73!

---

Should this be closed now? It sounds like this is now fixed.