| Summary: | setting a white and then non-white texture color mod breaks the texture with software renderer | | |
|---|---|---|---|
| Product: | SDL | Reporter: | Adam M. <adam> |
| Component: | render | Assignee: | Sam Lantinga <slouken> |
| Status: | RESOLVED FIXED | QA Contact: | Sam Lantinga <slouken> |
| Severity: | normal | | |
| Priority: | P2 | | |
| Version: | 2.0.3 | | |
| Hardware: | x86_64 | | |
| OS: | Windows 7 | | |
| Attachments: | proposed patch (v1) | | |
|
Description
Adam M.
2015-06-18 23:52:12 UTC
(You do have to actually do a texture draw between the two color mod sets.)

At first glance, the problem seems to be caused by incorrect RLE decoding. Blitting with a white color mod causes the texture to be converted to RLE by the software renderer. Then, blitting with a non-white color mod causes the texture to be converted back from RLE, but when that happens the alpha channel is not reconstituted. The colors of the pixels are presumably correct, but the alpha channel is zero (e.g. a pixel of 0xFFFFFFFF gets converted into 0x00FFFFFF after the normal -> RLE -> unRLE conversions). This causes the texture to become invisible.

Okay, here is the problem, I think. During the first blit, RLEAlphaSurface is called to do RLE conversion of the RGBA source into a format allowing it "to be quickly alpha-blittable onto dest". Since the destination is the screen, it has no alpha channel. RLEAlphaSurface calls copy_opaque(dst, src + runstart, len, sf, df) (where copy_opaque is copy_32), which has this code (SDL_RLEaccel.c:984):

```c
RGBA_FROM_8888(*src, sfmt, r, g, b, a);
PIXEL_FROM_RGBA(*d, dfmt, r, g, b, a);
```

On the first line, it reads the source pixel 0xFFFFFFFF. The second line drops the alpha value (because dfmt for the screen has no alpha channel) and writes 0x00FFFFFF.

Later, when the RLE conversion is being undone, uncopy_32 is called, which has the following code (SDL_RLEaccel.c:1001):

```c
Uint32 pixel = *s++;
RGB_FROM_PIXEL(pixel, sfmt, r, g, b);
a = pixel >> 24;
PIXEL_FROM_RGBA(*dst, dfmt, r, g, b, a);
```

However, the alpha channel has already been dropped by copy_opaque (= copy_32), so pixel = 0x00FFFFFF and 'a' becomes 0. Thus, all opaque pixels lose their alpha channel when being unRLE'd. (I don't know what happens to pixels with alpha from 1-254, but they should be checked too.)

So, that seems to be the problem, but I'm not sure what the solution should be.
Since opaque pixels have alpha == 255, I'm thinking of creating another uncopy function for opaque pixels that simply uses 255 for alpha. However, there may be other problems here.

For translucent pixels, uncopy_32 assumes the alpha channel is stored in the upper 8 bits, but copy_32 doesn't store it there. Instead, it stores it in whatever location is appropriate for the destination surface. Isn't one of their behaviors incorrect, given the other? I'm not sure which to change, however.

For translucent pixels, it seems that the blit function uses do_blend, which is the BLIT_TRANSL_888 macro, which also assumes alpha is in the top 8 bits. It has the comment "we have made sure the alpha is stored in the top 8 bits...", but it seems that's not true (copy_32 doesn't make sure the alpha goes there). Perhaps the correct fix is to make copy_32 put the alpha there, but then that seems to require that RLE conversion be limited to destination surfaces that don't use the upper 8 bits. However, looking further, it seems that has already been done:

```c
if (masksum != 0x00ffffff)
    return -1;          /* requires unused high byte */
```

Created attachment 2188 [details]
proposed patch (v1)
I've attached a patch that fixes the problem in my scenario. Please check it and make sure that if there are any other problems related to the original bug that they are fixed also.
I'm tempted to undo your last patch and keep the original surface around so that we can use RLE when it's fast, and use non-RLE code paths when needed without the overhead of undoing RLE encoding. Thoughts?

For future reference, the patch that would be affected by this is: https://hg.libsdl.org/SDL/rev/2f5a57f86e24

I must be missing something. copy_32 drops the alpha when copying to a surface that doesn't have an alpha channel, but then BLIT_TRANSL_888() assumes that the alpha is in the top 8 bits.

I just ran a test where I tried an RLE transparent blit to a surface without an alpha channel, and sure enough every pixel was untouched because the alpha was 0. Your patch fixes that, and I think it is the correct fix, so I'm going to apply it.

https://hg.libsdl.org/SDL/rev/fd7fe355f173

Thanks! This bug is now fixed.

> I'm tempted to undo your last patch and keep the original surface around so
> that we can use RLE when it's fast, and use non-RLE code paths when needed
> without the overhead of undoing RLE encoding.
>
> Thoughts?
Since it would use almost double the memory, I guess it depends on just how much faster RLE is.
It seems like it primarily benefits surfaces that are almost entirely opaque or transparent, and that have a lot of empty space. Surfaces with little or no transparency would probably be faster with the regular blitters than the more complex RLE blitter, if the destination surface has a compatible format. I'm guessing that the speedup for textures full of translucent pixels, if any, is not that great, but I haven't looked at that code path much.
So I would actually implement it this way: When a static texture is updated, scan the new image and see whether it would actually benefit from RLE encoding. If so, enable RLE. Otherwise, disable it.
(That would be in addition to reverting my patch that disables RLE sometimes.)

(Arg. And in addition to keeping the original surface around, I meant to say.)