
Bug 4929 - Software renderer produces bugs when optimizations are turned on with Visual C++ 2019
Summary: Software renderer produces bugs when optimizations are turned on with Visual C++ 2019
Status: RESOLVED FIXED
Alias: None
Product: SDL
Classification: Unclassified
Component: render
Version: HG 2.1
Hardware: x86_64 Windows 10
Importance: P2 major
Assignee: Ryan C. Gordon
QA Contact: Sam Lantinga
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2020-01-04 22:49 UTC by Konrad
Modified: 2020-01-16 23:55 UTC
CC List: 1 user

See Also:


Attachments
Fix (1.90 KB, patch)
2020-01-16 22:01 UTC, Konrad

Description Konrad 2020-01-04 22:49:48 UTC
I recently switched from Visual C++ 2015 to Visual C++ 2019 and compiled the SDL2 library from HG with it. However, it seems that the new version introduced some aggressive optimizations, which appear to break all the blitters (including SDL_Blit_Slow).

This is how rendering looks with the software renderer as the backend:

https://i.imgur.com/7FQFM2F.png

This is how it looks with any hardware-accelerated renderer (even when the library is compiled with Visual C++ 2019), and also how the software renderer looks when SDL2 is compiled with Visual C++ 2015:

https://i.imgur.com/93DYZtP.png

The issue seems somehow related to modulated textures (modulated by either color or alpha). Non-modulated textures seem to render just fine. I removed the following two calls from my rendering pipeline:

SDL_SetTextureColorMod
SDL_SetTextureAlphaMod

Lack of color modulation results in the following:

https://i.imgur.com/vVO1AGa.png

So basically it renders fine without modulation.

The issue doesn't seem limited to color modulation, though, because my framebuffer (a target texture) comes out blank after rendering even with color modulation disabled.

SDL2 compiled with optimizations disabled on Visual C++ 2019, or with optimizations enabled on Visual C++ 2015, works fine with its software renderer.

My current workaround is to compile SDL2 with Visual C++ 2015 and link that library from a Visual C++ 2019 project (they are binary compatible, so the older libs work with the newer compiler).
Comment 1 Sam Lantinga 2020-01-06 15:03:52 UTC
Visual Studio 2019 has known aggressive optimization bugs and is not recommended for production code at this time.
Comment 2 Konrad 2020-01-06 15:12:56 UTC
Perhaps, but the Visual C++ 2019 optimizations benefit the hardware renderers considerably. I was hitting 1600 fps on the direct3d11 backend, while without any optimizations I get roughly 800-900. Optimizations on Visual C++ 2015 do work fine, though: I am able to hit about 1400-1500 fps on the direct3d11 backend with them enabled. Anyway, I will mix binaries until this issue is resolved.

If you have any ideas about what might be optimized away and causing this issue, let me know and I will gladly check.
Comment 3 Konrad 2020-01-07 14:31:19 UTC
The slow blitter seems to be working fine after all. I incorrectly assumed it wasn't, because I saw a bunch of calls to the slow blitter while profiling the application with an inefficient pixel format. Even after changing all texture formats to ARGB8888, which sped the application up quite a lot compared to ABGR8888, I can still see SDL_Blit_Slow taking some time doing I'm not sure what, exactly. This is quite interesting and I need to investigate it further.

That said, I forced the application to use the slow blitter by removing entries from SDL_GeneratedBlitFuncTable, and it renders everything just fine. That being the case, I can only assume the auto-generated blitters are affected.

If I get some more free time I will investigate it more.
Comment 4 Konrad 2020-01-16 20:17:46 UTC
I had some time to look at it just now, and I don't think the fast blitters are the ones failing here. In my case, when I blend a semi-transparent ARGB texture onto an RGB target, the following blitter function is chosen:

SDL_Blit_ARGB8888_RGB888_Scale

This is definitely wrong: I believe SDL_Blit_ARGB8888_RGB888_Blend should be chosen for this job instead (SDL_BLENDMODE_BLEND is used for the ARGB texture).

I immediately suspected that SDL_ChooseBlitFunc might be failing, and indeed, when I enclose that function in the pragma directives:

#pragma optimize("", off)
SDL_ChooseBlitFunc(...)
#pragma optimize("", on)

it correctly chooses the blitting function and everything works as expected.

At first sight I cannot find anything wrong with the bitwise operators there, so I will have to dig deeper. At least we know what is failing.
Comment 5 Konrad 2020-01-16 20:54:32 UTC
Another hint: removing flagcheck and putting its bitwise-ORed flag contents directly into all the "ifs" within the loop of SDL_ChooseBlitFunc allows the function to choose the blitter properly.

However, I don't think the issue is there, because changing:

for (i = 0; entries[i].func; ++i) {

to

while (entries[i].func) {

and adding i++ before every continue does the trick as well.

Something strange is going on within that loop when optimizations are turned on.
Comment 6 Konrad 2020-01-16 21:44:04 UTC
That being the case, the easiest "solution" to this issue with optimizations turned on is to declare these variables volatile:

int i, flagcheck;

=>

volatile int i, flagcheck;
Comment 7 Konrad 2020-01-16 22:01:37 UTC
Created attachment 4168: Fix

I apologize for the spam, but since I cannot edit previous replies, I'm afraid I have no choice.

I took the liberty of rewriting this function a bit, as it seemed unnecessarily extended with flag-related ifs (we can check everything in one pass, which also seems to be the thing that confuses Visual C++ 2019).

Also, I made the CPU features an int instead of a uint: since it is checked against flags that are all ints, it might as well be an int (no signed/unsigned bitwise comparison).
Comment 8 Sam Lantinga 2020-01-16 23:54:51 UTC
This fix is in, thanks!
https://hg.libsdl.org/SDL/rev/7efbcfc8e3cc
Comment 9 Sam Lantinga 2020-01-16 23:55:06 UTC
Fixed!