Fix assumption that pixel alpha is in upper 8 bits

QZ_SetPortAlphaOpaque() now retrieves the mask for the alpha channel
from the surface pixel format instead of assuming the alpha component
is the upper 8 bits of the pixel.

NOTE: I doubt this function is even still needed. There's a comment
saying that the un-miniaturization animation overrides the window's
alpha components, which shouldn't be true on any OpenGL-based window
compositing system (i.e., anything but the oldest OS X versions). If I
make the function a no-op, everything still works fine on my system.
But just in case, and since this is upstream SDL code, I'm leaving the
function in place and fixing it instead of removing it.

Bug: 38913
Change-Id: I15aa1e93c60590ba9f17f7625e55325ec8a36520
diff --git a/distrib/sdl-1.2.15/src/video/quartz/SDL_QuartzWindow.m b/distrib/sdl-1.2.15/src/video/quartz/SDL_QuartzWindow.m
index 375833f..078aebc 100644
--- a/distrib/sdl-1.2.15/src/video/quartz/SDL_QuartzWindow.m
+++ b/distrib/sdl-1.2.15/src/video/quartz/SDL_QuartzWindow.m
@@ -33,11 +33,10 @@
 static void QZ_SetPortAlphaOpaque () {
     
     SDL_Surface *surface = current_video->screen;
-    int bpp;
-    
-    bpp = surface->format->BitsPerPixel;
-    
-    if (bpp == 32) {
+    int bpp = surface->format->BitsPerPixel;
+    Uint32 amask = surface->format->Amask;
+
+    if (bpp == 32 && amask != 0) {
     
         Uint32    *pixels = (Uint32*) surface->pixels;
         Uint32    rowPixels = surface->pitch / 4;
@@ -46,7 +45,7 @@
         for (i = 0; i < surface->h; i++)
             for (j = 0; j < surface->w; j++) {
         
-                pixels[ (i * rowPixels) + j ] |= 0xFF000000;
+                pixels[ (i * rowPixels) + j ] |= amask;
             }
     }
 }