Not sure if this is a bug in GLSurfaceView, or if there's some delay I need to add or some extra flag to set, but there seems to be a 50% chance when my program is run (with the exception of being run directly after a compile, which guarantees the lag) that it will be capped at 48FPS, or at least that something hogging the CPU drops my framerate to 48FPS. Using 1.5's GLSurfaceView and logging a framerate indicator through Log.d every 10 seconds (adding the framerate to an avgFPS variable every second and dividing by 10) in onDrawFrame, while doing NOTHING else, results in either 48FPS or 60FPS at random. Letting the program run for a while doesn't change this; only exiting and re-running does.
I'm not sure if I'm the only one having this issue, or just the one who noticed it because I need the maximum performance possible, but that 12FPS gap makes an enormous difference given the overhead it implies.
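For reference, here is a minimal sketch of the kind of framerate counter I described above. The class name FpsCounter and the tick method are my own naming, not Android API; in the real app the equivalent logic runs inside onDrawFrame, calling tick(System.nanoTime()) each frame and passing the returned average to Log.d. The main method just simulates a steady 50FPS feed so the math is easy to check:

```java
public class FpsDemo {

    /** Accumulates per-second FPS readings and averages them over 10 seconds. */
    static class FpsCounter {
        private long windowStart = -1; // start of the current 1-second window, in nanos
        private int frames = 0;        // frames seen in the current window
        private int seconds = 0;       // completed 1-second windows so far
        private double fpsSum = 0;     // sum of per-second FPS readings (the "avgFPS" variable)

        /**
         * Call once per frame with the current time in nanoseconds.
         * Returns the 10-second average FPS when a 10-second period completes, else null.
         */
        Double tick(long nowNanos) {
            if (windowStart < 0) windowStart = nowNanos;
            Double result = null;
            if (nowNanos - windowStart >= 1_000_000_000L) {
                fpsSum += frames;                  // record this second's FPS
                frames = 0;
                windowStart += 1_000_000_000L;     // advance by exactly one second to avoid drift
                if (++seconds == 10) {
                    result = fpsSum / 10.0;        // average over the last 10 seconds
                    seconds = 0;
                    fpsSum = 0;
                }
            }
            frames++;
            return result;
        }
    }

    public static void main(String[] args) {
        FpsCounter counter = new FpsCounter();
        long step = 20_000_000L; // simulate one frame every 20ms = exactly 50FPS
        Double avg = null;
        for (long t = 0; t <= 10_000_000_000L; t += step) {
            Double a = counter.tick(t);
            if (a != null) avg = a;
        }
        System.out.println("avg FPS over 10s: " + avg); // prints 50.0 for this simulated feed
    }
}
```

In the real renderer you would replace the simulated loop with a tick(System.nanoTime()) call at the top of onDrawFrame and Log.d the non-null results; the counter itself does no logging so it adds essentially nothing to the frame time.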
I've gone so far as to look into the process that runs when my program starts and exits, across several tests. The results are as follows:
Capped init: http://pastebin.org/856
Uncapped desired init: http://pastebin.org/857
The highlighted lines are the ones that seem to correlate with the issue. Across multiple tests, the pattern I've noticed is that when the framerate runs at full speed and isn't capped/hogged, the GPU surface is obtained just before the display is started, and the time taken for the display to init (or for the init process as a whole; I'm not sure what that time actually measures) is always < 1000ms. When it caps to 48FPS, init takes more than 1000ms and the order of the init steps seems random.
I'm not sure how I could fix this short of taking the original GLSurfaceView source and handling it myself, and even that assumes GLSurfaceView is the problem in the first place. I don't know what's causing this, so I'm posting here to see if anyone has a clue.