Well, I couldn't figure out anything about the voltage and clock speed fluctuations at the upper voltages while "overclocking," so I gave up on tuning those for now. My best guess is still that it's GPU Boost 3.0 ignoring the voltages and clock speeds I've locked in? (This Galax card's shroud says "GPU Boost 2.0"...) Instead, I spent some time filling out that spreadsheet (Sheet 2, "0.825V"), locking the voltage at 0.825V and seeing how the card performed at different clock speeds.
Using Heaven as a benchmark instead of actual gameplay means the "performance" score is a bit abstract, but it's a lot quicker than recording 30+ minutes of gameplay... anyways.
I started at the highest stable clock speed I found, 1721MHz (@0.825V), which scored 4392 at an average of 100W. I kept working my way down and recording my results, and made it all the way down to 1531MHz, which scored 4176 at an average of 96W.
There were no drastic or unusual fluctuations as I made my way down, though I did test several times at 1709MHz and my scores varied a good bit. This tells me there's definitely a margin of error. The only way to get more reliable data is to run more tests at each setting, and that's honestly more time and effort than I'm sure I can afford. We have these rough numbers, though!
So, I took the Heaven score and divided it by my average GPU power usage (in watts), and this "performance per watt" number only varied from ~42.5 to ~44.5, so nothing particularly drastic. Given my tests tonight (and recognizing there's a margin of error), 1645MHz and 1683MHz are the two most efficient clock speeds I tested at 0.825V.
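If it helps anyone follow along, the calculation is nothing fancy: Heaven score divided by average board power. Here's a quick Python sketch using just the two endpoint runs quoted above (the rest of the data points live in the spreadsheet, so the dictionary here is only a partial example):

```python
# Rough "performance per watt" calculation, assuming score and average
# power draw have already been recorded for each locked clock speed.
# Only the two endpoint runs from this post are included here.
runs = {
    1721: {"score": 4392, "avg_watts": 100},  # highest stable clock at 0.825V
    1531: {"score": 4176, "avg_watts": 96},   # lowest clock tested tonight
}

for clock, data in sorted(runs.items()):
    perf_per_watt = data["score"] / data["avg_watts"]
    print(f"{clock} MHz: {perf_per_watt:.1f} points/W")
```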
I'm new to all of this stuff, so any suggestions on recording, organizing, or sharing this data would be helpful. Are there any particular ratios that would be useful? Are there any graphs I should be making?