We are working on making Opera even faster on Windows machines using a technique called PGO (Profile Guided Optimizations), and it's time to show off the first results.
This is a preview build for public testing. Note that auto-update is disabled. If you'd like to check out this build, just grab the installer.
What is PGO?
Most of Opera is written in C++, and it is the C++ compiler's job to convert that C++ into machine code the computer can run. Unfortunately, the compiler cannot try to make all of the code fast, because that would make the program huge (and slow), so instead it tries to strike a balance where programs become reasonably fast.
With the help of Profile Guided Optimizations (PGO) we can do better. By selecting a number of important scenarios, the training set, we can teach the compiler which code is important and which is less so. For instance, code that handles errors or rarely used web features does not have to be extremely fast and can instead be made small and efficient. The same goes for code related to user interaction: it does not matter to a human whether a click is processed in 2 milliseconds or 1 millisecond, since humans are slow by comparison.
The results below are from a computer running Windows 7 x64 using an i7-6700 CPU locked at 3.4 GHz. In the startup tests Opera was stored on an SSD. We’ve compared x64 build number 43.0.2440.0 compiled with and without PGO.
We see improvements on a lot of different tasks, so we think we have trained the compiler well. It now seems to do a good job optimizing the most important parts of the browser. Below are some numbers we have collected, but note that this is still a work in progress and these are just snapshot data.
We see a 13% faster startup in our testing using the computer described above (SSD and enough RAM to avoid paging).
Web page library speed
The Speedometer benchmark is 5% faster with PGO.
The SunSpider benchmark is 2.4% faster with PGO.
The Octane benchmark is 1% faster with PGO.
We are still tuning the builds to get the largest impact. We want the biggest improvements where they matter most, and I don't think we've reached the optimal training set yet.