Chrome vs. Firefox vs. Opera – Which Browser Consumes the Least Energy on Android?

In this year’s Green Lab course, we formed a team named Blue Lab, consisting of Dean Hsu, Ibrahim Kanj, Tung Nguyen, and Mike Trieu. Our goal is to plan and conduct an experiment evaluating three web-browser applications (Chrome, Firefox, Opera) on mobile devices with respect to their energy consumption and performance. We chose this subject because browsing the web on mobile devices is commonplace nowadays, and surveys show that battery life is considered one of the most important features of a mobile device [1][2].

Therefore, in this article we present a methodology to evaluate the energy consumption and performance of web-browser applications for Android. We scoped our experiment to Android devices because they hold over 50% of the mobile device market [3], and we chose these three web-browsers because they are commonly used on Android. We collect the data under varying network conditions, using various tools that help us automate the process. By analyzing the gathered data, we will then be able to conclude whether or not the web-browser applications differ in energy consumption and performance.

The remainder of this article is structured as follows: First, we describe the planning, the design, and the execution of the experiment. Next, we describe the analysis methodology we applied on the collected data. Finally, we conclude with the results of our analysis.

Experiment Planning

We defined our experiment using the Goal-Question-Metrics Paradigm described by Basili [4], and we defined our goal as follows:


Goal
Analyze Mobile web-browsers for the purpose of evaluation with respect to their energy consumption and performance from the point of view of Android users in the context of the Android platform.

The goal led to two questions to which we seek answers. The first question is how using different web-browsers impacts the energy consumed when loading web apps under different network conditions. The second question is how using different web-browsers impacts the performance of loading web apps under different network conditions. We divided the latter question into three sub-questions, as performance can be measured in various ways: we measured it as page load time, CPU load, and memory usage.

In order to answer our questions, we formulated four hypotheses, each corresponding to a (sub-)question.


Hypotheses

Hypothesis 1:

  • Null: There is no significant difference in energy consumption between the web-browsers
  • Alternative: There is a significant difference in energy consumption between at least two of three considered web-browsers

Hypothesis 2:

  • Null: There is no significant difference in page loading time between the web-browsers
  • Alternative: There is a significant difference in page loading time between at least two of three considered web-browsers

Hypothesis 3:

  • Null: There is no significant difference in CPU load between the web-browsers
  • Alternative: There is a significant difference in CPU load between at least two of three considered web-browsers

Hypothesis 4:

  • Null: There is no significant difference in memory usage between the web-browsers
  • Alternative: There is a significant difference in memory usage between at least two of three considered web-browsers

Experiment Design

We identified two factors for our experiment: the web-browser application (Chrome, Firefox, Opera) and the network condition (2G, Wi-Fi). We decided on 150 websites as subjects and a Latin square experiment design. This choice took into account both the time it takes to run the experiment and the total time available to us for executing it, including the replication of measurements needed to reduce variance in the data.
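To make the design concrete, the sketch below enumerates the six factor combinations and builds a cyclic 3×3 Latin square over the browsers. The cyclic construction is a generic example of such a square, not necessarily the exact square used in the experiment:

```python
from itertools import product

browsers = ["Chrome", "Firefox", "Opera"]
networks = ["2G", "Wi-Fi"]

# Two factors: 3 browsers x 2 network conditions = 6 configurations.
configurations = list(product(browsers, networks))

# A cyclic 3x3 Latin square over the browsers: every browser appears
# exactly once in each row and each column, counterbalancing run order.
latin_square = [browsers[i:] + browsers[:i] for i in range(len(browsers))]

print(configurations)
for row in latin_square:
    print(row)
```

Counterbalancing the browser order this way helps ensure that ordering effects (e.g. caching or device warm-up) do not systematically favor one browser.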

The design resulted in six experiment configurations, which are shown in Table 1. The total time needed to conduct the experiment was approximately 37.5 hours.
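As a rough sanity check on the 37.5-hour figure, the arithmetic works out if we assume an average of about 30 seconds per trial (an assumed figure for illustration; the per-trial duration is not stated above):

```python
websites = 150
browsers = 3
networks = 2
replications = 5
seconds_per_trial = 30  # assumed average duration of one trial

# 150 websites x 3 browsers x 2 networks x 5 replications = 4500 trials.
trials = websites * browsers * networks * replications
total_hours = trials * seconds_per_trial / 3600

print(trials, total_hours)
```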

Table 1: Experiment Design

Experiment Execution

Our experiment setup consists of three devices. One computer is set up as a proxy using Fiddler, which stores and replays traffic. This way we mitigate the influence of random internet traffic during the experiment. Fiddler also simulates the network condition and allows us to inject JavaScript code, which we need in order to log when a web-browser starts loading a web page and when it finishes.

A high-end Android device is prepared by installing the three web-browsers and the energy/performance measurement application Trepn. We then connect the device to the above-mentioned proxy. We also apply some common energy-measurement best practices, such as setting the screen brightness to the lowest level and disabling charging via USB.
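These preparation steps can be scripted over adb. The sketch below only builds the command lines (running them requires a connected device); note that `dumpsys battery unplug` makes Android report an unplugged state, while actually cutting charge current is device-specific, and the exact mechanism used in the experiment is not detailed above:

```python
def adb_setup_commands(serial):
    """Build adb commands for common energy-measurement best practices:
    minimum screen brightness and a simulated unplugged battery state."""
    base = ["adb", "-s", serial, "shell"]
    return [
        # Disable adaptive brightness, then force the lowest level.
        base + ["settings", "put", "system", "screen_brightness_mode", "0"],
        base + ["settings", "put", "system", "screen_brightness", "0"],
        # Make Android report the battery as unplugged while on USB.
        base + ["dumpsys", "battery", "unplug"],
    ]

# "emulator-5554" is a placeholder serial; use `adb devices` to find yours.
commands = adb_setup_commands("emulator-5554")
for cmd in commands:
    print(" ".join(cmd))
```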

The second computer is set up with Android-runner, a Python-based experiment automation framework created by a VU University student. The Android device is connected to this computer through USB. Android-runner performs a sequence we defined and stores the log data from the Android device. After an experiment trial finishes, Android-runner gathers the Trepn data. We replicate each trial five times, after which Android-runner automatically moves on to the next website. Figure 1 shows a visual representation of our setup, with numbers indicating the experiment sequence. Using this setup, we can easily change the network condition, web-browser, and websites by changing the configurations in Fiddler and Android-runner.
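Such a run can be described declaratively. The configuration below is a hypothetical sketch in the spirit of Android-runner's JSON configuration; the keys, device name, and profiler fields are illustrative assumptions, not taken verbatim from the tool:

```python
import json

# Illustrative experiment configuration: 5 replications per website,
# three browsers, with Trepn profiling the metrics from our questions.
config = {
    "type": "web",
    "repetitions": 5,
    "devices": ["nexus6p"],            # hypothetical device name
    "browsers": ["chrome", "firefox", "opera"],
    "paths": ["https://example.com"],  # stands in for one of the 150 websites
    "profilers": {
        "trepn": {"data_points": ["battery_power", "cpu_load", "memory_usage"]}
    },
}

print(json.dumps(config, indent=2))
```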

Figure 1: Experiment Environment

Analysis

Our experiment has two factors with more than two treatments, so we naturally opted for ANOVA to verify our hypotheses. Unfortunately, the collected data is far from normally distributed, and we found no alternative test applicable to two factors with more than two treatments. We therefore decided to block on the network condition factor and analyze the 2G and Wi-Fi data separately, applying the appropriate test (ANOVA or Kruskal-Wallis) based on the distribution of the data. Doing this means we have to correct our p-values using Holm’s correction. The result of the statistical tests was that we found no significant differences between the web-browsers in energy consumption, page loading time, CPU load, or memory usage.
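Holm’s correction itself is simple enough to sketch in a few lines. The function below is a minimal pure-Python implementation of the standard step-down procedure (the example p-values are arbitrary, not from our experiment):

```python
def holm_correction(p_values):
    """Holm's step-down correction: sort the p-values ascending,
    multiply the i-th smallest (0-based) by (m - i), enforce
    monotonicity of the adjusted values, and cap them at 1."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        value = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, value)  # keep adjusted values monotone
        adjusted[idx] = running_max
    return adjusted

print(holm_correction([0.01, 0.04, 0.03, 0.005]))
```

An adjusted p-value is then compared directly against the chosen significance level (e.g. 0.05), which controls the family-wise error rate across the multiple tests.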

Conclusion

One observation from the collected data is that the measurements under the 2G network condition were more skewed than those under Wi-Fi: page loading time is longer, energy consumption is higher, and performance is worse on the 2G network. However, the analysis shows that the network condition has no impact on the actual differences in energy consumption and performance between the web-browsers. In other words, it does not matter which of these web-browser applications you use on an Android device, as we found no significant differences between them in energy consumption, page loading time, CPU load, or memory usage.

Link to the report

References

[1] P. Daniel. Survey shows battery life to be the single main gripe of today’s mobile phone user. https://www.phonearena.com/news/Survey-shows-battery-life-to-be-the-single-main-gripe-of-todays-mobile-phone-user_id49818. Accessed: 2017-10-21.

[2] Anne Pilon. Smartphone Battery Survey: Battery Life Considered Important. https://aytm.com/blog/market-pulse-research/smartphone-battery-survey/. Accessed: 2017-10-20.

[3] NetMarketShare. Mobile/Tablet Operating System Market Share. https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=8&qpcustomd=1. Accessed: 2017-10-20.

[4] Victor R Basili. Software modeling and measurement: the Goal/Question/Metric paradigm, 1992.