My Article
This week I have chosen an article called Measuring Mobile Phone Use: Self-Report versus Log Data, written by Jeffrey Boase and Rich Ling. The article was published in the Journal of Computer-Mediated Communication, volume 18, issue 4, released in June 2013. The article investigates how accurate users’ perception of their mobile phone use is compared to the actual log data of their use. The authors also want to investigate whether there are differences in use between different demographics. The method in which people are asked to estimate their own use is called “self-report” in the article. It is also worth noting that when they talk about “use”, they mean frequency of use.
The methodology that the authors themselves use consists of both a collection of self-report data from a large set of mobile phone users and a collection of log data from the same people’s mobile phones. The idea is then to map these two data sets against each other and analyse how accurate the self-reported data actually is.
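Just to make the mapping idea concrete, here is a minimal sketch (in Python, not the authors’ actual code) of how self-report answers could be matched against log data for the same respondents. The respondent identifier and column names are made up purely for illustration.

```python
import pandas as pd

# Hypothetical survey answers: each row is one respondent's own estimate.
self_report = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "self_reported_calls": [5, 20, 2, 10],   # estimated calls "yesterday"
})

# Hypothetical operator log data aggregated per respondent for the same day.
log_data = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "logged_calls": [4, 12, 3, 9],
})

# Map the two data sets onto each other via the respondent identifier.
merged = self_report.merge(log_data, on="respondent_id")

# One simple accuracy indicator: the correlation between estimates and logs.
print(merged["self_reported_calls"].corr(merged["logged_calls"]))
```

The difference (or correlation) between the two columns is then what the statistical tests described below operate on.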
The self-report data was generated through a survey that was administered over the Internet (and by phone for those who did not have Internet access). The survey was conducted among Norwegian citizens between October and November 2008. Apart from personal profile questions, the survey contained two versions of a question about the users’ frequency of use: first, how they believed their activity was “yesterday”, and second, how they believed their activity was over longer periods of time, e.g. “weekly”. The survey had a total of 1382 respondents, and server log data existed for 613 of them. However, “only” 426 granted access to their log data, so this was the number of respondents the authors could work with in the comparison. Quite a bit fewer than 1382, but still definitely enough to draw quantitative conclusions in my opinion!
In their analysis they then apply different types of statistical methods to the data. These show whether the comparisons are statistically significant or not. They conduct a t-test to see whether the difference between self-reported and logged use is statistically significant. For the demographic analysis they use both a simple regression and a logistic regression of the data. These are analytic methods I would probably have used myself in such a situation.
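To show what these tests look like in practice, here is a hedged sketch in Python of a paired t-test, a simple regression and a logistic regression. The numbers are simulated and the choice of age as the demographic predictor is my own assumption for illustration, not the authors’ actual analysis.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: logged daily calls, self-report estimates that drift upward,
# and a made-up demographic variable (age).
logged = rng.poisson(8, size=100).astype(float)
self_report = logged + rng.normal(1.5, 3.0, size=100)
age = rng.integers(18, 65, size=100).astype(float)

# Paired t-test: is the mean difference between self-report and log significant?
res = stats.ttest_rel(self_report, logged)
print(f"paired t-test: t={res.statistic:.2f}, p={res.pvalue:.3f}")

# Simple (OLS) regression: does age predict the size of the reporting error?
error = self_report - logged
ols = sm.OLS(error, sm.add_constant(age)).fit()
print(ols.params)

# Logistic regression: does age predict whether someone over-reports at all?
over_reports = (error > 0).astype(int)
logit = sm.Logit(over_reports, sm.add_constant(age)).fit(disp=0)
print(logit.params)
```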
I cannot find many problems with the methodology in this particular article. I believe this is mainly because the issue they are investigating does not require many parameters or different types of answers. It is just a fair and square comparison. The demographic composition of the respondents also seems well distributed. I think the clarity and “crispness” of the methodology makes the paper very easy to understand, and the analysis becomes reliable as a result. One thing you might argue is a problem is of course that the results cannot be translated to just any market or any type of user, as the research was conducted at a certain time and place (Norway). The users were also all customers of a certain mobile operator, which could have influenced the results in a certain way. That, however, is not really a question of the chosen methodology in this case, in my opinion. I cannot say that I learned much from the paper purely methodology-wise; I have myself used similar survey methods and different types of statistical analysis.
The Article by Bälter et al.
The purpose of the article is to investigate the relationships between physical activity level, perceived stress, and the incidence of self-reported upper respiratory tract infection (URTI). The authors conducted a cohort study of 1509 Swedish men and women aged 20-60 years over a period of 4 months. A Web-based survey was used to collect information from the participants about their lifestyle, disease status, physical activity and perceived stress. After analysing the data, the key conclusions were that people with moderate to high physical activity had a lower risk of URTI and that highly stressed people might benefit from physical activity.
Quantitative vs Qualitative Methodology
Quantitative studies, like the survey methodology used in the articles presented above, are great when you want to analyse large sets of data and spot patterns from a broad perspective. Online questionnaires are also an easy way of getting many people to answer your questions. Statistical analysis is generally also considered more reliable than arbitrary interpretations of smaller data sets. A limitation when trying to spot these patterns, however, is the parameters that may affect the respondents’ answers, or even the type of respondents who tend to answer such questionnaires. The selection of respondents and the formulation of the questions are things you have to analyse thoroughly before you analyse the patterns in the results. Qualitative methods, on the other hand, allow deeper and more personal answers to the questions. However, such methods are hard to generalise.