I have chosen to take a closer look at the article "From echo chamber to persuasive device? Rethinking the role of the Internet in campaigns" by Cristian Vaccari, published in the February 2013 issue of the journal New Media & Society (IF: 1.824).
The paper questions the claim made by most e-campaigning literature: that the Internet cannot change political attitudes, only reinforce them. The author looks at how e-campaigns have changed over time, how people use political websites to take in information, and whether people afterwards share or publish something connected to what they have read. He investigates this through the receive-accept-sample (RAS) theory, which holds that viewers need to receive a message and accept it before it can change their own opinion.
I am a bit confused about what counts as a quantitative method and what does not. One definition of quantitative research is: "The quantitative researcher asks a specific, narrow question and collects a sample of numerical data from participants to answer the question. The researcher analyzes the data with the help of statistics. The researcher is hoping the numbers will yield an unbiased result that can be generalized to some larger population." This paper mainly uses two different methods to gather data. The author conducted 31 interviews with people connected to different online political campaigns in the US. The author says this was qualitative research, but one could argue, based on the number of interviews (31) and the fact that the research area was quite narrow (people responsible for e-campaigning in the biggest political parties in the US), that it was a quantitative method too. The interviews form the basis of some of the statistics and numbers in the paper, which is why I am not sure whether this counts as a quantitative method or not. But if it does, it is a good one, since he obtained data and statistics relevant to his research and was able to connect the data to the theory.
The other method the author uses is collecting data from another research organization, summarizing it and recalculating some of it to make it comparable with, and usable alongside, the data in the paper. I guess this does not count as a quantitative method, since the author did not gather the data himself. On the other hand, he has looked it up, transformed it and published it in a way that makes it relevant to his research.
The hardest part of using quantitative methods is knowing whether the respondents can represent the larger population the researcher wants to learn about. But the same problem can arise with qualitative methods, since it is hard to know whether the interviewees will represent the people categorized in that group, unless the group is very small.
The paper "Physical Activity, Stress, and Self-Reported Upper Respiratory Tract Infection" showed how important it is, when using a quantitative method, to describe who has taken part in the survey and answered the questions. It is hard to take all opinions into account when those who do not have an email account, and those who work in one place, are sorted out, since they cannot send their replies to the questionnaire. In the specific study reported in that paper, the self-reporting of URTI felt like a factor that, if not clearly defined for the respondents, could be interpreted differently depending on who answered. But the fact that they took all these measures to make the result as trustworthy as possible makes it better quantitative research than it would have been without using or discussing those factors. Qualitative methods are good when it comes to information from people that might need to be dug up. In interviews it is possible to get more precise information and to use follow-up questions, which is not often possible with quantitative methods, especially if the respondents are anonymous.