The Drawbacks of the Digital Transition of Marketing Research: Implications for Decision Makers and the Industry

The primary aim of this paper is to draw practitioners’ attention to lesser-known risks of digital marketing research: while it enables quick and low-cost results, quality and reliability are not guaranteed. The paper also surfaces broader consequences of transitioning from traditional research, based on offline investigations and face-to-face interviews carried out by professionals, to digital research. The paper presents the results of a survey of a cohort of 200 freelance interviewers working for Italy’s main research institutions, conducted through a self-administered questionnaire. Recently, online marketing research, especially through panels, has gained meaningful traction. As demand for traditional marketing research contracts, professional interviewers are experiencing a material drop in requests for their in-field services and a worsening working environment. In turn, this affects the quality of the on-field research they can provide. This is the first study, to the best of the author’s knowledge, in which the issues and limitations of digital research are examined from the perspective of professional interviewers. This study enables managers and organisations that commission marketing research to make more informed decisions when facing the trade-offs between traditional and digital methods. Furthermore, it provides a view on how such choices may impact the future of professional interviewers and their services.


Introduction
Internet-based technologies have rapidly transformed the relationships between players of the marketing research industry. In particular, the changes have impacted the step where interviewers connect research companies and their clients with the data sources (respondents). This is a crucial part of the marketing research process because interviewers are the interface between those who have specific information needs and those who have the knowledge to fulfil them (Kumar, Leone, Aaker, & Day, 2020). They contact potential respondents, motivate them to participate in the research, and interact with them to collect the necessary information to answer the questions that the client has addressed to the research company.
Online research, conducted through panels and communities, has had a growing impact on the data collection process of marketing research (Vocino, Polonsky, & Dolnicar, 2015). According to a recent ASSIRM (Note 1) investigation, between 2016 and 2017 Italy recorded a heavy drop in the adoption of traditional research methodologies based on telephone (-15.2%) and face-to-face (-16.0%) interviews, while online research tools increased by 7.8% (ASSIRM, 2017). More specifically, between 2012 and 2019 the share of expenditure on telephone interviews halved from 14% to 7% and in-person interviews dropped by 6%, while the share of expenditure on online quantitative marketing research grew from 9% to 13% (Table 1).
Marketing research companies offer their clients low-cost tools that allow them to administer surveys to respondents who subscribe to panels or are members of online communities, without engaging interviewers. This way of obtaining information is leading to the substitution of traditional research methodologies with online research, driven by the advantages the latter offers both to marketing research companies and their clients in terms of lower costs and shorter waiting times (Göritz, Reinhold, & Batinic, 2002; Bowers, 1998).
Nevertheless, it is essential to consider whether the success of online marketing research comes with drawbacks or implications that have not been thoroughly discussed. In other words, it is appropriate to reflect on whether behind obvious cost and time savings lie hidden flaws, such as less reliable results. These observations are relevant for clients, who pay for the research and should be informed about the quality of the service they receive.
This study is based on the results of an investigation conducted on a group of professional interviewers specialised in on-field quantitative and qualitative research. Their role is central to the marketing research industry, as the trustworthiness of the data collected, and hence the reliability of the research, heavily depends on their skills. As online research started denting the market for traditional research, the services offered by professional interviewers have taken a hit. An in-depth analysis of trade union claims is beyond the scope of this study. However, since one of the main objectives of this article is to reflect on the less obvious consequences of the growth of online marketing research, it is impossible not to mention that it has led to a decrease in the average compensation and satisfaction of the professionals who operate in this field. This is also relevant because, in turn, it has the potential to affect the quality of their services.
The article aims to address the following questions:
RQ1: What can be learned from professional interviewers about the implications for the quality and reliability of marketing research that employs new technologies?
RQ2: How have the new digital technologies changed the working conditions of interviewers who are involved in the last mile of marketing research?

Overview of Previous Studies
Online research landed on the global research market in the late 1990s. Initially, it involved 'research conducted over the Internet, including electronic mail surveys, Web-browser-based surveys and concept tests, online interviews and focus groups' (Miller & Dickson, 2001, p. 139), followed by research conducted via online communities (Kozinets, 2002) and panels (Callegaro, Baker, Bethlehem, Goritz, Krosnick, & Lavrakas, 2015). When online surveys debuted in the field of marketing research, they generated a lot of interest thanks to their advantages over telephone and face-to-face interviews (Duffy, Smith, Terhanian, & Bremer, 2005), achieving high participation rates and impressive response times (McCullough, 2011; Taylor, 2000). Surveys filled in and submitted online notably reduce the manual labour associated with sending and receiving questionnaires as well as data entry (Iyer, 1996). Furthermore, since the cost of designing and sending digital surveys is meaningfully lower than printing and shipping traditional ones (Weible & Wallace, 1998), the cost per response is dramatically reduced. Additionally, web-based surveys promise higher data quality because they avoid errors attributable to interviewers (Healy & Malhotra, 2014) and leverage inbuilt automatic error detection systems (Roster, 2004). Online survey research continues to evolve from a mere data collection technology to a full-fledged research mode (Couper, 2000), and the substantive advances in online research methodologies over the past few years have led not only to a virtual transformation of the marketing research process, but also to substantial changes in the marketing research industry. Indeed, online research has become the dominant technology for some marketing research firms (McDevitt & Small, 2002).
However, already in 2003 Wilson and Laskey published an essay investigating a question that is still very relevant today: 'Internet based marketing research: a serious alternative to traditional research methods?', while the 'dirty little secrets of online panels' also started to emerge (GreyMatter Research Consulting, 2009). Online research has therefore been the focus of a number of heated debates over the past 20 years. According to some scholars, 'online research is more than just a new research modality. It represents a cultural and technological change in the way marketing research is done' (Miller & Dickson, 2001, p. 139), while others have raised doubts over its validity (Akubulut, 2015) and, more commonly, over the reliability of data collected online (Ilieva, Baron, & Healey, 2002). Specifically, when web access penetration was far from current levels, concerns were raised over the ability of online research to reach a representative sample of the population, in particular with regard to the exclusion of demographics other than the young, educated and high-income population (Mentha & Sivadas, 1995; Oppermann, 1995). A further limitation, highlighted in the initial phase of online research development, was the absence of centralised respondent email address databases (Litvin & Kar, 2001), which prevented the creation of sampling lists. Moreover, even when such lists were available, they quickly became obsolete as users changed email providers (Dommeyer & Moriarty, 2000).
Online research slowed down once the internet was no longer a novelty: as users became accustomed to email and inbox traffic began to rise, potential participants grew more irritated by invitations to online surveys (Bachmann, Elfrink, & Vazzan, 1996). Today, the majority of online research is conducted via online panels. In theory, this methodology promises high response rates, tight control over sampling, detailed information on respondents and the ability to reach harder-to-find audiences. Panels are populated by individuals who subscribe to digital platforms, often run by traditional research companies, in order to take part in online surveys (Göritz, 2004). However, even panel-based online research is not immune to criticism. Among panel members there may be 'respondents by profession' (Denis, 2001), determined to participate because of the incentives, regardless of their interest in the topic of the survey (Bang, Youn, Rowean, Jennings, & Austin, 2018). Respondents by profession sign up to multiple panels simultaneously to maximise their ability to collect points and enter prize competitions. They typically fill in the same survey multiple times, accessing it through different accounts to evade identity checks (Pecáková, 2016). Furthermore, because of anonymity and the lack of direct and personal contact with an interviewer, it is impossible to determine with certainty who is really submitting the responses, and panellists 'may dedicate the minimal cognitive effort required for providing plausible responses' (Paas & Morren, 2018, p. 13). While professional respondents are on the rise, well-intentioned ones are less and less represented in online samples compared with those gathered through traditional methods. The risk is to ground research results on individuals who have no knowledge, direct experience or interest in the topic of the investigation but nevertheless participate to get the reward.
Verifying the identity of participants is therefore a crucial problem for online marketing research (Khansa, Liginlal, & Kim, 2015). Without authentic respondents that provide genuine answers, based on real life experience and direct knowledge of the researched product, brand or service, there can be no marketing research. Regardless, today the online marketing research offer is very fragmented with extremely variable operating standards as 'there are limited or no laws, no quality standards from our industry […]. One extremely unfortunate result is that there are tons of junk out there' (Lipner, 2007, p. 142).

The Good and the Bad of Transitioning to Digital
Internet-based solutions have meaningfully reduced operating costs for research companies thanks to the ability to contact respondents through digital channels (email, SMS, social media, etc.) and panels. The arrival on the market of this type of research was warmly welcomed by the clients of research companies. Only at a later stage, as mentioned in the previous paragraph, did doubts about its methodology start to rise. These concerned in particular the recruitment procedures for participants (Downes-Le Guin, Baker, Mechling, & Ruyle, 2012; Comley & Beaumont, 2011) and the distortion effects that could stem from submissions by individuals who surfed the web more than typical consumers (Roster, Albaum, & Smith, 2017; Fulgoni, 2014). However, according to the author, the following areas have not been sufficiently investigated: i) the implications of online marketing research for the companies that operate in the research sector and ii) the consequences for the quality and reliability of results.
The substitution of in-person interviews with online surveys is not the only consequence of the digital transition of marketing research. A further significant change surrounds the way surveys are administered and responses are recorded. In the span of only a few years, interviewers have gone from paper surveys to computers, tablets, smartphones, etc., even when carrying out in-person interviews. This new way of collecting information has raised some doubts (Wilson & Laskey, 2003). First of all, are electronic devices better than paper-based surveys, or do they come with limitations that might influence the interview and, therefore, the quality of the data? In fact, the outcome of an interview is not determined solely by the way in which the interviewer asks the questions. If the interview requires filling in a paper survey, the analysts who elaborate the results pay a lot of attention to the notes that interviewers write on the form. These annotations can originate either from the respondent or the interviewer and regard all sorts of content. The notes coming from respondents consist of: i) spontaneous additions to clarify their responses or statements, ii) details on how they interpreted the question, and iii) requests for guidance. Notes taken by interviewers are: i) observations on questions that respondents find difficult to understand, ii) flags for original or unexpected answers, or iii) considerations on the personality of the respondent, useful for researchers to understand how reliable the answers are (especially for open-ended questions). This additional information can offer researchers a new perspective from which to interpret data or enrich the analysis, and hence elevate the quality of the research. Generally, this opportunity is not offered by surveys served on electronic devices or online panels, and when available it is often limited and impersonal.
The online research field, therefore, is at the heart of changes that have implications for the way data collection is ideated, organised and carried out. These changes involve all the actors of the industry (Figure 1). Client companies, committed to balancing their financial costs, reduce investments in research by requesting cheaper and quicker investigations. Research companies, to keep their offering competitive, go along with their clients’ requests and try to compensate for the drop in revenue by being more efficient. In order to achieve this, as highlighted by ASSIRM (2017) and ESOMAR (2017, 2018), they increasingly tap into digital technologies to cut costs. For the professionals in the last mile of marketing research the consequences are a reduction in job opportunities, compensation and budget for learning and development, as well as delays in receiving payments. The more qualified and experienced interviewers are led to accept only the most lucrative and attractive jobs, leaving the less appealing offers to colleagues who have lower or no qualifications and are less picky. This determines a meaningful deterioration in the quality of the data collected and hence in the reliability of the results. The output of any research depends on the quality of the raw materials on which the investigation is based, in other words the reliability, comprehensiveness and relevance of the data collected (Malhotra, Birks, & Wills, 2017). Primary data, in general, are held by third parties-potential or actual customers, suppliers, employees of the client company, sellers, competitors, etc. The way in which participants are recruited and enticed to participate, and the approach taken in formulating the questions and recording the answers, can influence the comprehensiveness and accuracy of the research (Hair, Bush, & Ortinau, 2009).
In this phase of the process the control exerted by research companies is indirect since their relationship with the data sources-for example in face-to-face interviews-is handled by interviewers. Interviewers hence carry out a central role because the quality of the results is determined by their professionalism: errors in this part of the study can compromise the final results of the research.

Methodology
This study is based on data collected through 200 interviews conducted via a 16-question structured survey administered to a sample of the target population, composed of about 3000 professional interviewers (Note 2). Additionally, 10 of the respondents who expressed their availability to be contacted for the purposes of the research were selected at random for personal phone interviews. The questionnaire is divided into four sections, dedicated to the following topics: the changes in the working environment for last mile operators, the reliability of the new research tools compared to traditional ones, the implications of the recent developments for the quality of the collected data and, finally, demographic information on the interviewers. Interviews were conducted between the 15th of May and the 15th of September 2019. Respondents were recruited through an email invitation sent to a list of 242 addresses of professional interviewers working in Italy. The list was compiled by aggregating information sourced from the research institutes that collaborated in this study. Of the 242 invitees, 32 individuals declined because they no longer worked as interviewers and 10 did not reply. In total, 200 correctly completed surveys were collected, representing a participation rate of over 80% (82.6%). Thirty days after the questionnaires were administered, once the data elaboration was complete, 10 of the respondents who gave their availability for further questions were chosen at random. They were contacted via email and invited to participate in a discussion on the main findings of the research. The selected professional interviewers all agreed to collaborate and were sent charts and descriptive analyses summarising the data collected in the survey. Six of the in-depth interviews were conducted by phone, three in person and one via Skype; they lasted between 40 and 60 minutes.
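The participation-rate arithmetic above can be verified directly from the figures reported in the text (242 invitations, 32 declines, 10 non-replies). The short sketch below is purely an illustrative check, not part of the study's methodology:

```python
# Illustrative check of the participation-rate figures reported in the text.
invited = 242    # email invitations sent
declined = 32    # no longer working as interviewers
no_reply = 10    # never responded to the invitation

# Completed surveys are the remainder of the invited list.
completed = invited - declined - no_reply

# Participation rate is computed over all invitations, as in the text.
participation_rate = completed / invited * 100
print(completed, round(participation_rate, 1))  # 200 82.6
```

Note that the rate is taken over all 242 invitations; computed over only the eligible invitees (242 − 32 = 210), the response rate would be higher still.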
Interviews were recorded and transcribed to facilitate the extraction of key themes and quotes to corroborate and expand on the findings from the surveys. As the personal interviews were conducted with a subset of the original sample, which, as described, was sourced directly from research institutes, the validity and reliability of this audience are high. However, it is worth highlighting that these interviewees represent a subset of a subset of respondents and are therefore likely to be less representative of the population than the survey sample.
The analysis unit is therefore composed of independent professionals who are regularly involved in the most value-adding tasks that research institutes assign to interviewers: i) interviews (face-to-face, in-depth, telephone, etc.), ii) mystery shopping, and iii) recruitment of participants or moderation of focus groups. Call centre workers and interviewers without specific training, or who work only sporadically, are excluded from the study (the sampling list only included professionals). 92% of respondents have been in the job for over 3 years and 81% for over 10 years, meaning that they have experienced the traditional methods for long enough to provide valid responses. The opinions of the professionals interviewed are influential because, on the one hand, they have a tight relationship with research institutes and, on the other, they have direct contact with the sources of data.
Even though professional interviewers are assigned a crucial part of the research process-the collection of primary data-they themselves have rarely been the subjects of research. Previous studies have focused on attempting to understand the number of professionals in the field and their contractual work conditions based on declarations from interviewers. In particular, the tight bond between the work of these professionals and the reliability of the data collected, and therefore the quality of the research, has not been given enough attention.
Among the small number of exceptions is the contribution of Bronner and Kuijlen (2007).

Results and Discussion
Overall, 200 completed surveys were collected (Table 2), 23 submitted by men (11.5%) and 177 by women (88.5%). The prevailing female component is explained by the fact that in Italy this profession is mainly practised by women.

The Decrease in Demand for Traditional Research Services
Participants were asked to evaluate the trend in demand for the professional services they offer over the two years before the interview (2016−2017), relative to the 13 services most requested by research companies (Table 3). For 12 out of 13 services, over 50% of respondents report a fall in demand; for 7 services, a decrease in demand was noted by over 70% of interviewers. Mystery client was the only service that benefited from an increase in demand, witnessed by over half of the respondents (53.4%).
Results highlight that while the drop in demand for face-to-face interviews of all types (at shop exits, at home, in companies, etc.) was observed by 70% of respondents, the decrease for telephone interviews (with consumers and with companies) was flagged by a meaningfully lower proportion of interviewers, 52.6% and 50.5% respectively. At the same time, a portion of respondents claim that demand for both kinds of telephone interviews has been on the rise. This is attributable to research institutions substituting face-to-face interviews-more expensive and organisation-intensive-with more convenient and cheaper alternatives in order to be more efficient. Research institutes are substituting complex and expensive services with less demanding ones, and mystery shopping is the only activity that the majority of respondents (53.4%) say is on the rise-this makes sense, since it is the one service that still requires the physical presence of the professional interviewer.

Changes in Tools
Noteworthy changes took place in the tools used by interviewers in the field (Table 4). The decrease in adoption of paper surveys-both those offering the possibility of taking notes and those that do not support this-was noted by 86.8% and 76.5% of respondents respectively. Over 80% (81.1%) of respondents flag the increased use of tablets and computers, while about 45% (44.3%) of interviewers claim that there has been a rise in the simultaneous use of paper surveys and electronic devices. 39.4% of respondents say that they have seen scanners being used more (to read barcodes and QR codes) and 36.4% state that the same has happened with video cameras and audio recorders. Collecting data, therefore, is becoming more thorough and in some regards more complex, employing tools that require interviewers to acquire new skills.

Paper vs Digital Surveys
Participants were asked to express their degree of agreement or disagreement with a number of statements regarding paper and digital surveys. The statements on paper-based surveys that reached the greatest consensus among respondents regarded the possibility to "annotate information that complements the answer" (65.0%), "come back to questions to correct or expand on them" (64.3%) and, finally, "take notes in the margins to aid interpretation of the answers or provide additional information" (61.5%); the statement on "making the participant feel more involved and persuading them to take part" drew lower agreement (39.5%). Interviewers therefore value paper surveys for their flexibility, which allows them to collect more comprehensive data thanks to the option to add notes and comments in the margins and to adapt the order of questions if needed (Table 5). Note. (*) Each statement was evaluated on a 5-level Likert scale from "Strongly agree" to "Strongly disagree".
When it comes to digital surveys (Table 6), greater consensus is reached on critical statements. 65% of interviewers agree that using digital surveys can result in "loss of data due to software downtime". Over half of the interviewers agreed that with digital surveys: i) interview times are inflated by "errors due to an excessive number of questions" (57.1%), and ii) answers are more approximate because of fatigue induced by too many questions using scale-based responses (54.5%). With digital surveys the order of the questions is strictly predetermined by how they are set up in the software. According to interviewers, this makes interviews more rigid and may affect the quality of the data collected, reducing the degree of comprehensiveness and precision of the answers. Digital surveys typically do not support impromptu jumping between survey sections and often do not cater for adding notes in the margins. The valuable additional comments that may occur spontaneously during the interview are lost. Furthermore, the fact that the interviewer is unable to adapt the order of the questions may affect the level of engagement of the respondent and decrease the ability to detect incoherent answers. Finally, digital surveys are sent from the research institute to the interviewer's device (tablet, smartphone, computer). The software used in this process does not support any interviewer intervention: the interviewer has no choice but to carry on with the interview even if there are mistakes in a question or something goes wrong in the way answers are collected. In the words of a professional interviewer with 12 years of experience, working mainly in North-West Italy: "Once I could not go ahead with the survey because the respondent gave an answer that was not present in the options provided. I was forced to select a random one among those available. But it was certainly not what the respondent would have chosen…".
The statement "collecting imprecise answers because of excessive use of scale-based questions" recorded 54.5% agreement (Table 6). Note. (*) Each statement was evaluated on a 5-level Likert scale from "Strongly agree" to "Strongly disagree".
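The agree/neutral/disagree shares reported around Tables 5 and 6 follow the usual practice of collapsing a 5-point Likert scale into three bands. The paper does not describe its coding, so the sketch below is a hypothetical illustration: it assumes responses are coded 1 = "Strongly agree" through 5 = "Strongly disagree", with the two "agree" codes and the two "disagree" codes pooled.

```python
from collections import Counter

def likert_shares(responses):
    """Collapse 5-point Likert codes (1=strongly agree .. 5=strongly disagree)
    into (agree, neutral, disagree) percentages, rounded to one decimal."""
    n = len(responses)
    counts = Counter(responses)
    agree = counts[1] + counts[2]      # pool the two agreement levels
    neutral = counts[3]                # midpoint of the scale
    disagree = counts[4] + counts[5]   # pool the two disagreement levels
    return tuple(round(100 * x / n, 1) for x in (agree, neutral, disagree))

# Made-up distribution for 200 respondents, chosen so that the agreement
# share matches the 54.5% reported for the scale-based-questions statement.
sample = [1] * 70 + [2] * 39 + [3] * 56 + [4] * 20 + [5] * 15
print(likert_shares(sample))  # → (54.5, 28.0, 17.5)
```

The distribution in `sample` is invented for illustration; only the resulting 54.5% agreement figure corresponds to a value reported in the text.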

The Rise in Complexity and Drop in Quality of Research
The guiding principle of research companies, and the reason why marketing research is conducted, is to ensure that the quality of the data collected enables researchers to draw reliable, up-to-date, accurate and thorough insights (Parasuraman, Grewal, & Krishnan, 2004). However, the evidence collected in this study highlights that in Italy over the past few years it has become increasingly challenging for research companies to stay true to this principle. To survive on the market, many research organisations have adopted new technological solutions that have worsened the employment outlook for last mile operators. Interviewers agree that while the complexity of their job has increased, this has not been reflected in their compensation. On the contrary, their working environment has worsened over the past few years, and this has driven a meaningful drop in job satisfaction and a deterioration of their performance (Table 7). Over 57% (57.3%) of professional interviewers stated that they now run into "more difficulties in executing their tasks" compared to the past. In particular, there has been an increase in the "complexity of the job" (56.3%), and they agree that the use of the tools (tablets, computers, video cameras) employed to collect data has increased (60.6%). About 77% of respondents (76.8%) flag a decrease in the "amount of work" offered by research companies; decreases are also reported in "per interview compensation" (86.6%) and in "job continuity" and "turnover" (82.8%). On top of worsening economic conditions, interviewers agree that "the time to carry out an assignment" has also been reduced (71.2%). During difficult economic times, client companies not only cut their research budgets but also expect quicker results. Furthermore, about half of the respondents state that the quality of work has generally gone down. Compressing the time available to complete the assignment is one of the main aspects that results in poorer research.
According to the interviewers, some research institutes incentivise faster data collection times by assigning more work to professionals who have turned around tasks more quickly in the past. The result of this is that interviewers are pushed into being sloppier as they worry about collecting the greatest number of interviews possible in the short time interval given.
"If I want to speed up, interviews can take just 6 minutes, but it is impossible; it would only be to make the numbers look good. At first, I strongly believed in the value of my job; now I am not so sure, having seen people who work in an approximate way manage to get many assignments from research institutes, while I have always conducted my job to the highest standard and have been put aside by a number of research companies". [Interviewer with 15 years of experience working mainly in the central regions of Italy].
"Some research companies prefer that we carry out a sloppy job, desperate as they are to collect the data rapidly (…). The most important thing is to conclude the research". [Interviewer with over 20 years of experience working mainly in the South of Italy and on the Islands].

The Initial Briefing
The briefings held by research companies for interviewers (sometimes with the client company also present) were an important step conducted before the beginning of an investigation, involving a presentation of the objectives of the research and of the survey. Today, the details of the assignment are communicated via written documents, by phone or by email. Interviewers agree that the lack of a kick-off meeting, and the superficial way in which briefings are conducted today when they do take place, are a major factor in the deterioration of the quality of on-field work. "A piece of paper with instructions will never be as detailed as a face-to-face meeting. In my experience I have noticed that the research projects that did not hold a briefing are the ones that suffer from more mistakes" [Interviewer with 18 years of experience working mainly in the North-East regions of Italy].
A further drawback is that the lack of an opportunity for analysts from the research company to meet and compare ideas with interviewers may result in hiring professionals who lack the required skills, impacting the quality of the research.

Quality Control on Field Work
Adhering to research protocols is fundamental to ensuring that output quality standards are met. According to the respondents, today checks are run superficially and lack shared principles. Research companies in fact adopt different control systems depending on their clients and the type of investigation. When uncertainty is high, rigorous checks are carried out, but these come with a high price tag. Yet such stringent checks only become necessary when data collection is put in the hands of professionals who are not sufficiently skilled or are underpaid, or when there are no training or briefing sessions. It would be better for all the players of the research industry to incentivise quality work instead of strengthening checks that-according to the interviewers-can easily be circumvented.

Concluding Remarks and Managerial Implications
Internet-based technologies are changing marketing research at its core. In the initial phase of the rise of digital marketing research, scholars focused on its advantages. However, to gain a better understanding of this trend, other aspects need to be taken into consideration. This paper contributes to the existing theoretical literature by expanding the understanding of the key consequences of the spread of online methodologies for the last mile of marketing research. Despite their relevance, these issues have received limited coverage so far, and this paper tackles them from the perspective of professional interviewers-an angle that, to the best of the author's knowledge, has not been explored before.
By surveying 200 professional interviewers, we learned that a drop in demand for on-field research services has occurred along with a meaningful worsening of the employment outlook. Interviewers voiced concerns over the quality of the data collected with on-field research and the risk of losing a wealth of skills and a body of qualified professionals. New digital technologies therefore have a profound effect on the working conditions of those who operate in the last mile of marketing research (RQ2). Such changes are experienced with a great deal of preoccupation by professional interviewers. The demand for marketing research is extremely polarised: on the one hand, there is an increased drive towards faster and cheaper investigations; on the other, traditional research is less sought after. Part of this behaviour is explained by a vast proportion of research companies' clients being attracted by low-cost research without being aware of the implications for the quality and reliability of the data (Evans & Mathur, 2018; Fulgoni, 2014).
ijms.ccsenet.org International Journal of Marketing Studies Vol. 13, No. 3; 2021

Combining the findings of the survey with discussions with research companies (which have asked to remain anonymous), it appears that research conducted with modern technologies, in particular through online panels, is very limited in terms of reliability. The limits mainly consist of the inability to verify the identity of participants, their purchasing behaviour, their knowledge of the products and services being researched and their motivations for taking part in online surveys (RQ1). Substituting traditional marketing research with online services, uncritically accepting the risks and consequences of taking decisions off the back of information coming from unknown or unverifiable sources, goes against the interests of both research companies and their clients. Pursuing short-term advantages to the detriment of the quality of the data collected produces negative consequences that affect all the parties-in the first place clients, whose savings through low-cost research might turn out to be just an illusion.
The development of new technologies has led marketing research companies to a critical phase that requires them to adapt how they conduct their business at every step. Research companies need to react to false respondents, to the disillusionment that followed the recognition of the real value brought by Big Data, and to concerns over the spread of AI technologies that, according to some experts, could tangibly reduce the demand for marketing research (GRIT Report, 2018). The automation of data analysis can answer who, what, where and how, but it lacks the creative and emotive layer required to understand the why driving consumer behaviour-the key question for most marketing managers.
Traditional marketing research, based on data collection carried out by professional interviewers, can therefore still play a role in the future because of its ability to uncover the why. This is an essential element of research and the foundation of the understanding of consumer behaviour. Traditional research is currently the only way to gain this kind of insight and cannot be substituted by automated data gathering and algorithms.

Limitations and Future Developments
This article is based on the answers provided by a group of professional interviewers who agreed to take part in this study, and therefore it does not rest on a representative sample. Since there is no registry of such professionals, it is challenging to size the target population (professional interviewers in Italy); information coming from different sources converged on an estimate of about 3000 professional interviewers. Furthermore, it would have been beneficial to engage more closely with research companies, but this was made challenging by their reluctance to discuss the topic.