
What are the characteristics of participatory surveillance systems for influenza-like-illness?

Nadege Atkins1,2*, Mandara Harikar1,2*, Kirsten Duggan1,2, Agnieszka Zawiejska1,2, Vaishali Vardhan1,2, Laura Vokey1,2, Marshall Dozier1,2, Emma F de los Godos1,2†, Emilie Mcswiggan1,2, Ruth Mcquillan1,2, Evropi Theodoratou1,2, Ting Shi1

1 Center for Population Health Sciences, Usher Institute, University of Edinburgh, Scotland, UK
2 UNCOVER (Usher Network for COVID-19 Evidence Reviews) Usher Institute, University of Edinburgh, Edinburgh, UK
* Joint first authorship.
† Equal contribution.

DOI: 10.7189/jogh.13.04130

Abstract

Background

Seasonal influenza causes significant morbidity and mortality, with an estimated 9.4 million hospitalisations and 290 000-650 000 respiratory-related deaths globally each year. Influenza can also cause mild illness, so not all symptomatic persons are necessarily tested for influenza. To monitor influenza activity, healthcare facility-based syndromic surveillance for influenza-like illness (ILI) is often implemented. Participatory surveillance systems for ILI play an important role in influenza surveillance and can complement traditional facility-based surveillance systems by providing real-time estimates of ILI activity. However, such systems differ in design across countries and contexts, making it necessary to identify their characteristics to better understand how they fit alongside traditional surveillance systems. Consequently, we aimed to investigate the performance of participatory surveillance systems for ILI worldwide.

Methods

We systematically searched four databases for relevant articles on participatory surveillance systems for influenza and ILI. We extracted data from the eligible studies and assessed their quality using the Joanna Briggs Institute Critical Appraisal Tools. We then synthesised the findings narratively.

Results

We included 39 of the 3797 retrieved articles in the analysis. We identified 26 participatory surveillance systems, most of which sought to capture the burden and trends of influenza-like illness and acute respiratory infections among cohorts with risk factors for influenza-like illness. Of all the reported assessments of surveillance system attributes, 52% concerned correlation with other surveillance systems, 27% representativeness, and 21% acceptability. Among studies that reported these attributes, all systems were rated highly in terms of simplicity, flexibility, sensitivity, utility, and timeliness. Most systems (87.5%) were also well accepted by users, though participation rates varied widely. However, despite their potential for greater reach and accessibility, most systems (90%) fared poorly in terms of representativeness of the population. Stability was a concern for some systems (60%), as was completeness (50%).

Conclusions

The analysis of participatory surveillance system attributes showed their potential in providing timely and reliable influenza data, especially in combination with traditional hospital- and laboratory-led surveillance systems. Further research is needed to design future systems with greater uptake and utility.


Seasonal influenza causes high rates of illness and mortality around the world. Estimates of yearly respiratory-related deaths range from around 290 000 to nearly 650 000 [1], with over 9.4 million people hospitalised with influenza-related lower respiratory infections in 2017 [2].

Influenza surveillance is defined as “the collection, compilation and analysis of information on influenza activity in a defined population”; its major objective is to lessen the disease’s effects by giving public health authorities meaningful data so they may more effectively plan suitable control and intervention measures, allocate resources to healthcare, and provide case management suggestions [3].

Participatory surveillance systems are emerging alongside more traditional forms of disease surveillance. They typically involve people reporting their own health information in real time using tools such as apps or hotlines [4]. Unlike traditional surveillance systems, which rely on reports from health professionals or laboratory testing based on specific case definitions, participatory surveillance involves individuals sharing information, typically about symptoms rather than diagnoses [4]. As such, these systems are more sensitive but less specific than traditional surveillance systems and can provide timely information about disease within a population [4,5]. Research has indicated that, in the context of influenza surveillance, participatory surveillance systems can act as reliable complements to current sentinel surveillance systems [6]. However, these systems face some key challenges, such as representativeness (who chooses to participate), accessibility (availability of internet or smartphone access), and health literacy [4].

Participatory surveillance systems have been recognised as having the potential to play an important role in population-level disease surveillance [4,5]. Research Recommendation 1.1.2 of the World Health Organization (WHO) Public Health Research Agenda for Influenza is to identify the reliability of complementary influenza surveillance systems, such as participatory surveillance, for providing real-time estimates of influenza activity.

There is considerable variation and creativity in participatory surveillance system design worldwide. While research has begun to show that participatory surveillance systems are useful as a complement to other forms of disease surveillance [4,6], there is no synthesis available on this topic. Thus, we aimed to identify the purpose and attributes of participatory surveillance systems for influenza-like illness (ILI) to provide information to decision-makers and organisations (such as WHO) interested in implementing, establishing or enhancing their own participatory surveillance systems.

METHODS

Prior to conducting this review, we developed a study protocol based on the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols 2015 (PRISMA-P 2015) guidelines (Appendix 1 in the Online Supplementary Document) and later followed the PRISMA 2020 guidelines in conducting the study.

We systematically searched Embase, Global Health, MEDLINE, and medRxiv on 14 December 2022 to identify relevant studies evaluating influenza and ILI participatory surveillance systems. We had previously created comprehensive search strategies for each database using keywords and alternative terms derived from literature scoping searches (eg, “influenza” and “surveillance”) (Appendix 2 in the Online Supplementary Document). We imported the search results from Embase, Global Health, and MEDLINE into Covidence (Covidence, Melbourne, Australia) and those from medRxiv into Mendeley, version 1.19.4 (Elsevier, Amsterdam, Netherlands), as Covidence does not support medRxiv citations. We then conducted automatic deduplication, followed by manual removal of any duplicates missed by the software.

Two reviewers then independently screened the titles and abstracts of the retrieved articles, followed by the full texts of potentially eligible studies, according to pre-developed eligibility criteria (Table 1). Conflicts were resolved by a third independent reviewer. A list of the excluded studies and the reasons for their exclusion during the full-text reading stage is provided in Appendix 3 in the Online Supplementary Document.

Table 1.  Eligibility criteria used for the review


ILI – influenza-like illness

Two reviewers then independently extracted data from each eligible study using a standard data extraction form, previously developed based on the Cochrane guidelines [7]. In an iterative process, the form was first piloted on three studies and adapted to ensure that all relevant data were extracted. We extracted data regarding category (studies reporting results from participatory surveillance systems or studies evaluating participatory surveillance systems), study location, study aim, definition of influenza used, and objectives and attributes of the participatory surveillance system.

We performed quality assessment of the included studies with the Joanna Briggs Institute (JBI) Critical Appraisal tool [8], which comprises questions related to the selection of study participants, the measurement of exposures and outcomes, and adjustment for potential confounders. Each question was answered with “Yes”, “No”, “Unclear” or “Not applicable”. We calculated the percentage of “Yes” responses among all questions to obtain comparable quality scores across the selected studies, so the overall quality score for each study ranged from 0 to 100 (≥80 = high quality, 50 to <80 = moderate quality, <50 = low quality). Due to time constraints, each study’s quality was assessed by a single reviewer.
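To make the scoring concrete, the calculation can be sketched as follows (a minimal illustration under our own assumptions; the function names and example answers are hypothetical and not taken from the review):

```python
# Illustrative sketch of the JBI-based scoring described above; the helper
# names and example answers are hypothetical, not taken from the review.

def quality_score(answers):
    """Percentage of "Yes" responses among all appraisal questions (0-100).

    Handling of "Not applicable" answers is an assumption here: as in the
    text, every question counts towards the denominator.
    """
    return 100 * sum(a == "Yes" for a in answers) / len(answers)

def quality_category(score):
    """Map a 0-100 score to the quality bands used in the review."""
    if score >= 80:
        return "high"
    if score >= 50:
        return "moderate"
    return "low"

# Example: 6 of 8 questions answered "Yes" -> score 75.0, i.e. moderate quality.
answers = ["Yes", "Yes", "No", "Yes", "Unclear", "Yes", "Yes", "Yes"]
print(quality_score(answers), quality_category(quality_score(answers)))
```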

We conducted a narrative synthesis following guidance from Popay et al. [9] and the Centre for Reviews and Dissemination [10].

RESULTS

The search retrieved 4976 studies, 1179 of which were discarded as duplicates. Two reviewers independently screened the titles and abstracts of the remaining 3797 studies, with a third reviewer resolving conflicts. After excluding 3718 studies, two independent reviewers read the full texts of the remaining 100 articles, 61 of which were excluded because they covered non-participatory surveillance systems (n = 35), lacked information on features of surveillance systems (n = 18), had an ineligible study design (n = 5), or did not have a full text available (n = 3). We finally included 39 studies meeting the eligibility criteria (Figure 1). These studies encompassed 18 geographical locations; eighteen were conducted in the European Region, twelve in the Region of the Americas, and nine in the Western Pacific Region (Table S1 in the Online Supplementary Document). We included three cohort studies, which were rated as high quality with an average score of 85%, and 36 cross-sectional/observational case studies, which were all of moderate quality with an average score of 72% (Tables S2-S3 in the Online Supplementary Document).

Figure 1.  PRISMA flow diagram of screening process.

ILI participatory surveillance systems and their objectives

The objectives of 18 of the 26 participatory surveillance systems we identified were explicitly stated by the authors (Table 2).

Table 2.  ILI participatory surveillance systems and objectives


FNY – Flu Near You, IVR – Interactive voice response, GN – Grippe Net, GGNET – Grossesse-GrippeNet (Pregnancy-GrippeNet), ILI – influenza-like illness, VA TT – Veterans Affairs Telephone Triage, ENVIRH – Environment and Health in children day care centers, GIS – Great Influenza Survey, GP – general practitioner

Attributes of participatory surveillance systems

The most frequently reported system property was correlation or concurrence with other surveillance systems. Almost all studies reported a statistically significant, moderate or strong correlation between ILI incidence captured by the participatory systems and trends recorded by the comparators [11-27]. Only Prieto et al. [28] found a negative correlation (r = -0.3) between the number of ILI cases reported to their Mi Gripe system and the number of reported laboratory-confirmed cases of influenza published by the Pan American Health Organization.
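As a rough illustration of the kind of comparison these studies made, a Pearson correlation between weekly ILI rates from a participatory system and a sentinel comparator could be computed as below (hypothetical data, not figures from any included study):

```python
# Hypothetical example only: weekly ILI rates (eg, per 1000 participants or
# consultations) from a participatory system and a sentinel comparator.
from scipy.stats import pearsonr

participatory = [1.2, 1.8, 2.5, 4.1, 6.3, 5.0, 3.2, 2.0]
sentinel = [0.9, 1.5, 2.2, 3.8, 5.9, 5.4, 3.5, 1.8]

r, p_value = pearsonr(participatory, sentinel)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")  # strong positive correlation
```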

Fifteen studies analysed the utility of the participatory systems and reported multiple uses beyond collecting epidemiological data on ILI incidence. The systems allowed access to specific populations, such as children [19,29,30], pregnant women [31], and healthcare workers [21], as well as access to information on healthcare-seeking behaviour [16,32,33]. Participatory systems like FluWatchers or FluTracking were used to collect information about vaccination uptake and effectiveness [16,34], while the School Health Surveillance System (SHSS) was designed to provide data on school absenteeism [30].

Eleven studies explored timeliness. Most reported prompt detection of changes in baseline ILI activity by participatory systems, ahead of the pattern changes detected by the reference systems [16,20,27]. Most participatory systems’ users disclosed information about their health status within three days of the weekly reminders being distributed [17,28,35,36] or within three days of symptom onset [17].

Ten studies reported on the systems’ representativeness. Several studies reported a bias towards female users [14-16,23,35,37] and better-educated populations. One study reported a bias towards younger users [23], while two studies noted that users were older compared to the general population [14,38]. Despite these inconsistencies, the extremes of the age distribution were the most underrepresented, except in the case of Kim et al., whose study was designed to target the paediatric population [16,19,22,35].

Eight studies provided data on participatory systems’ acceptability. Except for the study by Vandendijck et al. on a Belgian cohort of Great Influenza Survey (GIS) users [22], all studies consistently reported high acceptability, expressed, for example, as high response (74%-78%) and retention (79%-80%) rates for FluWatchers. Australian studies on FluTracking also noted an increasing number of participants in subsequent rounds of recruitment [37,39].

Only a few studies analysed systems’ flexibility, reporting the ability either to expand a system’s design to capture more data over the study period [20,27] or to make adjustments in response to users’ changing needs [28,40].

There were few data on the systems’ sensitivity; a few studies reported that participatory systems were able to capture epidemic peaks up to three weeks before they were detected by the reference surveillance systems [19,24,35,38,41].
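One generic way to quantify such a lead time, shown below purely as an illustration and not necessarily the method used in the included studies, is to find the weekly lag at which the participatory and reference signals correlate most strongly:

```python
# Illustrative sketch only (hypothetical helper, not from the reviewed
# studies): estimate how many weeks a participatory signal leads a reference
# signal by finding the lag that maximises the correlation between
# participatory[t] and reference[t + lag].
import numpy as np

def lead_time_weeks(participatory, reference, max_lag=4):
    """Return (lag, correlation) for the best-matching weekly lag."""
    n = min(len(participatory), len(reference))
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        x = np.asarray(participatory[:n - lag], dtype=float)
        y = np.asarray(reference[lag:n], dtype=float)
        r = np.corrcoef(x, y)[0, 1]  # Pearson correlation at this lag
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Hypothetical weekly series where the participatory peak arrives ~2 weeks early.
participatory = [1, 2, 4, 8, 6, 3, 2, 1, 1, 1]
reference = [1, 1, 1, 2, 4, 8, 6, 3, 2, 1]
print(lead_time_weeks(participatory, reference))  # -> (2, 1.0)
```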

A few studies discussed the systems’ simplicity, mostly reporting various ways of facilitating participants’ contribution, like the availability of a dedicated, free-of-charge phone line for parents [29], an easily accessible web-based portal, or applications compatible with operating systems commonly used on mobile devices [21]. The studies highlighted that intuitive software and automated maintenance of the system improved researchers’ experience with the National Health Service (NHS) Direct and Mi Gripe, respectively [28,42].

Six studies analysed system stability. Disruptions in operation lasting up to two months were reported for FluMob, resulting in data loss during the 2007 influenza outbreak [21]. Meanwhile, the stable Japanese SHSS benefited from an established system of school absence reporting, while NHS Direct was protected from data loss by regular servicing of the software and data transmission modalities [30,42].

Only two studies monitored the completeness of the data, with inconsistent observations: while more than half of Mi Gripe application users did not send reports, data circulated within the NHS Direct surveillance system were complete [28,42].

Approaches to collecting and storing data

Web-based data collection systems included online questionnaires, study-specific websites, dedicated mobile applications, and email prompts with links to the survey. Other conventional and time-tested technologies, such as telephone interviews, interactive voice response systems, and text messages, were also used. While most systems relied on the self-reporting of symptoms, smart thermometers directly sent temperature readings via an accompanying mobile application.

Only two studies [19,21] reported on data storage. Fever Coach data were stored in real-time databases that were updated daily, while FluMob data were similarly stored in an accessible central database updated in real time (Table S5 in the Online Supplementary Document).

Study approval

Fewer than one-third of the studies evaluating participatory surveillance systems confirmed seeking participants’ consent for their participation in the study (n/N = 11/39; 28.2%), and fewer than half (n/N = 15/39; 38.5%) reported seeking prior approval from an ethics board (Table S6 in the Online Supplementary Document).

Recruitment and retention of participants

Of the 22 systems that reported on the recruitment of participants, only three (13.6%) reported using a sampling method for data collection. Most other systems randomly recruited participants via email, text messages, push notifications on mobile phones, and social media platforms. Press releases, television advertisements, website notifications, and posters were also used to publicise the systems and invite participants. Only a few systems (n/N = 8/26; 30.8%) reported efforts to retain participants by sending weekly reminders through SMS, app notifications, or email newsletters (Table 3).

Table 3.  ILI surveillance systems’ participant recruitment and retention strategies


FNY – Flu Near You, IVR – Interactive Voice Response, BRFSS – Behavioral Risk Factor Surveillance System, GN – Grippe Net, GIS – Great Influenza Survey, GGNET – Grossesse-GrippeNet (Pregnancy-GrippeNet), ENVIRH – Environment and Health in children day care centers, IMS – Internet-based monitoring system, MoSAIC – Mobile Surveillance for Acute Respiratory Infections and Influenza-Like Illness in the Community, ILI – influenza-like illness

Adjustment for potential bias and confounders

Fifteen studies reported on adjusting for potential bias, while only five adjusted for confounders, mostly sex, age, or location [14,25,33,40,45] (Table S7 in the Online Supplementary Document).

DISCUSSION

Through this rapid review, we summarised the characteristics of participatory surveillance systems for influenza worldwide, based on 39 eligible studies from 18 countries covering 26 participatory surveillance systems. Twenty-one (80%) of the included studies reported on the systems’ correlation with other surveillance systems, 10 (38%) on acceptability, 16 (62%) on representativeness, 15 (58%) on timeliness and utility, five (19%) on flexibility, and four (15%) on simplicity and stability. There were limited data available on the completeness of the systems, with only two (8%) studies reporting on this attribute.

Participatory influenza surveillance systems were often found to be comparable [32,40] or, in a few instances, superior to existing sentinel surveillance systems [29,35] in providing advance warning of seasonal influenza activity. However, while these systems were reported to have high sensitivity for influenza detection, their specificity may vary depending on the syndromic definition of ILI used.

The systems’ ability to adapt to changing user demographics, data requirements, and user experience expectations suggests that participatory surveillance systems can adjust to the changing demands of a public health threat. However, only a few studies reported on this attribute, so more research is needed.

Most systems used a web-based portal for data collection (n/N = 14/26), five used telephone surveys, three used mobile apps, and two reported mixed approaches comprising mobile and web-based platforms or telephone surveys and laboratory sample data. Data storage arrangements were recorded for only two systems, both of which used mobile apps for data collection – FluMob [21] and Fever Coach [19]. Most systems did not report obtaining ethical approval from a regional ethics committee for data collection. Of the 22 systems that reported on the recruitment of participants, only three reported using a sampling method for data collection, while the remaining ones mostly randomly recruited participants by sending invitations via telephone, text messages, or web-based platforms. Recruitment was bolstered by posters, website notifications, and television advertisements. Fewer than a third of systems reported efforts to retain participants via weekly reminders sent through SMS, app notifications, or email newsletters. Data on adjustment for bias were reported for 13 systems and data on adjustment for confounders for four systems.

This rapid review confirms the sustainability of systems that were set up almost two decades ago and have been operating for more than a decade, like the GIS [22,46]. It thus adds to the existing body of evidence that participatory systems have evolved into a stable form of surveillance at the national, regional, and global levels [6]. Furthermore, participatory systems have become a valid source of data contributing to the modelling and simulation of surveillance systems in a wider interdisciplinary context involving a broader range of stakeholders. Importantly, studies on the Australian FluTracking system also evaluated its operation, which has been identified by a recent review as an important aspect for informing modern participatory systems [16].

Fifteen of the 39 studies selected for the review reported seeking ethical clearance for the systems, while an even smaller proportion (n = 11) reported obtaining participants’ consent to take part in the study. These weaknesses are also discussed in a review of research ethics within the Influenzanet consortium, which confirmed that most of its member countries sought approval from research ethics committees and protected personal data [49]. However, the low proportion of identified studies satisfying ethical requirements and subjects’ rights may reflect the complexity and varying awareness of biomedical ethics regarding large-scale public health interventions in the digital era. These concerns are also comprehensively addressed in recent literature, which offers an ethical framework developed specifically for participatory surveillance systems designed for human health [49].

We also identified studies re-using routinely available medical data collected during patient triage in emergency or outpatient facilities for influenza surveillance [41,42,50]. This approach explores the dual use of information collected within healthcare systems, both for clinical purposes and to support and enhance surveillance. Although capable of addressing the surveillance needs of healthcare systems and reported as non-traditional surveillance systems elsewhere [6], this approach should be taken with caution and carefully examined to ensure that aims specific to public health are also satisfied.

Strengths and limitations

The strengths of this review are its robust systematic review methodology, the development of a detailed protocol to ensure transparency, the creation of comprehensive search strategies, and the use of a wide range of databases. We also included unpublished literature to reduce the impact of publication bias. Another strength is the broad eligibility criteria: rather than restricting the search to studies that explicitly claimed to be “participatory”, we screened all reports on influenza surveillance systems and selected those that were participatory. This helped include systems like the Behavioral Risk Factor Surveillance System (BRFSS) or NHS Direct that are also used for non-ILI-related surveillance.

Among the included studies, we found systems to vary widely in their objectives, attributes, and underlying technologies, making their evaluation in this review useful for a wide range of surveillance scenarios. We also looked at lesser-known aspects of these systems, such as whether they acquired ethical approval and how they collected and stored data, which can be useful in the development of future systems.

However, this study also has some limitations. First, we could only conduct a narrative synthesis, which is inherently prone to bias based on reviewers’ interpretations [51]. Second, the assessment of study quality was subjective and challenging due to considerable heterogeneity among the included studies. Further, owing to time constraints, the quality assessment was only performed by a single reviewer. The choice of an appropriate quality appraisal tool initially proved challenging, considering the diversity of study designs and research questions among the included studies and the fact that most studies did not explicitly state their study design. We thus assessed those reported as cohort studies using the JBI tool for cohort studies and the remaining ones using the JBI tool for cross-sectional studies. Given that most studies examining participatory surveillance systems did not use a valid method to identify the condition (such as laboratory-confirmed diagnosis of influenza) or measure the condition reliably, they were rated as moderate in quality. Nevertheless, this is a limitation of participatory surveillance systems in general, and save for the above two characteristics, most studies were otherwise scientifically robust.

We only included studies published in English, possibly introducing language bias. Finally, we excluded studies that focused exclusively on the use of participatory surveillance systems for vaccine monitoring and not for tracking ILI or severe acute respiratory infections (SARI). Further studies on the use of participatory surveillance systems in assessing vaccine uptake and effectiveness will be useful in discovering wider applications of these systems.

Our findings can support further research into participatory surveillance systems and provide information for public health policy makers looking to establish additional surveillance. Participatory surveillance can be a useful complement to existing sentinel surveillance systems. However, future research could gather data from more representative samples of the population to establish the acceptability of any participatory system, especially given such systems’ ability to reach populations that are less likely to be included in traditional surveillance. Here we demonstrated the importance of acceptability, correlation, and timeliness in participatory surveillance, as well as the potential effects of unstable systems. As a cost-benefit analysis would also be of interest to policy makers, future analyses should consider the cost and simplicity of running and maintaining any system.

CONCLUSIONS

Participatory surveillance systems possess considerable potential in providing timely and reliable influenza surveillance data. Certain limitations, such as poor representativeness of the user population, unavailability of complete data, disparate access to the surveillance system, and inadequate ethical clearance may prevent participatory systems from substituting physician-, hospital-, and laboratory-led surveillance systems. Nevertheless, they may be valuable in complementing traditional surveillance systems, especially for vulnerable populations who may not seek timely care, for remote and underserved regions, or for periods when traditional systems are overburdened or lacking. Moreover, participatory surveillance systems can aid in understanding healthcare seeking behaviour.

While we highlighted the characteristics of existing participatory surveillance systems in this review, further research on what makes participatory surveillance systems successful will help design future systems with greater uptake and utility for influenza surveillance. Given that influenza threatens global public health security, timely and accurate surveillance is of critical importance to protect those most at risk.

Additional material

Online Supplementary Document

Acknowledgements

We thank members of UNCOVER, and WHO Global Influenza Programme team members for their support.

[1] Funding: This article is part of WHO Global Influenza Programme funded by the WHO.

[2] Authorship contributions: NA, MH, KD, AZ, VV, LV contributed substantially to the conception, writing, and reviewing of the manuscript. MD, EG, EM, RM, ET, TS contributed to the conception and reviewing of the manuscript. All authors contributed to the final version of the article.

[3] Disclosure of interest: The authors completed the ICMJE Disclosure of Interest Form (available upon request from the corresponding author) and disclose no relevant interests.

REFERENCES

[1] World Health Organization. Burden of disease. 2023. Available: https://www.who.int/teams/global-influenza-programme/surveillance-and-monitoring/burden-of-disease. Accessed: 15 June 2023.

[2] GBD 2017 Influenza Collaborators. Mortality, morbidity, and hospitalisations due to influenza lower respiratory tract infections, 2017: an analysis for the Global Burden of Disease Study 2017. Lancet Respir Med. 2019;7:69-89. DOI: 10.1016/S2213-2600(18)30496-X. [PMID:30553848]

[3] World Health Organization. Global Epidemiological Surveillance Standards for Influenza. Geneva: World Health Organization; 2013.

[4] MS Smolinski, AW Crawley, and JM Olsen. Finding outbreaks faster. Health Secur. 2017;15:215-20. DOI: 10.1089/hs.2016.0069. [PMID:28384035]

[5] OP Wójcik, JS Brownstein, R Chunara, and MA Johansson. Public health for the people: participatory infectious disease surveillance in the digital age. Emerg Themes Epidemiol. 2014;11:7 DOI: 10.1186/1742-7622-11-7. [PMID:24991229]

[6] A Hammond, JJ Kim, H Sadler, and K Vandemaele. Influenza surveillance systems using traditional and alternative sources of data: A scoping review. Influenza Other Respir Viruses. 2022;16:965-74. DOI: 10.1111/irv.13037. [PMID:36073312]

[7] Cochrane Collaboration. Cochrane handbook for systematic reviews of interventions. London: Cochrane Collaboration; 2022.

[8] Joanna Briggs Institute. Critical appraisal tools. 2023. Available: https://jbi.global/critical-appraisal-tools. Accessed: 1 March 2023.

[9] Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: A product from the ESRC methods programme. np: np; 2006.

[10] University of York. Centre for Reviews and Dissemination. 2023. Available: https://www.york.ac.uk/crd/. Accessed: 15 June 2023.

[11] K Baltrusaitis, M Santillana, AW Crawley, R Chunara, M Smolinski, and JS Brownstein. Determinants of Participants’ Follow-Up and Characterization of Representativeness in Flu Near You, A Participatory Disease Surveillance System. JMIR Public Health Surveill. 2017;3:e18. DOI: 10.2196/publichealth.7304. [PMID:28389417]

[12] K Baltrusaitis, A Vespignani, R Rosenfeld, J Gray, D Raymond, and M Santillana. Differences in regional patterns of influenza activity across surveillance systems in the United States: Comparative evaluation. JMIR Public Health Surveill. 2019;5:e13403. DOI: 10.2196/13403. [PMID:31579019]

[13] R Chunara, E Goldstein, O Patterson-Lomba, and JS Brownstein. Estimating influenza attack rates in the United States using a participatory cohort. Sci Rep. 2015;5:9540 DOI: 10.1038/srep09540. [PMID:25835538]

[14] C Guerrisi, C Turbelin, C Souty, C Poletto, T Blanchon, and T Hanslik. The potential value of crowdsourced surveillance systems in supplementing sentinel influenza networks: The case of France. Euro Surveill. 2018;23:1700337. DOI: 10.2807/1560-7917.ES.2018.23.25.1700337. [PMID:29945696]

[15] M Desroches, L Lee, S Mukhi, and C Bancej. Representativeness of the FluWatchers Participatory Disease Surveillance Program 2015-2016 to 2018-2019: How do participants compare with the Canadian population? Can Commun Dis Rep. 2021;47:364-372. DOI: 10.14745/ccdr.v47i09a03. [PMID:34650333]

[16] L Lee, M Desroches, S Mukhi, and C Bancej. FluWatchers: Evaluation of a crowdsourced influenza-like illness surveillance application for Canadian influenza seasons 2015-2016 to 2018-2019. Can Commun Dis Rep. 2021;47:357-63. DOI: 10.14745/ccdr.v47i09a02. [PMID:34650332]

[17] K Fujibayashi, H Takahashi, M Tanei, Y Uehara, H Yokokawa, and T Naito. A new influenza-tracking smartphone app (flu-report) based on a self-administered questionnaire: Cross-sectional study. JMIR Mhealth Uhealth. 2018;6:e136. DOI: 10.2196/mhealth.9834. [PMID:29875082]

[18] L Kamimoto, GL Euler, PJ Lu, A Reingold, J Hadler, and K Gershman. Seasonal influenza morbidity estimates obtained from telephone surveys, 2007. Am J Public Health. 2013;103:755-63. DOI: 10.2105/AJPH.2012.300799. [PMID:23237164]

[19] M Kim, S Yune, S Chang, Y Jung, SO Sa, and HW Han. The Fever Coach Mobile App for Participatory Influenza Surveillance in Children: Usability Study. JMIR Mhealth Uhealth. 2019;7:e14276. DOI: 10.2196/14276. [PMID:31625946]

[20] C Lucero-Obusan, CA Winston, PL Schirmer, G Oda, and M Holodniy. Enhanced influenza surveillance using telephone triage and electronic syndromic surveillance in the department of Veterans Affairs, 2011–2015. Public Health Rep. 2017;132:16S DOI: 10.1177/0033354917709779. [PMID:28692402]

[21] MO Lwin, J Lu, A Sheldenkar, C Panchapakesan, YR Tan, and P Yap. Effectiveness of a mobile-based influenza-like illness surveillance system (FluMob) among health care workers: Longitudinal study. JMIR Mhealth Uhealth. 2020;8:e19712. DOI: 10.2196/19712. [PMID:33284126]

[22] Y Vandendijck, C Faes, and N Hens. Eight years of the Great Influenza Survey to monitor influenza-like illness in Flanders. PLoS One. 2013;8:e64156. DOI: 10.1371/journal.pone.0064156. [PMID:23691162]

[23] C Cawley, F Bergey, A Mehl, A Finckh, and A Gilsdorf. Novel methods in the surveillance of influenza-like illness in Germany using data from a symptom assessment app (Ada): Observational case study. JMIR Public Health Surveill. 2021;7:e26523. DOI: 10.2196/26523. [PMID:34734836]

[24] AC Miller, I Singh, E Koehler, and PM Polgreen. A smartphone-driven thermometer application for real-time population- and individual-level influenza surveillance. Clin Infect Dis. 2018;67:388 DOI: 10.1093/cid/ciy073. [PMID:29432526]

[25] SF Ackley, S Pilewski, VS Petrovic, L Worden, E Murray, and TC Porco. Assessing the utility of a smart thermometer and mobile application as a surveillance tool for influenza and influenza-like illness. Health Informatics J. 2020;26:2148 DOI: 10.1177/1460458219897152. [PMID:31969046]

[26] NL Tilston, KTD Eames, D Paolotti, T Ealden, and WJ Edmunds. Internet-based surveillance of Influenza-like-illness in the UK during the 2009 H1N1 influenza pandemic. BMC Public Health. 2010;10:650 DOI: 10.1186/1471-2458-10-650. [PMID:20979640]

[27] SP van Noort, CT Codeço, CE Koppeschaar, M van Ranst, D Paolotti, and MG Gomes. Ten-year performance of Influenzanet: ILI time series, risks, vaccine effects, and care-seeking behaviour. Epidemics. 2015;13:28-36. DOI: 10.1016/j.epidem.2015.05.001. [PMID:26616039]

[28] JT Prieto, JH Jara, JP Alvis, LR Furlan, CT Murray, and J Garcia. Will Participatory Syndromic Surveillance Work in Latin America? Piloting a Mobile Approach to Crowdsource Influenza-Like Illness Data in Guatemala. JMIR Public Health Surveill. 2017;3:e87. DOI: 10.2196/publichealth.8610. [PMID:29138128]

[29] P Paixão, C Piedade, A Papoila, I Caires, C Pedro, and M Santos. Improving influenza surveillance in Portuguese preschool children by parents’ report. Eur J Pediatr. 2014;173:1059 DOI: 10.1007/s00431-014-2285-7. [PMID:24599798]

[30] H Takahashi, H Fujii, N Shindo, and K Taniguchi. Evaluation of the Japanese school health surveillance system for influenza. Jpn J Infect Dis. 2001;54:27-30. [PMID:11326126]

[31] P Loubet, C Guerrisi, C Turbelin, B Blondel, O Launay, and M Bardou. First nationwide web-based surveillance system for influenza-like illness in pregnant women: participation and representativeness of the French G-GrippeNet cohort. BMC Public Health. 2016;16:253 DOI: 10.1186/s12889-016-2899-y. [PMID:26969654]

[32] AJ Elliot, A Bermingham, A Charlett, A Lackenby, J Ellis, and C Sadler. Self-sampling for community respiratory illness: A new tool for national virological surveillance. Euro Surveill. 2015;20:21058 DOI: 10.2807/1560-7917.ES2015.20.10.21058. [PMID:25788252]

[33] V Marmara, D Marmara, P McMenemy, and A Kleczkowski. Cross-sectional telephone surveys as a tool to study epidemiological factors and monitor seasonal influenza activity in Malta. BMC Public Health. 2021;21:1828 DOI: 10.1186/s12889-021-11862-x. [PMID:34627201]

[34] CB Dalton, SJ Carlson, L McCallum, MT Butler, J Fejsa, and E Elvidge. Flutracking weekly online community survey of influenza-like illness: 2013 and 2014. Commun Dis Intell Q Rep. 2015;39:E361-8. [PMID:26620350]

[35] C Kjelsø, M Galle, H Bang, S Ethelberg, and TG Krause. Influmeter – an online tool for self-reporting of influenza-like illness in Denmark. Infect Dis (Lond). 2016;48:322-7. DOI: 10.3109/23744235.2015.1122224. [PMID:26654752]

[36] MS Stockwell, C Reed, CY Vargas, S Camargo, AF Garretson, and LR Alba. Mosaic: Mobile surveillance for acute respiratory infections and influenza-like illness in the community. Am J Epidemiol. 2014;180:1196 DOI: 10.1093/aje/kwu303. [PMID:25416593]

[37] SJ Carlson, CB Dalton, MT Butler, J Fejsa, E Elvidge, and DN Durrheim. Flutracking weekly online community survey of influenza-like illness annual report 2011 and 2012. Commun Dis Intell Q Rep. 2013;37:E398. [PMID:24882237]

[38] SS Lee and NS Wong. Respiratory symptoms in households as an effective marker for influenza-like illness surveillance in the community. Int J Infect Dis. 2014;23:44-6. DOI: 10.1016/j.ijid.2014.02.010. [PMID:24680819]

[39] C Dalton, D Durrheim, J Fejsa, L Francis, S Carlson, and ET d’Espaignet. Flutracking: a weekly Australian community online survey of influenza-like illness in 2006, 2007 and 2008. Commun Dis Intell Q Rep. 2009;33:316-22. [PMID:20043602]

[40] C Bexelius, H Merk, S Sandin, O Nyrén, S Kühlmann-Berenzon, and A Linde. Interactive voice response and web-based questionnaires for population-based infectious disease reporting. Eur J Epidemiol. 2010;25:693-702. DOI: 10.1007/s10654-010-9484-y. [PMID:20596884]

[41] DL Cooper, NQ Verlander, AJ Elliot, CA Joseph, and GE Smith. Can syndromic thresholds provide early warning of national influenza outbreaks? J Public Health (Oxf). 2009;31:17 DOI: 10.1093/pubmed/fdm068. [PMID:18032426]

[42] A Doroshenko, D Cooper, G Smith, E Gerard, F Chinemana, and N Verlander. Evaluation of syndromic surveillance based on National Health Service Direct derived data – England and Wales. MMWR Suppl. 2005;54:117 [PMID:16177702]

[43] M Biggerstaff, M Jhung, L Kamimoto, L Balluz, and L Finelli. Self-reported influenza-like illness and receipt of influenza antiviral drugs during the 2009 pandemic, United States, 2009–2010. Am J Public Health. 2012;102:e21. DOI: 10.2105/AJPH.2012.300651. [PMID:22897525]

[44] CB Dalton, SJ Carlson, MT Butler, J Feisa, E Elvidge, and DN Durrheim. Flutracking weekly online community survey of influenza-like illness annual report, 2010. Commun Dis Intell Q Rep. 2011;35:288-93. [PMID:22624489]

[45] M Debin, C Turbelin, T Blanchon, I Bonmarin, A Falchi, and T Hanslik. Evaluating the feasibility and participants’ representativeness of an online nationwide surveillance system for influenza in France. PLoS One. 2013;8:e73675. DOI: 10.1371/journal.pone.0073675. [PMID:24040020]

[46] MM de Lange, A Meijer, IH Friesema, GA Donker, CE Koppeschaar, and M Hooiveld. Comparison of five influenza surveillance systems during the 2009 pandemic and their association with media attention. BMC Public Health. 2013;13:881 DOI: 10.1186/1471-2458-13-881. [PMID:24063523]

[47] D Perrotta, A Bella, C Rizzo, and D Paolotti. Participatory Online Surveillance as a Supplementary Tool to Sentinel Doctors for Influenza-Like Illness Surveillance in Italy. PLoS One. 2017;12:e0169801. DOI: 10.1371/journal.pone.0169801. [PMID:28076411]

[48] M Rehn, A Carnahan, H Merk, S Kühlmann-Berenzon, I Galanis, and A Linde. Evaluation of an Internet-based monitoring system for influenza-like illness in Sweden. PLoS One. 2014;9:e96740. DOI: 10.1371/journal.pone.0096740. [PMID:24824806]

[49] LD Geneviève, T Wangmo, D Dietrich, O Woolley-Meza, A Flahault, and BS Elger. Research ethics in the European Influenzanet consortium: Scoping review. JMIR Public Health Surveill. 2018;4:e67. DOI: 10.2196/publichealth.9616. [PMID:30305258]

[50] T Ma, H Englund, P Bjelkmar, A Wallensten, and A Hulth. Syndromic surveillance of influenza activity in Sweden: An evaluation of three tools. Epidemiol Infect. 2015;143:2390 DOI: 10.1017/S0950268814003240. [PMID:25471689]

[51] McKenzie JE, Brennan SE. Chapter 12: Synthesizing and presenting findings using other methods. In: Higgins JPT, Thomas J, editors. Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). UK: Cochrane; 2023.

Correspondence to:
Nadege Atkins
UNCOVER (Usher Network for COVID-19 Evidence Reviews), Usher Institute, University of Edinburgh
Old College, South Bridge, Edinburgh EH8 9YL
UK
[email protected]
Mandara Harikar
UNCOVER (Usher Network for COVID-19 Evidence Reviews), Usher Institute, University of Edinburgh
Old College, South Bridge, Edinburgh EH8 9YL
UK
[email protected]