Computer assisted telephone interviewing (CATI) for health surveys in public health surveillance:

Methodological issues and challenges ahead

Vol. 25 No. 2, 2004

Bernard CK Choi

Abstract

This article describes methodological issues, challenges, and a vision for using computer assisted telephone interviewing (CATI) in a comprehensive public health surveillance system in the 21st century. Methodological issues include funding of surveys, survey frequency, sample size considerations, response rates, and types of bias to be considered in questionnaire design. Challenges include the recognition of the merits and limitations of CATI, and the potential for greater use in surveillance of public health issues in health regions requiring rapid and regular data. The vision of a CATI survey-based, rapid, flexible, cost-effective public health surveillance system is described. It is concluded that further discussion and views on improvements with regard to CATI methodological and practical issues will help build a better CATI survey-based public health surveillance system for the future.

Key words: CATI; computer assisted telephone interviewing; methodology; public health surveillance; questionnaire bias; survey

Introduction

In the development of methods for comprehensive public health surveillance in the 21st century,1-3 health surveys using computer assisted telephone interviewing (CATI) play an important role.4 CATI is a technique that combines long-standing methods, such as interviews and surveys, with modern technologies, such as the telephone and the computer.5 It is becoming the data collection method of choice in an increasing number of health surveys.6-8 However, CATI health surveys raise methodological issues.9-11 The question is how to move forward with CATI health surveys in public health surveillance.

This article reviews a number of methodological issues and proposes a number of strategies to tackle the challenges ahead. It describes the usefulness of the CATI method for surveys as a component of a public health surveillance system. Two important concepts are proposed to stimulate further discussion and debate on a vision of CATI survey-based surveillance: the importance of local CATI surveys in public health surveillance and the ability to generate rapid data.

Methodological issues

1. Sources of funding

An important question concerning health surveys is: Who pays for the survey?

Frequency distributions of the funding sponsors for the 67 Canadian health surveys carried out between 1950 and 1997, derived from the results of Kendall et al.,12 indicate that the major sources of funding were the federal government (75% of the surveys), followed by the provincial government (54%) and research institutes, foundations or societies (15%) (Table 1).

Many federal government departments besides Health Canada also funded health surveys (see the “Other” category in Table 1). Local government has traditionally not played a role in funding health surveys (0%). In terms of data collection, the provincial government played a more important role (42% of the surveys) than the federal government (30%), universities (21%) and survey contractors (6%). The management role of surveys was shared by the provincial government (42% of the surveys), the federal government (34%) and the universities (24%).

It is important to identify stakeholders and funding sources for CATI surveys. Local governments are a potential funding source that traditionally has not been tapped. Methods may be developed to better coordinate the funding efforts of all levels of government and stakeholders.

2. Frequency of surveys

Another question concerning health surveys is: How frequently should a survey be conducted?

The frequency of health surveys in Canada from 1950 to 1997 ranged from four times a year (2% of the surveys) to only once (the majority, or 61%) (Table 2). Most of the surveys collected data only once (50% of the 30 national health surveys and 70% of the 37 provincial surveys). In general, national health surveys tended to be repeated more often than provincial health surveys: 14% of national surveys were conducted every year, whereas no provincial survey was conducted more often than every two years and the most common repeat interval for provincial surveys was every five years (16%).

TABLE 1
Canadian health surveys, 1950-1997, by sponsor, data collector and management, based on analysis of data by Kendall et al.12
  Organization Survey sponsor Data collector Management
Federal government 50 (75%) 20 (30%) 23 (34%)
Health Canada 40 (60%) 1 (2%) 6 (9%)
Statistics Canada 9 (13%) 19 (28%) 14 (21%)
Other* 12 (18%) 0 (0%) 3 (5%)
Provincial government 36 (54%) 28 (42%) 28 (42%)
Local government 0 (0%) 1 (2%) 0 (0%)
Universities 2 (3%) 14 (21%) 16 (24%)
Research institutes, foundations, societies 10 (15%) 3 (5%) 6 (9%)
Industries 2 (3%) 0 (0%) 0 (0%)
Survey contractor 0 (0%) 4 (6%) 0 (0%)
Total** 67 (100%) 67 (100%) 67 (100%)

* Examples include: Agriculture, Citizenship Culture & Recreation, Communications, Consumers & Corporate Affairs, Fitness, Indian Affairs, Mortgage & Housing, National Defence, Secretary of State, Veterans Affairs.
** Numbers do not add up to the total because categories are not mutually exclusive, e.g., some surveys have multiple sponsors.

Taken to its extreme, survey frequency leads to the concept of a continuous survey. For example, the annual sample can be spread evenly over 12 months to keep the survey going continuously; in other words, 100 interviews per month can be carried out instead of 1,200 interviews per year in a survey region. Indeed, in Cycle 1.1 of the 2000 Canadian Community Health Survey (CCHS), data were collected on a monthly basis.13 A continuous survey has many advantages, such as better data quality, timeliness of data, the ability to detect seasonal trends, and continuous employment of staff and interviewers, which means consistency of the data. In addition, there is no increased cost; in fact, continuous surveys may lead to decreased expenditures by eliminating the costs of advertising positions, hiring staff and training interviewers every time the survey is restarted.
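As a purely illustrative sketch of that allocation (the 1,200-per-year figure comes from the example above; the function name and Python framing are additions of this sketch), the annual target can be split into equal monthly replicates:

```python
# Illustrative sketch: spread an annual interview target evenly over 12 months
# so that a survey region collects data continuously instead of in one burst.
# The 1,200-per-year figure follows the example in the text; any remainder is
# spread over the first few months.

def monthly_allocation(annual_target: int, months: int = 12) -> list[int]:
    base, remainder = divmod(annual_target, months)
    return [base + (1 if m < remainder else 0) for m in range(months)]

if __name__ == "__main__":
    plan = monthly_allocation(1200)
    print(plan)        # [100, 100, ..., 100]
    print(sum(plan))   # 1200 -- same annual total, collected continuously
```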

The concept of "repeated sampling of the same population" was first described by Cochran in 1963.14 With a dynamic population, a large survey at infrequent intervals is of limited use; a series of small samples at shorter intervals is more informative, and the results may be published regularly. With a population in which time changes are slow, on the other hand, an annual average taken over 12 monthly samples or four quarterly samples may be adequate. With just a slight change in the sampling methodology, the survey becomes much more versatile and informative.

TABLE 2
Canadian health surveys, 1950-1997, by type and frequency, based on analysis of data by Kendall et al.12
Frequency of survey National health surveys (1950-1997) Provincial health surveys (1977-1997) Total
Once 15 (50%) 26 (70%) 41 (61%)
Twice 3 (10%) 4 (11%) 7 (10%)
Occasional 2 (7%) 0 (0%) 2 (3%)
Every 5 years 1 (3%) 6 (16%) 7 (10%)
Every 4 years 1 (3%) 0 (0%) 1 (2%)
Every 3 years 1 (3%) 0 (0%) 1 (2%)
Every 2 years 2 (7%) 1 (3%) 3 (4%)
Every year 4 (14%) 0 (0%) 4 (6%)
4 times a year 1 (3%) 0 (0%) 1 (2%)
Total* 30 (100%) 37 (100%) 67 (100%)
* Each survey was counted only once; duplicate entries in the original data were eliminated.

3. Sample size

The next question is: What sample size is adequate?

In Canada, national health surveys have larger sample sizes than provincial health surveys: the most common sample size was 10,000 to 19,999 for national surveys (30%), compared with 2,000 to 2,999 for provincial surveys (33%) (Table 3). However, given that there are 10 provinces in Canada, the provincial health surveys taken together may provide greater coverage of the population than national health surveys.

Sample sizes are strongly affected by available funding. In Australia, the 1989 South Australia Health Omnibus Survey conducted 3,000 interviews/year15 and the 1997-1998 New South Wales Health Survey conducted 17,000 (1,000/year per Area Health Service).16 In the US, the 1998 Behavioral Risk Factor Surveillance System (BRFSS) involved 120,000 participants per year (100 to 400/month per state).17 In Canada, the 1994 National Population Health Survey (NPHS) longitudinal panel had a sample size of 17,27618 and the 2000 CCHS had 130,000 respondents (2,000 to 42,000/year per province).13

It is easy to calculate sample size if one hypothesis is tested. However, it is difficult to perform sample size calculations for a CATI survey because there are usually many items of interest in the survey. Survey experience indicates that for most purposes a minimum sample size of 100 interviews per month per survey region is probably adequate.
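For the single-estimate case, the familiar formula n = z^2 p(1-p)/d^2 for one proportion can be sketched as follows; this is a generic illustration, not the calculation used by any of the surveys cited, and the 50% prevalence and plus-or-minus 5% margin are assumed defaults:

```python
import math

def sample_size_for_proportion(p: float = 0.5, margin: float = 0.05,
                               z: float = 1.96) -> int:
    """Approximate sample size needed to estimate a single proportion p
    within +/- margin at the confidence level implied by z (1.96 for 95%)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

if __name__ == "__main__":
    # Assumed example: 50% prevalence, +/-5 percentage points, 95% confidence
    print(sample_size_for_proportion())             # 385
    # A +/-10 point margin needs roughly the 100 interviews per month cited above
    print(sample_size_for_proportion(margin=0.10))  # 97
```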

TABLE 3
Canadian health surveys, 1950-1997, by type and sample size, based on analysis of data by Kendall et al.12
Sample size National health surveys (1950-1997) Provincial health surveys (1977-1997)
150,000-199,999 1 (3%) 0 (0%)
100,000-149,999 1 (3%) 0 (0%)
50,000-99,999 2 (7%) 1 (3%)
40,000-49,999 0 (0%) 1 (3%)
30,000-39,999 3 (10%) 1 (3%)
20,000-29,999 5 (16%) 0 (0%)
10,000-19,999 9 (30%) 2 (5%)
5,000-9,999 2 (7%) 3 (8%)
4,000-4,999 2 (7%) 2 (5%)
3,000-3,999 1 (3%) 7 (19%)
2,000-2,999 2 (7%) 12 (33%)
1,000-1,999 2 (7%) 3 (8%)
Less than 1,000 0 (0%) 2 (5%)
Not reported 0 (0%) 3 (8%)
Total 30 (100%) 37 (100%)

4. Response rate

What response rate is adequate?

Most health surveys in Canada from 1950 to 1997 achieved a high response rate of 70% to 100% (Table 4). National health surveys in general had a higher response rate than provincial surveys. It has been suggested that the target response rate should be 80% or more.19 In Canada, the 1994 NPHS had a response rate of 96%,20 and its 1996 cycle a response rate of 94%.21 The 2000 CCHS had a response rate of 85%.22

TABLE 4
Canadian health surveys, 1950-1997, by type and response rate, based on analysis of data by Kendall et al.12
Response rate National health surveys (1950-1997) Provincial health surveys (1977-1997)
90%-100% 3 (10%) 1 (3%)
80%-90% 8 (27%) 11 (30%)
70%-80% 7 (23%) 13 (35%)
60%-70% 2 (7%) 3 (8%)
50%-60% 0 (0%) 0 (0%)
40%-50% 1 (3%) 0 (0%)
30%-40% 0 (0%) 0 (0%)
20%-30% 0 (0%) 0 (0%)
10%-20% 0 (0%) 0 (0%)
0%-10% 0 (0%) 0 (0%)
Not reported 9 (30%) 9 (24%)
Total 30 (100%) 37 (100%)

There are ways to promote high response rates, such as using trained interviewers and voices that encourage compliance (e.g., female voices), paying attention to speech pace and rhythm, and selecting an appropriate design and length for the questionnaire. Response rates are usually lower for interviewers with "foreign" accents and for young male interviewers. Questionnaires must be pretested for wording, question order and possible misinterpretation of questions.

5. Questionnaire design

In the Encyclopaedia of Biostatistics, Choi and Pak provided a catalogue of 109 epidemiologic biases, classified by study stage: literature review (4), study design (31), study execution (3), data collection (46), analysis (15), interpretation (7) and publication (3).23 Most of the biases (46/109, or 42%) occur in the data collection stage.

There are a number of questionnaire biases that specifically affect telephone interviews. For example, complex and lengthy questions should be avoided in a telephone interview. When the precise wording of a question changes from survey to survey, or the measurement scale for a quantity changes over surveys, the results may not be comparable. Some questions may be framed in such a manner that respondents are misled to a wrong choice (framing bias). The mindset of the respondents can affect their perception of questions, and therefore their answers. Questionnaires that are too long can induce fatigue in respondents and result in uniform and inaccurate answers (response fatigue bias). In a lengthy interview, e.g., a telephone interview that lasts for more than an hour, interviewees are unable to concentrate and give correct answers, and tend to say "yes" so as to quickly get through the interview (yes-saying bias).

Characteristics of CATI

In order to further advance the technique of CATI for public health surveillance, its merits and limitations must be fully appreciated (Table 5).

1. Merits of CATI

Interview

The interview is an efficient way to obtain a wide variety of information (compared with record searching, dosimeter reading, laboratory testing, etc.). It is the only way to obtain opinion and information on attitudes and, in many cases, it can collect information not available from other sources. An interview can target specific subgroups in the population (e.g., using surnames to identify ethnic groups).24

Telephone

Use of the telephone is a cost-effective method that can provide access to large samples (compared with personal interview) and instant results (compared with mailed surveys). Random digit dialing (RDD) can select reasonably random samples, by reaching unlisted numbers and recent connections. Telephone interviews can facilitate responses to sensitive questions (e.g., about sexual behaviours) as compared with personal interviews and are less likely to lead to socially desirable answers.
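As a rough illustration of how RDD reaches unlisted numbers and recent connections (the area code, exchanges and function below are invented for this sketch, not taken from any survey described here):

```python
import random

def rdd_sample(area_code: str, exchanges: list[str], n: int, seed: int = 0) -> list[str]:
    """Generate n candidate telephone numbers by attaching a random 4-digit
    suffix to known area-code/exchange banks. Because the suffix is random,
    unlisted numbers and recent connections can be reached; invalid numbers
    (disconnected, business, fax) are screened out when dialled."""
    rng = random.Random(seed)
    return [
        f"{area_code}-{rng.choice(exchanges)}-{rng.randrange(10000):04d}"
        for _ in range(n)
    ]

if __name__ == "__main__":
    # Hypothetical area code and exchanges, for illustration only
    print(rdd_sample("613", ["555", "556", "557"], n=5))
```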

Computer assisted

CATI can link with a Survey Management System (SMS) to log interviewer activity, schedule repeat calls, select interviewees randomly, remove numbers from the call queue, produce operational reports, and perform automatic dialing. CATI can help interviewers to do their job more efficiently by eliminating tedious paperwork and by providing on-screen instructions. For example, the screen may display previously provided names of family members so that the interviewer can refer to them by name in follow-up questions. It can facilitate data collection by randomizing the order of questions and response options, by programming edits and consistency checks into the questionnaire and by providing automatic question skips for complex questions. The CATI technique may be especially useful if the question structure is complicated or there are many possible responses. This was the case in a survey that required information about asthma medication in terms of 486 possible combinations of drug, dose and delivery systems.25 Furthermore, checks can be built in and an immediate warning given if a reply lies outside an acceptable range or is inconsistent with previous replies.
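A minimal sketch of the kind of programmed edits described above follows: a range check, a consistency check against an earlier reply, and an automatic skip. The questions and acceptable ranges are invented for illustration and are not from any questionnaire cited here.

```python
# Hypothetical CATI-style edits: range check, consistency check, automatic skip.

def ask_int(prompt: str, low: int, high: int) -> int:
    """Prompt until the reply is a whole number in the acceptable range;
    the interviewer gets an immediate on-screen warning otherwise."""
    while True:
        reply = input(f"{prompt} ").strip()
        if reply.isdigit() and low <= int(reply) <= high:
            return int(reply)
        print(f"  Please enter a number between {low} and {high}.")

def interview() -> dict:
    answers = {"age": ask_int("How old are you?", 12, 105)}
    answers["smoker"] = input("Do you smoke? (y/n) ").strip().lower() == "y"
    if answers["smoker"]:                      # automatic skip: smokers only
        answers["cigarettes_per_day"] = ask_int("Cigarettes per day?", 0, 100)
        started = ask_int("At what age did you start smoking?", 5, 105)
        while started > answers["age"]:        # consistency check vs. earlier reply
            print("  That is older than the reported age - please re-check.")
            started = ask_int("At what age did you start smoking?", 5, 105)
        answers["age_started"] = started
    return answers

if __name__ == "__main__":
    print(interview())
```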

TABLE 5
Merits and limitations of computer assisted telephone interviewing (CATI)

Interview

  Merits:
  1. Efficient way to obtain information
  2. Only way to obtain opinion and information on attitudes
  3. Can target specific subgroups in the population

  Limitations:
  1. Self-report is always a problem
  2. The ability of persons to understand the questions
  3. The ability to identify their status
  4. Willingness to report their status
  5. Need for data to be validated, e.g., by record searching

Telephone

  Merits:
  1. Cost-effective
  2. Provides large samples
  3. Provides instant results
  4. Random digit dialling can provide reasonably random samples
  5. Facilitates sensitive questions

  Limitations:
  1. Not applicable if the percentage of households owning a telephone is low
  2. Tendency for people who stay at home to be interviewed
  3. Random digit dialling may reach invalid numbers
  4. Attention span problem
  5. Cannot show visual aids
  6. People are tired of telephone market surveys

Computer assisted

  Merits:
  1. Can link with a Survey Management System (SMS)
  2. Helps interviewers (on-screen instructions)
  3. Facilitates data collection (randomized questions, edits and consistency checks, question skips)
  4. Facilitates data entry (direct entry, programmed coding)

  Limitations:
  1. Large initial cost
  2. Requires specific software
  3. More time to develop and test the questionnaire
  4. More time to train interviewers
  5. Possible data entry errors (typing slips)

CATI permits direct data entry in an electronic format (reducing processing time and costs) and therefore quick data turnaround. Coding procedures can be programmed into the computer. A number of alternative coding methods are available, ranging from a simple pick list (shown on the screen) to more sophisticated coding tools that guide the selection of an appropriate code according to pre-specified rules and which require only a few letters to be typed. This not only reduces the costs of office coding, but also allows for better data quality.
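The pick-list idea might work along the lines sketched below; the code list is invented and far smaller than the 486 medication combinations mentioned earlier.

```python
# Sketch of a simple coding aid: typing the first few letters narrows a list
# of pre-specified codes, so the interviewer picks a code rather than keying
# free text. The code list here is hypothetical.

CODES = {
    "101": "salbutamol inhaler",
    "102": "salmeterol inhaler",
    "201": "inhaled corticosteroid",
    "301": "oral corticosteroid",
}

def match_codes(typed: str) -> dict[str, str]:
    """Return the codes whose descriptions start with the typed letters."""
    prefix = typed.strip().lower()
    return {code: desc for code, desc in CODES.items() if desc.startswith(prefix)}

if __name__ == "__main__":
    print(match_codes("sal"))  # the two 'sal...' inhalers remain to choose from
    print(match_codes("inh"))  # {'201': 'inhaled corticosteroid'}
```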

2. Limitations of CATI

Interview

Self-report in any survey is a problem. First, respondents may not understand the questions. Second, they may not be able to identify their status; for example, up to 50% of people with diabetes do not know they have diabetes.26 Third, they may not want to report their status (e.g., sexual behaviours). Data collected by interview therefore still need to be validated occasionally, for example by record searching.

Telephone

A telephone survey is not applicable if the percentage of households owning a telephone is low, because it can only reach those who can afford a telephone.27 People who normally stay at home (e.g., older or retired people, or housewives) are the ones who tend to be interviewed. RDD may reach disconnected, non-household, fax or other invalid numbers. In general, respondents' concentration and patience are shorter over the telephone than in a personal interview. Therefore, long blocks of text should be avoided, concepts should be relatively simple, rating scales straightforward and response alternatives short (a maximum of five or six), and the interview normally should be limited to 20 minutes. Interviewers cannot show cards, life history calendars or other visual aids during the interview. A telephone survey also loses the personal touch and rapport of a personal interview. Finally, people nowadays are tired of telephone interviews, as there are already too many consumer and market surveys.

Computer assisted

CATI requires a large initial set-up cost (hardware, software and personnel). It requires software specifically developed for the questionnaire, and more time and effort to develop and test the questionnaire. Automatic question skips can cause problems if not fully tested. More effort is also needed to train interviewers to use computers. Finally, there is a possibility of data entry errors, e.g., typing slips when entering the answers: one study found that 2.0% of responses recorded by computer were erroneous, compared with 1.1% when a paper-and-pencil technique was used.28

Discussion

To practise public health, we need information that is timely, appropriate, affordable, and accurate. There is great potential for CATI survey technology to address surveillance needs within a public health context. In this regard, the vision and framework of a rapid, flexible, cost-effective, CATI survey-based public health surveillance system need to be further developed and tested.29 The feasibility of using ongoing local CATI sample surveys to supplement national health surveys is worth exploring further.

Traditionally, health surveys are funded by the federal and, to a lesser extent, the provincial governments. It is important to recognize that there are shared and separate responsibilities of federal, provincial, regional and local agencies and organizations. Recent experience in Canada in the development of a Rapid Risk Factor Surveillance System (RRFSS) indicates that regional and local governments are perhaps interested in providing funding for a nation-wide local CATI-survey based surveillance system that can address regional and local needs.30 Local CATI surveys have enormous potential within public health and health surveillance. Further exposition of their roles and their advantages would better serve that potential. In the US, work is underway to further develop a local and state level CATI survey-based surveillance system, the State and Local Area Integrated Telephone Survey, to supplement national CATI surveys.31

It is also necessary to place the use of local CATI surveys such as RRFSS in the context of the expensive and ambitious National Health Survey program, which includes the CCHS, jointly funded by Statistics Canada, Health Canada and the Canadian Institute for Health Information. For example, what is the added value of rapid health region data, given that official national data are now also available at the health region level, albeit after a certain time lag? Are specialized CATI survey surveillance units within individual health regions necessarily the answer to obtaining critical information?

There is a question of CATI survey capacity at the local and regional level, even when local CATI surveys are justified. Given the complexity of setting up CATI surveys, is it practicable for regional health units to conduct their own CATI surveys? Or is it more cost-effective to employ specialized contractors to conduct local CATI surveys through a national or provincial coordination scheme?

At the local health region level, many health units do not have analytical or data processing resources. An example is the recent CCHS, which collected data at the health region level. The survey yields great amounts of data that might be relevant to health regions, but many health regions cannot access and analyze the data independently because they lack staff and resources, including hardware and software. However, these types of problems can be worked out through communication among the parties that have an interest in the data. Automated standard data analysis software, which produces a standard set of statistics, tables and graphs, may be helpful to the health regions in this regard.1,29
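Such automated standard analysis could be as simple as a script that turns a region's monthly data file into a fixed set of prevalence estimates with confidence intervals. The sketch below assumes a flat CSV of 0/1 risk factor indicators; the file name and variable names are invented.

```python
import csv
import math

def prevalence_with_ci(values: list[int]) -> tuple[float, float, float]:
    """Point estimate and approximate 95% confidence interval for a 0/1 indicator."""
    n = len(values)
    p = sum(values) / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

def standard_table(path: str, indicators: list[str]) -> None:
    """Print a fixed set of prevalence estimates from a CSV of 0/1 columns."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for name in indicators:
        values = [int(r[name]) for r in rows if r.get(name) in ("0", "1")]
        p, lo, hi = prevalence_with_ci(values)
        print(f"{name:22s} {p:6.1%} (95% CI {lo:.1%}-{hi:.1%}, n={len(values)})")

if __name__ == "__main__":
    # Hypothetical regional data file and variable names
    standard_table("region_monthly.csv", ["smoker", "physically_inactive"])
```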

One must stress the importance of having an analytical plan for the data that are collected in the local CATI surveys. There should be a clear understanding of which descriptive statistics should be compiled quickly, and a plan should then be in place for more detailed analysis.1 If the data are timely, there is a great sense of immediacy when important changes in population health are detected. Standard statistics and standard analysis protocols help to maintain the saliency, and therefore the policy value, of the information.

In modern society, the type of surveillance information that might be critical to understanding the infectious disease patterns of a population is more likely to come from networks of clinicians than from surveys of the general client population, as in the cases, for example, of West Nile virus, E. coli contamination of the Walkerton water supply and severe acute respiratory syndrome. Of course, this does not entirely diminish the importance of more general survey surveillance information. But does it mean that local and national CATI surveys should be geared more towards chronic diseases and their risk factors?

Many national health surveys are conducted every year or every two years. In many situations, however, monthly data have been shown to be desirable. For example, the CCHS, which started in 2000, is administered on a monthly basis.13 The BRFSS has been a continuous surveillance system in the US for some years.17 In Australia, NSW Health has converted its annual risk factor survey to a continuous collection system.16 Despite the ongoing debate about whether data collected more frequently than annually (e.g., monthly) are needed for chronic diseases, monthly and seasonal trends in chronic disease risk factors, e.g., physical inactivity or low intake of fruits and vegetables, are indeed observed, according to initial analyses of the 2000 CCHS and 1999 RRFSS pilot results. Monthly data can allow better and statistically more powerful detection and evaluation of the success of public health prevention and control programs and campaigns. Further data from the monthly CCHS and RRFSS should be able to clarify the true benefit of monthly data compared with traditional annual or biennial data.
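To make the monthly idea concrete, one simple way to look for seasonal variation is to compare monthly prevalence estimates, for example with a chi-square test across months. The counts below are invented, and this is a generic sketch rather than the analysis actually performed on the CCHS or RRFSS data.

```python
from scipy.stats import chi2_contingency

# Hypothetical monthly counts of physically inactive respondents out of roughly
# 100 interviews per month in one survey region (January to December).
inactive = [52, 55, 48, 44, 40, 36, 34, 35, 41, 46, 50, 53]
totals   = [101, 98, 100, 102, 99, 100, 97, 103, 100, 101, 99, 100]

# 2 x 12 contingency table: inactive vs. not inactive, by month
table = [inactive, [t - i for t, i in zip(totals, inactive)]]
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p_value:.4f}")
# A small p-value suggests that monthly prevalence is not constant, i.e.,
# a seasonal pattern worth examining further.
```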

Sample size is a statistical question. There are formal procedures to calculate what sample size is needed to allow the reliable detection of differences from administration to administration (or between sample subgroups within a single administration).32 However, it is very difficult to statistically estimate a sample size for CATI surveys, because there are many items of interest in the questionnaire. CATI survey sample sizes are better estimated empirically. For example, on the basis of the experience of the 1997-1998 New South Wales Health Survey,16 the 1998 US BRFSS17 and the 2000 Canadian CCHS13 it seems that for most purposes a sample size of 100 interviews per month per survey region is adequate.
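One of the formal procedures referred to here is the standard comparison of two proportions (in the spirit of the methods in Fleiss,32 although the simplified formula below omits the continuity correction; the 25% versus 20% example is assumed):

```python
import math

def n_per_group(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.842) -> int:
    """Sample size per administration (or subgroup) needed to detect a change
    from proportion p1 to p2 with two-sided alpha = 0.05 and 80% power,
    using the normal approximation without continuity correction."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

if __name__ == "__main__":
    # Assumed example: detecting a drop in a risk factor from 25% to 20%
    print(n_per_group(0.25, 0.20))  # roughly 1,100 respondents per survey round
```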

The complex nature of survey design and the various issues associated with surveys, such as methods to increase response rate, questionnaire design, and data collection biases, are difficult to address in a short paper like this. Moser's textbook on Survey Methods in Social Investigation is over 350 pages long.33 Interested readers are referred to a number of recent textbooks on survey methods.34-36 The important point is that one has to be careful when conducting survey operations, especially CATI.

The use of CATI has the advantage of built-in data quality feedback. However, there is a problem of data entry errors. This implies the need to design data quality checks at the entry level. The complexity of setting up CATI surveys, including extensive programming requirements and testing, needs to be carefully tackled.

Telephone surveys have a number of difficulties. Non-operating telephone banks and telephone numbers assigned to businesses make it more difficult to obtain samples. The issue of cellular phones is emerging as a potential problem in CATI survey design: there are cases in which every member of a family has his or her own cellular phone and there is no conventional telephone in the household. The existence of multiple telephone numbers for the same household may also influence the likelihood that the household is selected to participate in the survey. Cellular phone numbers are excluded from current survey sampling frames, which implies a possibility of bias in telephone samples.
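One standard adjustment for the multiple-line problem is to scale each household's design weight by the inverse of its number of residential lines; the records and base weight below are invented, and this is a sketch of the general idea rather than the procedure of any survey discussed here.

```python
# Households reachable on several residential lines have a higher chance of
# selection under RDD, so each design weight is divided by the number of lines.
# The records and base weight are hypothetical.

households = [
    {"id": 1, "base_weight": 250.0, "phone_lines": 1},
    {"id": 2, "base_weight": 250.0, "phone_lines": 2},
    {"id": 3, "base_weight": 250.0, "phone_lines": 3},
]

for h in households:
    h["adjusted_weight"] = h["base_weight"] / h["phone_lines"]
    print(h["id"], round(h["adjusted_weight"], 1))   # 250.0, 125.0, 83.3
```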

CATI health surveys also face increasing difficulties in competing with telemarketing surveys for interview time and cooperation from the general public. With the increasing number of daily CATI surveys, including health and telemarketing surveys, the response rate to any specific CATI survey will drop. Are we nearing the end of the useful life of the telephone, and therefore of CATI survey technology?

Besides CATI surveys, there are other vehicles for obtaining health information, such as routine records, registry data, and research databases. It may be time to explore, discover, or invent new data sources to supplement CATI data with regard to health information for public health and health surveillance.

CATI technology has been in use for only a few decades. It has a number of advantages to be utilized and a number of disadvantages to be minimized. Resolving the CATI methodological issues and building a CATI survey-based public health surveillance system will help us chart a path forward in the 21st century.

Acknowledgements

The author gratefully thanks the members of the 1999 RRFSS Steering Committee and, in particular, Dr. Philippa Holowaty and Dr. Eric Holowaty, for providing relevant data from the 1999 Durham CATI health survey pilot project and for technical assistance. Dr. David Mowat contributed significantly to the vision of a rapid, flexible, cost-effective, survey-based public health surveillance system.

This paper is based on an invited keynote speech at the "CATI (Computer Assisted Telephone Interviewing) Health Surveys in Australia - the Challenges Ahead" Forum, November 28, 2001, Sydney, Australia.

References

  1. Choi BCK. Perspectives on epidemiologic surveillance in the 21st century. Chron Dis Can 1998;19:145-51.
  2. Thacker SB. Historical development. In: Teutsch SM, Churchill RE (eds). Principles and practice of public health surveillance, 2nd ed. New York: Oxford University Press, 2000:1-16.
  3. Choi BCK, Pak AWP. Lessons for surveillance in the 21st Century: a historical perspective from the past 5 millennia. Soc Prev Med 2001;46:361-8.
  4. Wilson D, Taylor A, Chittleborough C. The second Computer Assisted Telephone Interviewing (CATI) Forum: the state of play of CATI survey methods in Australia. Aust N Z J Public Health 2001;25:272-4.
  5. Marcus AC, Crane LA. Telephone surveys in public health research. Med Care 1986;24:97-112.
  6. Uitenbroek DG, McQueen DV. Trends in Scottish cigarette smoking by gender, age and occupational status, 1984-1991. Scott Med J 1993;38:12-5.
  7. Anie KA, Jones PW, Hilton SR, Anderson HR. A Computer assisted telephone interview technique for assessment of asthma morbidity and drug use in adult asthma. J Clin Epidemiol 1996;49:653-6.
  8. Ketola E, Klockars M. Computer Assisted Telephone Interviewing (CATI) in primary care. Fam Pract 1999;16:179-83.
  9. Groves RM, Mathiowetz NA. Computer Assisted Telephone Interviewing: effects on interviewers and respondents. Public Opin Q 1984;48:356-69.
  10. Davis PB, Yee RL, Chetwynd J, McMillan N. The New Zealand partner relations survey: methodological results of a national telephone survey. AIDS 1993;7:1509-16.
  11. Slade GD, Brennan D, Spencer AJ. Methodological aspects of a Computer assisted telephone interview survey of oral health. Aust Dent J 1995;40:306-10.
  12. Kendall O, Lipskie T, MacEachern S. Canadian health surveys, 1950-1997. Chron Dis Can 1997;18:70-90.
  13. Statistics Canada. The Canadian Community Health Survey (CCHS) - Cycle 1.1. Internet information.
  14. Cochran WG. Sampling techniques. New York: John Wiley & Sons, Inc., 1963.
  15. Taylor A. CATI health surveys in South Australia. Presentation at the Computer Assisted Telephone Interviewing (CATI) Population Health Survey Forum, Melbourne, Australia, October 26, 1998. URL: http://hna.ffh.vic.gov.au/phb9811056/at.htm
  16. Baker D. NSW health survey program -Epidemiology and Surveillance Branch, NSW Health Department. Presentation at the Computer Assisted Telephone Interviewing (CATI) Population Health Survey Forum, Melbourne, Australia, October 26, 1998. URL: http://hna.ffh.vic.gov.au/phb9811056/dbaker.htm
  17. Centers for Disease Control and Prevention. Health risks in America: gaining insight from the Behavioral Risk Factor Surveillance System. Atlanta: US Department of Health and Human Services, 1997.
  18. Statistics Canada. National Population Health Survey: cycle 2. Ottawa: Statistics Canada, May 1998. Catalogue No. 11-001E.
  19. Quint T. A consultant's perspective on conducting a computer assisted telephone interviewing (CATI) population health survey. Presentation at the Computer Assisted Telephone Interviewing (CATI) Population Health Survey Forum, Melbourne, Australia, October 26, 1998. URL: http://hna.ffh.vic.gov.au/phb9811056/dbaker.htm
  20. Statistics Canada. National Population Health Survey Overview 1996/1997.
    Ottawa: Statistics Canada, 1998. Catalogue No. 82-567-XPB.
  21. Swain L, Catlin G. The National Population Health Survey: its longitudinal nature. Proceedings of the Joint IASS/IAOS Conference, September 1998.
  22. Chen J. Age at diagnosis of smoking-related disease. Health Rep 2003;14:9-19.
  23. Choi BCK, Pak AWP. Bias, overview. In: Armitage P, Colton T, eds. Encyclopaedia of biostatistics. Chichester, Sussex, UK: John Wiley & Sons Ltd, 1998;1:331-8.
  24. Choi BCK, Hanley AJG, Holowaty EJ, Dale D. Use of surnames to identify individuals of Chinese ancestry. Am J Epidemiol 1993;138:723-34.
  25. Anie KA, Jones PW, Hilton SR, Anderson HR. A Computer assisted telephone interview technique for assessment of asthma morbidity and drug use in adult asthma. J Clin Epidemiol 1996;49:653-6.
  26. Choi BCK, Shi F. Risk factors for diabetes mellitus by age and sex: results of the National Population Health Survey. Diabetologia 2001;44:1221-31.
  27. Centers for Disease Control. Guidelines for evaluating surveillance systems. MMWR 1988;37(No. S-5):1-18.
  28. Birkett NJ. Computer-aided personal interviewing: a new technique for data collection in epidemiologic surveys. Am J Epidemiol 1988;127:684-8.
  29. Choi BCK, Mowat D. Vision of a rapid, flexible, cost effective, survey-based public health surveillance system. J Epidemiol Community Health 2001;55:612. URL: http://jech.bmjjournals.com/cgi/reprint/55/9/612.pdf
  30. Rapid Risk Factor Surveillance System (RRFSS).
  31. National Center for Health Statistics. State and Local Area Integrated Telephone Survey (SLAITS).
  32. Fleiss JL. Statistical methods for rates and proportions. New York: John Wiley & Sons, 1981.
  33. Moser CA. Survey methods in social investigation. London: William Heinemann Ltd, 1966.
  34. Fowler FJ Jr. Survey research methods. Thousand Oaks, CA: SAGE Publications, Inc., 2002.
  35. Rea LM, Parker RA. Designing and conducting survey research. San Francisco: Jossey-Bass Publishers, 1997.
  36. Aday LA. Designing and conducting health surveys. San Francisco: Jossey-Bass Publishers, 1996.

Author References

Bernard CK Choi, Centre for Chronic Disease Prevention and Control, Population and Public Health Branch, Health Canada, Ottawa, Ontario, Canada; Department of Public Health Sciences, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; and Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Correspondence: Bernard CK Choi, Evidence and Information for Chronic Disease Policy Division, Centre for Chronic Disease Prevention and Control, Health Canada, PL 6701A, 120 Colonnade Road, Ottawa, Ontario, Canada K1A 1B4; Fax: (613) 941-2633; E-mail: Bernard_Choi@hc-sc.gc.ca

