13.0 Emerging Use of New Types of Survey Data
13.1 Stated-Response Surveys
Introduction
Definition
The survey techniques and procedures described in other sections of this manual are oriented towards surveys designed to collect data describing actual travel behavior. This type of data is often referred to as Revealed-Preference (RP) data since decision makers reveal their preferences through the choices they actually make in the marketplace. Another type of data that is being used in transportation planning with increasing frequency is based on Stated Responses (SR). SR data consist of statements by decision makers about how they would respond in hypothetical situations.
Lee-Gosselin has described a number of techniques that can be included under the general term Stated Response. He has developed a taxonomy of four classes of SR approaches based on whether constraints and/or behavioral outcomes are predefined or elicited in the survey instruments. These four classes of techniques are summarized in Table 13.1 and described briefly below:
- Stated-Preference (SP) Techniques included in this class focus on choices or tradeoffs among predetermined alternatives in the face of given sets of constraints. A formal experimental design is used to define alternatives in terms of specific combinations of attributes (e.g., travel time, travel cost) and attribute levels to ensure that the influence of each attribute on choice can be inferred. As shown in Table 13.1, both behavioral outcomes and constraints are mostly given. The basic type of information sought is a response to questions such as:
- “Given the levels of attributes in these alternatives, which one would you choose?”
- “Given the levels of attributes in these alternatives, please rank these alternatives in order of preference.”
- “Given the levels of attributes in these alternatives, how would you rate each alternative?”
Of the four classes of SR techniques, SP surveys are the most important source of data for developing choice models to represent traveler decisions when faced with new travel alternatives and transportation policy actions.
- Stated-Tolerance (ST) Techniques included in this class do not ask respondents to respond to alternative behavioral outcomes represented by specific attributes and attribute levels. Instead, respondents are asked to identify the conditions under which they would take a particular action or accept a particular behavioral outcome. The basic type of information sought is a response to questions such as: “Under what circumstances could you imagine yourself doing the following?” This class of techniques has not received much attention in transportation planning.
- Stated-Adaptation (SA) Techniques included in this class ask respondents to indicate in a relatively open-ended manner how they would respond when faced with a particular set of constraints. The basic type of information sought is a response to questions such as: “What would you do differently if you were faced with the following specific constraints?”
- Stated-Prospect (Spro) With these techniques, neither the list of possible behavioral outcomes nor a detailed set of constraints is predetermined. Instead, respondents are typically presented with some sort of general scenario (e.g., an energy shortage) as a way of initiating the process of eliciting behavioral outcomes and constraints. Measurement methods for these techniques involve the use of simulation gaming. The basic type of information sought is a response to questions such as: “Under what circumstances would you be likely to change your travel behavior, and how would you go about it?”
To date, most of the application experience in transportation has been with stated-preference techniques. As a result, the remainder of this section will focus on this class of techniques. A number of references are available for more information regarding the other three classes of SR techniques described above.
Applications
Historically, travel forecasting has been based on actual behavior (i.e., revealed preferences).
Stated-preference techniques have been used extensively in the private sector since the mid-1970s to support product design, pricing, targeting, and marketing decisions for new products and services. In addition, SP techniques have been applied as a means of simulating product demand in order to avoid costly market testing.
Initial applications of SP in the area of transportation date back to the early 1980s. However, SP techniques have only recently begun to be accepted among transportation planning professionals in the United States. This could be due to the historical reliance on revealed-preference data (i.e., data based on observed behavior) for travel forecasting and concerns about the reliability of stated preferences. In particular, there are concerns that what people say they will do under a specific set of circumstances may differ from what they would do if actually faced with these circumstances.
However, there can also be problems associated with the use of RP data. These include the following:
-
In some cases explanatory variables may be highly correlated (e.g., travel time and travel cost), making it difficult (and in some cases impossible) to estimate the effects of these variables;
-
Observed behavior may be caused primarily by variables that are not of direct interest, while the variables that are of interest may be “swamped” by these other factors; and
-
In situations involving new products, services or policies, there is no observed behavior.
The use of stated-preference techniques overcomes many of these problems.
In transportation, stated-preference techniques have been applied primarily to situations that cannot be represented using RP data, including:
- New services: high-speed rail, toll road facilities, Intelligent Transportation Systems products and services, etc.; and
- Changes in attributes of existing services (fare changes, congestion pricing, etc.).
Design of Stated-Preference Exercises
The design of stated-preference exercises involves the following:
- Developing an experimental design, including the selection of attributes and attribute levels;
- Designing the instrument;
- Defining the context for the exercise; and
- Designing the sampling plan.
Experimental Design
Stated-preference techniques typically make use of an experimental design to determine which combinations of attribute levels should be presented to respondents. The objective of the experimental design is to ensure that the attributes presented to respondents are varied independently from one another so that the effect of each attribute on preferences can be identified. Such a design is said to be “orthogonal.”
In developing an experimental design, the first step is to specify the attributes and attribute levels to be included in the analysis. As an example, the experimental design used to develop toll road diversion models is presented in Table 13.2. As shown, this experimental design included three attributes: travel time, toll cost, and the likelihood of delays.
In general, a minimum of three attributes is needed to provide a realistic context for the stated-preference exercise, and the attributes should represent those factors that are important in the choice process. Experience suggests that the number of attributes presented to a respondent should be limited to six or seven. Presenting respondents with more attributes makes the exercise increasingly difficult for them and may in some instances limit the usefulness of the data. (It should be noted that while it may be necessary to limit the number of attributes presented to any one respondent, the overall design can include additional attributes. The technique for doing this is discussed in a later section.)
Three levels were defined for each of the attributes in the example presented in Table 13.2. While it is possible to use two attribute levels, a minimum of three levels is required to detect non-linear relationships between attributes and preferences. Therefore, when non-linear relationships are thought to exist, at least three levels should be used.
A key issue in setting values for attribute levels is ensuring that these values appear realistic to the respondent. If possible, attribute values should be tailored to be consistent with the alternatives that respondents would actually face. For example, in the experimental design for the toll road pricing study, travel time differences and toll levels were tailored to the distance that the respondent would actually travel on the proposed facility, which in turn was based on the respondent’s home and employment locations.
A “full-factorial” experimental design for this example would include every possible combination of attribute levels. The number of combinations equals the number of levels raised to the power of the number of attributes; in this case, three levels for each of three attributes gives 3 × 3 × 3 = 27 possible combinations. These are presented in Table 13.2.
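To make the arithmetic concrete, the sketch below enumerates a full-factorial design in Python. The attribute names and levels are illustrative assumptions for a toll road exercise, not the actual values from Table 13.2.

```python
# Minimal sketch of a full-factorial design: every combination of one
# level per attribute. Attribute names and levels are assumed for
# illustration; they are not the values from Table 13.2.
from itertools import product

attributes = {
    "time_saved_min": [5, 10, 15],                 # travel time saved on the toll road
    "toll_dollars": [0.75, 1.50, 2.25],            # one-way toll
    "delay_likelihood": ["low", "medium", "high"], # chance of delays
}

full_factorial = [dict(zip(attributes, combo))
                  for combo in product(*attributes.values())]
print(len(full_factorial))  # 3 levels ^ 3 attributes = 27 alternatives
```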
Each of these 27 combinations of attribute levels represents a toll road alternative that respondents would be asked to evaluate. Experience has shown, however, that respondents can quickly become fatigued when faced with a large number of alternatives to evaluate. This in turn can lead to significant response errors. Some researchers have suggested that a range of between 9 and 16 options is acceptable, depending on the complexity of the exercise. Therefore, while the stated-preference design for the toll road example is not very complicated, it was nonetheless desirable to reduce the number of alternatives to be presented.
There are several ways to reduce the number of alternatives. These include the following:
- Use “fractional-factorial” designs;
- Remove options that will “dominate” or be “dominated” by all other options in the choice set;
- Separate the alternatives into “blocks,” so that the full choice set is completed by groups of respondents, each responding to a different sub-set of options; and
- Carry out a series of experiments with each individual, offering different attributes, but with at least one attribute common to all.
Fractional-Factorial Design As stated earlier, the experimental design presented in Table 13.2 represents a “full-factorial” design. This type of design includes all possible combinations of attribute levels, making it possible to independently estimate the effects of each attribute on response. The most common way of reducing the number of combinations or alternatives that need to be presented is through the use of a “fractional-factorial” design. These designs use only a portion (i.e., a fraction) of all possible combinations. This approach assumes that some or all of the interactions between attributes, in the way they influence response, are negligible. A fractional-factorial design for the toll road example is presented in Table 13.3. As shown, the number of alternatives is reduced from 27 to 9.
While this approach can significantly reduce the number of alternatives needed for a stated-preference exercise, it does so by ignoring some or all interaction effects. If interactions among attributes are, in fact, significant, their effects will be loaded onto the individual main effects, biasing the estimates of the relative importance of individual attributes on response. The degree of bias will depend on the significance of the interaction effects. When this occurs, the main effects are said to be “confounded” with interaction effects.
A full-factorial design can also be reduced in stages that allow the investigation of some, but not all, interaction effects. A number of catalogues are available to assist in constructing fractional-factorial designs such as these. In addition, microcomputer-based design software is also available.
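As an illustration of the kind of design such catalogues contain, the sketch below constructs a nine-run orthogonal main-effects design for three attributes at three levels each, using level codes 0, 1, and 2. The modular-arithmetic construction is a standard one; it is not necessarily the design shown in Table 13.3.

```python
# Sketch: a 9-run orthogonal main-effects design for three attributes
# at three levels (coded 0, 1, 2). Setting the third column to
# (a + b) mod 3 keeps every pair of columns balanced; this is a
# standard construction, not necessarily the design in Table 13.3.
from collections import Counter

rows = [(a, b, (a + b) % 3) for a in range(3) for b in range(3)]

# Orthogonality check: within each pair of columns, every level
# combination occurs exactly once, so main effects can be estimated
# independently of one another.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    counts = Counter((row[i], row[j]) for row in rows)
    assert set(counts.values()) == {1}
print(rows)
```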
Removing Dominant/Dominated Options This approach applies primarily to stated-preference exercises presented as choice experiments. With this approach, alternatives that dominate, or are dominated by, every other alternative in the choice set on each attribute can be excluded. For example, referring back to the experimental design presented in Table 13.2, 12 of the 27 alternatives could be eliminated because the toll road alternative is less desirable than the non-tolled route. In alternative 25, for instance, in addition to the toll, both the travel time and the likelihood of delays on the toll road are greater than on the non-tolled route. Further, even for alternatives in which travel time and likelihood of delays are the same on both routes, the presence of the toll would make the toll road option less attractive. The only potential drawback of this approach is that respondents choosing alternatives at random or illogically will not be easily identified based on an analysis of their responses.
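A minimal sketch of this screening step is shown below. The attribute coding is an assumption: values are expressed as differences relative to the non-tolled route, with positive time savings and delay advantages favoring the toll road.

```python
# Sketch: screen out toll road alternatives dominated by the free
# route. Coding is assumed: time_saved_min and delay_advantage are
# differences relative to the non-tolled route (positive values favor
# the toll road); the toll itself always favors the free route.
def dominated_by_free_route(time_saved_min, toll_dollars, delay_advantage):
    # The toll road offers no time or reliability gain yet costs more,
    # so no consistent respondent should choose it.
    return time_saved_min <= 0 and delay_advantage <= 0 and toll_dollars > 0

alternatives = [
    (10, 1.50, 1),   # faster and more reliable: keep
    (0, 1.50, 0),    # identical except for the toll: dominated
    (-5, 2.25, -1),  # slower, costlier, less reliable: dominated
]
kept = [alt for alt in alternatives if not dominated_by_free_route(*alt)]
print(kept)  # [(10, 1.5, 1)]
```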
Block Design This third approach involves dividing the total number of alternatives included in an experimental design into sub-sets (or blocks). The sample of respondents is divided into groups, with each group receiving a different block. The success of this approach depends on the similarity of preferences between the different groups of respondents.
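The sketch below illustrates one way to divide a design into blocks and assign respondents to them; the block size and the round-robin assignment rule are assumptions chosen to keep the respondent groups the same size.

```python
# Sketch: divide a design into blocks and assign respondents to blocks
# round-robin. Block size and the assignment rule are assumptions.
def make_blocks(design, block_size):
    return [design[i:i + block_size] for i in range(0, len(design), block_size)]

def block_for(respondent_index, blocks):
    # Round-robin assignment keeps the groups the same size, so each
    # sub-set of options is completed by comparable numbers of people.
    return blocks[respondent_index % len(blocks)]

design = list(range(1, 28))      # stand-ins for the 27 alternatives
blocks = make_blocks(design, 9)  # three blocks of nine
print(block_for(0, blocks))      # the block seen by the first respondent
```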
Common Attributes With this approach the attributes to be evaluated are divided among two or more experimental designs. At least one common attribute must appear in each design to allow comparison of relative preferences over all the attributes included.
Instrument Design
Unless the stated-preference exercise is very simple, some sort of visual presentation of the alternatives and attribute levels will be necessary for respondents to understand what is being presented to them. This is particularly true for choice and ranking exercises, in which the respondent must compare two or more alternatives. This limits the usefulness of telephone interviews unless the respondent has received survey materials in advance.
The format and layout of the instrument used for the exercise will depend to some extent on the type of response sought (i.e., choice, ranking, or rating). For choice exercises, respondents will be comparing two or more alternatives at the same time, so the alternatives comprising the choice set should appear together on a card, sheet of paper, or computer screen. For ranking exercises, having each alternative on a separate card is very useful, since this approach allows the respondent to spread the cards out and physically arrange them in order of preference. With rating data, it is usually only necessary to consider one alternative at a time, independently of other alternatives, so a wide range of layouts is possible for these responses.
It is always useful and in some cases essential (e.g., when respondents are expected to complete the exercises on their own) to provide materials describing the alternatives, attributes, and attribute levels included in the exercise. This could include drawings or pictures of new travel modes (e.g., high-speed trains) or sample schedules and route maps for new transit services.
Context Definition
A key objective in the design of stated-preference exercises is to establish as much realism as possible. The following points noted by Jones are particularly relevant to building realism into the context of the exercise, the options that are presented and the responses that are permitted:
- Focus on very specific rather than general behavior, i.e., ask respondents how they would respond to a particular product or service under a specific set of conditions rather than in general;
- Use a realistic choice context that respondents have actually experienced or one that they feel they could be placed into;
- Use existing or realistic levels of attributes within the experimental design so that the alternatives are built around these levels;
- Limit the range over which attribute levels are varied to those values that respondents perceive to be possible;
- Wherever possible, incorporate checks on the answers given;
- Allow for the effect of day-to-day variability on choices;
- Make sure that all variables relevant to the choice process are included in the analysis;
- Where possible, simplify the presentation of choice exercises (e.g., by highlighting the attribute levels that differ between alternatives);
- Make sure that constraints on choice are taken into account (e.g., fixed arrival times at work); and
- Allow respondents to opt for a response outside the set of experimental alternatives (e.g., if all alternatives in a mode choice exercise are too expensive, the respondent may choose not to make the trip, so “neither” should be included as a possible response).
Sample Design
The same sampling issues associated with revealed-preference data that were discussed in Chapter 5.0 also apply to stated-preference data. The difference with stated-preference surveys is that each respondent typically provides responses to more than one choice exercise. For example, if 50 respondents each complete 5 choice exercises, this would result in 250 data records. It is important to note that even with 250 responses, the sample size from the standpoint of assessing statistical precision is still 50. The fact that there are five data records for each respondent (i.e., five “repeated measures”) provides more information about each respondent, but not necessarily more about the population as a whole. Only an adequately sized random sample can do this.
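The standard design-effect formula for repeated measures quantifies this point. It is a general statistical result rather than something taken from this manual, and the intra-respondent correlation rho is an assumed input.

```python
# Sketch: effective sample size under repeated measures, using the
# standard design-effect formula n_eff = n*m / (1 + (m - 1)*rho).
# rho, the correlation among responses from the same person, is an
# assumed input, not a value from the text.
def effective_sample_size(n_respondents, m_per_respondent, rho):
    records = n_respondents * m_per_respondent
    design_effect = 1 + (m_per_respondent - 1) * rho
    return records / design_effect

# 50 respondents x 5 exercises = 250 records, but if responses within
# a respondent are perfectly correlated, the information equals that
# of 50 independent observations.
print(effective_sample_size(50, 5, rho=1.0))  # 50.0
print(effective_sample_size(50, 5, rho=0.5))  # ~83.3
```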
Administration of Stated-Preference Exercises
Key issues in designing a method for administering stated-preference exercises include the degree of interviewer interaction, the ability to tailor the exercise to the respondent’s situation, and cost. There are three primary means of administering stated-preference exercises: self-administered surveys, telephone/mail/telephone surveys, and in-person interviews.
Self-administered surveys offer little opportunity for interviewer interaction. While a toll-free ‘help’ telephone number can be provided, it is not likely that many respondents would go to the trouble of calling. Self-administered survey instruments must be designed very carefully and subjected to rigorous pre-testing. Written material is required to communicate to the respondent the context in which the exercises are to be completed and to define the attributes and attribute levels used in the exercise. If the distribution of the survey instrument can be controlled (by mailing to certain ZIP codes, handing out at toll facilities, etc.), it may be possible to tailor the attribute levels to the respondent’s situation. The primary advantage of this method is that it is lower in cost relative to other methods for administering stated-preference exercises.
With telephone/mail/telephone surveys, an initial recruiting call is made to obtain the cooperation of the respondent. This call also provides an opportunity to obtain information that can be used to tailor the exercise to the respondent’s situation. The stated-preference exercises are then mailed to respondents and administered as part of a follow-up telephone interview. This gives the interviewer an opportunity to explain the exercise and answer any questions the respondent may have. This method is more expensive than self-administered surveys but less expensive than in-person interviews, especially if broad geographic representation is desired.
In-person interviews provide the greatest degree of interaction between the interviewer and the respondent. It is also one of the more expensive methods for administering stated-preference exercises. In recent years microcomputers have been used to administer choice exercises as part of an in-person interview. Computer-assisted personal interviewing (CAPI) provides an excellent opportunity for tailoring choice experiments based on responses given to preliminary questions. There are several software packages available for designing and administering stated-preference exercises.
Validity of Stated-Preference Results
A concern often voiced about the use of stated-preference data is that people do not necessarily do what they say they will do. Therefore, a key issue associated with stated-preference data is validity. Pearmain, et al. have reviewed a number of studies in which the validity of predictions of choice behavior based on stated-preference techniques was investigated. Based on this review, they concluded that the results of most of these studies seemed encouraging, suggesting that stated-preference techniques can predict choice behavior for the sample being studied with a reasonable degree of accuracy. However, they noted that most of the reported studies of validity had the following shortcomings:
- The research was not done in a systematic way;
- The research was carried out as a by-product of a practically-oriented study;
- Some of the studies were based on incorrectly applied prediction methods; and
- Typically the reported research only concerned the reproduction of existing behavior of the sample being studied; few studies deal with the generalization of predictions to entire populations, and very few look at the ability to predict behavioral changes in response to changed circumstances.
They concluded that additional systematic validity research is needed before definitive findings and general guidelines can be given.
Combining Stated- and Revealed-Preference Data
The results of choice-oriented stated-preference techniques are analogous to revealed-preference choice data collected as part of travel surveys. This gives rise to the possibility of combining the two types of data for model development and forecasting. One approach would be simply to pool them. It has been shown, however, that this naive pooling of stated-preference and revealed-preference choice data can lead to seriously biased models. The key problem, noted by Bates, Bradley and Kroes, and others, is that the two types of data are subject to different types of errors, making it unlikely that they share a common distribution of unobservables.
A number of approaches have been developed to combine stated-preference data and revealed-preference data for model estimation in a way that accounts for differences in error components. A sequential estimation procedure, described in Ben-Akiva and Morikawa, can be carried out using readily available software. A more statistically efficient simultaneous approach has been developed which requires specialized software. This simultaneous approach has been adapted to use a form of nested logit estimation possible with existing software packages.
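The scale-difference idea behind these approaches can be illustrated with a small simultaneous estimation on synthetic data: both data sets share the taste coefficients, while a scale parameter mu on the SP utilities absorbs the difference in error variance. This is an illustrative sketch only, not the cited sequential or nested logit procedures.

```python
# Illustrative sketch of joint SP/RP estimation with a relative scale
# parameter, on synthetic binary-choice data. Both data sets share the
# taste coefficients beta; SP utilities are multiplied by mu to absorb
# the different error variance (RP scale normalized to 1). This is a
# simplified illustration, not the cited estimation procedures.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
beta_true, mu_true = np.array([-0.05, -0.40]), 0.6  # time, cost tastes

def simulate(n, scale):
    x = rng.uniform(-20, 20, (n, 2))  # attribute differences
    prob = 1.0 / (1.0 + np.exp(-scale * (x @ beta_true)))
    return x, rng.random(n) < prob

x_rp, y_rp = simulate(500, 1.0)
x_sp, y_sp = simulate(500, mu_true)

def neg_loglik(params):
    beta, mu = params[:2], params[2]
    total = 0.0
    for x, y, scale in ((x_rp, y_rp, 1.0), (x_sp, y_sp, mu)):
        v = scale * (x @ beta)
        total += np.sum(np.where(y, -np.log1p(np.exp(-v)), -np.log1p(np.exp(v))))
    return -total

fit = minimize(neg_loglik, x0=[-0.01, -0.10, 1.0], method="BFGS")
print(fit.x)  # estimates of (beta_time, beta_cost, mu)
```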
13.2 Longitudinal Surveys
Nearly all household travel surveys in the U.S. have been one-time “snapshots” of travel behavior in a region. These cross-sectional surveys, even in areas where several surveys have been conducted, have been performed independently, with separate random samples. Therefore, the travel demand models developed for U.S. urban areas capture cross-sectional variation (i.e., variation among individual respondents) but do not capture longitudinal variation, or changes in individual behavior over time. In other words, the changes in travel behavior that might occur when a household obtains a new automobile are modeled by comparing the behavior of households with the original number of vehicles to the behavior of households with one more vehicle.
Longitudinal analysis is necessary for consideration of several factors affecting travel behavior, including:
- Time lags between an occurrence which changes behavior and the change itself;
- Gaining information about travel conditions;
- Habitual behavior; and
- Experimentation and learning.
To obtain the information necessary for such analyses, a longitudinal, or panel, survey can be conducted. A panel survey consists of several “waves,” or repeated surveys performed over time on the same sample. For a household survey, this means that the same set of households is asked to complete a travel survey periodically, say every year or two.
While panel surveys are common in market research outside the transportation field, they are rare within it. In theory, panel surveys could be developed for many of the survey types described in this manual. However, the difficulty of maintaining contact with respondents over several years has, for practical reasons, limited the use of panels to household travel surveys (discussed in Chapter 6.0). This section deals with longitudinal household surveys.
Because each wave of a panel survey is a cross-sectional survey which is very similar to a one-time household travel survey, panel survey designers should be familiar with the design of and issues concerning household surveys as described in Chapter 6.0. The remainder of this section deals with those aspects of household surveys that are unique to panel surveys. This section should be used only as a supplement to Chapter 6.0.
The only household panel survey that has been conducted for a U.S. urban area is the Puget Sound Transportation Panel (PSTP) in the Seattle area. Because of the uniqueness of this survey effort, it is cited extensively in this section and is described briefly at the end.
Survey Design
The design of the individual cross-sectional travel survey for each wave of a panel survey follows the guidelines provided in Section 6.3. The main additional issues for panel surveys are the number of waves and the time between waves. Ideally, there would be no set number of waves, and the panel would continue indefinitely so that continuing information could be provided. As a practical matter, it is impossible to guarantee that a panel can continue indefinitely because public agency budgets are not known years in advance.
Because there are few examples of panel surveys, there is no accepted “best” period between survey waves. There are, however, some obvious tradeoffs between longer and shorter intervals. Having longer periods of several years between waves can reduce costs and can provide sufficient “lag” time for various transportation system and other changes to have an effect on travel behavior. Also, the burden on respondents is lower if waves are less frequent. On the other hand, attrition is likely to be higher if the period between waves is long. Not only will respondents lose interest, but more of them will move out of the region or become otherwise ineligible to continue on the panel. The Puget Sound Transportation Panel used waves that were one or two years apart.
Sampling
The development of the initial sample (i.e., Wave 1) for a household panel survey is virtually identical to the development of a sample for a one-time cross-sectional household travel survey, as described in Section 6.5. The only real difference is that respondents are recruited for the continuing survey (all waves) rather than for a single survey period. There are, however, additional issues concerning the continuation of the panel in subsequent waves, including the following:
- New households must be recruited in subsequent waves to replace households who drop out.
- Households may relocate. If they move out of the study region, they must be replaced; if they move within the region, they should be tracked.
- Households are not static entities; they may merge (e.g., marriage), split up (e.g., adult children leaving), or add or lose members (births and deaths).
- Households may move into the study region between waves, requiring a mechanism for allowing them to enter the panel to maintain a good representation of the population.
The simplest method for adding new households to the panel is to draw a new random sample of the required number of households, using the same method used to draw the original sample. There are two major problems with this method:
- If dropouts are correlated with certain household or travel behavior characteristics, the new sample will be biased; and
- Changing characteristics of the overall population will not be taken into account.
One way of dealing with the first problem is to classify the households according to important characteristics affecting or related to travel behavior, such as:
- Area type (CBD, suburb, etc.);
- Household size and income;
- Number of autos; and
- Typical travel modes (drive alone, carpool, transit, etc.).
Assuming that the original sample is believed to be representative of the overall population, the new sample should be drawn so that the characteristics of replacement households match those of the dropouts. This can be extended to deal with the second problem as well, assuming information on how the population has changed since the last wave can be obtained.
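A minimal sketch of this matched replacement draw is shown below; the classification cells and the structure of the recruitment pool are illustrative assumptions.

```python
# Sketch: draw replacement households from the same classification
# cell as each dropout. Cell keys and the recruitment pool structure
# are illustrative assumptions.
import random
from collections import Counter

def draw_replacements(dropouts, pool, cell_of, rng=random):
    needed = Counter(cell_of(h) for h in dropouts)
    candidates = list(pool)
    rng.shuffle(candidates)
    replacements = []
    for cand in candidates:
        if needed[cell_of(cand)] > 0:
            replacements.append(cand)
            needed[cell_of(cand)] -= 1
    return replacements, +needed  # +needed keeps only unfilled cells

dropouts = [{"area": "CBD", "autos": 0}]
pool = [{"area": "suburb", "autos": 2}, {"area": "CBD", "autos": 0}]
cell = lambda h: (h["area"], h["autos"])
print(draw_replacements(dropouts, pool, cell))
```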
Another way of dealing with these issues is to assign weights to the households. As discussed in Section 6.12, weights are commonly used in survey data expansion. A new set of weights, reflecting the current distribution of household characteristics, is computed for each wave.
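The weight computation itself is simple cell arithmetic, sketched below under assumed cell labels and population shares: each cell’s weight is its population share divided by its share of that wave’s responding sample.

```python
# Sketch: per-wave post-stratification weights. A cell's weight is its
# population share divided by its share of that wave's responding
# sample. Cell labels and shares are illustrative assumptions.
def wave_weights(sample_cells, population_shares):
    n = len(sample_cells)
    sample_share = {c: sample_cells.count(c) / n for c in set(sample_cells)}
    return {c: population_shares[c] / sample_share[c] for c in sample_share}

# Deliberately oversampled groups (e.g., transit households) receive
# weights below 1 when expanding to the population.
print(wave_weights(
    ["transit", "transit", "drive", "drive", "drive", "drive"],
    {"transit": 0.10, "drive": 0.90},
))  # transit weight 0.3, drive weight 1.35
```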
An alternative method for adding new sample households is to base the sample on dwelling units rather than households. In this method, the geographic address continues as the sample element. If households change in composition, they are retained (although any members who move out do not remain in the panel). If a household moves, then the household that replaces it in the dwelling unit is recruited. The main advantages are simplicity, slightly lower costs, and the ability to maintain the sample representation according to dwelling unit-based characteristics (area type, accessibility, income, household size, etc.). It has several disadvantages, however, including the following:
- No information is provided on the cause or effect of the residential move, which could be associated with changes in household composition or income.
- A new household in a dwelling unit is more likely to decline participation in the survey than a continuing household.
Data Items
All data items associated with a cross-sectional household survey should be collected for a household panel survey. The panel survey, however, provides an opportunity to collect additional information on household characteristics that change over time. For example, a household that purchases a new car can report information about the type of vehicle, the type of vehicle it replaced (if any), purchase cost, and perhaps even the reasons for the purchase. Similar questions could be asked about vehicles which are disposed of by the household. Collecting such information would allow the development of a decision-oriented auto ownership model that considers how changing household characteristics affect the number and type of cars a household chooses to own.
Attrition
Probably the most critical issue that is unique to panel surveys is sample attrition. Attrition is defined as households dropping out of the panel after responding to at least one wave of the survey. A household is considered to have dropped out if:
- The household cannot be located for the next wave;
- The household declines to participate in the next wave; or
- The household provides incomplete or unusable responses to the next wave.
Attrition is a serious problem because even if a lost household is replaced, there is no time series information on either the original or replacement household between the previous and subsequent waves, and such continuing information is the primary motivation for longitudinal surveys. In addition, there is no guarantee that attrition will be random; households with certain characteristics may be more likely to drop out than others.
It is essential that attrition be minimized in panel surveys. This should be done by:
- Maintaining contact with households between waves;
- Tracing households that may move between waves;
- Following up with non-respondents who have agreed to participate; and
- Providing incentives for survey participation, where appropriate.
There are several objectives in maintaining contact with the survey households. It is important to make respondents, who are committing a substantial amount of time and effort, feel that they are appreciated, and that their information is important to transportation planning in their area. This can be done by sending holiday cards, letters of appreciation, and reports on the information collected during the survey. Continuing contact with survey households also provides “early warning” of households who are moving and makes it easier to trace them so that they can continue to participate. In addition, regular communication, which can include providing information about the survey, can maintain interest, especially if there are long periods between waves.
It is obvious that there are benefits to taking time to trace participating households who move between waves. The main advantage, of course, is to allow these households to continue to participate in the panel. Keeping these households is even more critical when one considers that residential location decisions are critical to transportation choices and that there are few if any other sources for data on households who move. Besides maintaining regular contact to provide advance notice of impending or completed moves, another technique to help trace households is to request the name and phone number of someone outside the household who would always know where the panel member was.
Hensher points out that the term “tracing” includes not only locating a household but also obtaining item responses. Since it is critical to keep households in the sample, following up with respondents who provide incomplete or unusable responses is more important in panel surveys than in cross-sectional surveys, especially if the respondent has participated in previous waves.
The issue of incentives for household surveys is discussed in detail in Section 6.3. It is raised again here because of the experience of the Puget Sound Transportation Panel. In this survey, three different incentive treatments were used in Wave 1: a $1.00 per-person pre-survey payment, a $10.00 per-household post-survey payment, and no financial incentive. For subsequent waves, a $2.00 bill was provided for each person, paid prior to survey completion. In Wave 1, fewer than half of the households that received no incentive completed the survey, while over 60 percent of those receiving incentives did. However, the attrition rate for Wave 2 was actually lower for the households that received no incentive in Wave 1 (16 percent) than for those that did (20 percent). The point is that response rate, as opposed to the number of respondents, is especially critical for a panel survey because of the attrition problem, and any measures that could improve the response rate should be carefully considered.
Despite the efforts to minimize attrition, it is inevitable that there will be some attrition in any panel survey. Households will move out of the area or break up, and some will simply stop participating. The critical issue in dealing with attrition is to ensure that replacement households are chosen in a manner that does not bias the sample, as described above in the “Sampling” section.
Puget Sound Transportation Panel
The Puget Sound Transportation Panel (PSTP), begun in 1989, is the only longitudinal household travel survey in the United States. Each wave consists of a cross-sectional survey in which respondents are asked to fill out a two-day travel diary containing information on all trips. The surveys were conducted using a telephone contact/mailout/mailback method. The estimated cost of the first four waves is $627,000.
The sample for the survey is stratified by county of residence and by usual travel mode of household members (drive alone, carpool, transit). The initial sample was recruited using random digit dialing, with additional choice-based sampling performed through onboard solicitation of bus riders. In total, transit households comprise about 20 percent of the sample and carpool households about 10 percent. The incentive program, which currently pays each member of a participating household $2.00, is described in the previous section.
Waves in the PSTP have been conducted in the autumns of 1989, 1990, 1992, 1993 and 1994. The original sample size was 1,713 households. For each wave, households who dropped out were replaced; there have been 1,600 to 2,000 households in each wave. Of the households who participated in the original survey (Wave 1), 54 percent completed the survey in Wave 4.
The PSTP takes specific steps to minimize attrition. Regular contact (about twice per year, excluding mailing of the diaries) is maintained with participating households, including notices of diary mailings, summaries of survey results, and holiday cards. In addition, surveys of perceptions, needs, and attitudes were conducted three times through 1993.
Summary
The one-time snapshot survey of travel is well suited to measuring existing travel in a region. However, using the data to anticipate future travel demand, or demand under changed infrastructure or policies, requires dynamic models of travel behavior calibrated to actual measurements of change in the travel behavior of households. Cross-sectional analysis of change, or even change measured by independent household surveys at two points in time, cannot pinpoint change in travel behavior at the household level.
The panel survey, by measuring the change of travel behavior at the individual household level, provides a potentially superior basis on which to calibrate travel models. The survey can be used in a before-and-after mode to determine travel responses to specific changes in the transportation system, policies, traffic control measures, and socioeconomic trends. Panels can also be used to monitor travel consumption and travel behavior longitudinally.
Controlling attrition, bias, and nonresponse, and maintaining representativeness, are critical to the success of the panel approach. In undertaking a panel survey, survey designers should thoroughly review the literature on the topic and enlist the advice and support of practitioners with panel expertise and experience.