Friday, February 27, 2009

Week 8: Eth-Know-Graphies (EKG)

Sorry for the title, I was just having some fun mispronouncing ethnographies... and an EKG reference... Enjoy!

Blog Question: What distinguishes ethnographies from case studies, how does “triangulation” impact data collection and analysis, and what must ethnographers do to ensure their work is both reliable and valid?

To answer the blog question quickly at a basic level, ethnographies differ from case studies mostly in the length of the study, with ethnographies observing over a greater period of time and observing the multiple facets within an environment. In addition, a hypothesis is also generated within the ethnography. Ethnographies require multiple sources of data, and the combination of that data is triangulation. This combination of data is supposed to provide a “rich account” that could not otherwise be obtained via experimentation. Ethnographers face a great number of potential threats to both their reliability and validity. Lauer and Asher use Sadler’s list of ten threats, ranging from the misinterpretation of cause and effect (No. 10), to uneven reliability (No. 6), to pitfalls in confidence in judgment (No. 2) (pp. 46-47). With the basic blog questions out of the way, I will move on to the specific readings.

Doheny-Farina is thorough. He utilizes several theoretical assumptions as well as four sources of data: field notes, meeting tape recordings, and two types of interviews, collected three to five days a week over the course of eight months. Not only did Doheny-Farina use four sources of information captured multiple times a week, but he also tied it all back to the theoretical framework. The result of the ethnography was that Doheny-Farina showed the process was reciprocal, and he described that process. One of the better aspects of this project is that Doheny-Farina did not overstep the limitations of the ethnography, admitted the provisional nature of his model, and noted some limitations to the study. In my personal opinion, this was a good ethnography.

Beaufort, like Doheny-Farina, establishes a theoretical framework, uses a multitude of data sources (interviews, transcripts, written documents), and engages with the research participants over a ‘long’ period, weekly over the course of a year. However, the sample size of n=4 is relatively small and precludes generalization. Beaufort straddles a dangerous line by linking independent and dependent variables and by making generalizations the study may not support.

Sheehy likewise uses an eight-month study of middle school students at SMS. What is interesting about Sheehy’s study is the role the researcher played. Sheehy noted an inconsistency: her role as participant/observer changed within the study. This shift may have altered the data and may be a threat to the reliability of the ethnography, but the researcher does not explain any resulting shift in the data. Unfortunately, the effects of the change in the researcher’s role are left unknown. Other than that, Sheehy is also on the edge of overgeneralizing in her conclusion.

To be frank, Ellis is not an ethnography; it is more of a narrative. It does not pull from any theoretical framework that I am aware of as a planner. It does not pull from multiple sources of data, although it is unique. Finally, although it comprises several months of reflection, the data was not collected (uniformly) over a long enough time. If anything, this is, at best, a good primary source of information for future historians, and I question the academic rigor of the “autoethnography.”

Speaking of autoethnographies, Anderson provides a better approach to this style of qualitative research. Overall, Anderson provides a better definition of what an autoethnography entails, mostly because the researcher is a full participating member of the group/environment being studied. Anderson also provides a historical evolution and theoretical framework for autoethnography, but if we ask whether this work itself is an ethnography, then it is not (duh). Anderson provides the approach that Ellis lacks.

Saturday, February 21, 2009

Week 7: Survey and Sampling Platters

Blog Question: What are appropriate purposes for surveys, how are subjects selected, how is data collected and analyzed, and what kinds of generalizations are possible?

Surveys are descriptive according to Lauer and Asher (L & A), but surveys are often very analytical and potentially quantitative. Surveys are well suited for complex and complicated research efforts that must be executed with limited resources. Surveys reduce the target population to a manageable size with reasonable results. While surveys can be expensive, they may be cheaper than full experimental research. However, surveys are not just descriptive as L & A suggest. It is true that surveys are a wonderful tool for inferring the descriptive variables of a population, but survey analysis could be considered (if we think back to Morgan) correlational. Not only can a survey describe or inform variables, it can also show relationships between variables depending on the questions and design of the survey. In particular, people’s attitudes are easier (and I use “easier” loosely) to correlate depending on their answers to various questions. For example, individual income and commute times normally have a relationship within urban research.

Subject selection is the crux of any survey. Essentially, subject selection is a large influence on validity, and if the sample selection is off, then the validity is not there. As a result, there are a host of sampling approaches: random, systematic random, quota, cluster, and stratified sampling. The type of sampling approach depends on the researcher’s goals and desired data. Random sampling works well for an amorphous population. Stratified sampling works best when the researcher is dividing or distinguishing between populations, and quota sampling works well for representing certain subgroups within a larger sample. The analysis of survey results (for me) is normally a statistical analysis. The analysis must determine whether the sample is representative and of statistically sufficient size, and then identify the significant variables and relationships. Within planning, identifying or describing variables is important, but correlational and eventually causal analysis of variables is demanded (although causal assertions can be technically inaccurate, city leaders normally demand “sure things”). Finally, surveys benefit from other methods and research. Adding a minor case study or quantitative analysis can increase the potency of the survey data.
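To make the random-versus-stratified distinction concrete, here is a minimal sketch using an entirely hypothetical commuter population (the group names and counts are mine, not from any survey discussed here). The point it illustrates: a simple random sample can over- or under-represent a small subgroup by chance, while a stratified sample fixes each subgroup's share in proportion to the population.

```python
import random
from collections import Counter

# Hypothetical population of 1,000 commuters (counts are illustrative only)
population = ["drive"] * 800 + ["transit"] * 150 + ["bike"] * 50

random.seed(1)  # fixed seed so the sketch is repeatable

# Simple random sample: every member has an equal chance of selection,
# so subgroup proportions fluctuate from sample to sample
srs = random.sample(population, 100)

# Stratified sample: draw from each subgroup in proportion to its size,
# guaranteeing even the smallest group (cyclists) is represented
stratified = []
for mode, size in Counter(population).items():
    group = [p for p in population if p == mode]
    k = round(100 * size / len(population))  # proportional allocation
    stratified.extend(random.sample(group, k))

print(Counter(srs))         # subgroup counts vary with the seed
print(Counter(stratified))  # always 80 drive / 15 transit / 5 bike
```

A quota sample would look similar in code, except the per-group counts `k` would be fixed targets chosen by the researcher rather than computed proportionally.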
Unlike case studies, a properly designed survey intends to draw inferences about a population based on the sample or samples. If the survey is inappropriately designed or does not sample enough to represent the population, the researcher cannot generalize. The researcher must remain careful not to assert or claim things that the survey data does not support. The wording and execution of the survey also affect the researcher’s claims. Asking “will you support transit” is not the same as a willingness to actually ride transit. In addition, there are biases and cognitive dissonance that may not surface in the survey. A famous/infamous national survey about gasoline prices asked, “if gas cost $3, would you cease driving and rely solely on transit?” Well, 70 percent of the respondents said they would switch to transit; however, with the summer peak of gas prices, this claim by the survey respondents really did not pan out. In another example, with Clemson ridesharing, only 30 percent said they were not interested in a carpool program, but this number jumped to over 40 percent when respondents were asked to have their names entered into a database of potential users. The moral of this story: just because a respondent answers a prompt in a particular way does not mean the respondent will actually act that way when confronted in real life.
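It is worth separating two different problems here. Sampling error is the part statistics can quantify; the stated-preference bias above is not. As a sketch (the 70%/1,000-respondent figures are hypothetical, loosely echoing the gas-price example), the standard margin of error for a sample proportion shows how small the purely statistical uncertainty can be, which makes the gap between stated and actual behavior all the more striking:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion,
    assuming a simple random sample: z * sqrt(p(1-p)/n)."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical: 70% of 1,000 respondents say they would switch to transit
moe = margin_of_error(0.70, 1000)
print(f"70% ± {moe:.1%}")  # roughly ± 2.8 percentage points
```

In other words, a well-drawn sample might pin the *stated* preference down to within a few points, yet still say nothing about whether respondents will actually ride transit when gas hits $3.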

Friday, February 13, 2009

Week 6: Case Study

I must post my disclaimer before I talk about the case study readings. For the most part, city/urban planning is “addicted” to case studies. If one were to go to the American Planning Association (APA) or Association of Collegiate Schools of Planning (ACSP) conferences, the method used by most researchers and practicing planners would be some form of a case study. In fact, I am willing to bet that 95% of APA and 85% of ACSP papers use case studies. Even within my own experience writing a thesis, I think I was the only one in my cohort of planners who did not do a case study. The case study is the default within planning, and I must say that I am fairly suspicious of any research project that says it is case study research. (There is also an Old Guard/New Guard distinction/fight, and the New Guard prefers other methods over case studies in planning.) Normally, the case study in planning is poorly done (now, there are exceptions) and is normally indicative of a planning researcher either not understanding the issue or not being willing to do the work, or it is an advertisement for a place or company that wants to say “look at what I have done.” Am I biased against case studies? Yes, but I recognize their value in informing specific questions, especially in limited-resource research projects. The flaw in planning case studies lies within the planning community’s desire to overgeneralize from the case study.
Blog Question: What are appropriate purposes of case studies, how are subjects selected, how is data collected and analyzed, and what kinds of generalizations are possible?
To answer the blog question, I will break up the question into four questions. 1) What are appropriate purposes of case studies? 2) How are subjects selected? 3) How is data collected and analyzed? 4) What kinds of generalizations are possible? I hope this helps.
1) Quite frankly, I find it hard to discuss case studies and not talk about Robert Yin. While this is slightly outside the week’s readings, I am using Yin for the OWS project (my group is essentially doing a case study). Yin offers a wonderful answer in Applications of Case Study Research to the first part of the blog question, what are appropriate purposes of case studies? Yin explains that the “method is appropriate when investigators desire to define topics broadly and not narrowly, cover contextual conditions and not just the phenomenon of study, and rely on multiple and not singular sources of evidence” (xi). Lauer and Asher loosely approach this with their definition of qualitative descriptive research at the beginning of Chapter 2, but I find Yin’s definition to be more eloquent and specific.
2) I find the selection of the research subjects to be the crux of all case studies. The entire validity of the study depends on whether or not the cases are accurate or applicable to whatever the researcher is studying (think back to week 4 about validity). For the most part, the subjects must be selected from a specific set of criteria or distinctions, such as Flower and Hayes’s experts and novices. While the “Pause” research project left out a lot of demographic data about the subjects, it did make a case based on the criteria they set forth. However, I felt that the simple distinctions left out important information that could affect how pauses were used. I personally wanted to know more about what defined an expert and what specifics could affect the distinctions.
3) Data collection and analysis follow from Yin’s last point in his application of the case study, “rely on multiple and not singular sources of evidence” (xi). Therefore, much of the data comes from multiple sources, or at least multiple points of data from a specific source, and the analysis is often a synthesis of the many forms of data. The data is used to create a new conglomerate of understanding and knowledge, in a broad sense, about a subject area, and not to show causation or a universal generalization. While it is not necessary, case studies are often augmented by other approaches (such as surveys, tests, statistics, or forms of quantitative research) that strengthen the researcher’s case.
4) What kinds of generalizations are possible is the crux of my frustration with many planning case studies. What kinds of generalizations are possible? None (generally speaking). Case studies cannot “prove” something. Often a case study of a community or place is used as an attempt to prove that New Urbanism or roundabouts are the best, safest, or most appealing. However, the case study research method is meant to inform, describe, and explore the context and the phenomenon. Therefore, generalizations as universals are dangerous for the researcher, and case studies, in and of themselves, do not follow the central tendency concept.

Friday, February 6, 2009

Week 5: CITI and the Internet

Blog Question: How does conducting research on the Internet impact the ways that researchers must deal with human subjects?

The internet presents a completely different set of risks and potential rewards, regardless of whether or not the actual research is done on the web. In addition, the internet also provides a pseudo-permanent record of every piece of data ever sent or transmitted. While there is potential for gaining a large amount of information via the web, there is also the risk that confidential information may leak onto the web and remain there in various electronic forms for eternity… especially with web archiving. Now, I will get to a very interesting case that I am aware of in a moment that shows how the internet can be much worse than an unlocked door when it comes to restricting access to confidential information, but I will comment on a couple of things from the modules first.

Every institution’s IRB is different, and the concept of “risk” is different. Clemson’s IRB differs from Georgia Tech’s. Clemson may conclude that a research project requires an expedited review while GT requires a full review. Therefore, while the modules and CITI certification may be standardized, there is a lot of variance in how a university handles risk. What is unique is that the internet allows for greater sharing of information/data between researchers at different universities with two different IRB approaches.
Conducting research on the internet must be done on secure servers and mainframes, from what I understand, but those are only as secure as our IT folks make them. Now, I have worked with a research office for a couple of years, and we are responsible for a couple of internet surveys (work for Clemson Parking and Clemson Vanpool… so keep your comments to yourself). The response rate for these surveys was outstanding; we got about 1,600 to 1,800 responses for each survey, which is outstanding considering that only Clemson students, faculty, and staff could respond. However, there is relatively little risk in these internet surveys, since they did not ask questions any more sensitive than basic demographics and parking habits. However, if the survey asked more personal questions, there could be a problem, because to take the survey one had to enter a CU ID, so theoretically we could trace each individual’s responses if the research group applied the man-hours to it. Therefore, while the internet seems private because the research subject may not be interacting with anyone at a potentially private location, electronic communication is never truly private. Anyone with the expertise or time can listen in.
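One common mitigation for the traceable-ID problem is to never store the raw identifier with the responses at all. Here is a minimal sketch (the key, function name, and sample ID are hypothetical, not anything Clemson actually uses): replace each ID with a keyed hash at intake, so duplicate submissions can still be detected while the analysis file contains no directly traceable identifier.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would be kept separate
# from the survey data so the hashes cannot be reversed by lookup
SECRET_KEY = b"rotate-me-and-store-separately"

def pseudonymize(raw_id: str) -> str:
    """Replace a raw ID (e.g., a CU ID) with a keyed SHA-256 hash.
    Same input always yields the same token, so duplicates are
    detectable, but the token does not reveal the original ID."""
    return hmac.new(SECRET_KEY, raw_id.encode(), hashlib.sha256).hexdigest()[:16]

token = pseudonymize("C12345678")  # illustrative ID, not a real one
print(token)  # a 16-character hex token stored in place of the ID
```

This does not make the data anonymous in the strict IRB sense, since anyone holding the key could re-link tokens to IDs, but it raises the bar considerably over storing raw identifiers alongside responses.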

The internet can have a different role within non-electronic research. Having an electronic database or website can create security issues. I am aware of an incident with human subjects data, private data including very sensitive information, that was put on the web due to human error. An Excel file from a written data source, containing all the survey/research data, was accidentally placed on a website for several months before the managing body recognized that the data had been copied to that location. This was beyond any expected and anticipated risks that the researchers and institutional review board could have thought up and disclosed. So, while the study did not have an internet focus, it still found itself susceptible to the risks of electronic communication; anything on the web can be accessed by the millions of individuals who care to look for it.