Friday, April 3, 2009
Week 13/Historiographies: Vitanza is categorical!!!
Vitanza:
History is ideological, but then again, is there anything that is not ideological? According to Vitanza, history is the interpretation of events (social construction) based on ideological mis/assumptions. Often readers/writers may not question the ideology, or may dismiss it as superficial, but the stronger question is "whose ideology?" Then again, there are several layers/types of histories based on the approach toward the author's ideological standpoint. Vitanza's layers are:
Traditional: Traditional histories are foundationalist/positivist (p. 73). They homogenize and rest within a covering-law model. For the traditional author, the object of study rests within a binary lens: fact or fiction. Also, history is turned into a causal narrative of events. Time is a major component, presented in the order that events happened, and artifacts are treated as created by the culture for study (p. 85). The second subset of "no time/man" is less defined, but it can still contain histories that are unselfconscious or positivistic.
Revisionary: Full disclosure: to add to or correct a previous historical event or viewpoint, by adding artifacts or showing that artifacts were misinterpreted. Essentially, this is just another traditional history that strikes against the established or existing ideology within a branch of history. Still, full-disclosure revisionary histories lack self-criticism toward the author's own ideology, hence Vitanza's overlap criticism (p. 95). In some cases, this form is just a recollection/addition of artifacts and sources within an existing/counter ideology, such as a feminist or ethnic framework.
Self-conscious critical is also revisionary, but the author explains or is aware of the ideology behind the historian's viewpoint. This is essentially the historical approach that I was trained in as an undergraduate history major. The major break with this sub/revisionary approach is that the author offers an interpretation of a history. Self-conscious critical rejects a positivistic "THE interpretation" in preference to a "one of many interpretations" approach. Artifacts are constructed.
Sub/versive (Vitanza): Sub/versive messes with the head. This layer/author is aware of the ideology and is anti-foundational. For this category the author/reader/alien (oops) must always question the framework. All history is a fiction, a creative non-fiction, and there is no escaping ideology. Cynicism is not sub/versive; you might as well play with it. Since Vitanza creates a new category, I would assume he considers his own work to belong there as well. Therefore, Vitanza is sub/versive. However, I feel that the logic still brings the work back to self-conscious critical. Plus, there is a love of using excessive slash "/" marks. It must be a part of the sub/versive style of messing/playing with the reader. Therefore, in the Vitanza style, I will/use the slash at any moment. (Also, this work reminds me of Nietzsche... not to descend from the mountain and eventually carry the corpse of a tightrope walker... ahhh, the odd metaphors)
Corbett is traditional. Overall, it seems to be a piece of a larger work, an introduction or a paper that frames further discussion. I guess that must be what the "3" on the cover page notes: the chapter. In any case, the traditional label particularly stems from p. 66, where Corbett states that "communication has its own rhetorical system," and this system is the model that is assumed to be fact/true and potentially causal. Corbett even generalizes on p. 71 with "it is safe to say that the neatness and correctness of the text is more crucial in business and professional writing than in any other kind of writing." Although it was a lot of reading to discover that Corbett does not address ideology, it is clear that he has unwittingly approached teaching business/professional writing within a traditional framework.
Zappen is self-conscious critical. Upfront and straightforward, Zappen makes it clear on pages 74 and 75 (the first two pages) that he is aware of both his own ideology and a multitude of different interpretations. Zappen states that he "conclude[s] that each of these interpretations, including my own, reflects a different ideology." Now, the only other question is whether Zappen is sub/versive or not. In this case, I conclude that he is not. Not only does he continue with his own ideological approach to history, but he does not take the multiple-fictions approach and does assume some relative truth/interpretation.
Howard is full-disclosure revisionary. In this article, Howard questions the purpose/origins of copyright, in particular the differences or challenges that digital/electronic texts present to a copyright regime built on an early modern printing-press approach. The full-disclosure revisionist approach presses that the copyright process stole the natural right to an individual's thought/writing and turned it into a privilege granted by the state. However, after reading the full text, it appeared that the author/article does not explain the assumed ideology. In this case, the revisionary label can be applied, but it has traditional overlaps as well. The overlap is the causal element linking copyright/state privilege to the ownership and reproduction of ideas/knowledge.
Friday, March 27, 2009
Week 12: Experiment and Quant. Description
Experimental Study
Quick Lauer and Asher summary
True Experiments: According to Lauer and Asher, true experiments apply a treatment to research subjects in order to show cause and effect. Preferably, the researchers will control the variables, then change a variable(s) and observe the change in the research subjects. Normally, the true experiment requires a control group to compare the reactions, changes, or differences with the group that received the treatment. The control and treatment groups are randomized. While not exclusive to a true experiment, a test hypothesis normally drives the variables involved. The hypothesis also depends on the result being able to dispel the null hypothesis. The results are then statistically evaluated for the likelihood of a Type I/II error as well as the probability of chance.
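As a minimal sketch of that logic (in Python, with entirely invented scores, group sizes, and effect; none of this comes from the readings):

```python
# Sketch of a true experiment's analysis: two randomized groups, a
# treatment effect, and a two-sample t-test against the null hypothesis.
# All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome scores: control vs. treatment group (n=30 each),
# with the treatment group shifted up by a small "effect".
control = rng.normal(loc=70, scale=10, size=30)
treatment = rng.normal(loc=76, scale=10, size=30)

# Null hypothesis: the group means are equal.
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # accepted chance of a Type I error (rejecting a true null)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null: the difference is unlikely to be chance alone.")
else:
    print("Fail to reject the null: cannot rule out chance.")
```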
Quasi-Experimental: Quasi-experimental studies differ from "true" experiments based on the inability to randomize, an inability to control all variables, and potentially unequal groupings. Often, and preferably, quasi-experiments include a pretest to ensure that research subjects are comparable. While there is an attempt to generalize with the results, claims of cause and effect should really be treated as correlations between variables. Like the true experiment, the quasi-experiment must also test a null hypothesis.
Carroll
Carroll's method was a quasi-experiment testing the manual of a computer program as the independent variable. The study included a small pretest, with 19 participants (office professionals with some experience) using the two different manuals. The study lacked randomization (one reason for the "quasi" classification) and, in addition, did not have a "control group" that was given the same instructions without a manual. A no-manual group would have strengthened the pretest, because the manuals could potentially have been statistically indistinguishable from it. The second experiment tested more participants but also tested more variables. The limited number of participants (sample size) hindered the study and made the results a bit shaky considering the generalizations. Once again, claiming the minimal manual's benefits seems premature given the sample size and the lack of a comparison against a no-manual group. The subject needs to be analyzed further. Contrary to the author's claim that "less can be more," the study needs more to overcome the lack of sample and design. Also, isn't there an issue here with correlation not being causation?
Kroll
With another quasi-experimental study, Kroll uses a larger but stratified sample to determine how game rules are explained. Kroll uses 123 (non-random) students ranging from the 5th grade to freshman college students. However, Kroll screened the participants and removed those who did not score high enough on a quiz. This was an unusual quirk that did not receive adequate explanation and appeared to be data manipulation/steering. The researchers filmed the students on multiple occasions on separate days, and the intercoder reliability was .76 for the explanatory approach. Again, cause and effect was misapplied; it should have been that grade level has a strong correlation with informative explanation. While the researcher may have been able to identify some correlation between student levels and explanation, the removal of observations/subjects lessens the value. In addition, I question how the students were taught the game. The video could have had different implications for learning style and age, where the subjects understood the instructions differently.
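Kroll does not say how the .76 was computed, but here is a hedged sketch of two common options, simple percent agreement and Cohen's kappa (which corrects for chance agreement), using invented codes:

```python
# Hypothetical sketch of intercoder reliability: two coders classify the
# same explanations. The labels and codes below are invented.
from collections import Counter

coder_a = ["game", "info", "info", "game", "info", "game", "info", "info"]
coder_b = ["game", "info", "game", "game", "info", "game", "info", "game"]

n = len(coder_a)
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement from each coder's marginal label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
p_chance = sum(freq_a[label] / n * freq_b[label] / n for label in freq_a)

kappa = (agreement - p_chance) / (1 - p_chance)
print(f"percent agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```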
Notarant/Cohen
And that makes three quasi-experiments. Notarant and Cohen test communication styles in sales interactions. The treatment used videotapes of the different styles selling a stereo. While there are 80 subjects, the explanation of the methodology was vague. They include a small sample (n=10), and I am unclear whether that is the group size or the number of groups. The research subjects (college students) were not random and were placed in various groups of unequal size (5-6). I am not sure how the subjects were organized. While the authors randomized the subjects within the groups, the subjects themselves were not random... kinda misleading. In addition, the limited age range and higher female ratio limited the overall generalization ability of the study.
Quantitative Description
A super-quick Lauer and Asher summary
Quantitative descriptions seek to isolate, correlate, and interrelate variables. This is not experimental research, since it does not apply a treatment to the variables. For the most part, it is a statistical means for identifying relevant variables that qualitative research can also identify, but it allows for stronger generalization. (I just think the n=10 per variable is a lousy rule... there are other issues to deal with, such as population size.)
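A toy example of the quantitative-description move (Python; the variables and data are fabricated, not from the readings):

```python
# Quantitative description in miniature: no treatment is applied, we just
# measure several variables and look at how they interrelate.
import numpy as np

rng = np.random.default_rng(7)
n = 50  # observations (the n=10-per-variable rule of thumb would want 30+)

experience = rng.uniform(0, 10, n)                     # years of experience
errors = 20 - 1.5 * experience + rng.normal(0, 3, n)   # errors fall with experience
words_per_min = rng.normal(40, 8, n)                   # unrelated variable

# Correlation matrix: shows which variables actually move together.
r = np.corrcoef([experience, errors, words_per_min])
print(np.round(r, 2))
```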
Faber
Faber has conducted a content analysis of popular media publications. He follows Huckin's outline, which is actually a good approach (I used Huckin for the Client project). The project reduced the number of texts from over 800 to 203, which strengthened/focused the project. He justified the units of analysis/themes. Finally, the themes were compared by subject and over time. Overall, it was a well-executed content analysis.
Golen
The better aspect of this research project was that the 25 variables (covering a broad range) were defined from the literature. Otherwise, the study is a bit of a mess. The "random" selection of the 10 breakout groups was weird, especially since there were 400 students in 3 classes... and 279 responses. The analysis was convoluted. The Likert scale results had large standard deviations. Also, setting 1 as "most of the time" and 5 as "never" seemed awkward. It also made looking at the data difficult (remembering that lower scores meant more listening but then more laziness). Also, it was difficult keeping the variables organized, and some were not very well connected. It felt that the results were forced into the conclusions already established by the literature. It was a good setup for a project, but the execution of the research was awkward.
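For what it's worth, an awkward coding like Golen's is easy to flip so that higher numbers mean more of the behavior; a quick sketch with invented responses:

```python
# Reverse-coding a Likert scale like Golen's (1 = "most of the time",
# 5 = "never") so that higher scores mean more frequent behavior.
# The responses below are invented for illustration.
import statistics

responses = [1, 2, 2, 5, 3, 1, 4, 2, 3, 5]  # raw 1-5 answers for one item

# On a 1-5 scale, reversed = (max + min) - raw = 6 - raw.
recoded = [6 - r for r in responses]

print("mean =", statistics.mean(recoded))
print("st. dev. =", round(statistics.stdev(recoded), 2))
```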
Friday, February 27, 2009
Week 8: Eth-Know-Graphies (EKG)
Blog Question: What distinguishes ethnographies from case studies, how does “triangulation” impact data collection and analysis, and what must ethnographers do to ensure their work is both reliable and valid?
To answer the blog question quickly at the basic level: ethnographies differ from case studies mostly based on the time of the study, with ethnographies observing over a greater length of time and observing the multiple facets within an environment. In addition, a hypothesis is also generated within the ethnography. Ethnographies require multiple sources of data, and the combination of the data is triangulation. This combination of data is supposed to provide a "rich account" that could not otherwise be obtained via experimentation. Ethnographers face a great number of potential threats to both their reliability and validity. Lauer and Asher use Sadler's list of ten threats, ranging from the misinterpretation of cause and effect (No. 10), to uneven reliability (No. 6), to pitfalls in confidence in judgment (No. 2) (pp. 46-7). With the basic blog questions out of the way, I will move on to the specific readings.
Doheny-Farina is thorough. He utilizes several theoretical assumptions as well as four sources of data: field notes, meeting tape recordings, and two types of interviews, collected three to five days a week over the course of eight months. Not only did Doheny-Farina use four sources of information captured multiple times a week, but he also brought it back to the theoretical framework. The result of the ethnography was that Doheny-Farina showed that the process was reciprocal, and he described that process. One of the better aspects of this project is that Doheny-Farina did not overstep the limitations of the ethnography, admitted the provisional nature of his model, and noted some limitations of the study. In my personal opinion, this was a good ethnography.
Beaufort, like Doheny-Farina, establishes a theoretical framework, uses a multitude of data sources (interviews, transcripts, written documents), and participates with the research participants over a 'long' period, weekly over the course of a year. However, the sample size of n=4 was relatively small and precludes generalization. Beaufort straddles a dangerous line by linking independent and dependent variables and by generalizing in ways the study may not support.
Sheehy likewise draws on an eight-month study, of middle school students at SMS. What is interesting in Sheehy's study is the role the researcher played. Sheehy noted an inconsistency: her role as a participant/observer changed within the study. This shift may have altered the data and may be a threat to the reliability of the ethnography, but the researcher does not explain any shift in the data. Unfortunately, the effects of the change in the role of the researcher are left unknown. Other than that, Sheehy is also on the edge of generalizing within her conclusion.
To be frank, Ellis is not an ethnography; it is more of a narrative. It does not pull from a theoretical framework that I am aware of as a planner. It does not pull from multiple sources of data, although it is unique. Finally, although it comprises several months of reflection, the data was not collected (uniformly) over a long enough time. If anything, this is, at best, a good primary source of information for future historians, and I question the academic rigor of the "autoethnography."
Speaking of autoethnographies, Anderson provides a better approach to this style of qualitative research. Overall, Anderson provides a better definition of what an autoethnography entails, mostly because the researcher is a fully participating member of the group/environment being studied. Anderson provides a historical evolution and theoretical framework for autoethnography, but if we were to ask whether or not this work itself is an ethnography, then it is not (duh). Anderson provides the approach that Ellis lacks.
Saturday, February 21, 2009
Week 7: Survey and Sampling Platters
While surveys are descriptive according to Lauer and Asher (L & A), they are often very analytical and potentially quantitative. Surveys are well suited for complex and complicated research efforts that must be executed with limited resources. Surveys reduce the target population to a manageable size with reasonable results. While surveys can be expensive, they may be cheaper than full experimental research. However, surveys are not just descriptive as L & A suggest. It is true that surveys are a wonderful tool for inferring the descriptive variables of a population, but survey analysis could also be considered (if we think back to Morgan) correlational. Not only can a survey's goal be to describe or identify variables, but it can show relationships between variables depending on the questions and design of the survey. In particular, people's attitudes are easier (and I use "easier" loosely) to correlate depending on their answers to various questions. For example, individual income and commute times normally have a relationship within urban research.
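As a toy version of that income/commute analysis (Python; the survey numbers are fabricated purely to illustrate the step):

```python
# Correlational reading of survey data: household income vs. commute time.
from scipy import stats

income = [28, 35, 42, 55, 61, 73, 88, 95, 110, 125]   # $1,000s, invented
commute = [12, 15, 18, 22, 25, 24, 30, 33, 35, 41]    # minutes one way, invented

r, p_value = stats.pearsonr(income, commute)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# A strong r with a small p suggests a relationship worth describing,
# though a survey alone cannot establish cause and effect.
```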
Subject selection is the crux of any survey. Essentially, subject selection is a large influence on validity, and if the sample selection is off, then the validity is not there. As a result, there are a host of sampling approaches: random, systematic random, quota, cluster, and stratified sampling. The type of sampling approach depends on the researcher's goals and desired data. Random sampling works well for an amorphous population. Stratified sampling works best when the researcher is dividing or distinguishing between populations, and quota sampling works well for representing certain subgroups within a larger sample. The analysis of the survey results, for me, is normally a statistical analysis. The analysis must determine whether the sample is representative and the sample size statistically significant, and then identify the significant variables and relationships. Within planning, identifying or describing variables is important, but correlational and eventually causal analysis of variables is demanded (although causal assertions can be technically inaccurate, city leaders normally demand "sure things"). Finally, surveys benefit from other methods and research. Adding a minor case study or quantitative analysis can increase the potency of the survey data.
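To make two of those sampling distinctions concrete, here is a hedged sketch contrasting simple random and stratified sampling on a hypothetical campus population (the group sizes are invented):

```python
# Simple random vs. stratified sampling from a made-up campus population.
import random

random.seed(1)
population = ([("student", i) for i in range(800)]
              + [("faculty", i) for i in range(150)]
              + [("staff", i) for i in range(250)])

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 60)

# Stratified sampling: sample each subgroup in proportion to its size,
# guaranteeing small groups (faculty, staff) are represented.
stratified = []
for group in ("student", "faculty", "staff"):
    members = [p for p in population if p[0] == group]
    k = round(60 * len(members) / len(population))
    stratified.extend(random.sample(members, k))

for name, sample in [("simple", simple), ("stratified", stratified)]:
    counts = {g: sum(1 for s in sample if s[0] == g)
              for g in ("student", "faculty", "staff")}
    print(name, counts)
```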
Unlike case studies, a properly designed survey intends to draw inferences about a population based on the sample or samples. If the survey is inappropriately designed or does not sample enough to represent the population, the researcher cannot generalize. The researcher must remain careful not to assert or claim things that the survey data does not support. The wording and execution of the survey does affect the researcher's claims. Asking "will you support transit?" is not the same as a willingness to actually ride transit. In addition, there are biases and cognitive dissonance that may not surface in the survey. A famous/infamous national survey about gasoline prices asked, "if gas cost $3, would you cease driving and rely solely on transit?" Well, 70 percent of the respondents said they would switch to transit; however, with the summer peak of gas prices, this claim by the survey respondents really did not pan out. In another example, with Clemson ridesharing, only 30 percent were not interested in a carpool program, but this number jumped to over 40 percent when respondents were asked to have their names entered into a database of potential users. The moral of this story: just because a respondent answers a prompt in a particular way does not mean the respondent will actually act that way when confronted in real life.
Friday, February 13, 2009
Week 6: Case Study
Blog Question: What are appropriate purposes of case studies, how are subjects selected, how is data collected and analyzed, and what kinds of generalizations are possible?
To answer the blog question, I will break up the question into four questions. 1) What are appropriate purposes of case studies? 2) How are subjects selected? 3) How is data collected and analyzed? 4) What kinds of generalizations are possible? I hope this helps.
1) Quite frankly, I find it hard to discuss case studies and not talk about Robert Yin. While this is slightly outside the week's readings, I am using Yin for the OWS project (my group is essentially doing a case study). Yin offers a wonderful answer in Applications of Case Study Research to the first part of the blog question: what are appropriate purposes of case studies? Yin explains that the "method is appropriate when investigators desire to define topics broadly and not narrowly, cover contextual conditions and not just the phenomenon of study, and rely on multiple and not singular sources of evidence" (xi). Lauer and Asher loosely approach this definition with their description of qualitative descriptive research at the beginning of Chapter 2, but I find Yin's definition to be more eloquent and specific.
2) I find the selection of the research subjects to be the crux of all case studies. The entire validity of the study depends on whether or not the cases are accurate or applicable to whatever the researcher is studying (think back to week 4 about validity). For the most part, the subjects must be selected from a specific set of criteria or distinctions, such as Flower and Hayes's experts and novices. While the "Pause" research project left out a lot of demographic data about the subjects, it did make a case based on the criteria they set forth. However, I felt that the simple distinctions left out important information that could affect how pauses were used. I personally wanted to know more about what defined an expert and what specifics could affect the distinctions.
3) Data collection and analysis follows from Yin's last point in his application of the case study: "rely on multiple and not singular sources of evidence" (xi). Therefore, much of the data is from multiple sources, or at least multiple points of data from a specific source, and the analysis is often a synthesis of the many forms of data. The data is used to create a new conglomerate of understanding and knowledge in a broad sense about a subject area, not to show causation or a universal generalization. While it is not necessary, case studies are often augmented by other approaches (such as surveys, tests, statistics, or forms of quantitative research) that strengthen the researcher's case.
4) What kinds of generalizations are possible is the crux of my frustration with many planning case studies. What kinds of generalizations are possible? None (generally speaking). Case studies cannot "prove" something. Often a case study of a community or place is used as an attempt to prove that New Urbanism or roundabouts are the best, safest, or most appealing. However, the case study research method is meant to inform, describe, and explore the context and the phenomenon. Therefore, generalizations as universals are dangerous for the researcher, and case studies, in and of themselves, do not follow the central tendency concept.
Friday, February 6, 2009
Week 5: CITI and the Internet
The internet presents a completely different set of risks and potential rewards, regardless of whether or not the actual research is done on the web. In addition, the internet provides a pseudo-permanent record of every piece of data ever sent or transmitted. While the potential is there for gaining a large amount of information via the web, there is also the risk that confidential information may leak onto the web and remain there in various electronic forms for eternity... especially with web archiving. Now, I will get to a very interesting case I am aware of in a moment that shows how the internet can be much worse than an unlocked door at restricting access to confidential information, but I will comment on a couple of things from the modules first.
Every institution's IRB is different, and each institution's concept of "risk" is different. Clemson's IRB differs from Georgia Tech's. Clemson may conclude a research project requires an expedited review while GT requires a full review. Therefore, while the modules and CITI certification may be standardized, there is a lot of variance in how a university handles risk. What is unique is that the internet allows for greater sharing of information/data between researchers at different universities with two different IRB approaches.
Conducting research on the internet must be done on secure servers and mainframes, from what I understand, but those are only as secure as our IT folks make them. Now, I have worked with a research office for a couple of years, and we are responsible for a couple of internet surveys (work for Clemson Parking and Clemson Vanpool... so keep your comments to yourself). The response rates for these surveys were outstanding; we got about 1,600 to 1,800 responses for each survey, which is remarkable considering that only Clemson students, faculty, and staff could respond. However, there is relatively little risk in these internet surveys, since they did not ask questions any more sensitive than basic demographics and parking habits. If the surveys had asked more personal questions, there could have been a problem: to take the survey, one had to enter a CU ID, so theoretically we could trace each individual's responses if the research group applied the man-hours to it. Therefore, while the internet seems private because the research subject may not be interacting with anyone at a potentially private location, electronic communication is never fully private. Anyone with the expertise or time can listen in.
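For the record, one hypothetical safeguard (a sketch, not what our office actually did) would be to store a salted one-way hash instead of the raw CU ID, so the stored responses cannot be casually traced back to a person:

```python
# Hypothetical mitigation: replace the raw CU ID with a salted one-way hash
# before storing survey responses. The ID and field names are invented.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # kept separate from the response data

def pseudonymize(cu_id: str) -> str:
    return hashlib.sha256((SALT + cu_id).encode("utf-8")).hexdigest()[:12]

response = {"respondent": pseudonymize("C12345678"), "parking_lot": "R-4"}
print(response)
```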
The internet can also play a role in non-electronic research. Having an electronic database or website can raise security issues. I am aware of an incident with human subjects data, private data including very sensitive information, that was put on the web due to human error. An Excel file from a written data source, with all the survey/research data, was accidentally placed on a website for several months before the managing body recognized that the data had been copied to that location. This was beyond any expected and anticipated risks that the researchers and internal review board could have thought up and disclosed. So, a study that did not have an internet focus still found itself susceptible to the risks of electronic communication; anything can be accessed by the millions of individuals who care to look for it.
Friday, January 30, 2009
Week 4: Measurement and Sadistics... I mean statistics
Blog Question: What distinguishes Quantitative from Qualitative designs, what is the difference between “validity” and “reliability,” and what is meant by the terms “probability” and “significance?”
Relating Variables (Quantitative) v. Describing Variables (Qualitative)
Now, I know the title of this section, Relating v. Describing, may be too simplistic, but for now I will keep it. More importantly, the distinction between the types of research is beyond a math/not-math approach.
Quantitative research is based heavily upon establishing a relationship, and the strength of the relationship, between variables. Unfortunately, transportation fields are heavily quantified. There is an expanse of quantitative research establishing the relationship between speed and automobile fatality rates, or, in my latest experience, I established the relationships among growth in marine port activity, world production, and population. Goubril uses the example of online manuals and the relationship between experience and problem solving. Most importantly, quantitative research allows the researcher to determine if there is truly a reason to investigate a problem. In particular, whether or not one can reject a null hypothesis is particularly useful, especially in dispelling chance (Williams 56), although the test of a null is often underutilized, in my own personal opinion. Morgan appears to divide quantitative research into correlational and experimental. Correlational matches the definition above of establishing relationships between variables, and experimental looks for cause and effect.
I found the better definition of qualitative research just outside the readings, in the 3rd chapter of Lauer and Asher. Lauer and Asher define qualitative research as working "to give a rich account of the complexity of ... behavior" (45). In particular, it is best to "expose the blindness and gullibility of specious quantitative research" (46). The problem with some (if not many) quantitative studies is when the researcher makes an inaccurate jump to generalize and claim cause and effect (46). As a result, qualitative research asks different questions, normally referring to how or why a phenomenon exists. Morgan argues that qualitative research enables one to "investigate the process" or "describe features".
On a personal note, I find it better to incorporate both "qual" and "quant" into my research, particularly since planning is comprised of both quantitative and qualitative disciplines.
Validity v. Reliability
To keep it simple, validity is the "ability to measure whatever it is intended to assess" (Lauer & Asher, 140). Reliability is the measurement of agreement (134). Each of the terms has separate subdivisions. The three types of reliability are equivalency, stability, and internal consistency (135). The types of validity are content, concurrent, predictive, and construct (141). Now, the difference between the two follows from their definitions: reliability is the closeness of the data to one another, while validity is the closeness of the data to the intended target. However, there cannot be validity without reliability; therefore, reliability influences validity. A very simple example is the bull's-eye analogy (I will use it from Singleton and Straits's Approaches to Social Research, p. 94, since it was a very useful example for me a while back). If a marksman shoots a target and the many shots he takes are randomly dispersed, then there is low reliability and low validity (as well as high random error and low systematic error). But if the marksman shoots and the shots are clustered but not on the bull's-eye, there is high reliability but low validity, because he missed the target; there is also likely high systematic error. Now, if the marksman clusters his shots within the bull's-eye, then there is both high reliability and high validity.
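The bull's-eye analogy translates directly into a quick simulation (Python; the error magnitudes are invented): random error shows up as spread (a reliability problem), and systematic error shows up as bias away from the target (a validity problem):

```python
# The marksman analogy in code: spread ~ reliability, bias ~ validity.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.0, 0.0])

def describe(shots, label):
    bias = np.linalg.norm(shots.mean(axis=0) - target)   # validity problem
    spread = shots.std(axis=0).mean()                    # reliability problem
    print(f"{label}: bias = {bias:.2f}, spread = {spread:.2f}")

# Low reliability, low validity: shots scattered everywhere.
describe(rng.normal(0, 3, (20, 2)), "scattered")

# High reliability, low validity: tight cluster, but off the bull's-eye.
describe(rng.normal(0, 0.3, (20, 2)) + np.array([2.5, 2.5]), "clustered off-target")

# High reliability, high validity: tight cluster on the bull's-eye.
describe(rng.normal(0, 0.3, (20, 2)), "clustered on-target")
```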
Probability and Significance
Probability is simply the chance or percentage that something may occur.
Significance refers to a specific probability, the acceptable probability, of making an error: specifically a Type I error, though Type II errors can be included. Williams talks about a 5% or .05 level of significance, which means there is a five percent chance that we have rejected the null when the null is actually true (a Type I error). There are specific tests for Type II errors, but the more important issue is the trade-off: the level of significance or LOS one chooses (i.e., .1, .05, .01) shifts the likelihood of each error type. A stricter LOS of .01 has a smaller chance of a Type I error but an increased chance of a Type II error, while a looser LOS of .1 has less chance of a Type II error but a higher probability of a Type I error. If anyone wants more knowledge about this, please take the torture that is ExStat 801 :D
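A quick simulation of the Type I side (Python; the populations are made up): when the null is actually true, the share of "significant" results should match whatever LOS you choose:

```python
# Simulating the Type I error trade-off: when the null is true (no real
# difference), the fraction of "significant" results tracks alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials = 2000

p_values = []
for _ in range(trials):
    a = rng.normal(50, 10, 30)   # two samples from the SAME population,
    b = rng.normal(50, 10, 30)   # so the null hypothesis is true
    p_values.append(stats.ttest_ind(a, b).pvalue)

p_values = np.array(p_values)
for alpha in (0.10, 0.05, 0.01):
    rate = (p_values < alpha).mean()
    print(f"alpha = {alpha}: false rejection rate = {rate:.3f}")
```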
Edit: Dang I hate Word '07 formatting issues