As I read Victor Vitanza's "'Notes' Towards Historiographies of Rhetorics; or the Rhetorics of the Histories of Rhetorics: Traditional, Revisionary, and Sub/Versive" and his attempt to track down the history of writing, the rhetoric of history, historiographies, etc., the image of a terrier chasing a rat through a garden maze comes to mind, the rat of course being the writing of history. Vitanza attempts to latch onto this elusive relationship without being either a strict materialist or a waning relativist. In fact, Vitanza approaches the enterprise with a whimsical thumbing-of-the-nose toward resolution and acknowledges the impossibility of obtaining a firm grounding. Vitanza doesn't deny the existence of groundings; he is not so naive. The whole point of his project is to recognize the various types of groundings and accept their ongoing shiftiness. Vitanza plays the class cut-up, refusing to align with either the proletariat or the petty bourgeoisie. Alignment with one academic caste or the other is beside the point for Vitanza. The historical methods he notes are:
1. Traditional Historiography
This is where writers of history do not really acknowledge how ideology informs their historical account.
2. Revisionary Historiography
This is where the history writer is self-consciously ideologically informed. They use rhetoric so they can correct "misinformed" historical perceptions.
3. Sub/Versive Historiography
Here the traditional views of history are subverted and revisionary histories' totalizing claims to explanatory power are held suspect. Sub/versive historiography rejects authoritarianism and disciplinarity.
Tharon Howard in "Who 'Owns' Electronic Texts?" makes several important points, mainly that copyright protection and intellectual property are concepts little understood by most academics, a gap that puts intellectual production at risk, weakens research, and damages ethos. Howard argues that learning the rules governing fair use is essential if academic production (especially in the humanities) is to thrive in new media. He asserts that practically everything has been claimed as intellectual property except "truth." This is a subversive historiography: Howard demonstrates a self-awareness of his own positionality and wishes to shed light on the subject of electronic texts. Moreover, in the title of his article Howard signals that the notion of "ownership" itself is not a mutually agreed upon concept.
In "What Classical Rhetoric Has to Offer the Teacher and the Student of Business and Professional Writing," Edward P. J. Corbett outlines the teachings of Aristotle regarding the direct appeal to the audience's emotions. This historiographic method proceeds as though there were a direct, unproblematic relationship between subject and object, and it approaches the historical account of Aristotle with a matter-of-fact "what happened happened" ethos. This is naive. Here is an example of this traditional historiographic method: "Style was never just ornament for the classical rhetoricians, even though they might admit that an elegant style could adorn an otherwise lackluster text" (70). Corbett reports this "fact" -- actually an unacknowledged interpretation of the classical rhetoricians -- as though he were there and could actually know.
In "Francis Bacon and the Historiography of Scientific Rhetoric," James P. Zappen asserts that twentieth-century interpretations of Bacon's science and rhetoric derive from a Puritanical world view. He (re)defines this scientific rhetoric as having either a positivistic scientific style that is very plain, an institutionalized scientific style that is highly figured, or a democratic scientific rhetorical style that draws upon the Puritan reformers' plain style as exemplary. This is a revisionary historiography because it is concerned with correcting misinformed ways of seeing Bacon's writings.
Saturday, April 4, 2009
Saturday, March 28, 2009
Steven Golen
"A Factor Analysis of Barriers to Effective Listening"
1. Purpose / research question
Golen's purpose is to determine which barriers are perceived to be the most frequently encountered and may affect listening effectiveness among business college students. One of the things Golen wanted to know was whether the listening behaviors identified by previous research were useful. Golen found that there were "some similarities in listening behaviors across the studies . . . however, no studies identified any factors or dimensions of listening barriers" (27). More specifically, Golen wanted to know: what do students perceive as the most frequent barriers to listening; what are the most frequent barrier factors to listening for students; and how do listening barrier factors differ among students based on demographic variables?
2. Subject selection
Students were selected from a major southwest state university; participants were drawn from 400 students in three large business communication lecture sections, all taught by the same instructor. The majority of the students were between the ages of 19 and 21.
3. Data collection
According to Golen, 279 students participated in the study (n = 279). They were asked to respond on a Likert scale from 1 ("most of the time") to 5 ("never") to indicate how often each listed barrier impeded or inhibited their listening.
4. Data analysis techniques
In order to interpret the data, Golen cross-analyzed the demographics of the participants according to major, age, and sex. Golen found that the only demographic variable affecting listening barrier factors was sex. You guessed it: men don't know how to listen. lol.
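Just to make the analysis concrete: here is a minimal Python sketch, using entirely invented ratings rather than Golen's data, of how Likert-scale barrier items might be reduced to factors and then compared across a demographic variable such as sex.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import FactorAnalysis

# Hypothetical stand-in for Golen's data: 279 students rating 25 listening
# barriers on a 1-5 Likert scale (1 = "most of the time", 5 = "never").
rng = np.random.default_rng(0)
ratings = pd.DataFrame(rng.integers(1, 6, size=(279, 25)),
                       columns=[f"barrier_{i}" for i in range(1, 26)])
sex = rng.choice(["M", "F"], size=279)

# Reduce the 25 barrier items to a handful of underlying "barrier factors."
fa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = fa.fit_transform(ratings)

# Compare factor scores across the demographic variable with a t-test per factor.
for k in range(factor_scores.shape[1]):
    t, p = stats.ttest_ind(factor_scores[sex == "M", k],
                           factor_scores[sex == "F", k])
    print(f"factor {k + 1}: t = {t:.2f}, p = {p:.3f}")
```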
Brenton Faber
"Popularizing Nanoscience: The Public Rhetoric of Nanotechnology, 1986-1999"
1. Purpose / research question
Through this study, Faber wants to know how popular media influence the way nanotechnology is framed in general periodicals such as newspapers and magazines. The question up for consideration is how the emerging technological field is presented in the popular science press. In other words, how are already well-established popular understandings of science used to mediate the introduction of this branch of science to the public?
2. Subject selection
Brenton Faber searched for the words "nanotechnology" and "nanoscience" through his university database, which generated 885 hits on the subject. The articles appeared in publications like Newsweek, Time, and Popular Science. Many of them were duplicated across the press, so he finally chose 203 articles. Faber identifies 1986 as the year the first discussions of nanotechnology appear in the public media, and he chose 1999 as the endpoint because it marks a significant moment in the early emergence of the field.
3. Data collection
Faber's research revealed that much of the nanotechnology research was represented to the public in terms of science fiction, cybernetics, possible inventions, biographies of the scientists, and foreign competition.
4. Data analysis techniques
He notices that the representation of these scientific fields related to nanotechnology changed over time; Faber calls these temporal findings. This was important, for Faber, because it shows how popular science writers presented the findings of nanoscience in terms already understandable to popular science audiences.
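As a rough illustration of what such temporal findings look like in practice, here is a small sketch with invented article codes (not Faber's corpus) that cross-tabulates framing categories by year.

```python
import pandas as pd

# Hypothetical coded corpus: each row is one popular-press article,
# labeled with its publication year and dominant frame.
articles = pd.DataFrame({
    "year":  [1986, 1986, 1991, 1991, 1995, 1995, 1999, 1999],
    "frame": ["science fiction", "cybernetics", "science fiction",
              "possible inventions", "biography", "possible inventions",
              "foreign competition", "biography"],
})

# Cross-tabulate frame by year to show how representation shifts over time.
print(pd.crosstab(articles["year"], articles["frame"]))
```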
John M. Carroll, et al.
"The Minimal Manual"
1. Purpose / research question
The research question asked what types of rhetorical/communication strategies worked best for conveying computer operation skills to general consumer audiences. There were two studies. The first was made up of 19 subjects: nine were given a typical "owner's manual" to read, whereas the other ten were provided with a "minimal manual." Unlike the traditional owner's-manual type of instruction book, the minimal manual was more streamlined and user-based. The second study comprised 32 subjects.
2. Subject selection
It is difficult to tell how the subjects were chosen. Randomly? From intact groups? It is clear that the subjects were familiar with clerical office work.
3. Data collection
According to the study, participants were asked to complete training and then perform 8 different word processing tasks. The tasks involved formatting procedures as well as storing (saving) documents on diskettes. Specifically, they were asked to create and print a letter, create and print a bulletin, and revise a bulletin.
4. Data analysis techniques
It was found that students using the minimal manual required 40% less learning time.
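To show what that kind of group comparison involves, here is a hedged sketch with invented learning times (not Carroll's data) comparing mean times for the two manual conditions.

```python
from scipy import stats

# Hypothetical learning times in minutes (not Carroll's actual data):
# 9 subjects with the standard manual, 10 with the minimal manual.
standard_manual = [410, 395, 430, 405, 420, 415, 400, 425, 390]
minimal_manual = [250, 240, 265, 255, 245, 260, 235, 270, 250, 248]

# Compare the two group means and express the difference as a percentage reduction.
t, p = stats.ttest_ind(standard_manual, minimal_manual)
mean_standard = sum(standard_manual) / len(standard_manual)
mean_minimal = sum(minimal_manual) / len(minimal_manual)
reduction = 1 - mean_minimal / mean_standard
print(f"t = {t:.2f}, p = {p:.4f}, reduction in mean learning time = {reduction:.0%}")
```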
Barry M. Kroll
"Explaining How to Play a Game"
1. Purpose / research question
The researcher wanted to know whether students' informative writing skills could be generalized across age groups. In other words, he wanted to know if there was evidence of "growth" in students' informative writing skills.
2. Subject selection
There were 86 students drawn from several age groups: children around ages 7 and 9, seventh and eleventh graders, and college freshmen. The students were asked to explain in writing the instructions to different games.
3. Data collection
Students learned how to play the games and then were asked to write game instructions. Kroll devised a scheme to see whether students emphasized scoring procedures, game objectives, or orienting information, and whether or not an explanatory approach was taken.
4. Data analysis techniques
Kroll determined that the 7 and 9 year olds provided the least informative instructions. As students became older, they adopted an increasingly conversational tone in their game instructions. Ninth and eleventh graders exhibited almost identical skills. Students found the exercise required for the study challenging and interesting.
Notarantonio & Cohen
"Effects of Open and Dominant Communication Styles on
Perceptions of the Sales Interaction "
1. Purpose / research question
The researchers wanted to know how communication styles influence student perception of sales effectiveness.
2. Subject selection
The subjects were 80 undergraduate business administration majors at Bryant College. The sample was 58.7% female, ranged in age from 17 to 21 (73% were 18 years old), and was 92.1% freshmen.
3. Data collection
Videotapes were made of sales associates talking about themselves. The sales communicators were categorized as open or dominant based on how they talked about themselves and how much they monopolized the taped conversation. Each tape presented 4 different types of sales communicators selling programmable stereo systems. Subjects were randomly assigned to view the tapes.
4. Data analysis techniques
Researchers saw no significant difference between salespersons who were very open and those who were dominant. In fact, the ones who dominated the conversation by talking about themselves were least effective because they spent less time discussing the stereo equipment.
What are the appropriate purposes for quantitative descriptive studies, how are subjects selected, how is data collected and analyzed, and what kind of generalizations are possible?
According to Lauer & Asher, "the conclusions of quantitative descriptive research indicate the calculated strengths or weaknesses of relationships among variables. Researchers can also determine significance and the amount of variance explained" (96). This kind of research is descriptive rather than experimental because no control groups are devised. Subjects ought to be chosen based on how appropriate they are to the variables under study and on their availability.
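A tiny Python sketch, using made-up paired scores, of the kind of result Lauer and Asher describe: the strength of a relationship between two variables, its significance, and the variance explained (r squared).

```python
from scipy import stats

# Hypothetical paired scores for two variables, e.g. hours of reading
# per week and a holistic essay score for the same ten students.
reading_hours = [2, 5, 1, 7, 4, 6, 3, 8, 5, 2]
essay_scores = [55, 70, 50, 85, 65, 80, 60, 90, 75, 58]

# Pearson correlation gives the strength of the relationship, its p-value,
# and (squared) the proportion of variance explained.
r, p = stats.pearsonr(reading_hours, essay_scores)
print(f"r = {r:.2f}, p = {p:.4f}, variance explained = {r**2:.0%}")
```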
What do quasi- and true experiments seek to demonstrate, what does a "control group" have to do with subject selection, and how do "independent" and "dependent" variables impact data collection and analysis?
Quasi-experiments are useful when researchers have no way to randomize groups, as in a classroom setting where the groups have to stay intact. Independent and dependent variables impact data collection insofar as pre- and post-tests are used to account for threats to internal validity. Hypotheses are applied so that the nonrandomization can be formally addressed.
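As an illustration only, here is a sketch of one common quasi-experimental analysis, an ANCOVA-style regression that compares post-test scores between intact groups while adjusting for pre-test differences; the scores and group labels are invented.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pre/post test scores for two intact classroom groups
# (no random assignment, so the pre-test helps control for initial differences).
df = pd.DataFrame({
    "group": ["treatment"] * 6 + ["control"] * 6,
    "pre":   [60, 55, 70, 65, 58, 62, 61, 57, 68, 64, 59, 63],
    "post":  [78, 70, 88, 82, 73, 80, 66, 60, 72, 70, 62, 68],
})

# ANCOVA-style model: post-test predicted by group, adjusting for pre-test.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(model.summary())
```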
"A Factor Analysis of Barriers to Effective Listening"
1. Purpose / research question
Golen's purpose is to determine which barriers are perceived to be the most frequently ecncountered that may affect listening effectiveness among business college students. Therefore, one of the things Golen wanted to know was whether or not the listening behaviors identified by previous research was useful. Golen found that there were "some similarities in listening behaviors across the studies . . . however, no studies identified any factors or dimensions of listening barriers" (27). More specifically, Golen wanted to know: what do students perceive as the most frequent barriers to listening; what are the most frequent barrier factors to listening for students; and how do listening barrier factors differs among students based on demographic variables.
2. Subject selection
Students were selected from a major soutwest state university and participants were made up of 400 students from 3 large business communication lecture sections, all of which were taught by the same instructor. The majority of the students were between the ages of 19 and 21.
3. Data collection
According to Golen, 279 students participated in the study (n=279). They were asked to respond on a Likert scale from 1 (meaning "most of the time") to 5 (meaning "never") across a continuum of responses to whether or not their listening was impeded or inhibited.
4. Data analysis techniques
In order to interpret the data, Golen decided to cross analyze the demographics of the particpants acording to majors, age, and sex. Golen was able to find that the only demographic variable affecting listening barrier factors is sex. You guessed it: Men don't know how to listen. lol.
Brenton Faber
"Popularizing Nanoscience: The Public Rhetoric of Nanotechnology, 1986-1999"
1. Purpose / research question
Through this study, Faber wants to know how popular media influences the way nanotechnology is framed in general periodicals such as newspapers and magazines. The question up for consideration is how the emerging technological field is presented in the popular science . In other words, how are already well established popular understandings of science exploited for the mediation of the introduction of this branch of science to the public?
2. Subject selection
Brenton Faber conducted a search for the words, "nanotechnology" and "nanoscience" through his university database, which generated 885 hits on the subject. They appeared in publications like Newsweek, Time, and Popular Science. Many of those articles were duplicated in the press and so he finally chose 203 articles.Faber identifies 1986 as the date when the first discussions of nanotechnology appear in the public media. Faber chose the date of 1999 because it marks a significant date of the early emergence of the field.
3. Data collection
Faber's research revealed that much of the research was represented to the public interms of science fiction, cybernetics, possible inventions, biographies of the scientists, and foreign competition.
4. Data analysis techniques
He notices that the representation of these scientific fields related to nanotechnology changed over time. Faber calls these temporal findings. This, for Faber, was important since it helped popular science writers present the findings of nanoscience into scientific terms already understandable to popular science audiences.
John M. Carroll, et al.
"The Minimal Manual"
1. Purpose / research question
The research question was based on what types of rhetorical/communication strategies worked best for conveying computer operation skills to general consumer audiences. There were two studies. One was made up of 19 subjects observed for this study. Nine of them were given a typical "owner's manual" to read, whereas the other ten were provided with a "minimal manual." Unlike the traditional ownver's manual" type instruction book, the minimal manual was more streamlined and user based. The second study was comprised of 32 subjects.
2. Subject selection
It is difficult to tell how the subjects were chosen. Randomly? From intact groups? It is clear that the subjects were familiar with clerical office work.
3. Data collection
According to the study, partipants were asked to complete training and performance across 8 different word processing tasks. The tasks involved formatting procedures, as well as storing (saving) documents on diskettes. Specifically, they were asked to: create and print a letter, created and print a bulletin, and revise a bulletin.
4. Data analysis techniques
It was found that students using the minimal manual required 40% less learning time.
Barry M. Kroll
"Explaining How to Play a Game"
1. Purpose / research question
The researcher wanted to know if students' informative writing skills could be generalized across age groups. In other words, they wanted to now if there was evidence of "growth" of students' informative writing skills.
2. Subject selection
There were 86 students from three age groups. The grouped ages were from 7, and. There were also 7 th and 11th graders, as well as college freshmen included in the study. The students were asked to explain in writing the instructions to 57 different games.
3. Data collection
Students learned how to play the games and then were asked to write game instructions. Kroll devised a way to see if students emphasized scoring procedures, game objectives, orienting information, or whether or not he explanatory approach was taken.
4. Data analysis techniques
Kroll determined that 7 and 9 year old provided the least informative insturctions. As students became older, they adopted an increasingly conversational tone in their game instructions. Ninth and 11th graders exhibited almost identical skills. Students found the excercise required for the study challenging and interesting.
Notarantonio & Cohen
"Effects of Open and Dominant Communication Styles on
Perceptions of the Sales Interaction "
1. Purpose / research question
The researchers wanted to know how communication styles influence student perception of sales effectiveness.
2. Subject selection
The subjects were selected from 80 undergraduate business administration majors at Byrant College. The study included 58.7% females ranging from ages 17-21, 73% of whom wre 18 years old. The study was comprised of 92.1% freshmen.
3. Data collection
Videotapes of sales associates talking about themselves. The researchers were categorized as open and dominant based on how persuasive they were in talking about themselves and how much they monopolized the conversation that was taped. Each tape had 4 different types of sales communicators selling programmable stereo systems. Subjects were randomly
4. Data analysis techniques
Researchers saw no significant difference between salespersons who weere very open and those who were dominant. In fact, the ones who dominated the conversation by talking about him/herself were least effective because they spent less time discussing stereo equipment.
What are the appropriate purposes for quantitative descriptive studies, how are subjects selected, how is data collected and analyzed, and what kind of generalizations are possible?
According to Lauer & Ashe, "the conclusions of quantitative descriptive research indicate the calculated strengths or weaknesses of relationships among variables. Reseachers can also determine significance and the amount of variance explained" (96). This kind of research is descriptive and not experimental because no control groups are devised. Subjects ought to be chosen based on how appropriate they are according to the variables and based on their availabilty.
What do quasi and true experiments seek to demonstrate, what does a "control grop" have to do with subject selection, how do "independent" and "dependent variable" impact data collection and analysis?
Quasi-experiments are useful when researchers have no way to randomize groups, like in a classroom setting when the groups have to stay together. Independent and dependent variables impact data collection insofar as pre and post tests are used to account for the influence to internal validity. Hypotheses are applied so that the nonrandomization can be formally addressed.
Friday, February 27, 2009
ethnographies -- week 8
I. Kathleen Casey. “The New Narrative Research in Education”
1. Purpose / research question
Casey's qualitative descriptive study surveys the entire environment of narrative research, including autobiographies, biographies, autoethnographies, oral histories, life stories, etc., which Casey identifies as an interpretive posture derived from a 1970s cross between the expressivist and the social-epistemic. Casey wants to know whether the criticisms of solipsism, emptiness of meaning, and fragmentation can be overcome so that narrative research can effectively be employed for a social-epistemic reassertion of the subject, as in feminist scholarship.
2. Subject selection
Casey collects writing samples of three types of narrative research and theoretically suggests accounts of why these three approaches occur. The approaches are identified as "problematics" in three strands, which, Casey maintains, simultaneously draw the field together and pull it apart:
Existentialist – loss of self, alienation from the group
Political commitment – oral histories – a Marxian approach where social progress occurs on contested ground and is told in the voice of "the people" – the phenomenon of testimonios in Latin American liberation pedagogy, for instance
Postmodernism – plasticity of the subject, e.g. Black and Latino gay culture melds into Madonna-identified celebrity voguing – this plasticity blocks materialist analyses that might trace drag ball culture back to the Harlem Renaissance.
3. Data collection
The method for collecting data follows the protocols of the traditional literature review.
4. Data analysis techniques
Scholars are finding that their interpretations of narrative research are constrained or limited by their ideological positions, much like their anthropologist counterparts. The researcher reports results for further scrutiny. Casey generalizes across all genres of narrative research.
II. Margaret Sheehy. “The Social Life of an Essay: Standardizing Forces in Writing”
1. Purpose / research question
This ethnography wants to shed light on standardization and draws its theoretical questions from a Foucauldian analysis. At the same time, from a Gramscian view, students at Sanders develop discursive strategies for overcoming barriers to citizenry, barriers that Sheehy believes standardization attempts to foreclose. Sheehy wants to know what standardizing interventions teachers can make to effectively teach students this "gatekeeping" exercise. She finds the five-paragraph essay to be a useful strategy taught to the students, though students devised ways to break away from this master structure and "did some funky stuff" (360).
2. Subject selection – environment and culture
The students come from Sanders middle school, where there is a history of intergenerational poverty and low academic achievement; "half the adult population has not completed high school" (336). According to this study and its Foucauldian analysis, essay writing is viewed as a "gatekeeping" exercise and so is considered a useful rubric for contextualization.
3. Data collection methods
The method used in this ethnography involves observation and participation, which Sheehy acknowledges was inconsistent and contingent. She realizes that her work as a participant observer for the whole group was awkward for both the students and herself, although she developed an easy rapport with some of the students in her focus groups. Audio and video tapes were made during student group work and transcribed; altogether, 600 pages of transcripts were collected.
4. Data analysis techniques
This ethnography concluded that even as students successfully integrated the standardizing techniques they were taught, they also broke out of strict structures in order to articulate a sense of social subjectivity and community agency. Sheehy drew this connection from the comments she received from the students' essay/speech audience, including teachers and general members of the community – not to mention Sheehy herself, whose conclusions seemed to draw heavily from her theoretical, i.e. ideological, leanings.
III. Anne Beaufort. “Learning the Trade: A Social Apprenticeship Model for Gaining Writing Expertise”
1. Purpose / research question
The theoretical framework in operation derives from Dell Hymes' sociolinguistic approach, which holds that social/political context shapes how individuals position themselves in a writing task relative to the larger goals and values of the discourse community in which the writer considers herself to hold membership.
What is more effective: process and feedback, based on expressivist approaches, or dialogical communication, based on social constructivist approaches? How can these approaches be combined to create the best pedagogy for helping novice writers adapt to professional discourse communities? How can students develop agency over their writing so as to gain greater control over their process as professionals, control that can be applied to membership in their respective discourse communities?
More specifically, Beaufort wanted to know: What differentiates simple tasks from more complex ones? How or what determines writers' roles in a given community? What methods of socialization of writers new to an organization occur, and to what effect?
2. Subject selection – environment and culture
Subjects were selected from an urban community and were participating in a work and professional training resource program. Most participants were from single-headed households and were employed in or undergoing training for clerical work in business and medical settings or administrative work in the hospitality industry. However, the two participant "informants" chosen for the study were particularly high achieving, with fairly well developed writing and communication skills.
3. Data collection methods
Both women were interviewed almost weekly, and the interviews were captured on audiotape. Observations of program participants taking part in both formal and informal activities, including impromptu conversations, formal interviews, and instruction and feedback on writing, gave the researcher an overall context for the field inquiry. The notes taken during interviews and the professional writing done by the two informants, however, provided the details for the ethnography.
4. Data analysis techniques
To arrive at somewhat generalizable results, the researcher applied the findings toward a framework for how writers make the leap from one type of writing to another, more professionalized set of purposes for writing. Although the research did not produce reliability or validity applicable to a generalized result, Beaufort determined the observations regarding socialization and writing professionalization to be useful for formulating a theory of process.
IV. Stephen Doheny-Farina. “Writing in an Emerging Organization.”
1. Purpose/ research question
How do software company executives process the writing they produce for vital company documents? The theoretical assumptions grounding the research question take into account that rhetorical discourse is situated in the here and now and is based on purpose, available means of persuasion, audience, etc. A main assumption of this research is that the software executive is a rhetor who determines the best means of persuasion based on factors outside of his control. The researcher brings to the ethnography his own set of rhetorical assumptions, namely that the microcosmic observations made can be applied to an understanding of the rhetorical culture of the executives. While the researcher values the subjects' interpretations of their own meanings and motives for the company documentation they create, he assumes a diversity of meanings and motives among his study participants, viewed from a distant though empathetic standpoint.
2. Subject selection
The five subjects were selected from 25 full-time executives employed by Microware, Inc.
3. Data collection methods
Interviews, informal conversations, and observations of staff meetings were conducted. Field notes, tape recordings, open-ended interviews, and discourse-based interviews about the process of documents from first draft to final product were used to triangulate the study.
4. Data analysis
The researcher determined the executives had different ways of articulating their process. They saw the writing process either in a production capacity or as a collaborative effort. Doheny-Farina sees this as useful to understanding the rhetorical activities of social and organizational contexts in executive writing functions and therefore applicable to theory building, teaching, and further research.
V. Leon Anderson. “Analytic Autoethnography”
1. Purpose/ research question
How does a researcher use her experience as a self-avowed and duly appointed member of a discourse community in such a way that she offers research that is clear and reliable, despite the self-reflexive demands that are made of the autoethnographic subject? How does this qualitative method militate against postmodern frames or heed the calls for objective research?
2. Subject selection
A history of autoethnography is provided, and analytic autoethnographies are said to exhibit five key features: complete member researcher (CMR) status, analytic reflexivity, narrative visibility of the researcher, dialogue with informants beyond the CMR, and a commitment to theoretical analysis.
3. Data collection methods
Each autoethnographic feature was separately described in rich detail and examples were provided as evidence.
4. Data analysis
The assertion was made that social science research methods are ever evolving and expanding. It was further argued that "non-traditional" research methods should be incorporated into the field alongside traditional empirical methods.
VI. Carolyn Ellis. “Shattered Lives: Making Sense of September 11th and Its Aftermath”
1. Purpose/ research question
How do everyday stories make sense of the historical touchstone of 9/11? In effect, how does a historical predicament coalesce into meaning when separate, seemingly disparate stories are told?
2. Subject selection
The researcher selects the personal narratives of herself, her brother, her mother, and her mother-in-law.
3. Data collection methods
The personal experiences of the author and the author's loved ones are disclosed as a means to elicit the telling of more stories in order to make sense of what happened on September 11, 2001.
4. Data analysis
The analysis of the tragedy is ongoing as the stories continue to be told. So far, according to Ellis, there is only the certainty of loss and unpredictability.
So, finally, the questions remain: What distinguishes ethnographies from case studies? How does "triangulation" impact data collection and analysis? And what must ethnographers do to ensure their work is both reliable and valid?
In ethnographies the results are almost never generalizable. In fact, since the methodology follows the form of the anthropological study, usually the only components that can be examined for reliability are the researchers' attitudes and assumptions. Triangulation attempts to provide valid results by applying numerous cross-cutting methods for gathering what can be known. Ethnographers must ensure that their work is valid by continually reevaluating their methods and ideologies.
Saturday, February 21, 2009
week 7 surveys
Blog Question: What are appropriate purposes for surveys, how are subjects
selected, how is data collected and analyzed, and what kinds of
generalizations are possible?
The appropriate purposes for surveys are describing large groups or populations -- whether those populations are composed of composition courses, English teachers, or students. Surveys help researchers manage large and otherwise unwieldy data. Subjects are selected by drawing a sample and by considering the question of feasibility; this can be done through random sampling, quota sampling, stratified sampling, or cluster sampling. A researcher must be careful about the conclusions drawn and generalized from the results. Data are collected and analyzed around the few features of interest to the researcher that are present in the large group. Researchers look at nominal data obtained by counting whatever feature is of interest, such as the number of comma splices or the number of part-time composition teachers. Researchers may also analyze interval data, such as percentage or scale ratings gathered from large groups, or rank order data, in which results are based on hierarchically assigned ranks. Data analysis can also be based on the mean, the average value of whatever is being measured. Generalizations are more possible in sample surveys than in case studies because surveys provide a method for deriving reliable, representative, descriptive data about large populations while the number actually analyzed is reduced to a manageable size through sampling procedures.
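For illustration, here is a small pandas sketch, with an invented sampling frame, of two of the sampling procedures mentioned above: a simple random sample and a stratified sample.

```python
import pandas as pd

# Hypothetical sampling frame: 1,000 composition students with a
# "level" variable to stratify on.
frame = pd.DataFrame({
    "student_id": range(1000),
    "level": ["freshman"] * 600 + ["sophomore"] * 300 + ["junior"] * 100,
})

# Simple random sample of 100 students.
random_sample = frame.sample(n=100, random_state=1)

# Stratified sample: 10% from each level, preserving population proportions.
stratified = frame.groupby("level").sample(frac=0.10, random_state=1)

print(random_sample["level"].value_counts())
print(stratified["level"].value_counts())
```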
Friday, February 13, 2009
week 6 case studies
Blog Question: What are appropriate purposes of case studies, how are
subjects selected, how is data collected and analyzed, and what kinds of
generalizations are possible?
Case studies are appropriate when the researcher wants to identify new variables and further research questions. Subjects are selected based on the categories (activities, processes, demographics, conversations, etc.) being investigated.
Data is collected by setting up and labeling categories, that is, by conducting a "content analysis." Therefore, coding becomes very necessary to the process. Generalizations are not actually possible: the data gathered are uniquely specific to the case observed, and results can only be applied under limited circumstances. Nonetheless, what is learned from a case study is applicable to developing theories of behavior and process for use under similar circumstances. With a case study, practices can be analyzed and manipulated in order to adjust them for improved research applications. In the humanities and social sciences, case studies have proven useful because they offer new data for the analysis of human thought and endeavor. Case studies are useful in testing pedagogy, especially when they expand on previous research questions and are applied under the same (or very similar) circumstances.
For this reason, it is important that researchers clearly articulate the characteristics of the subjects and the circumstances under which they are being observed. Data can be collected through interviews that are recorded by notes, and video or audio recordings.
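A minimal sketch of the coding step described above, using invented transcript codes, to show how a content analysis might tally categories for a single case.

```python
from collections import Counter

# Hypothetical codes assigned to transcript segments during a content
# analysis of one case study participant's interviews.
coded_segments = [
    "planning", "revising", "planning", "audience awareness",
    "revising", "planning", "tool frustration", "audience awareness",
]

# Tally how often each category appears to see which behaviors dominate the case.
print(Counter(coded_segments).most_common())
```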
Sunday, February 8, 2009
week 5 IRB
Question: How does conducting research on the Internet impact the ways that researchers must deal with human subjects?
When considering how the internet impacts the ways researchers deal with human subjects, several different considerations should always be taken into account. The first has to do with database privacy. For instance, the Human Genome Project has a database of names and identities associated with genetic coding that is completely open source. Of course these subjects have all agreed to have their genes mapped for scientific posterity, but what about those who wish to participate in the mapping project but wish to keep their information private?
Well, this is a problem. Many people will probably opt out of human subject research over privacy concerns. As this type of information becomes available to the wider public, how can we ensure that databases are used and disclosed ethically and responsibly? How will this impact the terms of health insurance and hiring practices?
Also, the chance of involuntary disclosure is likely to increase when internet technology comes into play. The whole question of internet research might challenge our notions of sample accuracy, as study subjects may increasingly be drawn from self-selecting, technologically connected populations. On the other hand, internet research could decrease or eliminate the need for actual subjects altogether, as the potential for online simulation is tapped.
It's clear that the internet poses new ethical challenges that the laws and regulations governing human subjects research will have to catch up with as problems are addressed.
Saturday, January 31, 2009
week 4 measurement
Blog Question: What distinguishes Quantitative from Qualitative designs,
what is the difference between “validity” and “reliability,” and what is
meant by the terms “probability” and “significance?”
According to Goubil-Gambrell, quantitative research tries to attach numerical values to variables, populations, samples, etc. and to show their relationships to each other. When conducting quantitative inquiry, researchers perform experiments that manipulate the variables under analysis in order to establish causal links. Qualitative research is descriptive and identifies variables in light of the circumstances that frame inquiry. Lauer and Asher add that qualitative research is not concerned with treatments or with manipulating control groups.
Frederick Williams offers an analysis of validity and reliability and provides key distinctions. The main difference is that validity requires some standard outside the measure itself against which comparisons are made, whereas reliability is concerned with "comparing a measure with itself"; one does not necessarily imply the other. In other words, validity questions the fitness of a researcher's tools for measuring the thing he or she claims to measure, while reliability has to do with replication, with whether the measure is consistent across its component "subparts."
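To make the reliability side concrete, here is a short sketch of one common internal-consistency measure, Cronbach's alpha (my example, not Williams'), computed on invented survey responses; it "compares a measure with itself" by relating the item variances to the variance of the total score.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item survey answered by 6 respondents.
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [3, 3, 3, 4, 3],
    [5, 5, 4, 5, 5],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```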
Probability refers to the likelihood that an outcome will occur, whereas significance is a judgment about whether an observed result is unlikely to be due to chance alone. In the words of Lauer and Asher, "significance in statistics is a statement of the degree of rarity of a result based on chance probability alone. In almost all cases of statistical analysis, the calculated statistic is compared with standard, tabled values of chance distribution of that statistic. If the calculated value is sufficiently larger (for almost all statistics) than the tabled value, the result is called statistically significant, meaning that the statistical relationship between two variables observed is unlikely to have occurred simply by random chance alone. Significance is declared usually at a probability (p) level of five or one percent" (287, emphasis mine).