Saturday, December 26, 2009

Launching the next phase--Lit Review Part II

I'm ramping up to dig into the second half of my lit review. I certainly hope it doesn't take as long as part I. I have this feeling that I need to charge through this draft precisely because it is a draft. Also, I need to move more quickly because what will end up going into my actual dissertation will be considerably different and shorter. The style of writing will be different. Right now I am writing in-depth summaries and considerations of key points. That's OK, but it isn't the synthetic narrative of the dissertation. If my draft one ends up at 50K or 70K words, my actual lit review in the diss will end up being about 12.5K words (50 pages double-spaced, at roughly 250 words per page). Even if I stretch it to 100 pages, the lit review would be only 25K. I will have a lot of conversion to do; however, I am finding this detailed review of the research and literature enormously helpful in expanding my understanding.

As I dive into this next phase, I am pulling together my sources. It is nice to be focusing on writing! I've started by looking at my summaries of research on rhetorical reflection. Now is the time I can build on previous work--thank goodness. I also have a notebook (actually notebooks) with all the various articles. I need to find my close summary notes on Yancey's book. They are in my box somewhere.

I have a few thoughts as I look at the lit on reflection research. First, there is a lot of good material. At the same time, many studies are weak in rigor, and I've thought about developing some kind of star system for rating research. I have a number of one-star studies, but I have what I have.

Some impressions--
A number of studies suggest that ability and proficiency with reflection leads to improvement of some kind or correlates with superior ability. Sumsion has probably the most interesting things to say because she is critical of reflection. Her study doubts reflection can be quantitatively measured; it isn't suited for that kind of evaluation. She also noticed that students can be reflective and yet still not academically able. I have certainly seen that with a noticeable number of students: they can write a beautiful reflection in their final portfolio, but their actual writing performance does not match the sophistication of their reflective awareness.

I see lots of influence from Flavell and his notion of metacognition. I struggle in my own mind to pin down a definition of metacognition, or rather I struggle to match exact forms or expressions of thinking with what is "metacognitive." How do I distinguish the forms of thinking we might label "metacognitive" from those we should label "reflective"? Should I distinguish them, or can I lump them together? Flavell asserted that metacognition could be learned, that it would improve with training. Some research has tried to validate that assertion, and there is a group of findings that characterize reflection in the same way--that reflection is a learned behavior/skill. I might mention that this school of thinking counters the findings of King and Kitchener, who believe reflective thinking is developmental.

I also see two different views of reflection. One view derives from Dewey and presents reflective thinking in qualified terms: reflection is triggered by a problem, exists within an ill-structured situation, and is inquiry based upon seeking a solution to the problem. David Boud offers another school of thinking about reflection with a broader definition. In this sense, reflection is a form of thoughtful processing of experience with the goal of gaining better understanding, which, it is assumed, leads to improved practice. Boud has a general practice orientation. I don't think these definitions or perspectives need to be mutually exclusive. Boud's allows for what McAlpine saw: practitioners engaged in reflection-in-action didn't always reflect around a problem; they could also trigger significant reflection and resulting action around possibilities.

So much for the moment. I am presently gaining a perspective on the terrain of all this literature and scholarship. I'm gathering all my lego pieces. Once I have them together, I will chart out my game plan for writing and begin.

Wednesday, December 23, 2009

Persuasive Developments: Reflective Judgment and College Students' Written Argumentation

2003 Dissertation by Amy Overbay--available

Excerpts from Conclusions Section

"However, the majority of the essays written by both groups of participants used one-sided positions, and did not examine or respond to objections in a sustained way. In most cases, participants used evidence that was not examined critically, and offered unqualified claims. Participants in both groups appeared reluctant to concede contested points, and in the majority of their essays failed to address the fundamental conflict underlying the rhetorical problem. These characteristics have been identified in other studies of students' persuasive writing (Crammond, 1998; Hays, 1988; Hillocks, 1995)" (202).

"The results of this study provide substantiation for Davidson et al.'s (1990) prediction that reflective judgment may play an important role in how some students construct solutions to the dilemmas they face when writing arguments. Given these findings, assuming that all freshmen come to the first day of classes equipped with the necessary repertoire of cognitive skills for dealing with ill-structured problems in writing may problematize their ability to produce the kinds of arguments we want them to write" (207).

"The instructor in this study voiced a widely-held belief that students’ difficulties with written arguments pertained to their lack of preparation for college writing, or to their lack of intelligence. The possibility that some writing behaviors may be related to the developmental nature of students’ beliefs about knowing and justifying provides an important alternative explanation for instructors searching for ways to clarify for students what they expect from them" (208).

*************************************************

Discussion and Implications of Overbay's Study

I have taught Freshman Composition II, the class that focuses most explicitly on academic argumentation, for almost twenty years, and I have from the start noticed the difficulty freshmen have with what I term the "critical essay." Students who might do well in Freshman Comp I, where the writing is more expressive in nature, fall flat on their faces when confronted with the task of forming and supporting an argument. From the start, I have noticed the high failure rate (if I could call it that) on the first essay, which typically has been to form an argument supporting an interpretation of a work of literature, so I have my students rewrite the first paper. Overbay states that most instructors believe that lack of familiarity with academic conventions or a student's intelligence are the prime causes of this difficulty in writing. However, her research confirming the presence of the expected stages of development within freshman writers' arguments indicates that cognitive development issues may be more important. Students are not ready for the kind of reflective thinking we set as our learning objectives for this kind of writing assignment. It is like we are asking them to jump and touch a ten-foot-high basketball rim when they are only able to jump and touch a six-foot rim.

Two things about these findings jump out at me:
1. Deterministic view
If we carry these findings too far, we fall into what we might call a materialistic view of human behavior (in this case, the learning and performing behavior of freshman writers). The hard-wired nature of the mind's development then shapes what these individuals are able to think and do in their writing tasks. For those of us who subscribe to the outside influence of society and language upon our thinking and consciousness, this sort of fundamental determinism operating below the level of social influence is disturbing and hard to swallow.

2. Modifications
Maybe we can see the teaching practices that have been labeled "current-traditional" in a new light based upon these findings. In many ways, the formalist impulses of current-traditional pedagogy might be seen as modifications made by generations of teachers to create writing tasks that are more developmentally appropriate for students at this developmental level. Such formalist tasks tend to make writing into a "well-structured" problem, whereas the new rhetorical pedagogy that emerged in the 1960s emphasized the rhetorical, and thus "ill-structured," aspects of the writing task. Perhaps freshman writers are not yet ready for the full blast of rhetoric's ill-structured nature? We might ask why, after all these years of critique, writing forms such as the five-paragraph essay persist. The work of King and Kitchener may offer an interesting explanation in this sort of writing's developmental appropriateness.

In my own thinking about reflection, I have come up with a number of different metaphors to explain some of my assumptions about its nature and role in learning and writing. My favorite is the Superman telephone booth. Clark Kent sees a crisis or problem, jumps into a nearby phone booth still clothed in his newspaperman suit, and then after moments emerges as Superman in his Superman suit. Reflection is like that phone booth--students enter it and become transformed. The act of reflection is some kind of catalyst for change or development in thinking (and by extension action). It is tempting to leave the booth as a black box where unknown and unidentified things happen, but many thinkers on reflection have anatomized the thinking that goes on within the reflection telephone booth. This reflective thinking is described as a method, or even a sort of formula, or perhaps you could call it a dance. We can map this thinking, and it exists as a static model representing a form of mental activity.

The assumption has been that if we could only engage students in this type of thinking (because it is out there as a method or model to perform)--if we could only shove them into the Superman telephone booth--good things would happen. The magic powers of reflection would create change and transformation and all sorts of good things.

King and Kitchener's work, as well as Overbay's, tells us that we might shove students into the telephone booth, but they are not able or ready to engage in the kind of reflective thinking we assume they will do. No wonder we are disappointed in the kinds of reflections our students do. No wonder we don't see the results of this kind of reflection that we might expect. No wonder our students don't all suddenly have Superman capes and fly through the sky after we ask them to reflect.

My sense is that the takeaway from asking students to engage in reflection is more than what can be summarized in the capacity of their epistemic cognition. As Boud has discussed, the outputs from reflection are multiple and varied. However, I believe this research is significant and can help me better understand what expectations I might have for my students' reflections (as well as their abilities as thinkers and writers in my class).

Sunday, December 20, 2009

2009 Dissertation Progress Report


1/5-1/9          Qualifying Exam taken

1/20             News of passing Qualifying Exam

2/20             Dissertation Proposal submitted (accepted)

3/13             Paper presentation at 2009 CCCC in San Francisco, "Researching Rhetorical Reflection" (involved an updated review of research)

March-May        Review and preparations for engaging in Grounded Theory research (see blog posts: http://thespeculum.blogspot.com/)

5/15             Slice 1 of data analysis completed

5/31             Slice 2 of data analysis completed

6/13             Slice 3 of data analysis completed

6/15-8/4         Work on Lit Review

8/5-8/15         Slice 4 data analysis, Phase I

8/15-10/7        Work on Lit Review

10/7-11/9        Slice 4 data analysis, Phase II (completed); Open Coding and identification of categories and properties nearly complete

11/10-           Return to work on Lit Review
*************************************************
Additional professionally related tasks:

Spring 2009—acted as peer reviewer for two articles submitted for publication to Voices from the Middle
Spring 2009—twice revised and resubmitted a Writing Program Profile for Composition Forum. Article finally accepted and published in the June issue: "[...] Program at Eastern Michigan University." Composition Forum 20, Summer 2009.
July 2009—led 1-week Open Institute on College Readiness for the San Antonio Writing Project (19 high school teachers attended)
Summer 2009—textbook chapter accepted and draft written for Writing Spaces: Readings on Writing (Edited by Charlie Lowe): “What is Writing? What is ‘Academic’ Writing?” Draft available
Dec. 2009--peer reviewed an article submission for CCC

******************************
Details on Lit Review and Research Work

Literature Review progress
As of 12/21, I will have completed my review of scholars and research on rhetorical reflection OUTSIDE of Composition/Rhetoric: predominantly the work of Dewey, Moon, Mezirow, Boud, Schon, and King and Kitchener. The draft of this section of my literature review is approximately 35,000 words.

Starting promptly on 12/22, I will begin my review of the scholarship and research on rhetorical reflection within the field of composition/rhetoric, or writing studies broadly speaking. The general areas I will cover are reflection and composition, cognitive views of the writing process, student self-evaluation/assessment, and revision.

Projected completion date for Lit Review Draft #1:  March 15th

Research Work progress
For now, I plan to focus on the lit review. If I bog down, I may take a break by turning to Slice 5 analysis, but in all likelihood this analysis will begin in mid-March. This last slice of open coding will take a "draft cycle" view of eight to twelve "cycles," each containing these items: draft #1, peer responses on draft #1, Writer's Review of draft #1, and draft #2 (or it could be between drafts two and three).

***********************
Projected Timeline of Work
Goal: To have my lit review and open coding completed by the May Seminar. Focus at the May Seminar will be Axial Coding and revision plans for the Lit Review. It would be nice to have the data analysis completed earlier, though. The projected timeline is deliberately conservative.

Late Spring-Early Summer: Complete research data analysis (Axial/Focused)

Summer: Submit draft chapters of Dissertation

Fall: Drafting and revision of chapters

Late Fall 2010-Early Spring 2011: Dissertation Defense
Wednesday, November 4, 2009

More on slice 4

Once again, it is slow slogging with my data analysis. Two hours here, one hour there. Chip. Chip. Chip. Slog. Slog. Slog. I believe, however, that my categories are coming together, as well as the various dynamics of relationship between them. Wow! I can't believe it is happening. I think I pretty much have categories for describing everything that is happening in these writers' reviews. This emergence of categories has been happening from the beginning, but I had so many different names for things and so many different "things" going on that I was not able to spot the groupings of these things. (I don't like the phrasing "things," but it is OK for now.) I'm struggling now with how so many of these categories have valences or variances. For instance, one category is goals or setting goals. This could be writing goals or revision goals: one is abstract and the other is specific to the task. Are these two separate categories or one category with two valences? This variance among categories also gets compounded when the categories are in relationship with each other. For instance, I have a category named "considering/evaluating what is." The "considering/evaluating" is a two-fold variable and the "is" is a much larger variable, so it could be (see the quick sketch after this list):
considering writing goals
considering revision goals
evaluating writing goals
evaluating revision goals
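To keep these combinations straight for myself, here is a minimal sketch (Python, purely illustrative--the variable names are just my working labels, not settled codes) of how the two variables multiply out:

    from itertools import product

    # Working labels only: a two-fold "action" variable and a (here only
    # partially listed) "object" variable for what is being acted on.
    actions = ["considering", "evaluating"]
    objects = ["writing goals", "revision goals"]

    # Each pairing is one possible expression of the category
    # "considering/evaluating what is."
    for action, obj in product(actions, objects):
        print(action, obj)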

In the big picture, this list describes two larger categories in dynamic with each other, but am I describing the variance within these categories correctly? I am already feeling the complicated ways in which these categories relate to each other, but then that is the goal of Axial Coding. Perhaps it is good that I am beginning to move into Axial land as my categories begin to solidify.

I'm wondering now whether NVivo will become a good tool to use, and whether I can get it to represent digitally the complicated dynamics within this phenomenon so that I can code easily. That will come, I think, as I do one more slice of data--slice 5.

What will slice 5 be?
I think slice 5 needs to be "draft cycle" views--that is, everything surrounding a single draft and then what results in the next draft. Question: Do I include the previous writer's review or not? I don't know.
So if I were focused on draft 2 in an essay cycle:
(Writer's review 1.1?)
Draft 1.2
Peer response 1.2
Writer's review 1.2
Draft 1.3 (look at the changes and connect them back to draft 1.2 and reflective processing)

Although it may be tempting to include the 1.1 Writer's Review, I think it will be cleaner to focus on the draft cycle materials without it. I will get to see just what role the writer's review plays in the dynamic of draft to draft. How many do I sample? 4-6 from 1301 and 4-6 from 1302?

How do I code these? I think I should attempt to "code" them with my categories and see what sorts of tensions and problems I experience with the codes. What don't these categories capture? What other categories do I need? Hmm... We'll see.

I still have a bit of a ways to go with slice 4--I NEED to finish this weekend. We shall see what my continued analysis shows and what I figure out from my final slice 4 memo.

Slog slog

Wednesday, October 21, 2009

Processing Slice 4

[Image: the full context for Writer's Reviews and rhetorical reflection]
Slice 4 of my data analysis has taken complete sets of drafting-texts for three essay cycles. This involves each draft, DI critique, peer response, and Writer's Review for every draft a student writes in a semester. It is a lot of data. The goal is to see these Writer's Reviews in context--that is, to take into account (as much as is possible with these artifacts) the situational conditions and influences affecting writers as they work on their drafts. To be sure, the view I have from this data is limited, but that is OK. I must remember and appreciate that all the data I have is a textual representation of thinking. Other verbal or non-discursive factors are not visible, so I must view and appreciate this data for what it is: written representations of the writer's thoughts and feelings, and as such, constructions. They are interesting because they are "textual makings" like the writer's essay. Grounded Theory, as a form of ethnography, is based upon close observation of phenomena. This close observation depends to a high degree upon the researcher's sensitivity to and understanding of the phenomena and the context in which they exist. That is what I am after with this slice: get the full context (as much as I can).

This slice gives me the fullest picture of Writer's Reviews in context. The image above provides the full picture of the context for Writer's Reviews and rhetorical reflection.

So what am I seeing in Slice 4?
It is extremely tedious and time-consuming to chart out all the details of what is happening in a draft. I am going through one student's work in 1302 and have finished with one essay cycle. I see so many factors involved that it is hard to nail them all down. Here is a semi-list:
--the writing task
--the student's understanding of the writing task
--the nebulous concept of "assignment success" and its rough relationship (open to interpretation) with "writing success"
--what the writer thinks/wants
--outside critique from DI/peers (what the teacher/peer thinks/wants)
--available content/knowledge from which to write
--the writer's proficiency and skill with "tools"

I want to write about this last item. In this particular case, it is apparent that the writer was limited in his researching skills. In every WR and almost every peer response, the need for more support and evidence is mentioned, yet the writer hardly brings in any additional research. If he had found support for his original position to keep the bases open, he might have written an essay supporting that position. But because he didn't have the evidence for one position and did for the other, he chose to argue against his original (and we might say true) position. What caused him to change? Transformation, right? I don't see this transformation happening inside the WRs--that's for sure. His 1.3 WR before his last draft states in the last line that he will argue for keeping the bases open, but his actual draft 1.4 is opposed to keeping the bases open. Why? In his 1.4 WR he says that he opposed the bases because he had more convincing data and evidence.

What happened? The goal/task in his mind was more a matter of "assignment success," which meant a convincing argument of a certain length. When he rattled around in his lego box of content and information on the subject, he didn't have enough pieces to fit together something that would reach assignment success, but he did have some good pieces to make an argument opposed to the bases. Hence, he took the expedient path toward assignment success and the grade. After all, the grade is what matters.

What role did Writer's Reviews play in this path toward his final draft? What role did WRs play in the relationship between thinking and action? For this writer in particular, I am conscious that these WRs are constructions themselves. You can see him consciously addressing the questions in the prompt and writing what he thinks he needs to write to "show off" his "good-studentness" to the teacher. He also takes a very deferential tone toward the process--looking for errors and jumping to say he will fix them and not do them (ever) again. You can see him in places reaching for things to write, particularly as far as grammar goes. I can't say he is the best subject.

What is important, however, is to see the elements at play. I hate to say that I am seeing Schon's four constants that affect reflection-in-action and eventual practice:
  1. media/tools to engage in task/reflection
  2. appreciative systems
  3. cognitive framework, schema, terministic screen
  4. understanding of role, role playing
Is this a warning? Am I beginning to bring outside theory into my interpretations? If so, I think I need to keep grounding those ideas in the concrete appearances in my data.

OK. Enough for now. More later on slice 4 as I look at this kid's second essay cycle.

Monday, May 18, 2009

Processing the Annual Review—2009

I want to write out some notes from my talk with my committee. Overall, my impression is that the actual researching (the doing of grounded theory) may be longer and harder than I envision. I got a better idea about the flow of doing the chapters, so I’ll summarize it below as a list:
• Come back to chapter 1 AFTER the research is complete
• Don’t wait until after research is complete to do the Lit Review section (this was something I had planned to wait to do)
• Chapter 3 may take longer than I anticipate because I may change things that I am doing
• Chapters 4 and 5 (findings and implications) will come more quickly after research is complete

We talked a fair amount about the timeline and the flow of work. First drafts will flow through Rich until they are acceptable to go out to the whole committee. I expressed the desire to try to get an entire draft completed first and then work on revision. Now that I look at it, this goal seems wrong-headed. First, it puts a massive text in Rich's lap, and second, moving chapter by chapter may be more realistic. Anyway, I have my first goal of getting a draft of the lit review done soon. I had the impression that the lit review is good for the entire committee. The most important thing about the timeline is that the dissertation needs to be all done at least one month before the defense (and preferably sooner than that). That means I should shoot for a January 1 completion date. Oh, my! I hope I have a productive summer.

We talked for quite a while about my research methodology. A big topic was the hybrid nature of my research: it is grounded in the traditional sense of analyzing slices of data, but hybrid in being able to go back to the massive database. One point Rich brought up that I thought was interesting is what a failed study would reveal. By failed he meant: what would be revealed if the findings from my necessarily narrow look at the database (I will actually be sampling a very small portion of what is available) were shown to be wrong or not to hold up? So much of our research in Composition is based upon drawing conclusions from small samples, and this result would indicate that this practice is perilous. We shall see. The big question is whether I will be able to take an emerged core category and query for it throughout the entire database. Right now, Fred says he doesn't know how to query the text inside the database. The labor involved in taking out the separate texts and putting them into Atlas or NVivo would be immense! Hmm... I wonder about NVivo. I believe it has a SQL database on the back end, so I wonder if I could import the entire database into NVivo? Now that would make NVivo worth using. I can hardly imagine that it can do this large-scale importing, but I'll check it out.
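To make the querying idea concrete for myself: if the texts could be exported to, say, a local SQLite copy, a wide-scale query for an emerged core category might look something like the sketch below. This is only a thought experiment in Python--the table and column names are invented, not TOPIC's actual schema, and "knowing what to do" is just my working category label.

    import sqlite3

    # Hypothetical local export of the database; the "documents" table
    # and its columns are invented for illustration.
    conn = sqlite3.connect("topic_export.db")

    # Find every Writer's Review whose text mentions the candidate core
    # category, across the whole database rather than a small sample.
    rows = conn.execute(
        """SELECT student_id, course, essay, draft
           FROM documents
           WHERE doc_type = 'writers_review'
             AND body LIKE ?""",
        ("%knowing what to do%",),
    ).fetchall()

    print(len(rows), "Writer's Reviews mention the candidate category")
    conn.close()

The point of the sketch is the back-and-forth it would allow: sample narrowly, let a category emerge, then check its reach across everything.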

Rich and I have talked about how the eventual methodology I will use will be noteworthy and worth an article when I’m all done. I have to admit that I don’t see it yet. I think it has to do with the back and forth of both sampling the database and then wide-scale querying of results inside the database. So it is how this database is used for research that is significant. That is what I am seeing at this point.

All in all, I am beginning to see the enormity of the work to come, like a large mountain to climb. I have to remember that mountains are climbed one step at a time. It helps to have a clear path, and though I feel pretty clear about my path, I need to clarify it more. And start walking!

Friday, May 15, 2009

Of first codings

I'm near the end of my first round of coding my first slice of data. I've been engaged in what is called micro-analysis, a line-by-line (even word by word) look at these textual artifacts. In this blog post, I want to write about my experience doing this micro-analysis (and not get into what I am seeing so far).

My initial task has been to underline sections of text and ask "What is going on here?" I'm looking at objects, events, acts, or happenings in the text. I've struggled mightily with the concept of categories—what is a category? That is what I am supposed to look for. Ian Dey specifically distinguishes them from names or labels. Thankfully, Strauss and Corbin have a helpful section on Open Coding. I looked at what I was doing, and I would say predominantly I was labeling what I saw. I had a lot of "-ing" verb phrases—showing evidence, affirming success, looking at draft, fitting in bounds. I see now that I was doing a lot of identifying and naming of what is happening for the students or in what they are doing.

The difficulty I have found is finding the right words for the labels. Often there are subtle shades of difference. What's the difference, for instance, between "showing evidence" and "offering specifics"? I'm using different language to describe the same essential thing. I saw a lot of places where students would point to some fault or problem in either their draft or in how they understood the assignment. Is it "admitting a fault" or "not fitting in bounds," or is there better language to capture this form of self-assessment? But, as this last sentence illustrates, I have begun to categorize (I guess that is what I am doing) what is going on. I have grouped both pointing to faults or errors and pointing to successes as a form of self-assessment. Or should I call it "assessing the performance"? The problem of finding the right language to describe and label what I am seeing is difficult. It all seems a muddle, and it gets more muddled if I use different language. I've tried a few times to put aside the data and code the next set of artifacts with fresh eyes to see if I use different language. More muddle.

Let me describe two other things I have done as part of this first level of micro-analysis. With almost all the artifacts, I wrote a short note describing in a larger sense what was going on in the writer's review and what I thought of it. I also tried something Strauss and Corbin suggest, and that is to dig deeply into key words. For instance, I took the words "I believe" and "I realized" and wrote a string of synonyms beneath them, trying to open up what the student may really mean when they say those words. This was helpful in places. I also noted what I considered "in vivo" names for what is going on. "Fit the assignment" is one example of what I marked as an in vivo term, though I have not decided to use it as a code yet.

Once I had coded all the artifacts, I took another blank sheet of paper and tried to determine classes of labels for what is going on. I was then able to fit a number of other labels within these classes. "Assessing the performance" became one where I could fit both self-assessment and peer assessment under the larger term. But things seem to be nested in loose ways, and how do I know when a sub-category should be its own category? For instance, I used the term "processing a problem" as a category, which I had also put as a sub-category under "assessing performance." Both involve self and peer assessment. So where does this self and peer assessment go?

From my literary analysis background, I'm used to doing close reading, and I find myself slipping into this mode. I am not sure how to describe it, but at a certain point, trying to process the muddle, I feel like I have abandoned the identifying of categories, properties, and dimensions.

I know that "constant comparative analysis" is a cornerstone of grounded theory, so I have attempted to do some comparing as I have reprocessed my initial micro-analysis. After I had spent some time pulling together a comprehensive list of labels and groupings of labels (categories?), I decided to zero in on one common move I saw students making: arriving at an end point generally described as "what I see now" or "what I realize" or "what I now know or know to do." Or even what I see to do next.

In writing a short memo about one of my artifacts, I saw that the student was telling a story, a narrative of sorts, of getting to this end point of what I labeled "knowing what to do." I'm not completely happy with the language of this term, but it works for now. Then I looked for this move or story in the different artifacts, finding it in pretty much each one. I diagrammed this narrative as a kind of flow chart. It seemed that there were different paths students took to get to this point of "knowing what to do." The diagram is messy at this point with some redundancies and I will next need to clean up this flow chart. What I am not sure of right now is whether I am on track by creating this diagram (is it an emerging theory) or am I going off-base. Drawing the chart along with examining the data made me see that this end point is really a middle point in some cases. Not only is there a sequence leading to this "knowing what to do" but there is a forward looking projected action in many cases, and these "what I will change" or do next are not all the same.

The goal of initial micro-analysis and open coding is to begin to identify categories and develop some emerging thoughts on theory (or what is going on here). I believe I have nearly accomplished this task with what I have done, but I'm uncertain of what to do next. How do I approach my next slice of data? I know that I need first to do some memoing to process my analysis so far to see where to go next. For me, especially, this reflective writing is very useful. I think I will do another round of micro-analysis on a set of writer's reviews done by students further along in their freshman comp sequence (1302 students) and see if similar patterns are evident or others are there.

I'm thinking right now that muddle is not all bad. I need to embrace the muddle and keep "muddling forward" and not get too stuck in figuring everything out at this point.

Thursday, April 23, 2009

My Preconceptions

I'm going to take a stab at putting down my theoretical assumptions and beliefs BEFORE I begin the analysis of my data. Although Grounded Theory seems to have the requirement that the researcher NOT come to the data with any sort of preconceived theory, I think this requirement gets misinterpreted. First, it is impossible not to have preconceived notions. We all look at the world through our own terministic screens. Dey stresses that the belief in atheoretical observation is a myth. The important thing is to be open to the data and aware of the theoretical biases you already possess. So here it goes.

I thought I would start with some assumptions I expressed about reflection in 2004 in a paper I wrote that summer at the Central Texas Writing Project:

My Assumptions about Reflection

1) Writing reflectively is a learned skill.
2) Reflection helps students formulate and gain ownership of their own knowledge.
3) Reflection plays a mediating role for learning from experience.
4) Deep reflection becomes "reflexive" (or transformational).
5) Reflection helps students formulate goals and solve problems as they compose.

Surprisingly, I still hold these assumptions to be true. Underneath all of them is a certain agency that I believe this act of reflection possesses. I have thought of various comparisons to describe this agency I believe reflection has: Superman's telephone booth and a catalyst for a chemical reaction are the chief two. It just hit me that I didn't include something that indicates a slower growth process, such as an oven baking. Reflection is not an oven; it is something more immediate, at least in its local effects. Implicit in all of this also is the notion of positive change. The most idealistic change or reaction is Mezirow's notion of transformational learning, which Qualley picks up on and calls "reflexive." The contemplation, examination, and critique of one's assumptions for thought, belief, or action is said to create, almost magically, a significant change. This is the home run of reflection.

I also believe more modest, but still significant, things can happen through the "mindfulness" reflection promotes in students. Because the prompt asks students to consider and be more aware of certain things, those items may become more defined or real to the student.

Underneath all of these assumptions also is what we might consider a belief in the magical nature of language. A lot of reflective activities done in the 80s stressed the value of verbal reflection (Boud). Yet I am focusing on written reflections. These two kinds of reflection seem to share the idea that students gain something from putting their thoughts and feelings into words (into language). Since our thoughts and perceptions of the world are formed to a large degree within the framework of language, using language is significant for developing this thought. I prefer written reflection because the student has more time to consider what they are writing, and then they have this document to look back on.

What other assumptions do I have?
I also have a more specific notion of this generative power of reflection for Composition that I believe links directly with "invention." For writers, I believe the most significant concern is the negotiation of their "rhetorical stance." That is, reflection helps students position themselves (their text and their thinking) in terms of the writing situation. Reflection provides the space to be "mindful" of the writing situation and all the unique factors that come into play. It is also the place where phronesis can be enacted--that is, the flexible application of general rules to specific contexts or ill-structured situations. Reflection, then, becomes the pedagogical activity that reactivates the concerns of invention.

So I bring to my examination of writing and essays a whole lot of baggage about the writing process, invention and pre-writing strategies, concepts of the writing situation, and what constitutes writing growth. I firmly believe in the importance of drafting and revision as a means of working on a piece of writing--it isn't a one-shot deal. This notion of the "writing feedback loop" and drafting cycle is paired with the developmental nature of learning and knowledge (i.e., Kolb's experiential learning cycle). All of these assumptions are important to me.

I also have a few assumptions regarding the causes of poor reflection or a lack of reflection. These predominantly revolve around four things: learning styles, developmental factors, knowledge, and conceptual frameworks of the task. I don't know, but it could be that some people are just not hardwired to think and learn in reflective ways. Being a reflective person myself, I can hardly imagine this kind of person (but we do have George Bush as an example). King and Kitchener, as well as other intellectual development models, assert that reflective thinking is a higher-order level of thinking that comes with more maturity and development. I don't buy the idea that younger kids can't be reflective about what they do, but it is a significant idea and one I am unsure of. Some research also showed that how well students reflected depended to a great degree both on their knowledge (how can they be mindful of something they aren't even aware of, or think in ways they don't have the knowledge to think?) and on their conception or mental schema of the task (if they see the task as being about XYZ when it really is about ABC, then of course they will fail or flail).

For now, these are the key preconceptions I can think of. For me, these thoughts seem so natural and self-evident; thus, it is so important for me to get them down and in the open so I am aware of them. They, of course, are not natural at all.

Sunday, April 19, 2009

Grounding Grounded Theory


I finally have finished Ian Dey's Grounding Grounded Theory, and I know that for the next few weeks (and perhaps the rest of my life) I will be processing this text. However, I want to take a big picture appraisal of his text.

He ends his book speaking about a misunderstanding of what grounded theory is, a misunderstanding created by the proliferation of software tools for qualitative analysis. He is speaking from the perspective of 1999, looking back at what happened in the 1990s. Pointing to work by Coffey in 1996, he describes how "the centrality of coding in both software for qualitative analysis and in grounded theory promotes an 'unnecessary close equation of grounded theory, coding, and software'" (qtd. in Dey 271). The mechanics of coding, made simpler through computer software, combined with the methodology of qualitative analysis that introduced the notion of "coding," to the point that "to code" meant "to engage in grounded theory." Any systematic analysis of data via "coding" meant grounded theory. And worse, as Dey notes, this convergence resulted in "an uncritical attitude toward methodology" (271-2). Although these views equating qualitative analysis (coding) with one methodology (grounded theory) have been disputed, and I don't think anyone now would equate them, Dey speaks of a time when grounded theory became trendy and was used uncritically thanks to these computer software tools. Dey sums up the general problem in this trend: "It seems that anxieties over the convergence of qualitative research around a single methodology, which takes coding as the core of theorizing, may be well-founded" (272). The culprit seems to be the introduction of software tools that make coding, and then the relating of categories through the retrieval of data, easier. Dey's larger critique is that this process of theory generation and qualitative analysis becomes "mechanistic" and is done uncritically.

What his book reveals is that we shouldn't blame the software tools for promoting a "mechanistic approach," because the seeds for this approach are there within grounded theory as articulated by Glaser and Strauss. I'll list some of the tendencies in grounded theory that he says can lead toward a mechanistic and uncritical approach:
  1. The inclination to consider coding as an aconceptual process
  2. Observation is presented as atheoretical
  3. Coding is said to be emergent rather than constructed
  4. Theory is something we "discover"
  5. Categories are conceived as separate concepts that are later connected
  6. Process is analyzed through "slices of time" (rather than through an evolutionary analysis) (273)
He notes that it is fairly ironic, and a case of unintended consequences, that grounded theory has been reduced to these narrower, mechanistic approaches, given its stress on creativity and openness of analysis. As Dey states in the last lines of the book, "The dangers of a mechanistic approach cannot be avoided merely through exhortation. It requires reflection on the origins of the particular path taken and the problems it leaves unresolved" (273). Thanks to this book by Dey, I hope I will be able to avoid a mechanistic and uncritical engagement with my own research, and I believe he has given me tools to better understand what I will be doing as I start this analysis, the choices open to me, and the choices I make.

The daunting task for me now seems to be to possess an adequate level of comprehension to do this research. It seems very complex. The thought occurred to me that I feel like a basketball player who has just been drafted to a new team and doesn't know the plays and the different schemes used for offense and defense. He's sent out onto the court to play, but things are happening so fast he doesn't know what is going on. Before I get out on the court (figuratively speaking), I hope to understand more about the dynamics at work and about my choices. Since I am supposed to do a presentation for the May Workshop, I think I will make it on Grounded Theory so that I can use this task to get a stronger grasp on what I will be doing.

Thursday, April 9, 2009

Preliminary Thinking on Sampling

My lunch today with Fred was primarily about how and what to sample in the TOPIC database. I've talked to Becky about this sampling question also, as well as John Horner (my "everyman audience"). Plus, I'm doing a fair amount of thinking about it from my reading. What I want to do at this point is jot down some of my thoughts about what and how I would sample from TOPIC.

First, I need to keep in mind that my sampling is theoretical (that is, theory driven). This goal may be hard to meet for the initial sample, and hard to predict for future samples, because the emerging theory should guide the selection of data. I in no way need to make my sample representative or of a certain size to be valid. No.

One thing I have noticed when others have thought about this question of what and how to pull data from the TOPIC vault is that people can see it as overwhelming. John Horner offered the advice to be careful not to set up a project that might take me years to do. He was seeing the vastness of the data and conjectured I would have to do some kind of research sample that would encompass the entire pool of data. No. No. Becky looked at it and paused at the amount and the complexity of the data. Each, I felt, had a sort of Grand Canyon moment--this is BIG.

Fred and I are in agreement that these Writer's Review documents should be viewed within the context of an entire Writing Cycle. In addition, he is leaning toward examining the relationship between what is said and happens in Writer's Reviews and what is said and done in subsequent drafts. Of course, this relationship is what is most interesting, and it is what other researchers have discovered to be ambiguous. What students say they will do for revision and what they actually do can be quite different. I diverge from Fred a little bit in that I want to look at some Writer's Reviews just on their own as well.

The semesters of data I have chosen are these:
Year 04-05 1301 and 1302 (called 05-1301 and 05-1302)
Year 05-06 1301 and 1302 (called 06-1301 and 06-1302)

My current thought is that I will get a broad sample of Writer's Reviews for my initial sample. I might also keep this sample fairly small so that I can make mistakes and they won't cost me a lot of time or effort, but it will be substantial enough for me to sink my teeth into the research. For each course, I thought I would grab Writer's Reviews from Essay #2 and Essay #3.

I wish I could do tables in here, but here is my proposed selection.

For each essay in each class and year, I would grab two Writer's Reviews from draft #1 and two Writer's Reviews from draft #2 (4 Writer's Reviews for each essay in each year). That adds up to 32 isolated Writer's Reviews. In addition, I would pull a sample of full Essay Cycles from each course and each essay. A full Essay Cycle includes every draft, every peer response, every Document Instructor response and grade, and every Writer's Review for the entire essay. I propose pulling two full Essay Cycles from each course and for each essay--that equates to 16 full Essay Cycles. I could modify this number down to 12 full Essay Cycles (6 from 1301 and 6 from 1302). 12 sounds more manageable to me, so I am not sure.
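To double-check the arithmetic, here is a quick tally in Python (the labels just mirror the selection described above):

    from itertools import product

    datasets = ["05-1301", "05-1302", "06-1301", "06-1302"]
    essays = ["Essay #2", "Essay #3"]
    drafts = ["draft #1", "draft #2"]

    # Two isolated Writer's Reviews per draft, per essay, per dataset
    isolated = 2 * len(list(product(datasets, essays, drafts)))
    # Two full Essay Cycles per essay, per dataset
    cycles = 2 * len(list(product(datasets, essays)))

    print(isolated, "isolated Writer's Reviews")  # 32
    print(cycles, "full Essay Cycles")            # 16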

Fred might say only to do the full essay cycles, but I'm thinking I want the mix of Writer's Reviews in isolation and then some in context. Hmm... I wonder if I should have some that are the same so that I could do a pass through the data out of context and then look at it again in context. Hmm... I have to consider that one.

This initial sample seems large enough and broad enough to give me a base from which to go in more particular directions depending upon the emergence of my theory. My next quandary has to do with whether I will use a qualitative research software tool. I probably will use one, but which one? It would be nice to be able to import a bunch of this text into the software program, but it looks like I may have to copy and paste it in. ... More to look into.

Tuesday, April 7, 2009

On Coding in Grounded Theory

Dey has a fairly unsatisfactory chapter on coding because he truly shoots some holes into Grounded Theory in his discussion about coding. He opens with a good distinction between categorization and coding: "With categories we impute meanings, with coding we compute them. The former involves a creative leap, for 'comprehending experience via metaphor is one of the great imaginative triumphs of the human mind' (Lakoff, 1987, p. 303). The latter involves reduction and ready reckoning" (95). It is interesting how Dey uses uncommon words such as "impute" and "reckon"--one involves a leap, while the other involves reduction (pulling back and being conservative). These seem like contrary impulses.

Dey provides a bit of historical perspective on the term "coding" and notes with some ironic amusement that it has been the combination of grounded theory and computer analysis tools that has led to "coding" becoming the term describing the key process in qualitative analysis. What is ironic is that coding comes from the quantitative use of survey methods, where "coding" happens to analyze surveys that have "precoded" questions. The survey already has within it the concepts and categories, so that when the surveys are analyzed the researcher identifies and assigns the appropriate codes to responses and tabulates them. Dey notes that in qualitative analysis the researcher does not have the conceptualization already complete as they review the data because it is yet to be accomplished (96).

However, Glaser and Strauss strongly believe "coding and analysis proceed jointly in grounded theory" (96). Thus they contrast grounded theory with other qualitative methodologies that would code first and then analyze second (separating the processes). What this joint coding and analysis exactly looks like, I am not sure. In this section distinguishing coding in grounded theory, Dey notes a point Glaser and Strauss emphatically make: "they reject the method of coding data 'into crudely quantifiable form' in order to test hypotheses, since they are interested in generating theory rather than verifying it" (96). My first reading of this emphatic point was that they rejected the idea of analyzing to determine a code which you next test (which is what my research design provisionally contains). I now see that I may have been reading too much into this quote. What they seem to say is not to count your data--there is no need to determine the number of occurrences of a concept or interaction, especially if this counting is done in order to support some hypothesis or theory the researcher has as they come to the data. As Dey says, "coding is governed only by theoretical relevance and is not concerned with the accumulation of supporting evidence" (96). I wonder about this claim, since I believe computer analysis tools can easily provide a graphical representation of the prevalence of a particular code (such as a tag cloud). I'll see this possibility more as I get into using these computer analysis tools. The last key points about coding Dey stresses are that coding is a method to make conceptualization explicit, and that its function is "to generate rather than test a theory" (97). One must resist, then, impulses to turn coding into a hunt for verification, it seems.
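Out of curiosity about what such a prevalence view amounts to, here is a toy sketch of the counting involved (the code labels are from my own Slice 1 pass, and the run of assignments is invented; a tag cloud is essentially this frequency table rendered graphically):

    from collections import Counter

    # A hypothetical run of open codes assigned across several artifacts
    assigned_codes = [
        "showing evidence", "affirming success", "showing evidence",
        "fitting in bounds", "admitting a fault", "showing evidence",
    ]

    # Tally how often each code was applied
    for code, count in Counter(assigned_codes).most_common():
        print(code, count)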

The orthodox Grounded Theory view toward coding is that it proceeds in phases:
  1. Categorize the data (open coding)
  2. Connect the categories (theoretical or axial coding)
  3. Focus on a core category (selective coding) (98)
In his discussion about this breaking down of the data, especially in initial open coding, Dey indulges himself by countering analysis with "holism." His basic point seems to be twofold: a big-picture, holistic view may reveal things that narrow analysis will miss, and through this holism we may tap into what he calls "direct understandings" of the world. These direct understandings come from our "bodily experience in the world," and they may be "both rich and routine" (102). Dey seems to be going off on a strange tangent here, but I think his call to maintain a big-picture awareness as one focuses on the microscopic is wise, and it is our "sensitizing experience" that can guide this holistic analysis.

Dey offers another innovation (we might say) toward coding. This time he offers a counter viewpoint to the notion of phases and the idea that you categorize first and then connect these categories. Dey's point is this: "categories cannot be considered in isolation. Categories acquire their meaning in part from their place in the wider scheme of things... . ...discrimination among objects may depend on their place in a larger taxonomy" (105). Rather than category sets, Dey chooses the metaphor of "category strings" to represent how categories exist within a network of other categories. Thinking of Lakoff, Dey stresses that any category/categorization activates a larger conceptual framework, and we need to be aware of this network of connections. Thus, "in grounded theory, the division between open coding and axial coding needs to be treated with caution" (105). We need not wait to uncover links between categories as we open code (I think), and we need to be aware of these larger "strings" of relationships within the categories we declare. It makes me think a bit about activity theory.

Next Dey considers Axial Coding in more detail, first discussing Strauss and Corbin's "coding paradigm" (1987). Strauss' coding paradigm examines conditions, interaction among the actors, strategies and tactics, and consequences (106). The value of this coding paradigm is its clarity, which makes the entire process of coding more manageable. You know what you are doing with this paradigm. Dey questions why this paradigm should be privileged. Glaser criticized Strauss' "coding paradigm" because it ignored Glaser's work on "theoretical coding": "Instead of 'forcing' the data to fit a pregiven paradigm, Glaser suggests we consider a range of theoretical options of which the proposed paradigm is only one" (107). Glaser in his 1978 book lists sixteen "coding families" that provide a range of options for coding (107). Glaser stresses that a coding family should only be used once it is indicated by the data. Dey questions what to do if more than one family could fit the data, thus making the choice of a coding family arbitrary.

The final part of this chapter discusses "core categories" in detail. Core categories are central for grounded theory. As Glaser believes, "The aim of producing a grounded theory that is relevant and workable to practitioners requires conceptual delimitation" (110). Core categories are where the researcher delimits their categorization. Glaser believes a core category has to "earn its privileged position" (111) by meeting these criteria: "[it] has to be central, stable, complex integrative, incisive, powerful, and highly variable" (111). Dey questions why only ONE category is chosen, and not more. He blasts grounded theory at this point for forcing a fit, which obviously involves an "elimination of alternative accounts" (112). He also criticizes the core category as paradoxical in its role as both dependent and independent variable.

All these critiques come back to the basic difficulty of dealing with subjectivities when coding data. This quote from Dey seems to express the difficulty well: "The construction of a category or the appropriateness of assigning it to some part of the data will undoubtedly reflect our wider comprehensions--both of the data and what we are trying to do with it. The researcher (who brings to categorization an evolving set of assumptions, biases, and sensitivities) cannot be eliminated from this process" (104). I knew this truth before, but seeing it spelled out in such detail in this chapter makes the entire process seem more and more daunting.

Sunday, April 5, 2009

Categories and Categorization in Grounded Theory

This blog post will attempt to make sense of Ian Dey's long chapter on Categories and Categorization in Grounded Theory. Categories are maddeningly confusing, and at times it seems what Dey reveals in this chapter is like the soft underbelly of a dragon.

Let's start by presenting definitions:
Categories are conceptual "and never just a name or a label" (49). Categories are said to stand alone and refer to a class of things.

Properties, though, can't stand alone; they are "conceptual characteristics of a category" (51). They refer to external relationships and relate through interaction.

Dimensions represent the spectrum of variation possible within properties. Dey says, "Identifying dimensions therefore involves (internal) differentiation rather than (external) comparison" (52).

I will chart out Dey's example illustrating these three concepts below:

Category-- color
Properties-- shade, intensity, and hue
Dimensions--intensity can be high or low, hue can be dark or light
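One way I can hold these distinctions in my head is as a little data structure (a sketch only--the shape is mine, not Dey's):

    # After Dey's color example: a category holds properties, and each
    # property varies along a dimensional range.
    category = {
        "name": "color",
        "properties": {
            "shade": [],                   # dimensions not spelled out in the example
            "intensity": ["high", "low"],
            "hue": ["dark", "light"],
        },
    }

    # Categorizing assigns a thing to "color"; properties and dimensions
    # then locate it (e.g., a high-intensity, dark-hued instance).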

Dey says this example illustrates the orthodox distinction between properties and dimensions. Here is one good quote from Dey: "Whereas properties and dimensions 'belong' to the thing itself, the categories to which we assign it do not belong to the thing itself but are part of how we choose to classify it" (54). He stresses the point that categories are derived through comparison.

Dey uncovers a confusion within the analytic processes that each of these three refers to. He says that we can apply all of these analytic processes to the same phenomenon. He stresses that each process of analysis has a different purpose: "We use categories to distinguish and compare; we identify properties and attributes to analyze agency and effects; and we measure dimensions to identify more precisely the characteristics of what we are studying" (57). Dey feels that distinguishing these three concepts through purpose is better than seeing them as varying levels of abstraction.

The next section in the chapter presents the classical view of categories as elements in theory. Dey presents this summary before undercutting it completely. Theorizing involves discovering how categories relate to each other, and GT seems to have two ways of relating them: one is through relations of similarity and difference, and the second is through connection and interaction. He provides this example of relating a cat, dog, and bone.

Formal relations based on similarity and difference: puts the cat and dog together
Substantive relations based on connections: puts the dog and bone together

Our understanding of substantive connections is based upon our observation of the process.

Digging deeper, Dey tackles Glaser's "concept indicator model," which Glaser claims "provides the essential link between data and concept" (qtd. in Dey 60). The meaning of the category (or code) is defined in terms of its indicators. Dey uses the example of prejudice as a concept (category). We can't observe the abstract concept of prejudice, but we see it in action, so to speak--through its "indicators." We can look at statements or actions and identify them as indicators of prejudice. Glaser believes that constant comparative analysis slowly builds concepts through "the careful combination of indicators." A concept, then, becomes "the 'sum' of its indicators" (61). Dey has some questions about Glaser's concept-indicator model.
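To fix the concept-indicator model in my mind, here is a minimal sketch (again mine, a hypothetical Python illustration, not Glaser's actual procedure) of a concept defined as the "sum" of coded indicators:

```python
# Hypothetical, simplified rendering of the concept-indicator model:
# a concept (category) is known only through the indicators coded to it.

concept_indicators: dict[str, list[str]] = {}

def code_indicator(concept: str, indicator: str) -> None:
    """Assign an observed incident (indicator) to an abstract concept."""
    concept_indicators.setdefault(concept, []).append(indicator)

# We never observe "prejudice" directly, only statements/actions we judge
# to indicate it. Constant comparison of indicators refines the concept.
code_indicator("prejudice", "refuses to rent to members of group X")
code_indicator("prejudice", "uses a slur in an interview transcript")

# The concept is, in effect, the 'sum' of its indicators:
print(concept_indicators["prejudice"])
```

Of course, the whole weight of the model rests on that judgment call inside `code_indicator`--which is exactly where Dey's questions begin.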

Summing up the chapter, Dey points out that one of the special characteristics of grounded theory "is its firm location in an interactionist methodology." It is focused on explicating social processes in dynamic terms. I think this characteristic of GT is important to remember. The elements of theory--categories, properties, dimensions--and the process of categorization--constant comparison, focus on indicators--all facilitate this interactionist methodology.

Next Dey digs the deepest into "categorization"--the fundamental process of distinguishing and classifying. Here is where things get messy. Dey explores modern developments in categorization that question Glaser's simple concept-indicator model and its process of basing categorization upon judgments of similarity and difference, which seems to figure so largely in GT. As Dey says quite simply: "The identification of categories on the basis of similarity and difference turns out to be rather problematic ... [and] in practice the process of drawing distinctions is much more complicated and ambiguous than the concept-indicator model allows" (66). Great. Pull the rug out from underneath me. Dey reveals that categorization is much more variable than Glaser describes it in his concept-indicator model, and "it challenges any simple assumption that categories are 'indicated' by data in a straightforward way" (75).

Dey goes on to describe three alternative understandings of categorization from scholarship done since the 1967 advent of GT. The chapter is dense, so I will include Dey's own summary:

"In the above discussion, we can identify at least four different accounts of categorization. First, we have the classic account, which assumes that category boundaries are crisp, membership is based on common features, and relations between categories are governed by logical operations. Second, we have 'fuzzy' sets, where category boundaries become vague, membership is graded, and relationships between categories become a matter of degree. Third, we have the 'prototypical' model, which stresses the role of category exemplars and shifts focus from membership to degrees of fit. Finally, we have categorization in terms of 'idealized cognitive models' (this is Lakoff) which 'motivate' the creation of categories through various forms of 'chaining' and 'extension'" (86).

After revealing the basic instability of categories and the process of categorization, Dey seems to mollify his reader by stating a hopeful message: "while the processes of categorization may not be strictly logical, neither are they entirely arbitrary" (87). He then provides a number of things the researcher needs to do to render her analysis not entirely arbitrary:
  1. Render the cognitive processes of categorization explicit (i.e., which of the four approaches to categorization you will take)
  2. Assess the adequacy of the cognitive process in terms of the underlying cognitive assumptions employed.
  3. Recognize the various processes involved in categorization
  4. Identify the aims of categorization (for example, prediction or inference)
  5. Make more explicit the grounds (cue or category validity) on which these categories can be realized
  6. Identify the underlying conceptual models and make explicit their metonymic or metaphorical extensions (a la Lakoff) (87)
This list is quite daunting, and to me seems to require a level of self-awareness that might be impossible to achieve. I believe that I can attempt to define these foundations for my categorization before I start coding, but I may not be able to get far until I start coding data. Coming back to this list will probably be an important thing to do after my pilot study.

The larger point of Dey's chapter is summed up in a statement he makes near the end of the chapter: "In grounded theory innocence is preserved and bias precluded by allowing categories to emerge from (and hence correspond to) the data. But Lakoff's analysis suggests that such innocence is impossible to achieve. We think in terms of categories and our categories are structured in terms of our prior experience and knowledge" (92-3). Grounded theory, Dey believes, must reassess how it categorizes in light of new theories that challenge and expand "how categories are actually assigned and used in the production of knowledge" (91).

I always knew that the original description of generating categories from the data was naive; however, Dey has overwhelmed me with the detailed description of this inadequacy. Yet I would rather be aware of these problems and strategically (and perhaps rhetorically) chart my approach to categorization and analysis than enter this forest without a plan. I am hopeful that Dey will offer more explicit suggestions as I keep reading.

Monday, March 30, 2009

A Mixed Marriage Or: Having Your Cake and Eating it Too

Grounded Theory, as Ian Dey points out, contains unresolved tensions coming from its origins in rival traditions. Glaser came from Columbia University and brought the rigor associated with quantitative survey methods. Numbers serve as facts that tell generalizable truths. Strauss, however, came from the University of Chicago, and his background was in "symbolic interactionism" and its tradition of qualitative research. The following is an encapsulation of symbolic interactionism from Wikipedia:

Herbert Blumer (1969), who coined the term "symbolic interactionism," set out three basic premises of the perspective:

  1. "Human beings act toward things on the basis of the meanings they ascribe to those things."
  2. "The meaning of such things is derived from, or arises out of, the social interaction that one has with others and the society."
  3. "These meanings are handled in, and modified through, an interpretive process used by the person in dealing with the things he/she encounters."

Blumer, following Mead, claimed that people interact with each other by "interpret[ing]" or "defin[ing]" each other's actions instead of merely reacting to each other's actions. Their "response" is not made directly to the actions of one another but instead is based on the meaning which they attach to such actions. Thus, human interaction is mediated by the use of symbols and signification, by interpretation, or by ascertaining the meaning of one another's actions (Blumer 1962). Blumer contrasted this process, which he called "symbolic interaction," with behaviorist explanations of human behavior, which don't allow for interpretation between stimulus and response.

As Dey summarizes, "In the marriage of these two traditions, it was intended to harness the logic and rigor of quantitative methods to the rich interpretive insights of the symbolic interactionist tradition" (25). These two traditions are described as "naturalistic inquiry" and "variable analysis." Dey points out that later interpreters of Grounded Theory have leaned more toward the symbolic interactionist side of this marriage and bemoaned the quantitative roots of Grounded Theory. Naturalistic inquiry is valued for its ability to provide rich and deep interpretation that is contextually based, while variable analysis "facilitated easy measurement" and variables that were "consistent and stable" (27). How can this fixed and rational logic of variable analysis be married with naturalistic inquiry? Dey sets out to explain how Glaser and Strauss accomplish this strange blending of methodologies. His first explanation sets out the broad approach of Glaser and Strauss: "They locate inquiry in naturalistic settings, focused on interaction and its interpretation; and they construe analysis in terms of the identification of categories (variables?) and their relationships" (27). So inquiry and data gathering are purely qualitative, but when it comes to analyzing this data, it becomes more fixed and quantitative in nature. Can this be?

Dey identifies two bridges G&S create to make this marriage work. The first is the use of Categories as a "means of mediating between transient interpretations on the one hand and stable conceptualization on the other" (27). The term "categories" is used instead of variables or values. The second bridge Dey identifies is their notion that "theory can be grounded as it is generated" (27). Fluid concepts can be fixed through this back and forth interpretation between theory and data (the "constant comparative method"). The word G&S have for this connection between concepts (that indicate or lead to theory) and data is "sensitizing" (28). That means that these concepts "remain meaningful in the context of everyday interaction" (28).
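Here is how I picture the constant comparative method operating, as a rough loop. This is only my own schematic sketch in Python, with a crude word-overlap stand-in for similarity--emphatically not G&S's actual procedure:

```python
# Schematic of constant comparison: each new incident is compared against
# existing categories; a category absorbs it and is refined, or a new one forms.

def similarity(incident: str, category_incidents: list[str]) -> float:
    """Crude stand-in for the researcher's judgment of similarity (hypothetical)."""
    words = set(incident.split())
    pooled = set(" ".join(category_incidents).split())
    return len(words & pooled) / max(len(words | pooled), 1)

def constant_comparison(incidents: list[str], threshold: float = 0.2) -> dict:
    categories: dict[str, list[str]] = {}
    for incident in incidents:
        best = max(categories,
                   key=lambda c: similarity(incident, categories[c]),
                   default=None)
        if best and similarity(incident, categories[best]) >= threshold:
            categories[best].append(incident)   # category absorbs, is refined
        else:
            categories[f"category_{len(categories) + 1}"] = [incident]  # new one
    return categories

print(constant_comparison([
    "I revised my thesis after feedback",
    "feedback helped me revise my introduction",
    "I procrastinated until the deadline",
]))
```

The mechanical `similarity` function is, of course, exactly what cannot be mechanized: the real judgment of similarity and difference is the researcher's, which is where the "sensitizing" (and the subjectivity) lives.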

Many problems, as Dey points out, exist with this strange mixing of methodologies, but I find I am attracted to GT because it seeks this hybrid goal toward knowledge. If one views GT from a strictly qualitative viewpoint, I can see how one would miss the generalizable ambitions of GT. I believe everything is context-dependent; however, many similarities still exist across contexts and should be acknowledged as well. How come the hero has a thousand faces? Yet, how can we acknowledge and include the rich variety and influence of specific contexts? Can't we find a way to acknowledge both?

I think it is this mixed marriage found uneasily within the roots of GT that attracts me to this methodology. It does seek some generalizable truths (especially as the researcher moves from substantive to formal theory), and it has faith in systematic rigor for revealing truth, yet it must still be sensitized and matched with specific contexts. I need to feel out this paradoxical epistemology that seems to be at the heart of GT and think more about what that way of knowing and seeing the world means. I'm uncomfortable with an extremely context-dependent view of truth, yet I don't believe in a transcendent truth completely devoid of context either. There must be some middle ground.

Wednesday, March 25, 2009

Facing up to issues in grounded theory research --preconceived frameworks and verification

"The very attraction of grounded theory may lie in the way it obliges us--because of its commitment to theory--to face up to some fairly basic issues about the nature of social research. If we accept the elementary (but awkward) principle that to do research requires reflection on what we are doing and how we do it, at the very least we should try to confront and clarify these issues." Grounding Grounded Theory (1999), Ian Dey p. 24


This is the first of a number of posts I will make discussing grounded theory and my own interpretation of the methodology. As Dey points out, there is a "plurality of 'interpretations'" of grounded theory, and they seem to fall into three camps: 1) the orthodoxy of Glaser, 2) the safe schemas of Strauss and Corbin, or 3) the doctrine of dimensionality of Schatzman. I have increasingly developed a sense of what grounded theory is through reading, but as I do more reading in preparation to actually perform the methods of grounded theory analysis, I feel that I need to coalesce some of my thinking. That's what these posts will be, and I anticipate that they will continue through the researching process.

I want to talk about two questions that Dey raises at the end of his first chapter in the book cited above. Here's the first one:

--How much scope does grounded theory allow for adopting preconceived frameworks as an aid to analysis?

This question is, of course, the source of the big break between Glaser and Strauss (and Corbin). Glaser adheres to the principle that the researcher should lay aside all preconceptions and theories as they analyze data. This question gets at the scope that will be allowed for adopting preconceived conceptual frameworks. This doesn't necessarily mean a particular theory but simply the use of ANY frameworks ahead of time for understanding data.

It seems to me that the question is moot since it is impossible NOT to bring conceptual frameworks and even theories with us as we examine data. I suppose we could get too worked up over this point, but I very much have hinged my study on faulting previous theory building for creating that theory through deductive analysis of data based on outside theories. Theories to understand data. I want to go the other way--data to form a theory to understand the data. Thus, I think I need to side more in the camp with Glaser. One comment made in chapter 2 about categories seems relevant here: "Glaser and Strauss (1967, pp. 240-41) present categories as 'sensitizing' concepts that related meaningfully to the realities of interaction (as perceived by participants)" (28). I think the last part is significant. I need to key into, as much as possible, how my participants are seeing this interaction and representing it. The important thing is not my interpretation of the interaction, but their interpretation (or my interpretation of their interpretation). The data itself will indicate the concepts that will emerge.

Strauss and Corbin developed a particular method for "integrative coding" rather than allowing for broader possibilities for coding. Strauss and Corbin's "coding paradigm" followed conditions, context, action/interaction strategies, and consequences. This is what is referred to as "axial coding." They believed the use of this coding paradigm "allowed data to be related systematically in complex ways" (Dey 11). I look at this paradigm and I am immediately reminded of Aristotle's topoi--the topics from which the rhetor selects his or her arguments. I, personally, don't see a huge problem with applying these analytical heuristics to the data to see what they reveal. It seems that Strauss and Corbin are creating a more detailed procedure to guide coding and analyzing data. Glaser would be more open and improvisational in his coding.
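Since I'm thinking of the coding paradigm as topoi, I can imagine it as a template to fill in for each coded phenomenon. A hypothetical sketch of my own (the field names follow the paradigm as Dey summarizes it; the example is invented, not real data):

```python
from dataclasses import dataclass

@dataclass
class AxialCode:
    """Strauss and Corbin's coding paradigm rendered as a record to fill in
    for each phenomenon -- a heuristic template, like Aristotle's topoi."""
    phenomenon: str
    conditions: str          # what gives rise to it
    context: str             # the setting in which it occurs
    strategies: str          # action/interaction strategies in response
    consequences: str        # outcomes of those strategies

# Invented example from a hypothetical student reflection:
example = AxialCode(
    phenomenon="revising for audience",
    conditions="peer feedback pointed out confusion",
    context="final portfolio reflection, first-year writing course",
    strategies="student reread draft imagining a skeptical reader",
    consequences="reorganized essay; reports a clearer sense of purpose",
)
```

Seen this way, the paradigm is less a straitjacket than a prompt--though I can also see why Glaser would object that the template itself preconceives what counts as relevant.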

A bigger question regards how much will I bring in theories and preconceived ideas regarding rhetorical reflection into my analysis. Strauss and Corbin allow a number of different options for possibly using theory. My stance, I think, will be this:
--define my preconceived theories, ideas, and biases
--set them aside
--code with an open mind
--see what happens

I think in my initial pilot coding I will try to be "objective" and let the data speak. Once I hear what the data is saying, I may then turn to my theory to give it a name or an explanation. I'll just have to feel out what I will do. From my reading of Strauss and Corbin, I believe they adopt this flexible approach, which allows the use of theory if it seems appropriate. My thinking right now may reflect that I have read more from Strauss and Corbin than Glaser (other than the 1967 book). I have ordered three Glaser books to look at, so I may learn more about his approach and change my mind. For now, Strauss and Corbin seem to outline a better procedure to follow for doing the coding successfully (and systematically).

Second question:
What place has verification in grounded theory?

This question has significance for me because the current research design I will follow involves grounded theory analysis to develop a coding instrument, and then a test of how useful that coding instrument is for a large-scale content/rhetorical analysis of student reflective texts.

So I would generate and then verify--all in one dissertation study.

The issue of verification of theory for grounded theory is complex. Glaser's position is that verification has no place in grounded theory--it is a different methodology. Grounded theory takes two positions toward verification. One position is that verification is left to other researchers at a different time. Grounded theory's job is not verification, so it would go against this purpose of grounded theory to have a study that involved both generation and verification of theory. The second position of grounded theory regarding verification is that the theory generated from grounded theory contains implicit verification within it because it fits the data. There is no need to check the theory because these ideas are induced from the data--to verify the theory we need only look at the data. I believe I am oversimplifying things--as Dey mentions, there is "ambivalence in grounded theory about the status of discovery and verification" (38). It is clear to me, as well, that "verification" has two different meanings in these two positions.

I am reluctant to make my own study a two part study where I generate and then verify. Not only would it possibly make my study very large and time consuming, but it would violate some of the principles of grounded theory methodology (it seems to me). Also, I conceive of my study as the initial stage of theory building where a more homogeneous set of data is examined. The second stage is to maximize differences in one's theoretical sampling to see how the theory becomes refined.

Saturday, March 21, 2009

What If?


What if the researchers into brain imaging talked about in this NPR story were to focus their research on the influence of "reflection" on brain function?

Reflection has this belief surrounding it--it is magical. It has super powers. It is almost as if we were to put students/people into the telephone booth of reflection, and POOF! Out they come with a new understanding, even perhaps with a transformed view of the world.

I have gravitated to reflection because I see it as the central mediating factor for learning (and action). Mediation. What does that mean? It means it has to be added, it has to be passed through; it is the lever to make something happen. I have particularly associated it with the classical notion of phronesis or practical wisdom. It is the application of knowledge to particular contexts.

What if... we were able to do brain scans of people engaged in reflective thinking? What would it show? What parts of the brain would be stimulated? How would this brain function differ from other kinds of brain function?

Wouldn't it be amazing if I were to talk these researchers into studying the brain on reflection! It seems like it would be a good idea. Reflection has this high and hallowed place in the kingdom of thinking; surely researchers would be interested in studying these kinds of higher order brain functions.

Oh--and could we detail physiological, developmental differences in reflective thinking, to confirm the research from King and Kitchener? Wouldn't that matter? If we knew better what our students were capable of cognitively, wouldn't that matter? It seems to me that it would.

I'm just dreaming here. Maybe I should try a letter to these folks doing this brain scanning research just for grins. Maybe the social sciences can join hands with the hard sciences.

What if?

Thursday, March 19, 2009

Zeroing in on Researching

My sights are now set on getting prepared to start the actual grounded theory coding, so this post will be about planning. I have a window while my IRB is getting approved, so I need to take full advantage of that time.

IRB Window
--Rrice has draft, may need some edits, hopefully signed and to the IRB no later than 3/27
--IRB approval optimistically in 10 working days, but it could be more. IRB approval by 4/13-4/27?

So I have roughly three to six weeks to get ready. That's lots of time. To be honest, I should have had this IRB done in mid-January. Nevertheless, I will work within the constraints I have and try to make full use of this time.

Preparation Tasks: (not in any special order)
  1. Secure access for Fred to copy of TOPIC from Susan
  2. Work with Fred and Rich for what I will want from the database and what might be my first slice of data
  3. Brainstorm and get input on just how I will approach this mountain of data--initially, the most important thing is to determine what will be my first slice of data
  4. Secure copies of each of the textbooks/curriculum guides for 1301/1302 for 2004-2006 (I have some of them already, but I think I have to scramble for the others). Review this curriculum.
  5. Review how to do grounded theory coding, perhaps practice a bit with other writing examples (maybe peer responses or something; see the small practice sketch after this list)
  6. Get a couple of other grounded theory analysis books via ILL
  7. Explore if I will use a qualitative analysis software program like ATLAS-TI or NVIVO and practice using the tool
  8. Do a theoretical bias inventory--that is, lay out my thoughts on what I think and the theories I have about rhetorical reflection. This must be carefully done. Which leads to next point--
  9. Investigate and get some help about how to code and balance one's theory and observations.
  10. Make a plan for what my "pilot" study will be--that is, what will I do (in a rigorous and systematic way...) to process my "pilot" study (which is my first slice of data)
  11. Make initial investigations about what might be possible eventually with larger scale "data-mining" within the database and the eventual qualitative analysis of quantitative data (I've had some correspondence with Gloria McMillan on this subject)
I think this is a pretty thorough list, and I'm glad I have a bit of time to prepare. If by chance you can spot any other things that might need to be done or specific suggestions for individual tasks, please let me know.
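For task 5, I might practice with something as simple as a throwaway script that stores my open codes against text segments and shows me which codes are accumulating. A purely hypothetical sketch of my own (the real work would happen in a tool like ATLAS-TI or NVIVO, and the example segments are invented):

```python
from collections import defaultdict

# A throwaway practice harness for open coding: attach codes to segments,
# then review which codes are accumulating (hypothetical tooling, not a method).

codings: dict[str, list[str]] = defaultdict(list)  # code -> coded segments

def code_segment(segment: str, codes: list[str]) -> None:
    """Attach one or more open codes to a segment of text."""
    for code in codes:
        codings[code].append(segment)

code_segment("I realized my thesis changed as I wrote.",
             ["discovery", "process awareness"])
code_segment("My peer said my intro was confusing.",
             ["audience awareness"])

# Review: codes sorted by how many segments they have gathered
for code, segments in sorted(codings.items(), key=lambda kv: -len(kv[1])):
    print(f"{code}: {len(segments)} segment(s)")
```

Even a toy like this would force me to articulate codes explicitly before I commit to a software package.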

That's the plan for now. I'm going to tape it up on my wall...

Saturday, February 14, 2009

Slow working on the proposal

I've just had a morning session working on the proposal, and I thought I would process it with a bit of thinking on paper about how it is going. It is really interesting to "feel" how this writing is going. First, it goes slowly. My sense of what I am doing is like carving, or perhaps it is more modular, like building a wall or a Lego model. I am taking pieces and fitting them in and fitting them together. But it is a bit more than that. I am creating pieces that are themselves often formulated by taking other pieces and shaping them to fit in and together. So it is slow work. I have this image of a mason who builds a rock wall and chips and breaks stone pieces to fit in. The mason also trowels cement into gaps and lubricates the fitting together of pieces. All I can say is it is slow going.

What I think is taking shape better is what this Proposal as a first chapter of the dissertation is about--what it is and what it is for. I believe I have an opening that sets the problem fairly well with a concrete example (as Fred said I should). I think that concern, or as Fred says, the "friction" underlying my study, is fairly clear. I am now working on the "So What?"--the reason why writing teachers should be interested in this study too. My main rationale is based upon premises about how we learn and what we understand the activity of writing to be. I'm a bit worried that this discussion of premises may be seen as a digression, especially as the premises are long right now, but I think these concerns are important. They also happen to be chunks I have been able to import whole from my qualifying exam. Again, I don't know if that is a good idea, but there they are. I'm not finished fashioning and refashioning them to fit and work yet.

My next task will be to clearly define the relatively narrow focus of my study--rhetorical reflection. Again, I have a worthy chunk from my quals I can fit in here, but I believe it will take a fair bit of refashioning. I think this can be followed by a restatement of the significance of the subject of study (the WGRA) and the problem underlying it. It will be in this section that I will have to gauge how to bring in the inadequacy of current theory, and how far to delve into that inadequacy here versus leaving it to the Lit Review. This recap of the significance and problem will lead into my research question--the guiding direction for pursuing a better understanding of this subject.

What I guess I am seeing now a bit better is how the first chapter is simply setting the problem, clarifying the place this problem/study has for the field, and clearly defining the subject of study. I think I am getting a bit better sense of the difference between it and the lit review.

We shall see if I can reach my 2/20 deadline for this draft. I NEED to make this deadline, so I will do my best.
