I'm going to take a stab at putting down my theoretical assumptions and beliefs BEFORE I begin the analysis of my data. Although Grounded Theory seems to require that the researcher NOT come to the data with any sort of preconceived theory, I think this requirement gets misinterpreted. First, it is impossible not to have preconceived notions. We all look at the world through our own terministic screens. Dey stresses that the belief in atheoretical observation is a myth. The important thing is to be open to the data and to be aware of the theoretical biases you already possess. So here it goes.
I thought I would start with some assumptions I expressed about reflection in 2004 in a paper I wrote that summer at the Central Texas Writing Project:
My Assumptions about Reflection
1) Writing reflectively is a learned skill.
2) Reflection helps students formulate and gain ownership of their own knowledge.
3) Reflection plays a mediating role for learning from experience.
4) Deep reflection becomes "reflexive" (or transformational).
5) Reflection helps students formulate goals and solve problems as they compose.
Surprisingly, I still hold these assumptions to be true. Underneath all of them is a certain agency that I believe the act of reflection possesses. I have thought of various comparisons to describe this agentive quality: Superman's telephone booth and a catalyst for a chemical reaction are the chief two. It just hit me that I didn't include anything that suggests a slower growth process, such as an oven baking something. Reflection is not an oven; its effects are more immediate, at least locally. Implicit in all of this is also the notion of positive change. The most idealistic change or reaction is Mezirow's notion of transformational learning, which Qualley picks up on and calls "reflexive." The contemplation, examination, and critique of one's assumptions for thought, belief, or action is said, almost magically, to create significant change. This is the home run of reflection.
I also believe more modest, but still significant, things can happen through the "mindfulness" reflection promotes in students. Because the prompt asks students to consider and be more aware of certain things, those items may become more defined or real to the student.
Underneath all of these assumptions is also what we might consider a belief in the magical nature of language. Many reflective activities from the 1980s stressed the value of verbal reflection (Boud). Yet I am focusing on written reflections. These two kinds of reflection seem to share the idea that students gain something from putting their thoughts and feelings into words (into language). Since our thoughts and perceptions of the world are formed to a large degree within the framework of language, using language is significant for developing this thought. I prefer written reflection because the student has more time to consider what they are writing, and they then have this document to look back on.
What other assumptions do I have?
I also have a more specific notion of this generative power of reflection for Composition, one that I believe links directly with "invention." For writers, I believe the most significant concern is the negotiation of their "rhetorical stance." That is, reflection helps students position themselves (their text and their thinking) in terms of the writing situation. Reflection provides the space to be "mindful" of the writing situation and all the unique factors that come into play. It is also the place where phronesis can be enacted--that is, the flexible application of general rules to specific contexts or ill-structured situations. Reflection, then, becomes the pedagogical activity that reactivates the concerns of invention.
So I bring to my examination of writing and essays a whole lot of baggage about writing process, invention and pre-writing strategies, concepts of the writing situation, and what constitutes writing growth. I firmly believe in the importance of drafting and revision as a means for working on a piece of writing--it isn't a one shot deal. This notion of the "writing feedback loop" and drafting cycle is paired with the developmental nature of learning and knowledge (i.e. Kolb's experiential learning cycle). All of these assumptions are important to me.
I also have a few assumptions regarding the causes of poor reflection or a lack of reflection. These predominantly revolve around four things: learning styles, developmental factors, knowledge, and conceptual frameworks of the task. I don't know, but it could be that some people are simply not hardwired to think and learn in reflective ways. Being a reflective person myself, I can hardly imagine this kind of person (but we do have George Bush as an example). King and Kitchener, as well as other intellectual development models, assert that reflective thinking is a higher-order level of thinking that comes with greater maturity and development. I don't buy the idea that younger kids can't be reflective about what they do, but it is a significant idea and one I am unsure of. Some research has also shown that how well students reflect depends to a great degree both on their knowledge (how can they be mindful of something they aren't even aware of, or think in ways they don't have the knowledge to think?) and on their conception or mental schema of the task (if they see the task as being about XYZ when it is really about ABC, then of course they will fail or flail).
For now, these are the key preconceptions I can think of. For me, these thoughts seem so natural and self-evident; thus, it is so important for me to get them down and in the open so I am aware of them. They, of course, are not natural at all.
Thursday, April 23, 2009
Sunday, April 19, 2009
Grounding Grounded Theory
I have finally finished Ian Dey's Grounding Grounded Theory, and I know that for the next few weeks (and perhaps the rest of my life) I will be processing this text. For now, though, I want to take a big-picture appraisal of it.
He ends his book by discussing a misunderstanding about what grounded theory is, one fostered by the proliferation of software tools for qualitative analysis. He is speaking from the perspective of 1999, looking back at what happened in the 1990s. Pointing to work by Coffey in 1996, he describes how "the centrality of coding in both software for qualitative analysis and in grounded theory promotes an 'unnecessary close equation of grounded theory, coding, and software'" (qtd. in Dey 271). The mechanics of coding, made simpler through computer software, combined with the methodology that introduced the notion of "coding" to qualitative analysis in the first place, to the point that "to code" came to mean "to engage in grounded theory." Any systematic analysis of data via "coding" meant grounded theory. And worse, as Dey notes, this convergence has resulted in "an uncritical attitude toward methodology" (271-2). Although these views equating qualitative analysis (coding) with one methodology (grounded theory) have been disputed, and I don't think anyone now would equate them, Dey speaks of a time when grounded theory became trendy and was used uncritically thanks to these computer software tools. Dey sums up the general problem in this trend: "It seems that anxieties over the convergence of qualitative research around a single methodology, which takes coding as the core of theorizing, may be well-founded" (272). The culprit seems to be the introduction of software tools that make coding, and then the relating of categories through the retrieval of data, easier. Dey's larger critique is that this process of theory generation and qualitative analysis becomes "mechanistic" and is carried out uncritically.
What his book reveals is that we shouldn't blame the software tools for promoting a "mechanistic approach" because the seeds for this approach are already there within grounded theory, as articulated by Glaser and Strauss. I'll list some of these tendencies in grounded theory that he says can lead toward a mechanistic and uncritical approach:
- The inclination to consider coding as an aconceptual process
- Observation is presented as atheoretical
- Coding is said to be emergent rather than constructed
- Theory is something we "discover"
- Categories are conceived as separate concepts that are later connected
- Process is analyzed through "slices of time" (rather than through an evolutionary analysis) (273)
The daunting task for me now seems to be possessing an adequate level of comprehension to do this research. It seems very complex. The thought occurred to me that I feel like a basketball player who has just been drafted to a new team and doesn't know the plays and the different schemes used for offense and defense. He's sent out onto the court to play, but things are happening so fast he doesn't know what is going on. Before I get out on the court (figuratively speaking), I hope to understand more about the dynamics at work and about my choices. Since I am supposed to do a presentation for the May Workshop, I think I will make it on Grounded Theory so that I can use this task to get a stronger grasp on what I will be doing.
Thursday, April 9, 2009
Preliminary Thinking on Sampling
My lunch today with Fred was primarily about how and what to sample in the TOPIC database. I've talked to Becky about this sampling question also, as well as John Horner (my "every man audience"). Plus, I'm doing a fair amount of thinking about it from my reading. What I want to do at this point is jot down some of my thoughts right now about what and how I would sample from TOPIC.
First, I need to keep in mind that my sampling is theoretical (that is, theory driven). This goal may be hard to meet for the initial sample, and hard to predict for future samples, because the emerging theory should guide the selection of data. I do not need to make my sample representative or of a certain size to be valid. No.
One thing I have noticed when others have thought about this question of what and how to pull data from the TOPIC vault is that people can see it as overwhelming. John Horner offered the advice to be careful not to set up a project that might take me years to do. He was seeing the vastness of the data and conjectured that I would have to do some kind of research sample encompassing the entire pool of data. No. No. Becky looked at it and paused at the amount and the complexity of the data. Each, I felt, had a sort of Grand Canyon moment--this is BIG.
Fred and I are in agreement that these Writer's Reviews documents should be viewed within the context of an entire Writing Cycle. In addition, he is leaning toward examining the relationship between what is said and happens in Writer's Reviews and what is said and done in subsequent drafts. Of course, this relationship is what is most interesting, and it is what other researchers have discovered is ambiguous. What students say they will do for revision and what they actually do can be quite different. I diverge from Fred a little bit in that I want to look at some Writer's Reviews just on their own as well.
The semesters of data I have chosen are these:
Year 04-05 1301 and 1302 (called 05-1301 and 05-1302)
Year 05-06 1301 and 1302 (called 06-1301 and 06-1302)
My current thought is that I will get a broad sample of Writer's Reviews for my initial sample. I might also keep this sample fairly small so that I can make mistakes without costing myself a lot of time or effort, but it will be substantial enough for me to sink my teeth into the research. For each course, I thought I would grab Writer's Reviews from Essay #2 and Essay #3.
I wish I could do tables in here, but here is my proposed selection.
For each essay in each class and year, I would grab two Writer's Reviews from draft #1 and two Writer's Reviews from draft #2 (4 Writer's Reviews for each essay in each course and year). That adds up to 32 isolated Writer's Reviews. In addition, I would pull a sample of full Essay Cycles from each course and each essay. A full Essay Cycle includes every draft, every peer response, every Document Instructor response and grade, and every Writer's Review for the entire cycle. I propose pulling two full Essay Cycles from each course and for each essay--that equates to 16 full Essay Cycles. I could modify this number down to 12 full Essay Cycles (6 from 1301 and 6 from 1302). 12 sounds more manageable to me, but I am not sure.
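Just to double-check the arithmetic behind this plan, here is a minimal sketch in Python (the course, year, essay, and draft labels are my own stand-ins, not the actual TOPIC identifiers) that computes the counts I describe above:

```python
# Sketch of the proposed sampling plan; labels are illustrative stand-ins,
# not the actual TOPIC database identifiers.
courses = ["1301", "1302"]
years = ["05", "06"]              # the 04-05 and 05-06 academic years
essays = ["Essay 2", "Essay 3"]
drafts = ["Draft 1", "Draft 2"]
reviews_per_draft = 2             # isolated Writer's Reviews pulled per draft
cycles_per_course_essay = 2       # full Essay Cycles pulled per course/year/essay

isolated_reviews = (len(courses) * len(years) * len(essays)
                    * len(drafts) * reviews_per_draft)
full_cycles = len(courses) * len(years) * len(essays) * cycles_per_course_essay

print(isolated_reviews)  # 32 isolated Writer's Reviews
print(full_cycles)       # 16 full Essay Cycles (or 12 if trimmed to 6 + 6)
```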
Fred might say to do only the full Essay Cycles, but I'm thinking I want the mix of some Writer's Reviews in isolation and some in context. Hmm... I wonder if some of them should be the same so that I could do a pass through the data out of context and then look at it again in context. Hmm. I have to consider that one.
This initial sample seems large enough and broad enough to give me a base from which to go in more particular directions depending upon the emergence of my theory. My next quandary has to do with whether I will use a qualitative research software tool. I probably will use one, but which one? It would be nice to be able to import a bunch of this text into the software program, but it looks like I may have to copy and paste it in. ... More to look into.
Tuesday, April 7, 2009
On Coding in Grounded Theory
Dey's chapter on coding leaves me fairly unsatisfied because he truly shoots some holes in Grounded Theory in his discussion of coding. He opens with a good distinction between categorization and coding: "With categories we impute meanings, with coding we compute them. The former involves a creative leap, for 'comprehending experience via metaphor is one of the great imaginative triumphs of the human mind' (Lakoff, 1987, p. 303). The latter involves reduction and ready reckoning" (95). It is interesting how Dey uses uncommon words such as "impute" and "reckon"--one involves a leap, while the other involves reduction (pulling back and being conservative). These seem like contrary impulses.
Dey provides a bit of historical perspective on the term "coding" and notes with some ironic amusement that it has been the combination of grounded theory and computer analysis tools that has led to "coding" becoming the term for the key process in qualitative analysis. What is ironic is that coding comes from the quantitative use of survey methods, where "coding" happens in the analysis of surveys that have "precoded" questions. The survey already has within it the concepts and categories, so that when the surveys are analyzed the researcher identifies and assigns the appropriate codes to responses and tabulates them. Dey notes that in qualitative analysis the researcher does not have the conceptualization already complete as they review the data because it has yet to be accomplished (96).
However, Glaser and Strauss strongly believe "coding and analysis proceed jointly in grounded theory" (96). Thus they contrast grounded theory with other qualitative methodologies that would code first and then analyze second (separating the processes). What this joint coding and analysis looks like exactly, I am not sure. In this section distinguishing coding in grounded theory, Dey notes a point Glaser and Strauss emphatically make: "they reject the method of coding data 'into crudely quantifiable form' in order to test hypotheses, since they are interested in generating theory rather than verifying it" (96). My first reading of this emphatic point was that they rejected the idea of analyzing to determine a code which you then test (which is what my research design provisionally contains). I now see that I may have been reading too much into this quote. What they seem to say is not to count your data--there is no need to determine the number of occurrences of a concept or interaction, especially if this counting is done in order to support some hypothesis or theory the researcher brings to the data. As Dey says, "coding is governed only by theoretical relevance and is not concerned with the accumulation of supporting evidence" (96). I wonder about this claim since I believe computer analysis tools can easily provide a graphical representation of the prevalence of a particular code (such as a tag cloud). I'll see this possibility more as I get into using these computer analysis tools. The last key points about coding Dey stresses are that coding is a method to make conceptualization explicit, and that its function is "to generate rather than test a theory" (97). One must resist, then, impulses to turn coding into a hunt for verification, it seems.
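To illustrate the kind of counting I have in mind (this is my own hypothetical sketch, not a feature of any particular analysis package, and the codes and excerpts are invented), tallying code frequencies is trivial once segments have been coded--which is exactly why this kind of counting can slide so easily into verification:

```python
from collections import Counter

# Hypothetical coded segments: (excerpt, assigned code) pairs. Codes and excerpts are invented.
coded_segments = [
    ("I plan to reorganize my second paragraph", "revision plan"),
    ("My reviewer said my thesis was unclear", "peer feedback"),
    ("I will fix my citations", "revision plan"),
    ("I felt more confident after this draft", "self-assessment"),
]

# Frequency of each code -- the raw material a tag cloud would visualize.
code_counts = Counter(code for _, code in coded_segments)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```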
The orthodox Grounded Theory view toward coding is that it proceeds in phases:
- Categorize the data (open coding)
- Connect the categories (theoretical or axial coding)
- Focus on a core category (selective coding) (98)
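To help myself keep these phases straight, here is a toy sketch of how they might look as simple data structures; this is my own illustration, not a procedure prescribed by Glaser, Strauss, or Dey, and the codes and excerpts are invented:

```python
# Toy illustration of the three coding phases as data structures (my own sketch,
# not a prescribed procedure; codes and excerpts are invented).

# Open coding: label excerpts of data with provisional categories.
open_codes = {
    "excerpt_01": ["goal setting"],
    "excerpt_02": ["peer feedback", "uncertainty"],
    "excerpt_03": ["goal setting", "revision plan"],
}

# Theoretical/axial coding: connect the categories to one another.
connections = [
    ("peer feedback", "revision plan"),  # feedback prompts planned changes
    ("goal setting", "revision plan"),   # goals shape what gets revised
]

# Selective coding: commit to a core category that organizes the rest.
core_category = "revision plan"
related = [pair for pair in connections if core_category in pair]
print(f"Core category '{core_category}' connects: {related}")
```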
Dey offers another innovation (we might say) regarding coding. This time he offers a counter viewpoint to the notion of phases and the idea that you categorize first and then connect these categories. Dey's point is this: "categories cannot be considered in isolation. Categories acquire their meaning in part from their place in the wider scheme of things... . ...discrimination among objects may depend on their place in a larger taxonomy" (105). Rather than category sets, Dey chooses the metaphor of "category strings" to represent how categories exist within a network of other categories. Thinking of Lakoff, Dey is stressing that any category/categorization activates a larger conceptual framework, and we need to be aware of this network of connections. Thus "in grounded theory, the division between open coding and axial coding needs to be treated with caution" (105). His point is that we need not wait until after open coding to uncover links between categories (I think), and that we need to be aware of these larger "strings" of relationships within the categories we declare. It makes me think a bit about activity theory.
Next Dey considers Axial Coding in more detail, first discussing Strauss and Corbin's "coding paradigm" (1987). Strauss' coding paradigm examines conditions, interaction among the actors, strategies and tactics, and consequences (106). The value of this coding paradigm is its clarity, which makes the entire process of coding more manageable. You know what you are doing with this paradigm. Dey questions why this paradigm should be privileged. Glaser criticized Strauss' "coding paradigm" because it ignored Glaser's work on "theoretical coding": "Instead of 'forcing' the data to fit a pregiven paradigm, Glaser suggests we consider a range of theoretical options of which the proposed paradigm is only one" (107). Glaser, in his 1978 book, lists sixteen "coding families" that provide a range of options for coding (107). Glaser stresses that a coding family should only be used once it is indicated by the data. Dey questions what to do if more than one family could fit the data, thus making the choice of a coding family arbitrary.
The final part of this chapter discusses "core categories" in detail. Core categories are central for grounded theory. As Glaser believes, "The aim of producing a grounded theory that is relevant and workable to practitioners requires conceptual delimitation" (110). Core categories are where the researcher delimits their categorization. Glaser believes a core category has to "earn its privileged position" (111) by possessing these qualities (or meeting these criteria): "[it] has to be central, stable, complex integrative, incisive, powerful, and highly variable" (111). Dey questions why only ONE category is chosen, and not more. He blasts grounded theory at this point for forcing a fit, one that obviously involves an "elimination of alternative accounts" (112). He also criticizes the core category as paradoxical in its role as both dependent and independent variable.
All these critiques come back to the basic difficulty of dealing with subjectivities when coding data. This quote from Dey seems to express the difficulty well: "The construction of a category or the appropriateness of assigning it to some part of the data will undoubtedly reflect our wider comprehensions--both of the data and what we are trying to do with it. The researcher (who brings to categorization an evolving set of assumptions, biases, and sensitivities) cannot be eliminated from this process" (104). I knew this truth before, but seeing it spelled out in more detail in this chapter makes the entire process seem more and more daunting.
Sunday, April 5, 2009
Categories and Categorization in Grounded Theory
This blog post will attempt to make sense of Ian Dey's long chapter on Categories and Categorization in Grounded Theory. Categories are maddeningly confusing, and at times it seems what Dey reveals in this chapter is like the soft underbelly of a dragon.
Let's start by presenting definitions:
Categories are conceptual "and never just a name or a label" (49). Categories are said to stand alone and refer to a class of things.
Properties, though, can't stand alone; they are "conceptual characteristics of a category" (51). They refer to external relationships and relate through interaction.
Dimensions represent the spectrum of variation possible within properties. Dey says, "Identifying dimensions therefore involves (internal) differentiation rather than (external) comparison" (52).
I will chart out Dey's example illustrating these three concepts below:
Category--color
Properties--shade, intensity, and hue
Dimensions--intensity can be high or low, hue can be dark or light
Dey says this example illustrates the orthodox distinction between properties and dimensions. Here is one good quote from Dey: "Whereas properties and dimensions 'belong' to the thing itself, the categories to which we assign it do not belong to the thing itself but are part of how we choose to classify it" (54). He stresses the point that categories are derived through comparison.
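Just to keep the three terms straight for myself, here is a minimal sketch of Dey's color example as a data structure; the formalization is mine, not Dey's:

```python
# A minimal sketch of Dey's color example; the structure is mine, not Dey's.
category = "color"

# Properties: conceptual characteristics of the category.
properties = ["shade", "intensity", "hue"]

# Dimensions: the range of variation within a property.
dimensions = {
    "intensity": ["high", "low"],
    "hue": ["dark", "light"],
}

# Assigning the category is an act of (external) comparison;
# locating an observation on a dimension is (internal) differentiation.
observation = {"category": category, "intensity": "high", "hue": "dark"}
print(observation)
print({prop: dimensions.get(prop, "not dimensionalized yet") for prop in properties})
```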
Dey uncovers a confusion within the analytic processes that each of these three refers to. He says that we can apply all of these analytic processes to the same phenomenon. He stresses that each process of analysis has a different purpose: "We use categories to distinguish and compare; we identify properties and attributes to analyze agency and effects; and we measure dimensions to identify more precisely the characteristics of what we are studying" (57). Dey feels that distinguishing these three concepts through purpose is better than seeing them as varying levels of abstraction.
The next section of the chapter presents the classical view of categories as elements in theory. Dey presents this summary before undercutting it completely. Theorizing involves discovering how categories relate to each other, and GT seems to have two ways of relating: one through relations of similarity and difference, and the other through connection and interaction. He provides this example relating a cat, a dog, and a bone.
Formal relations based on similarity and difference: puts the cat and dog together
Substantive relations based on connections: puts the dog and bone together
Our understanding of substantive connections is based upon our observation of the process.
Digging deeper, Dey tackles Glaser's "concept indicator model," which Glaser claims "provides the essential link between data and concept" (qtd. in Dey 60). The meaning of a category (or code) is defined in terms of its indicators. Dey uses the example of prejudice as a concept (category). We can't observe the abstract concept of prejudice, but we see it in action, so to speak--through its "indicators." We can look at statements or actions and identify them as indicators of prejudice. Glaser believes that constant comparative analysis slowly builds concepts through "the careful combination of indicators." A concept, then, becomes the "sum" of its indicators (61). Dey has some questions about Glaser's concept-indicator model.
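To keep the concept-indicator model straight in my head, here is a bare-bones sketch (my own simplification, not Glaser's formalism), using prejudice as Dey does: the concept is never observed directly, only inferred from the accumulation of indicators:

```python
# Bare-bones sketch of the concept-indicator idea (my simplification, not Glaser's formalism).
# The concept ("prejudice") is never observed directly; it is inferred from observable
# statements and actions treated as its indicators.
indicators = {
    "prejudice": [
        "derogatory remark about a group",
        "refusal to work alongside a group member",
        "stereotyped explanation of behavior",
    ],
}

def indicated_concepts(observation: str) -> list[str]:
    """Return concepts whose indicators appear in this observation (crude string match)."""
    return [concept for concept, signs in indicators.items()
            if any(sign in observation for sign in signs)]

# A field note is tagged with a concept only via the indicator it contains.
print(indicated_concepts("field note: a derogatory remark about a group was overheard"))
```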
Summing up the chapter, Dey points out that one of the special characteristics of grounded theory "is its firm location in an interactionist methodology." It is focused on explicating social processes in dynamic terms. I think this characteristic of GT is important to remember. The elements of theory--categories, properties, dimensions--and the process of categorization--constant comparison, focus on indicators--all facilitate this interactionist methodology.
Next Dey digs deepest into "categorization"--the fundamental process of distinguishing and classifying. Here is where things get messy. Dey explores modern developments in categorization that question Glaser's simple concept-indicator model and its process of basing categorization upon judgments of similarity and difference, which seems to figure so largely in GT. As Dey says quite simply: "The identification of categories on the basis of similarity and difference turns out to be rather problematic ... [and] in practice the process of drawing distinctions is much more complicated and ambiguous than the concept-indicator model allows" (66). Great. Pull the rug out from underneath me. Dey reveals that categorization is much more variable than Glaser describes it in his concept-indicator model, and "it challenges any simple assumption that categories are 'indicated' by data in a straightforward way" (75).
Dey goes on to describe three alternative understandings of categorization from scholarship done since the 1967 advent of GT. The chapter is dense, so I will include Dey's own summary:
"In the above discussion, we can identify at least four different accounts of categorization. First, we have the classic account, which assumes that category boundaries are crisp, membership is based on common features, and relations between categories are governed by logical operations. Second, we have 'fuzzy' sets, where category boundaries become vague, membership is graded, and relationships between categories become a matter of degree. Third, we have the 'prototypical' model, which stresses the role of category exemplars and shifts focus from membership to degrees of fit. Finally, we have categorization in terms of 'idealized cognitive models' (this is Lakoff) which 'motivate' the creation of categories through various forms of 'chaining' and 'extension'" (86).
After revealing the basic instability of categories and the process of categorization, Dey seems to mollify his reader with a hopeful message: "while the processes of categorization may not be strictly logical, neither are they entirely arbitrary" (87). He then provides a number of things the researcher needs to do to render her analysis not entirely arbitrary:
- Render the cognitive processes of categorization explicit (i.e., which of the four approaches to categorization will you take)
- Assess the adequacy of the cognitive process in terms of the underlying cognitive assumptions employed
- Recognize the various processes involved in categorization
- Identify the aims of categorization (for example, prediction or inference)
- Make more explicit the grounds (cue or category validity) on which these categories can be realized
- Identify the underlying conceptual models and make explicit their metonymic or metaphorical extensions (a la Lakoff) (87)
The larger point of Dey's chapter is summed up in a statement he makes near the end: "In grounded theory innocence is preserved and bias precluded by allowing categories to emerge from (and hence correspond to) the data. But Lakoff's analysis suggests that such innocence is impossible to achieve. We think in terms of categories and our categories are structured in terms of our prior experience and knowledge" (92-3). Grounded theory, Dey believes, must reassess how it categorizes in light of new theories that challenge and expand "how categories are actually assigned and used in the production of knowledge" (91).
I always knew that the original description of generating categories from the data was naive; however, Dey has overwhelmed me with his detailed description of this inadequacy. Yet I would rather be aware of these problems and strategically (and perhaps rhetorically) chart my approach to categorization and analysis than enter this forest without a plan. I am hopeful that Dey will offer more explicit suggestions as I keep reading.