Dey provides a bit of historical perspective on the term "coding" and notes with some ironic amusement that it is the combination of grounded theory and computer analysis tools that has made "coding" the term describing the key process in qualitative analysis. What is ironic is that coding comes from quantitative survey methods, where "coding" happens in the analysis of surveys built around "precoded" questions. The survey already contains the concepts and categories, so that when the surveys are analyzed the researcher simply identifies and assigns the appropriate codes to responses and tabulates them. Dey notes that in qualitative analysis the researcher does not have the conceptualization already complete as they review the data; it has yet to be accomplished (96).
However, Glaser and Strauss strongly believe "coding and analysis proceed jointly in grounded theory" (96). Thus they contrast grounded theory with other qualitative methodologies that would code first and then analyze second, separating the processes. What this joint coding and analysis looks like exactly, I am not sure. In this section distinguishing coding in grounded theory, Dey notes a point Glaser and Strauss emphatically make: "they reject the method of coding data 'into crudely quantifiable form' in order to test hypotheses, since they are interested in generating theory rather than verifying it" (96). My first reading of this emphatic point was that they rejected the idea of analyzing to determine a code which you then test (which is what my research design provisionally contains). I now see that I may have been reading too much into this quote. What they seem to say is not to count your data--there is no need to determine the number of occurrences of a concept or interaction, especially if this counting is done in order to support some hypothesis or theory the researcher brings to the data. As Dey says, "coding is governed only by theoretical relevance and is not concerned with the accumulation of supporting evidence" (96). I wonder about this claim, since I believe computer analysis tools can easily provide a graphical representation of the prevalence of a particular code (such as a tag cloud). I'll see more of this possibility as I get into using these computer analysis tools. The last key points Dey stresses about coding are that coding is a method for making conceptualization explicit, and that its function is "to generate rather than test a theory" (97). It seems, then, that one must resist the impulse to turn coding into a hunt for verification.
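As an aside on that last wondering: even without dedicated QDA software, getting a rough sense of code prevalence is trivial, which is why the counting temptation is so real. The sketch below is purely hypothetical (the codes and coded excerpts are invented, and this is not any particular tool's method), using only the Python standard library to tally how often each code was applied--the kind of count a tag cloud visualizes.

```python
# Hypothetical sketch: tallying how often each qualitative code was applied.
# The codes and coded excerpts below are invented for illustration only.
from collections import Counter

# Each analyzed excerpt is paired with the codes a researcher assigned to it.
coded_excerpts = [
    ("Student describes revising after peer feedback", ["feedback", "revision"]),
    ("Student resists instructor comments", ["resistance", "feedback"]),
    ("Student reports rereading the assignment sheet", ["task interpretation"]),
    ("Student revises thesis after a conference", ["revision", "conferencing"]),
]

# Count occurrences of each code across all excerpts.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)

# Print a crude "tag cloud": more frequent codes get a longer bar.
for code, count in code_counts.most_common():
    print(f"{code:20s} {'#' * count}  ({count})")
```

Whether such counts belong in a grounded theory study is exactly the question Glaser and Strauss raise; the point here is only that the tools make the counts easy to produce.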
The orthodox grounded theory view of coding is that it proceeds in phases:
- Categorize the data (open coding)
- Connect the categories (theoretical or axial coding)
- Focus on a core category (selective coding) (98)
Dey offers another innovation (we might say) regarding coding. This time he offers a counterpoint to the notion of phases and the idea that you categorize first and then connect these categories. Dey's point is this: "categories cannot be considered in isolation. Categories acquire their meaning in part from their place in the wider scheme of things... discrimination among objects may depend on their place in a larger taxonomy" (105). Rather than category sets, Dey chooses the metaphor of "category strings" to represent how categories exist within a network of other categories. Thinking of Lakoff, Dey is stressing the point that any category or act of categorization activates a larger conceptual framework, and we need to be aware of this network of connections. Hence his warning that "in grounded theory, the division between open coding and axial coding needs to be treated with caution" (105). In other words, we need not wait to uncover links between categories as we open code (I think), and we should be aware of these larger "strings" of relationships within the categories we declare. It makes me think a bit about activity theory.
Next Dey considers axial coding in more detail, first discussing Strauss and Corbin's "coding paradigm" (1987). Strauss' coding paradigm examines conditions, interaction among the actors, strategies and tactics, and consequences (106). The value of this coding paradigm is its clarity, which makes the entire process of coding more manageable: you know what you are doing with this paradigm. Dey questions why this paradigm should be privileged. Glaser criticized Strauss' "coding paradigm" because it ignored Glaser's work on "theoretical coding": "Instead of 'forcing' the data to fit a pregiven paradigm, Glaser suggests we consider a range of theoretical options of which the proposed paradigm is only one" (107). Glaser, in his 1978 book, lists sixteen "coding families" that provide a range of options for coding (107). Glaser stresses that a coding family should only be used once it is indicated by the data. Dey questions what to do if more than one family could fit the data, which would make the choice of a coding family arbitrary.
The final part of this chapter discusses "core categories" in detail. Core categories are central to grounded theory. As Glaser believes, "The aim of producing a grounded theory that is relevant and workable to practitioners requires conceptual delimitation" (110). Core categories are where the researcher delimits their categorization. Glaser believes a core category has to "earn its privileged position" (111) by meeting these criteria: "[it] has to be central, stable, complex integrative, incisive, powerful, and highly variable" (111). Dey questions why only ONE category is chosen, and not more. He blasts grounded theory at this point for forcing a fit and for what obviously involves an "elimination of alternative accounts" (112). He also criticizes the core category as paradoxical in its role as both dependent and independent variable.
All these critiques come back to the basic difficulty of dealing with subjectivities when coding data. This quote from Dey seems to express the difficulty well: "The construction of a category or the appropriateness of assigning it to some part of the data will undoubtedly reflect our wider comprehensions--both of the data and what we are trying to do with it. The researcher (who brings to categorization an evolving set of assumptions, biases, and sensitivities) cannot be eliminated from this process" (104). I knew this truth before, but seeing it spelled out in more detail in this chapter makes the entire process seem more and more daunting.
1 comment:
This is the nature of language, though. If anyone suggests that any method or methodology is without error, then the language/meaning/subjectivity angle comes out. Yes, all is subjective. So, it's the best we have; deal with it. Reminds me of the recent College English article that slams ICON. Everything is always already suspect. Now, research can mitigate subjectivity through triangulation and attempts at objectivity. That's what good research does, and it also acknowledges possible problems. It doesn't dismiss entire approaches.