Joanna Hodges

Since the beginning of the semester in this course, assessment has been a recurring issue of discussion. It was nice to be able to read more about the topic in depth and see what some scholars had to say about this point we kept coming back to in class. As far as the specific articles we read, I hadn't heard of either them or the authors who wrote them before (except for, naturally, the CCCC--but you know what I mean). I found that the articles brought up some really good points about how we read student writing and how we assess it in general.

CCCC Position Statement

I was surprised by some of the points made in the CCCC statement regarding assessment, and I was wondering what kind of impact this statement made (since some of it seems to be disregarded by some universities and colleges). I found it interesting that they say assessment should be "contextualized" for the students (393)--how often is this actually done? Is the THEA contextualized? That's a genuine question for me, since I never took it, but from everything I've heard about it...I would doubt it. The second assumption says that "language by definition is social" (393). This definitely sticks out to me as a reason why I don't totally support standardized tests for assessment. It actually reminds me of Comps, too, and the common objection we students raise about not being able to discuss and work with others to make meaning: in the classwork we do, we're always encouraged to help each other, to do peer review, etc., yet when it comes to the tests, all of that goes out the window, which seems contradictory. The sixth assumption also seemed significant to me: "assessment tends to drive pedagogy" (394). This is also mentioned in the other article we read for class this week ("Those Crazy Gates..."), and I find that this concept really rears its head with the TAKS test. I've been observing at Miller recently, and with TAKS approaching, the teachers are pretty much teaching to the TAKS. Every day for probably the last month has focused only on TAKS-related activities, practice, or review (using reading samples, etc.). The teachers know they are held accountable for the students passing the test, so that is the focus (rather than the students learning the subject, it seems more important right now that they learn the test). This leads into the seventh assumption, about the tests being used for accountability and their misrepresentation of "the skills and abilities of students of color" (394). 
It all focuses on the correctness of the writing, whether the students are right or wrong, rather than looking at what they can do positively. I found it interesting that they promote the idea of having students' work "evaluated by more than one reader, particularly in 'high stakes' situations" (396). Since it mentions that this should be done even for receiving a grade in a course, it made me consider how we grade based on one reader for composition courses at our university. Is this something we should reconsider? I remember reading an article about portfolios during practicum that mentioned having other professors evaluate students' portfolios so that more than one person decides, but I still don't know about this idea. I do like the idea for standardized tests, though, as it gives the students more of a fair shot than if one person were reading it and deciding their fates. The positions in this statement, although many of them weren't exactly new ideas, seemed somewhat surprising to me. I guess this is because we have discussed a lot of the concepts in our class and have noticed some of these issues, and I wouldn't have expected them to be written out so clearly almost fifteen years ago! I would have expected more progress in assessment in that amount of time, but it seems like we are still facing a lot of the same issues with assessment today.

Failure: The Student's or the Assessment's? and "Those Crazy Gates"

I decided to discuss these two articles together, since they seemed closely related to me and I saw many connections between them. To answer the title of the first article, it seems like the conclusion for Mica's example is that it is both. Or that they don't really know, because it's a confusing and touchy situation when dealing with personal voice and dialect. Reading both of these articles reminded me intensely of the American Tongues movie we watched in class, especially the people discriminating against others because of their dialects, such as the lady who didn't want the hillbilly babies crawling around inside of her house. I guess because of the movie, I wasn't all too surprised by the negative language found in some of the evaluations or assessments of student writing in these articles dealing with correctness. I kept thinking, especially during the "Those Crazy Gates" article, how unfair the timed tests are for people who speak dialects other than Standard American English. Of course, I think they are unfair for everyone, because they do not allow the students to use the writing process fully as we try to teach them (which doesn't make sense at all), but even more so for students who primarily speak a dialect such as AAVE. I also kept thinking, while reading these articles, about how glad I was that we have the program we have rather than a baseline basic writing course structure, such as Del Mar's. Of course, we serve a different population of students, but I would rather teach a course with mainstreamed basic writers than teach a basic writing course. I really liked the point "Those Crazy Gates" made about students needing to "gain a reasonable mastery of the conventions of written standard English by the time they graduate" (98). 
I think this is significant, because it shows me that when assessing students as basic writers, we should look more at their ability to make sense organizationally or logically, looking at the content, rather than at the use of Standard American English. It seems ridiculous to place someone in Basic Writing mainly because they do not use the "correct" dialect. I agree with Agnew and McLaughlin when they say "students such as Shanda and Rowena would have fared better if they had not been tracked into basic writing" (98). As I read the final paragraph of this article, I kept thinking about the syllabus for the basic writing class Garrett and I observed earlier in the semester. It tells the students point-blank that some "reliable" test said they needed to be in the class. Agnew and McLaughlin end by saying: "When invalid and unreasonable assessment methods invent basic writers who may become trapped in noncredit courses, it is difficult to draw conclusions about how much or how many students benefit from the course" (98). How reliable, really, are our methods of assessment? And how can we know? This brings me back to the definition of the basic writer. In class, we kept coming up with the idea that a basic writer is someone who did not pass the criteria on the test. Since that is the standard we go by, I suppose that has to be the definition we use, whether or not it is an accurate portrayal of which students need the extra assistance. Because whether or not they were assessed and tracked fairly and properly, they are still labeled and stigmatized as basic writers.

John Lamerson – Reading Response #7

During our group discussions at the end of class two weeks ago, I realized my problem with the CCCC. This was confirmed with the reading (“CCCC Position Statement”) this week. My problem with the CCCC is that it prepares students to be students, not to be anything else. Their positions are intended for those who spend their lives in education – not for those out of it. As proof, look at their assumptions.

1. Language is always learned and used most effectively in environments where it accomplishes something the user wants to accomplish for particular listeners or readers within that environment.

Why would one assume this? Good writing can originate because it coincides with the writer’s wants. It can also originate from the writer’s fear of failure or, in the workplace, from a desire to remain employed. Most people do not spend their lives writing what they want to write. By allowing students to cherry-pick their topics, are they really being prepared for post-academia life?

2. Language is by definition social.

Obviously true. But social and collaborative are not the same thing. When I write for one of my professors, such writing is social. But it isn’t collaborative, usually. Again, aren’t students being taught a terrible lesson about life post-academia? That they’ll be able to rely on their peers for help with their writing?

3. Reading — and thus evaluation, since it is a variety of reading — is as socially contextualized as all other forms of language use.

See 2.

4. Any individual’s writing “ability” is a sum of a variety of skills employed in a diversity of contexts, and an individual’s ability fluctuates unevenly among those varieties.

Completely untrue. One piece of writing can determine whether a student is aware of the basic mechanics of writing. Can additional works provide a finer assessment? Certainly. But everyone in our classroom should be able to write better than the basic eighth grader, no matter the contexts or varieties.

5. Writing assessment is useful primarily as a means of improving learning.

True but the explanation is incomplete. Assessment provides the student an indication as to how he or she is currently performing. It does not just exist to “revise existing curricula.”

6. Assessment tends to drive pedagogy.

Agreed. But as has been said: “You have to crawl before you can walk.” Essay writing and multiple-choice tests are a stepping stone to writing like real writers. A piano teacher teaches the scales first. The English teacher should perform the equivalent.

7. Standardized tests . . . misrepresent disproportionately the skills and abilities of students of color.

This is a rather large assumption. And one they forget to support with evidence.

8. The means used to test students’ writing ability shapes what they, too, consider writing to be.

The logical extension of this assumption is, of course, that if basics like grammar and spelling are not assessed, they are going to be considered unimportant by students. I am curious if CCCC published works by authors who can’t spell or punctuate, except of course those who do so ironically.

9 and 10. Financial resources . . .

Too long to type the whole thing. Again, another assumption without any evidence supporting it. I know that we are to assume that such evidence exists somewhere – but I do not assume such.

The CCCC’s Position Statement is predicated upon these assumptions. And to my understanding, this statement has been largely ignored. In my opinion, this is for the better.

With regard to Kay Harley and Susan Cannon’s article “Failure: The Student’s or the Assessment’s?”: I have written before that I personally find single case studies unpersuasive when arguing a point. Using one set of experiences as proof of a principle seems at best irresponsible and at worst dangerous. With regard to Mica, if the purpose of the basic writing course at Saginaw State wasn’t only to learn “dominant academic discourse” (406), then what is the purpose?

What is it called when a person writes about his or her personal life without regard to “reason, orderliness, conscious strategy, and correctness” but with ideas and passion (406)? A diary. It isn’t “gutsy” (409) to turn a writing assignment into a diary. It’s heartbreaking when we find out she wants to learn where to “put a period, comma, and semicolon” and doesn’t. But that failure is not a mutant form of success, no matter how badly the authors want to spin it that way.

I remember the one calculus class I took. I badly wanted to learn calculus. I studied and worked hard, but I just didn’t get it and dropped out. Not every subject is teachable to every student. Furthermore, I’d be the first to admit that mathematics professors shouldn’t plan their curricula around my failures. Nor should English teachers have to plan around the failures of students who, for whatever reason, are unable to learn the subject.

Finally, I greatly respected the work and conclusions in Agnew and McLaughlin’s article “Those Crazy Gates.” The authors posited a problem and offered quantitative evidence of a discrepancy in how black and white students fare in the collegiate environment. I disagree with their comments regarding “good” and “bad” writing, and the inclusion of only the introductory paragraphs screams of cherry-picking, but on the whole they make a compelling argument that basic writing exit assessments need to be examined for cultural bias.

Darcy Lewis

CCCC Position Statement:

The CCCC takes some pretty surprising stands in this article…it wasn’t entirely what I expected when I read the introduction. The fact that their first point is that writing assessment should “identify purposes appropriate to and appealing to the particular students being tested” leaves me conflicted. Appropriate, yes…we shouldn’t be asking teenagers to write about things that are beyond the scope of where they should be knowledge-wise, nor should any student being assessed be penalized for things that are contextually irrelevant. But tailoring assessment so that it’s “appealing” to the students? I don’t know…how much coddling has to be done in order to elicit “good” writing? It seems like this sends the message that the only writing a student is ever going to do in his or her academic career and/or employment is going to be fun with a capital F. That is unrealistic, and it does the student an injustice. Isn’t asking students to write about something slightly difficult perhaps the best way to set up realistic expectations about what’s ahead (in college writing and in life in general), to get a better sense of what the student is able to “apply” his or her writing abilities to, and to identify strengths and weaknesses so as to properly place the student? If students are not asked to stretch themselves just a little bit, then we’re not getting the best possible representation of their abilities, which (unless I’m missing something) is the point of assessment. I am all for incorporating “appealing” writing prompts into the classroom, but I believe in balance, and every “fun” assignment needs to have elements that ask the student to go outside his or her comfort zone.

That being said, I agree whole-heartedly that there isn’t one blanket test for all environments and for all purposes. I agree that administrators, legislators, and teachers need to be on the same page and that teachers and/or researchers should be driving the assessment bus, not the other way around. Also, (some of us had this conversation Monday night) writing does not take place in a vacuum. The writing assessment should be realistic, and the student should be allowed to use a dictionary and thesaurus, should be given time to review and revise, and possibly even get feedback. Nobody writes in a room from start to finish without any revision or tools. If the point of assessment is to see how a student will perform on an academic test, it needs to take into account the real environment in which students normally compose.

“Failure: The Student’s or the Assessment’s?”

This article brought up how student writing is most often assessed “as an isolated text, not contextually or intertextually” (402). The authors make the point that assessment focuses on what the writer doesn’t do as opposed to what the writing is achieving, which I think is a very valid point when considering a student’s work within the context of a semester; but if the system is going to be set up so that a writing sample is an exit requirement, there has to be some standardization. I think the fault here lies less in the assessment itself and more in the system. I agree with their point that “narrative strategies are undervalued,” but they themselves admit that they wouldn’t have passed Mica’s portfolio, so I’m not sure what their point is other than to highlight the negatives without offering any solutions. The recognition of the positive qualities of her writing should have happened within the class over the course of the semester and been synthesized with her desire to write “academically” before the final portfolio was turned in. Otherwise, all their good intentions in helping Mica are too little, too late.

“Those Crazy Gates…”

I thought this article did a much better job than the one above of specifically locating the breakdown in assessment of speakers/writers of non-standard dialects. For one thing, Agnew and McLaughlin clearly point out that the grading of essays is as situational as the writing of those essays. When they took Shanda’s essay back to the administrator, he called it “wonderful” and declared it a passing essay when he himself had failed it originally. Even when trying to circumvent the unfair and subjective aspect of assessment (by having a group of graders instead of one), the system is still set up to discriminate against certain types of writing. As the authors say, “the inflexible assessment styles of some English instructors can impede the progress of students whose home language is AAVE” (91), and even those assessors can read things differently from one day to the next.

They also make great points about the unreasonable expectations set by “timed impromptu exit essays” which do nothing but privilege those who can write well on the spot without revision. Students whose first writing instinct contains AAVE patterns probably require more writing time, more revision, and more instructor intervention to approximate this “strict, Eurocentric model,” so they are set up with a double-whammy to fail this sort of assessment. Putting them in basic writing classes may not be the answer when the “breakdown” is one of dialect rather than true error. Rowena was a much more interesting writer than Kyle—recreating a formula doesn’t equal “good” writing, and rewarding formulaic writing in assessment is not the answer.


“Those Crazy Gates…”

The argument David Bartholomae makes about whether basic writing programs are actually beneficial to students is one that, I believe, will last a lifetime. I think it's a never-ending argument because every student, institution, professor, instructor, and set of course materials is different, among other things. We have discussed in class how some pedagogies work for some students while not working for others. In counterargument is Karen Greenberg, who argues that BW programs do help students achieve success academically because those students don't coincide or "fit" in with the mainstream students (85). While I agree with both, I think there are many things, including a time factor, to consider. However, I do agree that sometimes assessment systems or tests are invalid and do not fit a certain program (86). The authors include some statistics in the reading targeting black students, but I'm glad that they note that "statistics are one thing and real students with real lives and hopes are another" (86). It really is so; statistics are often used as the basis of an argument, but in reality they are only one factor in a study. The example they give about Shanda having failed the written exam, only to find out that one of the graders messed up, is very upsetting. I think the problem is that a grader can easily be having a bad day, a tiring day, or something else that keeps him from focusing the way he should, so when reading essays graders may just look for keywords or errors to justify that they read them. The worst part about Shanda's situation is that the university does not have any appeals process. It's interesting that universities such as Georgia work toward a type of essay exam that will fit students; they recognize that the current assessment is "designed to reward the person who can come up with an idea fast and throw together some good sentences…and is based upon outdated theory supported by an irrelevant epistemology" (87). 
The way grading is done at the university these authors work at is simply irresponsible, and I don't understand why they don't work on fixing the problem they know is there. It's like the saying about the elephant in the room that no one can ignore, except here students' futures are at stake. I do believe that many institutions fail in training not only graders but teachers as well. I don't believe that one semester of observation and a little work is sufficient preparation for the real world. I'm referring to students finishing a bachelor's degree to teach at a high school, for example, who go to a classroom only on certain days and times of the week. They aren't there enough to see the other things that go on while they're absent. I think such students should be given more time to spend in the classroom.

CCCC Position Statement

The Conference on College Composition and Communication statement reminds me of the previous argument that others such as Bartholomae and Greenberg were making, and of the reason for the push to measure, judge, and evaluate students' writing proficiencies (390). The one thing that stood out for me was that at least some educators realize the need to have “‘hypothetical or utopian’ assessment, which includes realities of assessment for both teachers and students” (390). In addition, the other article we read includes an actual case study, Mica, which makes things much more real and points out that she is only one type of student who needs help succeeding in college. Given the discussions we've had in class, I can see how writing assessment can be abused to exploit graduate students, for instance, or to reward or punish faculty members; I've heard about faculty members being reprimanded for doing something they believed was right, namely helping a student (of course it's deeper than that, but I won't go into it). I will not list all the assumptions because they are in the book, but I will note that they seem legitimate; for instance, the second notes that language by definition is social. We have discussed that topic in this class, as well as in other classes I've taken. Sadly, I would imagine other institutions do not consider or accept that definition, and that is where the student loses out, because students can easily be discouraged from continuing higher education or feel like they don't belong in the mainstream. As for the suggestions for students and faculty, they fit with other topics we've read and discussed: for example, students writing in different genres, preparing real-life writing material, being informed, and evaluating their own writing. 
As for faculty, some of the suggestions include considering a student's background, participating and evaluating often, and making sure it all makes sense; whatever that means, how can one even make sure that a faculty member knows what sense is? I've seen a couple of professors who are out of it; they don't seem to know what is going on, or do they choose to be senseless? I don't know. Lastly, administrators and higher education governing boards should definitely be much more informed about some of the things that are going on, if not everything. How can they really know what students need if they don't work in the field? It's like having a chef figure out a problem NASA has just by assuming things, or, vice versa, NASA trying to make an eloquent dinner dish for a president. I think one can understand my point. Finally, this article really touches on some of the obvious issues at hand, issues that have existed for a long time, and who knows if assessment will ever be more than half perfect, or at least tailored to the students of each institution.



CCCC Position Statement

The assumptions listed in this article are fundamental to understanding the foundation for the CCCC statement. I am struck by the second point, “language by definition is social” (393). Since language is taught by others and learned through a variety of resources, it is important to understand that even though one may be instructed in formal language, the speaker may use various dialects and colloquialisms. The speaking of language is a whole other story for lots of people (versus formal writing and learning).

This may sound like a simple concept, but once a student leaves the classroom, the learner then has the responsibility of using what was learned or adapting to the language of his environment. It becomes a challenge to integrate the skills taught in the classroom with the “sum of a variety of skills employed in a diversity of contexts” (393), in which literacy is not just a means to pass the subject of language but a way to define character and understanding.

The position statement notes that the outcome of assessments such as standardized tests tends to be negative because the tests evaluate what is “wrong” instead of what the student does well (394). Is there any other type of test the student could take? It comes down to memorization. Ultimately, a student not only has to understand the rules of language, he must memorize them in order to regurgitate the skills on an exam. There is no other way to do well on a standardized exam – you cannot guess, it is not an opinion, and there are rules and exceptions to the rules. It is said that English is the hardest language to learn because it is composed of so many other languages; its roots are diverse, given the nature of its history. If this is the case, then it only makes sense that learning the language will entail extra work beyond speaking well. There are rules. There has to be assessment to ensure the student understands the rules and can repeat them on an exam. I do not see any other way to approach this dilemma.

Failure: The Student’s or the Assessment’s?

Mica set out to improve her writing, and the authors set out to define what it means to be a basic writing student. So far, we have seen that the basic writer is a difficult concept to define. The pilot program Mica participated in sounds typical in that it seeks to aid “underprepared” students. The authors take a closer look at how, in Mica’s “failed” (402) attempt at the pilot program, the struggles of “culturally diverse students” (402) were overlooked.

The authors originally state that Mica’s writing “misses the mark” (404). Further study reveals implications in Mica’s writing that force the teachers to look at academic discourse in a different way. Just because Mica does not demonstrate standard grammar does not mean she wrote a bad essay, so to speak. There are clearly other factors to take into consideration besides just grammar, style, and mechanics.

In comparing what Mica wrote with some of Mike Rose’s testimony in Lives on the Boundary, Harley and Cannon believe the “medieval goddess Grammatica” functions metaphorically (407). I agree. There is so much more to writing than grammar. I love grammar, but there is nothing like the skill of being able to process information, take it apart, and express it clearly – perfect grammar or not. But, as the authors say, social power is identified with academic discourse (412). There is a point to rethinking the assessment of writing and looking more closely at “texts contextually” (413); however, I do not think usage, grammar, and syntax should be considered inconsequential as a result.

Those Crazy Gates...

This article ties in to the others because it approaches the all-important issue of the student's cultural perspective in writing for assessment. This can be compared to Mica's experience in the Harley and Cannon article. There could be a need to grade differently for ESOL students, or for students who are able to pass the placement exam and be placed in college-level classes but still lack the ability to grasp formal essay-writing skills because of dialectal variances.

There is the tie-in issue of the multiple-choice section, in which the student is tested mostly on rules, punctuation, and the like. This is compared to the essay, in which the biases of the culture can come into play more dominantly. The student will express in writing the world as they see it through their experiences. Is there an objective way to grade this? Essays are not as clear-cut to grade as multiple choice. The skills may be integrated, but ultimately there is no multiple choice in writing. The student has to make communication choices, and generally it will be through the eyes of their dialect.


The CCCC Position Statement was interesting in light of a few things that are relevant to us all as grad students. It does seem, as Joanna pointed out, counter to a lot of universities' practices. There are parts of the Statement which really seem to counter the notion of having a comprehensive exit exam for reasons other than to evaluate the curriculum (such as the comps we have to take at the end of the program). Joanna already pointed those out, so I won't do it again, but I did have the same thoughts. I think everyone probably clicked on those things in a grad-student fantasy land where we collectively point out some research or position of a powerful group of researchers that would get the English Department to rescind the requirement of comps before we have to take them.

If I have a problem with this document, it is that it is based on assumptions and not on anything really concrete. John brought up some good points about how those assumptions do not really prepare students for anything but life as students. Of course, that would be what a university would want - in addition to promoting the idea of lifelong learning, lifelong university students mean more money for the university.

My own ideas are that this document is fourteen years old, and assessment in many institutions of higher learning still does not reflect its assumptions. Furthermore, assessment in public education and the powers-that-be that enable it (legislators, lobbyists, the testing industry to name a few) apparently laugh in its face because the TAKS test is hardly based on any of the assumptions outlined in the CCCC piece. If this is a serious document and CCCC is a serious organization, there sure are a lot of real world scenarios where this statement is not taken seriously. The actions of the education world are not reflecting the words of the CCCC when it comes to summative assessment.

The Harley and Cannon piece, "Failure: The Student's or the Assessment's," while very emotional, did not convince me that the standards needed to be adjusted in the basic writing program. I do think it is healthy to look at what we ask of students and whether the students are meeting those expectations, but the problems that the instructors failed Mica for should have been addressed throughout the semester through formative assessment. Mica's work from her portfolio, as presented, should never have been so unstructured and unpolished if both the teachers and student had been actively addressing the problems in her writing.

They are right in some ways. Mica has wonderful voice and guts in her writing. It is alive. However, it does not look good, and it is hard to read. It lacks the balance, precision, and clean look that is needed in proficient academic discourse. She is excellent at some things and horrendous at others. If you were to put it in numbers, on a scale of 0-100, you might give her a 100 for content and a 0 for form. The average is still a 50. You cannot substitute one for the other, and I think that is the point of a good, balanced system of assessment.

We often downplay the importance of structure and grammar in writing and say "Wow, look at the content." That's fine when you are reading excerpts from papers, or even one or two papers, and going "I can understand this if I do all the mental corrections while reading it," but what happens when you are reading one or two hundred portfolios? That structure and grammar can get to be pretty damn important by the fifth or sixth paper in just one student's portfolio. Then imagine grading 99 or 199 more portfolios.

That is one reason why I really feel like the disparity between the number of teachers and the number of students and the need for speed and efficiency in communication is at the heart of the standardization of language in schools. If you can read and grade something easily, it better facilitates your understanding of it; it becomes more grader-friendly.

The stuff that is challenging to read basically has a target painted on it that says, "Look at me, I am special, and I don't think I have to do what everybody else does!" That's great in literature, but it is murder on academic writing because the grader needs that adherence to standards that everybody else recognizes and adheres to in the name of efficient communication in order to read a paper quickly and assess whether or not a student can write effectively in the dominant discourse of the academy.

Also, if we don't teach it and assess it in basic writing, it will just fall upon the teacher of Freshman Comp to teach and assess it.

Another problem brought up is that Mica did not write in the specified genres of the assignments. Argue as we might about how writing does not always fall into whatever genre, the point is she did not do the assignments requested. If I were asked to do a deconstructive analysis of Othello and I turned in the absolute best instructions ever written on how to rewire a laptop computer, guess what? FAIL.

It is both sad and interesting that this particular student failed, this student who seemed to want so badly to learn things but was not taught those things in a way that she was able to learn them. There are things that are hard to gather from this article, though. How hard was Mica really trying (was she just putting on an act as some students will do, or was she completely sincere)? If she was sincere, and if most of the other students managed to meet the standards, why didn't she? How hard were the instructors working with students on a one-on-one basis when those students needed help with things they might end up failing the course over (since Mica clearly needed that help)? There are a lot of blanks that need filling in to really evaluate this case study, but it did make its point: we need to be constantly aware of who is making which errors and why. It also makes me think that we need to pay way more attention to formative assessments of students' writing in order to intercept problems as they occur rather than at the end of a course.

The article, "These Crazy Gates," takes a good look at how tracking students into basic writing can be problematic. They also really hit the nail on the head with the comments about how Standard English facilitates a faster, smoother grading process. It really does, but is non-adherence to it really sufficient reason to fail someone on a written final assessment, especially when the mistakes in Standard English and lapses into AAVE are minimal as they were in Rowena's case? I do not think anyone would compare the frequent SAE errors in Mica's writing from the previous piece to the minimal ones in Rowena's save their origin (AAVE).

Anyway, there is not much to be argued when nearly all scholars will state that, at some point before graduation, everybody is responsible for knowing how and when to speak and write in SAE. There seems to be controversy on when that should occur. I feel that it is better sooner than later. The sooner you master the basics in any area the faster you will progress toward doing great things in that area.

I think we may get away from our task at hand when we start pandering to students too much and telling them that their mistakes in the dominant discourse are okay because of their background in another discourse (even though that background may be their entire home culture). They are still mistakes in academia, and they still need to be corrected quickly and effectively. Students are here to learn new things (if that includes a new dialect of English, then it includes a new dialect of English), not to simply continue practicing old things that will only serve to keep them from advancing in the university and beyond it.

After all, nothing we have read in this class has been primarily in AAVE, Tex-Mex, Elvish, or anything other than SAE. You will not find any laws written in anything but advanced SAE, and God help you if you fill out a job application in anything other than SAE. My point is that although we may pass as politically correct individuals who offend no one (and, likewise, perhaps sincerely compliment no one) by saying "Every dialect is acceptable, and everything will work out for you," we fail at our responsibility as teachers to teach the things our students need to learn to get many of the things they have come to college wanting, namely a degree that will get them a good job, good money, and respect in a society that is heavily structured by socioeconomic class. If they have to learn another dialect to do it, I doubt many of them would think twice about it, just as I didn't think twice about busting my brain to learn college algebra. You have to do some things whether you like them or not.

We kid ourselves when we say that there are a lot of students who go to college for a liberal arts education and to be themselves. I'm reminded of Good Will Hunting when Will tells the Harvard grad student, "You're going to one day figure out that you spent a hundred and twenty five grand on an education you could have gotten for a buck twenty-five in late charges at the public library," and the guy replies, "Yeah, but I'll have a degree."

Anyway, I think most people realize that most books can be read for free. They aren't here for the overpriced books and tuition, and they aren't here because they like waking up early or going to class at night when they could be home with their families, significant others, or out living it up.

Students are, by and large, in college for the jobs they want to do and the money they want to earn. It is one of the main avenues to a higher social status and a more comfortable lifestyle for most people.

If we say, in basic writing, or in any other composition class, that any dialect other than SAE will be equally respected by employers and other citizens of the United States, we will be telling our students a boldfaced lie. That is one big reason that SAE is so pushed on students and why they have to master it to succeed. ___________________________________________________________________________

Jennifer G

Wow, this collection of pieces on assessment was fantastic. Since we are working on the idea of assessment of students, programs, and exit assessment, I found all three very helpful. I am going to divide my post up between entrance assessment (and, I suppose, exit as well) with the CCCC's article, program assessment with Harley and Cannon, and exit assessment with "Those Crazy Gates". It is important to note, as always, that these overlap in areas.


This article was a basic and simple guide to a more balanced way of assessing students. They list 10 assumptions for assessment, focusing on linguistic studies (which we have seen in "Students' Right to Their Own Language") as well as some common-sense ideas. I liked the comment "there is no test which can be used in all environments for all purposes, and the best 'test' for any group of students may well be locally designed" (393). I think this is logical and practical. It is the idea of taking region and context into account. I find it frustrating when these things are considered separate from the exam, and I agree that one-size-fits-all does not work well. I have given this example before, but my daughter was taking a multiple-choice exam on verbs, and she had to match the verb to the sentence that it belonged in. When I asked her how she had done, she got frustrated and said she got all but one, because one of them didn't make any sense. She said it was the word shovel, used as a verb, and the only sentence left had to do with a sidewalk. I laughed, because I got it; she didn't, however. How is someone who has never "shoveled" a sidewalk supposed to understand that sentence? This is such a minor example, but region, context, and social constructs all play a role in how something is perceived.

They go on to discuss this very thing later on: there has to be knowledge of the community and the region to properly assess the writing or the testing. I understand that this could become a political nightmare, as money would have to be appropriated to fund local test design and creation; that is, if it were to remain somewhat standardized. This could get costly and complicated, so I can see some of the reasons for outsourcing, as well as for the "credibility" that some of the big-name tests have gained (SAT, ACT, GRE). These are all names people can identify, and when they are listed with scores, people can rate where everyone stands (so to speak). It is not accurate, but people like what they know.

I was surprised by the fact that the authors were not against testing; in fact, they note that when done correctly it can help a program grow. My confusion with this section comes from something that I am not even sure I should be concerned about, and that is the appropriation of funds for testing and the misuse of those funds. The authors list this as #9 on page 395. I knew that some places had bonuses and dismissals based on the testing, but I had never given consideration to the funding received for testing.

I could go on with this chapter for ages; there were a number of lists and great information and ideas, but I will save some for class.

Harley and Cannon:

Assessing student failure, and program assessment

(well not word for word, but it’s what I got out of it.)

There were some good ideas in this reading as well, although it was not as solid in its presentation as the other piece. I found the portions that broke down the portfolio requirements and how they were graded very, very helpful. I enjoyed the candor with which the authors presented the information. I gathered from this selection that constant reevaluation based on failure as well as success is very important. Even though the authors noted that they would not have changed the young woman's standing, nor passed her, they would have handled things differently. I think it was interesting that the student desired to grasp control of the language while the instructors were trying for a more open and empowering environment; thus the student felt a bit cheated. I also liked the references to the "secrets" that we keep. It made me think of assignments my students have struggled with, and it makes me wonder why we don't show them the grading criteria sheet, or break things down more. Some of the items listed for the students, in the selection, were a bit abstract; I probably would have asked for clarification on one or two points. Why do we do that? I am once again torn by the idea of preparing them for further work while keeping everything within context, but at least the authors seem to be struggling with the same thing.

Those Crazy Gates:

This was interesting, and I wonder, if the study were done here, how the numbers would add up. This one brought me back to my constant struggle between preparedness and keeping everything in context. What is the goal of the basic writing course? Should there be specific guidelines? Should it just be "shows improvement"? Is it fair to the student to let them finish their training elsewhere, in other words, in sophomore- and junior-level classes? If they are struggling to get out of basic writing, is it not going to be a struggle in everything else? As you can see, this article brought up more questions than answers… The authors' points about the assessments of the essays were interesting, and the idea that the student could not reapply for 3 years seemed rather harsh. I still don't like the idea of a class without credit, and am not clear or sure of how mainstreaming works (completely), but I think the students should have some way of getting credit for their work, thus validating their effort. I think the authors raise some great questions as to the multicultural use of language and its effect on students and instructors. What is the proper way to proceed with any multicultural student? Not everyone is looking to see the intelligence in the mistake… I guess my fear is that, in passing on students who have not mastered (to some degree) Standard English, is it going to be harder on them later, and cost them more in the long run, if they are greeted by people less tolerant and accepting? I know the authors are not advocating a lack of Standard English training, but the idea of moving them on so that it has been mastered by graduation seems to me like passing the buck, something that has been done with these students from high school on (someone else will take care of it, or they will get it later). This is my struggle: not that I want to be mean or hold them back, but I would hate to see them proceed underprepared.


I had some trouble with Mica's story. If they knew all semester that she was having problems and was vocalizing those problems, was she really beyond help? I have not encountered that, so I do not know how to deal with that type of situation. Normally, the students do not ask for help, stop coming to class, and/or just do not turn in any work at all; therefore, you do not know they are in trouble. After reading Mica's personal essay, it is clear that many of her "problems" stem from her inability to conform to the language of the academy. Yes, she clearly has some organization and grammar issues, but some of those issues seem to fall where they do largely because of her vernacular. When thinking about their assessment of her, in essence, they are saying that she must change a part of herself in order to get a good grade in this class. She does change her "voice" in at least one of the later essays but apparently still does not make the cut. It really was a sad story because Mica felt cheated. She didn't feel that she got the help that she was more than willing to ask for the entire semester – or at least until she got tired of asking, I guess. I wish there had been more discussion of what they did try to do to help her. If there had been, I might understand a little better. They studied her, but did they try different things to get through to her – did they feel that she had any growth at all? Did they suggest that she visit a tutoring center – or was this pilot course supposed to be the place to go for all your writing needs?

The issue of voice is interesting when I think about the TAKS. Does the TAKS want students to use personal voice or does it want them to be uniform so that they will all pass and “write” correctly? Mica’s story shows us that not everyone’s voice is going to be accepted. The 4C’s article made me wonder about free writes in the classroom. I do not grade free writes. I use them as tools for brainstorming, venting, reflecting, etc. If I did assess these pieces of writing, would that be fair? Would that be considered poor teaching because I am forcing them to write without time to reflect, talk, revise, etc.?

I wonder if writing assessment should have more choices. (Maybe it does and I don’t know) The 4C’s article discusses that “ideally, such literacy must be assessed by more than one piece of writing, in more than one genre, written on different occasions, for different audiences…” (393). We know that standardized testing is not going anywhere anytime soon but perhaps the options for these tests should be altered. Students should be allowed to choose from a variety of topics. They might know where their strengths are so they can pick a certain genre that they are more comfortable writing. It does not necessarily have to be something that is easy or fun but it might just come down to the difference between persuasive and expository. Students might work better if they are not backed into a corner.

These articles really revisit the question "who is a basic writer?" The students depicted in these case studies obviously have something to say, but they are not saying it the way they are expected to say it. (I am assuming that they are writing the way they hear it in their heads.) Also, there must be another measure of writing, because some of these errors could just come from lack of proofreading. If the only other "undesirable" is the students' vernacular, I can see where one essay can be turned in with tons of errors (I have done it myself because I completely forgot to re-read it and just grabbed it off the printer). I understand that students are supposed to be prepared to function in a society that acknowledges certain types of written discourse, but they should also be allowed to incorporate their personal language where relevant without the fear of being penalized or "held back."

Andrea Montalvo

CCCC Position Statement:

I thought this was interesting because I've always wondered who is ultimately responsible for assessment/placement: the teachers or the students themselves. I think, just as with the question "what is basic writing?", there is not one solid answer. The list of assumptions was interesting because it was developed by a small group of people in the field. Not that this is a problem, but it did spark my curiosity a little. I agree with a few of these assumptions, such as the seventh one pertaining to standardized tests, which, "…usually developed by large testing organizations, tend to be for accountability purposes, and when used to make statements about student learning, misrepresent disproportionately the skills and abilities of students of color" (394). This section also talks about "good" and "bad" writing, and how these tests differentiate between the two. I also like the idea that "assessment drives pedagogy" because I assume many teachers want to assess their students' progress so they know where they stand. I was aware that there were things the students and faculty should do; however, I was unaware (perhaps through my own fault) of the role of administration and legislation in assessment. I did enjoy reading these lists because I learned that the weight of assessment should not just be placed on the shoulders of students and faculty.

Failure: The Student’s or the Assessment’s?

Mica's story was intriguing. I liked how the authors included the assignments used in the pilot program so we could see just how they assessed their students. In her papers, Mica used Black English Vernacular, which, according to the authors, made it difficult to read and assess her writing. Rather than focusing on what Mica did wrong, they compared her style to that of Mike Rose in Lives on the Boundary. What she did was shift abruptly from sentence to sentence like Rose does in the beginning of his book. The authors note, "reexamining and questioning our assessment of Mica's portfolio has left us with more questions than answers," and after reading this essay I feel the same. I think it's due to the fact that they focus on a single student rather than multiple cases; perhaps the latter would have been more helpful (in my opinion, anyway).

Those Crazy Gates…

This article includes the study of Kyle, a white male, and Rowena, a black female, students with similar SAT verbal scores and in the same basic writing course. Because Rowena used AAVE, she had more trouble than Kyle and had to repeat basic writing two more times before she could move on. "A poorly designed assessment system is destructive when it determines who should or should not exit noncredit courses," the authors state (96). The authors also mention that black writers have to deal with white graders who are not familiar with their vernacular, which affects how they fare in college.

Jennifer Marciniak

CCCC Position Statement – TDW Chapter 13 “Assessment”

I think CCCC has outlined a courageous and progressive plan. I strongly agree with a lot of what is stated here, and I have tried to implement as much of it as I can into my classes, as the program at TAMUCC is very big on collaboration.

Bernstein says that the CCCC statement presents their view of how assessment should be in "…hypothetical (if not utopian) terms" (390), and I think we need to remember that. The advice on what students, faculty, administrators, and legislators should do is a good idea, but I think that at this point, especially in this economy, it is definitely more a hope for educational reform than an across-the-board change. I don't think I am being negative, but realistic. I refer mostly to No. 4, and to No. 7 in the Faculty list, the latter of which states: "encourage policy makers to take a more qualitative view toward assessment, encouraging the use of multiple measures, infrequent large-scale assessment, and large-scale assessment by sampling of a population rather than individual work whenever appropriate" (397). Since I am not all that educated in standardized testing procedures today, my reaction to this may be a little ignorant, but the cost involved here is where the legislators are going to drag their heels, right? Or do we already do this to some degree in Texas? I am not sure. I like the idea of sampling the population, but monetarily, some of these qualitative ideas may very well go by the wayside in favor of cheaper quantitative testing.

I agree, too, with No. 1 and No. 3 of the Administrator list, that they should take the time to consult with those who are in the classroom about the "needs of students, not just the needs of the institution" (397). But again, I see the monetary issue coming up in public schools, and a voice saying, "Well, we need to do what is best for the majority; we cannot afford to give every child what they need and assess for every single culture and language." I can see conservatives having the most problem with this, and using it and its high price tag as a jumping-off point to tout even more loudly the English Only initiative.

So, in conclusion, I think CCCC comes across with some fabulously progressive ideas. Now, since this was initiated in 1995, I would like to see where it has taken assessment in schooling. Is Texas doing away with the TAKS in response to this? What have other states done with public schooling to further the CCCC call for a more student-friendly, positive assessment process?

"Failure: The Student’s or the Assessment’s?" and “Those Crazy Gates and How They Swing”

I really enjoyed reading Mica's story and thought the fact that she actually got her ideas and concerns across with some level of critical thinking was pretty impressive for someone termed a basic writer. In "Crazy Gates" I felt the same way about Rowena's story. Her work was much more personal than Kyle's and had more detail. I think Agnew and McLaughlin hit the nail on the head with this reaction to grading and AAVE students: "Ideally, faculty should be trained to see the difference between just plain poor, out of control writing, and the writing that contains AAVE patterns" (97). I agree. After Katrina, TAMUCC has seen an influx of minority students, which obviously includes African Americans. We have had a lot of productive instruction about working with marginalized (mostly Latino) and ESL (mostly Asian) learning scenarios, but AAVE, at least in my term here, has not been a big part of my pedagogy. I noticed this gap in my education as a TA last semester when grading papers heavy with AAVE and not being sure what to do about them. The one student I had (from Chicago) was not at all like Rowena or Mica. There was a lot of detail missing. His writing had a lot of AAVE, which I recognized, but even more striking was the lack of managing ideas and following directions. After the first portfolio, we had a few conferences about it, he got a tutor, and he made DRAMATIC improvements. The way I knew it was still him writing was that his AAVE was still there, but not as pronounced. If there had not been any traces of AAVE, then I would have definitely questioned whether he was doing the work himself. I wonder if any scholars talk about that: the sudden disappearance of AAVE at the request of a teacher who is really hard-nosed about grammar and spelling?

The problem I see with Harley/Cannon's reaction is that they blame themselves for failing Mica, but then come back and say they would do it again anyway because "she did not meet your expectations" (415). So which is it? The use of AAVE here is important to understand and to take into consideration during writing, but we MUST remember in teaching 1301 or basic writing that we are trying to prepare students for being successful in all areas of writing, not just the personal narrative. Like I mentioned above, I have seen AAVE in my classes, and at first I really try to build upon the critical thinking and connection of ideas through personal narrative (i.e., discourse community papers for Portfolio I), but in order to help a student succeed through written communication in their respective major, whether business, music, or biology, certain vernaculars are not looked upon as being academic or professional. This is where it becomes sticky, and just like the two authors, I am not sure yet of the best way to "unstick" vernacular issues in order to move a student toward a more academic writing style without shutting down their desire and willingness to learn.

Garrett's Response

With this week's readings, and especially the CCCC position statement, I felt, as with "Students' Right to Their Own Language," that the points were better in theory than in practice. I agree with Joanna's assertion about the section on having assessment "contextualized in terms of why, where, and for what purpose it is being undertaken; this context must also be clear to the students being assessed and to all others…involved" (393). Well, how exactly do you do this, and is this being done? And I agree with Joanna that this is not done with the TAKS, or any standardized test for that matter. This is part of the issue we had with the standardized test question we read two weeks ago: "Write how someone can be connected to a special place." What the hell does that even mean? So often, students are asked to write in a vacuum, wholly divorced from context, which is utterly at odds with the critical thinking we are trying to get them to do about audience and purpose in college.

I agree that language is of course social, and this reaffirms what I said about the reading I had to do last week about technical writing and the Holocaust, but how exactly can faculty "play key roles in the designing of writing assessments…[that] must be sensitive to cultural, racial, class, and gender differences" (396)? I know what this means, that no group should be out of the loop because of some sort of bias in the grading system, but how do you, how can you, even begin to take this into account when, by nature, grading is a standardized method that supposedly holds all students up to the same yardstick? I was a little miffed by this. Again, good in theory, tough in practice.

With "Failure: The Student's or the Assessment's?" I dug the "pass/fail" portfolio system they talk about on page 403 because it gets at the meat of what makes a good project while avoiding the messiness of grades. But again, we get back into the argument of how to assess different groups differently, where "we need to understand that assessment is complexly situated, and different audiences may require different evaluations" (413). Isn't there something to be said for maintaining some sort of standard with grading? Otherwise, I think your classroom may become a mess. Of course, this gets into the debate of who created the standards and why they are privileged. I just can't win!

Holly C.

As the English/Language Arts TAKS test was yesterday, I am not certain if I will be able to keep this post from being a rant about the TAKS test, especially since the subject of this week’s readings was the placement exams.

The first portion of this week's assigned readings was the CCCC's position on assessment. I find it interesting that the assessment system for Texas (at least that's how I'm understanding the statement on page 391) became the model for the implementation of the No Child Left Behind Act. I always thought that the model used in Texas became more stringent following No Child Left Behind. Any time I am ranting about No Child Left Behind, I always say that it is what made the focus on preparation for state accountability exams what it is, but it appears that I should actually be ranting about Texas.

The CCCC does feel that writing assessment can be abused, and that it can be used to exploit certain higher education populations. Interestingly enough, one of these populations is graduate students. The CCCC also discusses how the other exploited population of writing assessment is faculty, because assessment can be used to either reward or punish faculty members. I am not certain whether this reward or punishment of faculty members is the same as it feels for high school teachers, but I am assuming that what the CCCC is saying here is that faculty members may be rewarded or punished based on the performance of their students on written assessments.

The CCCC lists ten assumptions about what writing assessment should be. These include the statements that assessment "tends to drive pedagogy" and that "standardized tests [...] tend to be for accountability purposes [...] [and] tend to misrepresent disproportionately students of color" (394). The two additional readings we did for this week certainly support this assertion. I would like to point out, however, that people of color are not the only individuals who have the potential to be misrepresented by standardized tests. Right now at my high school, for instance, our principal has stated that, after people of color, the next set of people doing poorly on the TAKS test is not of any one ethnicity; in this group, socio-economic status is the defining factor.

The other two readings for this week were case studies of African-American students struggling with the writing assessments that would have them placed in basic writing classes. One of Mica's primary issues was that she could see no purpose in learning the content of her basic writing class. Given that it sounds like most of the basic writing class she took was concerned with trying to get her prepared for the writing assessment, I can see why she felt this way. The other things I took from this article were shock and concern over the comments it sounds like they put on her paper, and a disgusted sense of awe about what their portfolio assessment sounded like. Of the four requirements that make up the basic writing portfolio, the one that really surprised me was the expository piece, an essay whose prompt says to go to Mike Rose's Lives on the Boundary for resources to write the paper. The expository piece that they want sounds more like a five-paragraph essay, with its thesis and supporting evidence, than anything Rose would ask for in his own writing class.

The other reading, on Kyle and Rowena, was interesting because of its discussion of how Rowena's insertion of her own voice into her work was the very thing dragging her work down. This is especially ironic to me, since the papers students write for the TAKS have to include personal voice to receive a perfect score. Kyle's essay was also preferred because it followed the five-paragraph essay format, a format that, at least from my experience with upper-level college writing, seems to be frowned upon. I know that five-paragraph essays were okay for the 1301 and 1302 classes I took, but they did not help me very much in the classes I took in my junior and senior years of college.



This document is good and nice; I have only one question: “Seventh, standardized tests, usually developed by large testing organizations, tend to be for accountability purposes, and when used to make statements about student learning, misrepresent disproportionately the skills and abilities of students of color.” What does that mean? Are we seriously still calling them “students of color”? How does that even fit in with the statement? It’s just a random addendum at the end. Actually, I guess I could also say that I like a lot of the statements under what “Students should” do. They should write portfolios, have writing grounded in real-life material, have multiple reviewers of their papers instead of one angry Texan WASP reading them and deciding where they’re placed, and they should get feedback on their writing. That’s how writing works.


I have so much to say here that I don’t think I can even cover it all. Almost every margin of this chapter is filled with big angry comments, because the more I read of it, the angrier I got. I’m just going to go ahead and include some of my comments here.

“It shifts from direct to indirect discourse; from Mica as narrator, to Mica as a character thinking aloud, to Mica speaking directly to other characters or her unborn child. But we dismiss this complexity…” (404)

Complexity? NO! It’s wrong. She lacks the ability to switch from writing like this to writing well. It’s bad writing, and it’s not complex; it’s simple. Further:

“Tense shifts occur seemingly at random” (404)


“The missing … reflect Black English Vernacular” (405)

No, they reflect bad writing. This could just as easily come out of the whitest hillbilly in the mountains of Missouri.

“Her preference for situating her ideas in personal terms is seen in several other essays discussed later in this paper” (405).

It’s not her preference for situating ideas in personal terms; it’s her inability to situate ideas in impersonal terms.

“One feature of tonal semantics… is the use of repetition, alliterative word play, and a striking and sustained use of metaphor, something seen throughout Mica’s work (134). Mica writes about a jumbled, chaotic, and intensely personal time that demands a strong emotive voice. That Mica has achieved such a voice is a mark, not of a basic writer, but of an accomplished one” (405).


After reading all of this, I propose that we immediately stop teaching basic writing. Basic writers have managed to achieve a childlike innocence that even the most accomplished of writers would envy. Seriously, people: they can’t advocate anything with this other than the idea that we should lower our standards and consider the most basic of writing immediately magical and good.

Tammy Graham

The CCCC Position Statement on Assessment, although written ten years ago, points to some up-to-date and valid information. The sixth point addresses the “assessment instrument,” as do the other readings. The main issues emerging here are that in most placement/exit exams, the students must write “without time to reflect” (CCCC), and that students like Mica, who are assessed by a portfolio method, “puzzle” the instructors whenever they fail. From the writing samples provided in Agnew and McLaughlin, as well as Harley and Cannon, it seems to me that all the students are confused about their audiences and purposes. For example, in the A/M reading, “Rowena” starts out her essays in a proper academic format, but it seems that as she goes along, she switches from that academic voice to a more conversational, relaxed tone. In Mica’s writing, the voice is also conversational and relaxed, as she most likely sees her instructors as a friendly audience, although in actuality they are her judges. This may be why she feels “angry” and cheated. It just seems that their inability to produce the desired style of writing may conflict with their own ideas of who their audience is and what their purpose is.

I would say this goes against the first and third points in the CCCC statement. These students do not speak in an academic style, and may not have had the background or “social factors” (A/M) necessary to understand this style of speaking, much less to consistently produce it in writing. Their learned patterns of speaking are different (AAVE vs. “standard” English). This brings us back to the main point: the “assessment instrument” (CCCC) and “failure in assessing” (H&C) reflect a bias toward a certain type of student, from a certain type of household. Another point is that assessment should not expect an incoming freshman to write perfectly anyway; that’s absurd. In the first few years, “assessment” should be used “primarily as a means of improving learning” (fifth point of CCCC), not as a means to keep students from an education. As for the need for BW classes, I think that reassessing assessment could solve that issue.