Ok, look... I have a lot I would like to say about NCARB's testing format and what I perceive to be real problems with the way this test is structured, but I don't want this thread to just be me ranting. Rather, my goal with this thread is to see how many of you feel the same way I do and hopefully either A) get enough of you involved in this thread that NCARB actually decides to change something, or B) get something useful from one of you that can help me on future tests.
Therefore, please engage in this thread. Just write something (Agree, Disagree, Somewhat Agree, <insert paragraph here>).
I want to hear from all of you out there if you are seeing/feeling the same things I am with regard to the way NCARB has structured these tests. Full disclosure: I am writing this after having just "preliminarily failed" the PA exam for the 3rd time, so if this post comes off like I am frustrated... I am. That said, I had a very respectful discussion with an NCARB representative over the phone today about the issues outlined below, and he was at least respectful and sympathetic to what I was going through, if nothing else. He did email me some helpful links for future study, but there really wasn't anything else... which, as you will read, is what I think the real issue is. See below for a list of the barriers/challenges I feel are detrimental to the testing experience and to not only my opportunity to succeed but yours as well. (Listed in order of most egregious.)
- The policy that prevents test takers from using paper and pencil, not even if it is provided by the testing organization. I am so tempted to rant on this, but simply put, this is just wrong. The "whiteboard" is not an effective tool. No one, anywhere, uses a tool this archaic to work out math problems in our modernizing profession. It is not a "feature" but a hindrance to anyone trying to succeed on these tests and become a licensed professional. I can't overstate how needless this policy is; NCARB could simply allow test takers a blank piece of paper and a #2 pencil. Instead, the test recreates the same thing in a clunky form that takes up screen real estate and becomes a real strategic barrier: you have to take the test just to learn how to effectively navigate back and forth between reading the essays (otherwise known as problems) and jotting down info on this whiteboard.
- The uselessness of these vague score reports. In short, I can see how someone might argue, "well, at least we get something back rather than just the word FAIL," but honestly that is a weak argument in my opinion, and I don't think these score reports are helpful in the slightest. NCARB rearranging the words evaluate, identify, prioritize, analyze, sustainability, environmental, etc. into a bunch of similar sentences and compiling that into a PDF does nothing to help narrow down the topic area(s), or sub-topic area(s), people are actually struggling in. Those terms in the score report are so broad that I am sure one could read entire textbooks and still not cover all the categories, sub-topics, and specialized topics that would fall under a term like sustainability, environmental, or qualitative & quantitative attributes of a site. Wow, NCARB! Thanks for clearing it up for me.
- Score reports & semantic word games. This probably should be my #1 most egregious item, since it is the one I am most upset/discouraged/hopeless about. Does anyone else feel like NCARB is needlessly confusing you and playing semantic tricks on you with vague terminology, poorly worded questions, and even more poorly worded answers? I am sure some of you know what I am talking about: you feel like you have fully studied a content area and are a near master of it, but then you get into the test and you're forced to decipher the "hidden meaning," or interpret the "underlying meaning," of specific descriptive words that NCARB has placed into the questions and answers as a way of INTENTIONALLY tripping you up! This is discouraging, to say the least, because I no longer feel like I am being tested on my knowledge of particular content. Instead I am playing an odd game (a game that sometimes feels like a gamble) where I have to not only know the content but also figure out what the test writer means by certain wording, and then "analyze," "evaluate," and "synthesize" how that wording changes my view of content I am confident I know. But OK, fine, NCARB. I understand you need to make the tests a bit more difficult to see if candidates can truly think through all the variables of a situation and conclude the "best" answer or "most appropriate solution" given certain variables. That's all well and good; I do see value in that. Fine, I will just have to play this semantic game and do my best if I want to get licensed, right? Challenge accepted. BUT, what makes this issue so infuriating, what I do not find valuable, and what honestly leaves me feeling a bit helpless is, again, these super vague score reports that do nothing to help me, the test taker, know whether I am interpreting descriptive words about the subject matter correctly.
To be very specific, there are questions where the wording of the question, or more likely the answer(s), IS the stumbling block, not my lack of knowledge of the content in question. I find myself wasting valuable testing time debating internally what the test writer is trying to convey by using certain words over others. And in many cases it feels like I am gambling that my definition/interpretation of how a word is being used matches the test writer's intent. Ok, so NCARB says, "Correct, that's how we made the test. That was intentional. So what?" And my response is: how am I supposed to improve my test scores, correct my mistakes, and get the answers right the 4th time I take this test if NCARB isn't going to tell me which problems I got correct, or more specifically, which semantic word games I interpreted correctly and which ones I didn't? I will never know if I am making the correct judgments, and I therefore feel this is a major flaw in this testing format. I feel I am being tricked into second-guessing information that I know to be true based on word games, word games NCARB won't even tell me if I got right. So next time I take the test I could know exponentially more about the content being asked... but if I fail to interpret the semantics correctly, and how those change (or don't change) the correct answer, then well... good luck gambling!
Ok, so this did turn into a bit of a rant. But I am bothered by this, and I feel that if NCARB were more forthright in helping you understand what you got wrong and what you got right, there wouldn't be an issue. It doesn't feel like they are doing that. I mean, come on: I, like many of you, have sacrificed a lot of money and time (including our families' time) to study for these tests and work toward becoming a licensed professional, and I simply believe that NCARB's practice of withholding scoring information, keeping everything vague, and not providing USEFUL feedback is the ultimate issue here, one that continues to cost us testers more time and more money. The wording in the tests is a problem, but the lack of real feedback is the real issue.