Building efficiency factor
This is the question from Case Study #1 in the new Demo Exam
The client has reduced the original project budget by two million dollars. They would like to keep the program space as planned and instead reduce the building efficiency factor. The building is estimated to cost $500 per square foot.
-
Hey Navita,
I can help with this! Since the building is estimated to cost $500 per SF, you'll first need to find out how many square feet the building must be reduced by to reach $2 million in savings.
2,000,000/500 = 4,000 SF reduction
Next, subtract 4,000 SF from the GSF of the building: 108,525 GSF - 4,000 SF = 104,525 GSF
To meet the budget, the building can't be over 104,525 GSF. Since the NSF of the building is not changing (72,350 NSF), you'll need to figure out which efficiency factor meets this requirement.
104,525 GSF/72,350 NSF = 1.44
Answer: 1.4
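The steps above can be sketched in a few lines (a sketch only; the dollar and square-footage values come from the case study, and the variable names are mine):

```python
# Budget-reduction math from the case study.
COST_PER_SF = 500        # estimated construction cost, $/SF
BUDGET_CUT = 2_000_000   # required savings, $
GSF = 108_525            # current gross square footage
NSF = 72_350             # net (program) square footage, which does not change

sf_reduction = BUDGET_CUT / COST_PER_SF   # SF that must be cut to hit the budget
max_gsf = GSF - sf_reduction              # largest GSF the budget allows
efficiency_factor = max_gsf / NSF         # GSF/NSF, i.e. the grossing factor

print(round(sf_reduction))           # 4000
print(round(max_gsf))                # 104525
print(round(efficiency_factor, 2))   # 1.44, so choose 1.4
```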
Hope this helps!
-
Hey Robin,
You're right, efficiency ratios are under 1.0 and are typically expressed as a percentage. The example from the demo exam does not express this value as a ratio or even call it a ratio. The efficiency ratio of the above item would be 71%, assuming an efficiency factor (grossing factor or space factor) of 1.4. If you navigate to page 4 of the Program Elements resource within the case study, you will see that the current building is designed with an efficiency factor of 1.5.
-
Thank you, Nick. I guess what is confusing me is that there are discrepancies in terminology. I would like to nail this down in case I come across a similar problem in the future that is fill-in-the-blank instead of multiple choice. The multiple-choice answers to this problem indicate that we are talking about a >1.0 answer, but not all questions are multiple choice.
“Problem Seeking” uses the term “Building Efficiency Factor” with respect to <1.0 numbers (see below) .... but <1.0 numbers are instead customarily used with the term “Building Efficiency”. Then below that they use the term “factor” in discussing percentages.
The reason this is important in a fill-in-the-blank question is that we need to be able to determine whether we are being asked for NET/GROSS or GROSS/NET.
I understand the math involved in this question, but the terminology seems very fuzzy to me, and researching online makes it even more confusing. As you stated, “space factor” is used, but there are also “load factor” and “grossing factor”. They seem to be fairly interchangeable, but the bottom line is that we need to know exactly what is being asked, and since “Problem Seeking” is one of the main study resources, I would like to request some further clarification, as one answer is the inverse of the other. (I'm also assuming that this could come up in PPD, as I have passed PA.)
Are we supposed to assume that the >1.0 answer (GROSS/NET) is the one requested merely because of the terminology in the bottom line of the last page of the Program Elements ... or because the multiple choice answers are in that format ... and what if it would be a fill-in-the-blank question instead?
From the book:
We would typically describe this .51 efficiency as 51% efficiency and it represents NET/GROSS, right? But here they are using the same terminology ("building efficiency factor") as the demo exam question which is requesting a number >1.0 (GROSS/NET), so how are we to know what is requested?
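The inversion at the heart of this question is easy to check numerically: building efficiency (NET/GROSS, under 1.0) and the efficiency or grossing factor (GROSS/NET, over 1.0) are reciprocals of each other. A minimal sketch, using the 1.4 factor from the demo exam:

```python
# The two conventions are reciprocals (a sketch, not NCARB's definition).
factor = 1.4                # GROSS/NET: "efficiency factor" / "grossing factor" (>1.0)
efficiency = 1 / factor     # NET/GROSS: "building efficiency" (<1.0)
print(f"{efficiency:.0%}")  # 71%
```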
Again from the book:
Thank you.
-
Hey Robin,
What I can tell you is if you get a question on this topic, there will be information within the item or case study that assists you in making this determination. In the original example, the case study program resource used an efficiency factor of 1.5. That information is critical when determining the correct answer of 1.4.
One thing to remember: when an ARE item is authored, there are many architects who must approve the item before it makes it onto the ARE. All of those architects must agree that the terminology and language are correct. If there is disagreement among them on terminology, they typically include clarifying or additional information to ensure candidates are not confused about how to answer the question. This applies to all items that contain terminology that isn't consistent across the architectural profession.
Another example of this would be vapor retarders/barriers. There is great debate on this topic, specifically within the transition zone of the US. Similar to building efficiency factor/ratio/percentage, etc., we would include information that cues candidates in on how to respond to the question.
Based on your research and understanding of this content, you are going to be fine on the exam.
-
NCARB should be commended for moving the exam up Bloom’s Taxonomy—it’s unquestionably more content-driven—and less vocabulary-driven—than it used to be. And kudos for moving 56 separate jurisdictions to approve the exam changes so folks could test at home—and moving them during a pandemic when state boards weren’t meeting. But on this topic. . . c’mon, Nick. Just because there’s consensus among your volunteers, or debate among your volunteers, doesn’t mean there’s consensus or debate among experts.
Google “building efficiency factor,” which is, according to NCARB, an accepted and common real estate development term, and what comes up first? A thread on this forum! It is not an accepted term for building efficiency, but rather an artifact of this exam, and this exam only.
And while there may be “great debate” among your volunteer test-maker architects, there is absolutely no debate in the building science community on vapor retarders/barriers. The exam’s test items on vapor migration rely on 50-year-old research that wasn’t that good to begin with, was applicable only to cold weather, and only made sense at a time when envelopes leaked so much air and heat that moisture within walls dried out through energy flows alone. That’s why the ARE vapor migration questions always situate the building in Miami or Minneapolis. . . none of the volunteer architect test makers seems to know what to do in a mixed climate with warm summers and cold winters.
The vapor barrier example doesn’t support NCARB on the building efficiency factor mistake; rather, it’s a second example of the same problem: reliance on non-experts to make technical exam questions. In this way, bad content echoes back again and again, far past its shelf life, as volunteer architects who studied bad content later write bad questions that suffer from the same misunderstandings for the next generation of architects.
No one expects NCARB to be perfect on every test item—making excellent tests is hard, and good-faith errors (and technical glitches) happen. But the system is set up wrong. If you establish a regime where bad content is reflexively and incestuously confirmed (“we know it’s right because other architects say it’s right”). . . I don’t care what the point binomial is, if you put garbage in, you’ll get garbage out.
-
To be clear, I think that the consensus among architects is probably right 90% of the time, which is pretty good. But that means, in a typical 6-exam suite, you'll see about 50 test items (10%) with meaningful errors. . . which itself isn't that bad, unless, when NCARB is confronted with something that might be wrong, they dismiss it out of hand because "the volunteer architects told us it was right."
For instance, if non-architect experts were consulted more on these exams, I suspect there'd be far less fetishizing of the technical aspects of exterior walls. Compared to roofs, parapets, corners, foundations, structural connections, apertures, overhangs, balconies, and floor-ceiling assemblies, walls are generally among the least likely components to rot, deteriorate, leak, transmit community noise, thermally bridge, transfer air, drive moisture, promote condensation, suffer UV damage, take on heat in the summer, lose heat in the winter, deflect, or otherwise fail. Architects (and the public, and HGTV) valorize walls, defining the whole building by its wall cladding because of the cladding's outsized impact on the building's exterior appearance: "it's a brick building, it's a metal panel building, it's a stone building, it's a wood siding building, it's a precast concrete building," etc. . . . but building scientists know better.
-
I have been spinning my head on this. I studied building efficiency from several NCARB-recommended resources, all of which define building efficiency as net assignable area / gross area. If I already know this information while taking the test, how am I supposed, in 2-4 minutes under test stress, to dig into the references to find out whether the question means something else by a term I have already studied? Especially, as mentioned above, if it is fill-in-the-blank rather than multiple choice.
-
Hi Carolina,
You can find all the answers within the ARE 5.0 Multidivision Practice Exam, which is available in the Practice Exam Dashboard. To access the Practice Exam Dashboard, log in to your NCARB Record, click the Exams tab, and navigate to the Additional Resources section at the bottom of the page.
-
As an emerging professional, I agree with the comments of user:Micheal Erman. It's so difficult to study for a test with fluid technical vocabulary. Couldn't there be something like an appendix to the AHPP online that standardizes professional language, with the exam based on THAT single source? Then the people writing the exam could align their vocabulary with THAT single source, and test makers and test takers would at least be speaking the same language.
-
A lot of the study materials here are not rigorous at all. Some straightforward terminology we studied in school becomes confusing in the ARE test material.
I just came from an NCARB post where people were confused by FAR in a question. The ARE practice material ridiculously uses FAR x buildable floor area (after setbacks) to get the "max. buildable floor area". They meant to say "max buildable stories on the lot" instead of FAR, but the material creators are confused themselves, and they confuse more test takers. Quality like this is totally unacceptable...
-
I have responded here