Credit: Michael Kirkham


Imagine two architects talking about the amount of natural light needed in an Alzheimer’s clinic and the need for reliable research. How do they know if they can trust the array of data sets and reports in front of them? How can they assess the potential impact of natural light on patients, the kinds of glazing that are available, the thermal effects of that glazing, or any number of other cascading questions?

Not easily, it turns out. Research comes in different forms—and some of it might not even qualify as research at all. The process of verifying research represents one of the most prominent gaps between the practice of architecture and the architectural academy. The two different systems proffer two different ways of confirming what’s reliable and what’s not.

In the academy, shared knowledge driven by peer review supports a research agenda that an architect or instructor may use to qualify for tenure, create multi-semester studio projects, or simply illustrate a point about weight loads, for instance. In architectural practice, on the other hand, knowledge is a commodity and a utility that drives the enterprise of design and project delivery.

Knowledge—defined here as information that enriches and advances architecture—can and should be produced by different means. But is there a way to standardize the process by which that knowledge is deemed trustworthy, accurate, and therefore useful to academicians and their students as well as to practitioners?

In 2012 and 2013, the AIA held two research summits at which attendees addressed the knowledge gap between the academy and practice. Recognizing that architecture’s academy and its practice are necessarily two different worlds, attendees concluded that there must be two ways of verifying research and knowledge—and, accordingly, two hierarchical, multistep processes (rendered, for the purposes of illustration, as triangles). Both show the steps that must be taken to verify research.

So what’s the big problem? Why does practice-based research differ so wildly from academy-based research? It’s the added step of “industry review” on the practice side. Industry review is a product of architects publicly sharing their knowledge within the architecture, engineering, and construction (AEC) industry for comment. Research vetted through this type of peer review is reliable, but practitioners may choose to proceed on the basis of relatively speculative findings. While those findings may be applicable, they may not be established through a strict application of the scientific method.

All is not lost, though. There’s a third triangle that accounts for the one thing both practice-based and academy-based research crave—trustworthiness—and voilà, you have an integrated research pyramid. So how do you evaluate trustworthiness? The CARS methodology (credibility, accuracy, reasonableness, support) comes in handy here:

Credibility
Author and credentials listed
Well-edited in terms of grammar and spelling
Positive, well-balanced tone
Relevance

Accuracy
Date
Recently published or considered seminal
Succinct
Original

Reasonableness
Tone or language that implies an unbiased attitude
No conflict of interest
Specific points of fact
Reliable

Support
Source for data or statistics provided or referenced
Documentation provided or referenced
Corroborating sources listed
Well-balanced point of view

The research conducted in practice and academic settings depends on different educational contexts as well as different professional experiences. But a shared commitment by the practicing architect and the academician to determining trustworthiness can preserve the richness that each research environment offers. It can also establish the integrity of knowledge-based research, so that architects may pass through the different stages of their careers with a set of universal skills.