“The architectural culture has not had a robust tradition around research, which means that much of the research that goes on in offices for projects rarely gets tested, generalized, and shared.” So said the University of Minnesota’s Thomas Fisher, Assoc. AIA, recently on this site. Moreover, he pointed out, “The architecture culture has also framed success in terms of individual design contributions rather than in terms of who does the best discovery and communication of new knowledge.”
Silly me, and here I thought good research in architecture should indeed be based on design, as that is the knowledge and skill set on which our discipline is based. Fisher is not the only person to doubt whether that is true. “Design is not research, that is just speculation,” huffed Jérôme Chenal, professor in the architecture department at the prestigious École Polytechnique Fédérale de Lausanne in Switzerland, at a conference I attended in Morocco several weeks ago, which had been assembled by Hassan Radoine, director of the new School of Architecture at Mohammed VI Polytechnic University. “If we are going to make good architecture,” Chenal said, “we have to be able to do serious research, collect data, and prove what we are saying.” And Chenal has the right to make such statements: He has a Ph.D., based on years of research into African urbanism, and leads a team that is continuing this work in ways that will be important not only for the field, but for African cities in general. Yet, I found myself attacking him: “Good research in architecture means something different. It is …” And I trailed off, saved only by others joining the attack. It made me wonder whether I truly know what the academic standards are or should be in architecture schools.
Where we talk about #villesafricaines @UM6P_ pic.twitter.com/KABBrpuvWB
— jerome chenal (@jchenal) April 6, 2018
This is important because research is central to giving an architecture school an academic purpose and clarity beyond teaching the tricks of the trade. As it happens, what makes a good architecture school is once again under debate, as it is almost every year when the various rankings of such institutions come out just in time for prospective students to decide which one to attend. (See also the survey by DesignIntelligence, although U.S. News & World Report no longer ranks architecture.)
As somebody who leads one of those schools, I am not exactly neutral, although thus far we at the School of Architecture at Taliesin have not participated in any of the rankings. (They tend to be “pay to play.”) I base my evaluations mainly on design, which is to say, on visual evidence: the student work I see, either in person or in publications, coming out of various schools. I judge those products according to my accumulated knowledge about both the intrinsic and the extrinsic (social, environmental, economic) value of a design, keeping in mind that I do not think that the sole criterion for student work should be whether or not it can be constructed.
I am sure that I have many biases, both conscious and unconscious, and I know that I am a sucker for work that is presented with flair, but I also think architecture schools have developed one of the most transparent methods of evaluating work: the public critique. When this system is working well, the opinions and insights of various critics can be measured against each other as each of them is forced not only to point out the basis of the evaluation in the work but also to defend their interpretations against sometimes opposing viewpoints. What I would love would be some way to evaluate those critiques so that we can judge how the work presented embodies true research, and how good it is. That would be of more value to me in judging the success of a school than the opinions of employers, graduates, and alumni, or research standards developed for other fields that are currently used for most of the ranking components.
That last point, especially, brings up the interesting question of what we mean by the academic standards of architecture. In part, as I have tried to make clear, I do think that we can come up with a method by which we can measure whether work is coherent and proficient, as well as strong in its ability to analyze, and turn into projects, the complex realities in which architecture must operate. Certainly, the National Architectural Accrediting Board (NAAB) has developed ways to categorize some of these criteria, while art and architecture history and theory have articulated others. However, architecture schools are part of a tertiary education system and are thus expected to produce work that can be evaluated by methods that are, at least in part, comparable to those in other fields. Architecture does not have the luxury, as art programs do, of arguing for its unique quality, at least not wholly, as long as it claims to be a pursuit of all aspects: technical, social, and aesthetic.
In general, the quality of a university is measured by its teaching and its research. The latter has long been the easiest to quantify, or at least so we have thought: the amount and nature of publications and subsequent citations took care of that. The method for such evaluation has consisted, as in critiques, of peer panels that have developed standards of judgment both internal (Is the experiment replicable?) and external (What does it do for us?). When it comes to the quality of teaching, however, as somebody who has taught for almost four decades and is the son of two lifelong university professors, I have to admit I have never seen a good “metric” for grading teachers. Having said that, a teacher’s effectiveness is evident when you see the effect she or he can have not just on good students, but especially on mediocre, plain bad, and at-risk students. By “evident,” I mean that not only does the student work answer to the criteria I have outlined above, but the students become stronger parts of the school community and in some ways are more motivated, directed, and just plain happy.
Recently, however, even the quantifiable part of university evaluation has come under discussion. It has become clear both that participants long ago learned how to game the system and that the system itself—whether because of the growth in scale and complexity of a fully global academic culture or because it was flawed in the first place—has turned out to be full of holes and gray areas. Certainly, it seems clear that attempts to apply standards from science or the humanities to architecture are flawed because they weigh publication over building or just designing, discourage experimentation and speculation, and favor emerging “sciences,” such as evidence-based design, that as of now are too vague and unsupported to deserve that designation.
So, we are stuck. Architecture schools do already have a way of educating students to become good designers, and we have a way to evaluate how well they do that. I also believe that design can and should be research, and that we need to judge it by criteria that value its particular form of speculation.
How do we articulate and measure our standards? We can start with categories like those the NAAB has developed (and perhaps look at examples from other countries that offer some fruitful perspectives). First, however, we need to do a better job defining for ourselves what we mean when a design is good and speculative or experimental. What are those intrinsic and extrinsic measures? Now that is a piece of research that I would love to see.
And let’s not forget, in the immortal words of Graham Nash:
Teach your parents well,
Their children's hell will slowly go by,
And feed them on your dreams
The one they pick’s the one you’ll know by.