Howard Gardner

Good Work and Good Citizenship: Do They Presuppose a Democratic Society?

© Howard Gardner 2025, reproduced from howardgardner.com

Even when one seeks to be broad—if not universal—it’s challenging to transcend one’s customary concerns and ordinary points of reference. 

I learned this lesson dramatically when, over forty years ago, I put forth the theory of multiple intelligences (often abbreviated as MI theory). Extensive research in a number of disciplines had convinced me of what seemed a reasonable conclusion: The psychometric view of intelligence—whatever its empirical virtues and convenience—is far too narrow. It fails to encompass the range of human abilities, gifts, and talents that have been valued all over the globe across the millennia.

Yet, even decades later, the bulk of the psychometric community still embraces the concept of a single intelligence—captured accurately by a single instrument—and steadfastly refuses to countenance alternative formulations about human cognition and intellectual breadth. This is due, I think, to the convenience of the IQ test, the continuing widespread use of the singular term “intelligence”, and the vested interests of test-makers and test-users. (Perhaps if I had created seven or eight tests, they would now constitute the consensual-conventional wisdom.)

Thirty years ago, my colleagues and I embarked on a comparable journey. Mihaly Csikszentmihalyi, William Damon, and I sought to ascertain the key components of good work. We asked: What does it mean to carry out work—over the course of a lifetime—that would be held in high esteem by knowledgeable contemporaries? Through substantial empirical research as well as considerable reflection and interchange with knowledgeable colleagues, we eventually identified the three principal components of Good Work:

  • It’s carried out with high competence—It is Excellent

  • It is deeply involving and meaningful for the practitioner—It is Engaging

  • It seeks to discern and live up to the highest moral standards—It is Ethical

We’ve captured this formulation visually via the triple helix of the Three Es.

While our research efforts focused primarily on the realm of work, my colleagues and I have proposed that the role of good citizen can be similarly delineated. What does it mean to be a good citizen of one’s community—say, locally and nationally? Meritorious citizens are cognizant of the major rules and regulations of their community; they care enough to become and remain involved in the relevant political processes; and, to complete the picture, they strive to carry out their roles in an ethical manner.

Components of Good Citizenship

As I write, in the spring of 2025, all of this seems far less clear than it was in 2005, when my colleagues and I first proposed this scheme, or in 2015, when we began to contemplate what it means to be a good citizen.

To be Specific

My colleagues and I blithely assumed that workers and citizens would be living in a reasonably democratic society. For the purposes of our research, we focused primarily on the United States. But we could easily have had in mind workers and citizens of Western Europe, and many other regions of the world—from Canada to Japan to Costa Rica.

Not, to be sure, that these societies would have concurred about the precise characteristics of a good worker or a good citizen—far from it. The ideal lawyer or barrister is not the same in the United States, England, or France—let alone Japan or Indonesia. Nor, for that matter, are the governments of these societies interchangeable—in several ways, Australia conceives of citizenship differently from Germany or Mexico. But in the broadest brush strokes, it’s assumed across these societies that the life of the worker is governed principally by the codes of the relevant profession and is not unduly influenced by the current features and instantiations of the executive, judicial, or legislative branches.

By the same token, whatever the differences in the legal codes with respect to voting, taxation, and other publicly known (and legally revisable) processes—it’s assumed that civic virtues (and vices) can be delineated and observed—and that their realization or their violation can be recognized, rewarded, and/or sanctioned.

Historical exercise

Mao Zedong posters at celebration of the Communist revolution (1949) / AP

If I’d been asked what it meant to be a good citizen or a good worker in Nazi Germany, Stalinist Russia, or Maoist China (at the height—or depth—of the Cultural Revolution), I would have been stymied! And coming out of my cerebral fog, I would have had to admit that I had stumbled into a “category error”. How could one be a good doctor, if one were enjoined not to treat—or even ordered to torture or murder—a Jew in Nazi Germany? How could one be a good journalist in Maoist China, when all kinds of topics and postures were strictly off limits? Or, to shift to the civic realm, what does it mean to vote when the elections are fixed, or to follow the law when it changes irregularly and capriciously, and the decision to punish occurs ex cathedra—not after proper adjudication by independent authorities?

Virtually impossible even to conceptualize! Under such circumstances, it takes a huge stretch even to contemplate Good Work or Good Citizenship. Fortunately, at the time of this writing in April 2025, we do not quite face such a quagmire in the United States or in most other developed societies.

And yet! At this hour, the Federal Government of the United States and national governments in countries like Hungary and Argentina are questioning long-held assumptions—and even widespread consensus—about what it means to carry out Good Work or Good Citizenship. As citizens of such nations, we are being naïve or even derelict if we simply assume that our long-standing assumptions about work and civic participation will necessarily prevail and endure.

Daniel, Richard, and Jamie Susskind on AI discussion panel / WJR

Ironically, this issue first entered my consciousness a decade ago, when I encountered the writings of members of the remarkable Susskind family in Britain. Daniel, Richard, and Jamie Susskind all foresaw a time when many decisions that had long been made via deliberation among human beings would increasingly be executed—with seeming authority—by computational systems. And while these three scholars were reasonably confident that the computational systems would preserve the fundamental values of pre-computational times, I was much less confident that this would be the case. (See my blog on the future of the professions, linked here.) Now, some years later, Jamie Susskind himself has issued a warning, published in the Financial Times, about a society in which major decisions about human endeavors have been effectively handed over to general-intelligence computational systems.

As I write, the threat to the Three Es of Good Work is patent. It’s perhaps most evident in the practice of journalism. At the height of the hegemony of American journalism—in the wake of the Sullivan decision by the Supreme Court (1964)—mainstream journalists were given considerable leeway in what they wrote about, whom they wrote about, and how they wrote about these matters. Today, however, the status of professional journalists is being seriously challenged by numerous pseudo-journalistic outlets that do not follow (if they are even aware of) the principal values of mainstream journalism and by governmental officials who seem determined to censor and even prosecute practitioners whose writings and reporting they happen to find objectionable.

Nor are journalists alone. The queue of vulnerable professionals grows steadily. Lawyers or law firms who take on unpopular causes are being threatened with massive lawsuits. Judges who rule against the governing party are being threatened with proceedings of impeachment. In some states, doctors who play a role in abortion or in sex-change operations can be charged with a crime, and professors who treat topics that are sensitive or take positions that go against the prevailing “conventional wisdom”—whether it be “pro” or “anti”-DEI—are subject to punitive measures and may even lose their jobs. Not to mention the threats against students, particularly if they are not citizens of the United States, or against educational institutions, whose functioning depends upon their tax-exempt status and the protection of their endowment funds.

The concept—and the reality—of worker-as-professional, as well as the concept and the reality of person-as-citizen are both hard-earned victories. We must acknowledge that those victories are never permanent and ought never simply to be assumed or presumed. Those of us who believe in and cherish these forms of “the good” must continue to support them, to sustain them, and to speak out when they appear to be vulnerable, in jeopardy, or even abandoned altogether. 

This essay confirms my commitment to do so.

 

REFERENCES

Gardner, H., Csikszentmihalyi, M., & Damon, W. (2001). Good work: When excellence and ethics meet. Basic Books.

Gardner, H. (Ed.). (2010). Good work: The theory in practice. Basic Books.

New York Times Co. v. Sullivan, 376 U.S. 254 (1964).

Susskind, J. (2022). The digital republic: On freedom and democracy in the 21st century (First Pegasus Books cloth ed.). Pegasus Books.

ChatGPT: First Musings

Howard Gardner © 2023

How will ChatGPT—and other Large Language Instruments—affect our educational system—and our broader society? How should they?

I’m frequently asked questions like these—and they are much on my mind.

Something akin to ChatGPT—human or super-human levels of performance—has long been portrayed in science fiction: I’m familiar with the American, British, French, and Russian varieties. But few observers expected performance so excellent, so fast, so impressive, and so threatening (or enabling)—depending on your stance.

As suggested by historian Yuval Harari, we may be approaching the end of the Anthropocene era.

We can anticipate that large language instruments—like OpenAI’s ChatGPT and DALL-E—will continually improve.

They will be able to do anything that can be described, captured in some kind of notation. Already they are able to conduct psychotherapy with patients, write credible college application essays, and create works of visual art or pieces of music in the style of well-known human creators as well as in newly invented styles. Soon one of their creations may be considered for the Nobel Prize in physics or literature, or the Pulitzer Prize for musical composition or journalism.

Of course, superior AI performance does not—and need not—prevent human beings from engaging in such activities. We humans can still paint, compose music, sculpt, compete in chess, conduct psychotherapy sessions—even if AI systems turn out to outperform us in some or most ways.

OpenAI introduced GPT-3 in 2020 and DALL-E in 2021

We can also work in conjunction with AI programs. A painter may ask DALL-E to create something, after which the painter may alter what the program has furnished. A researcher may present ChatGPT with a hypothesis and ask the system to come up with ways to test that hypothesis—after which the researcher can carry out one or more of these approaches herself. Such activities can alternate, going back and forth between the human provision and the computational program.

We fear what could go wrong—and rightly so. AI systems like ChatGPT have not undergone a million-plus years of evolutionary history (including near extinction or sudden vaults in skill); such recently devised systems do not inhabit our planet in the same way that the hominid species has. They are not necessarily—and certainly not existentially—afraid of cataclysmic climate change, or nuclear war, or viruses that prove fatal to homo sapiens. Indeed, such systems could spread misinformation rapidly and thereby contribute to destructive climate change and the probability of nuclear war (recall “The Doomsday Machine” featured in the dystopic movie Dr. Strangelove). These destructive outcomes are certainly possible, although (admittedly) such calamities might happen even had there been no digital revolution.

And what about the effects of Large Language Instruments on our schools, our broader educational system?

Many fear that systems like ChatGPT will make it unnecessary for students to learn anything, since ChatGPT can tell them everything they might want or need to know—almost instantaneously and almost always accurately (or at least as accurately as a 20th-century encyclopedia or today’s “edition” of Wikipedia!). I think that AI will have a huge impact on education, but not in that way.

Now that machines are rivalling or even surpassing us in so many ways, I have an ambitious and perhaps radical recommendation. What education of members of our species should do—increasingly and thoughtfully—is to focus on the human condition: what it means to be human, what our strengths and frailties are, what we have accomplished (for good or evil) over many centuries of biological and cultural evolution, what opportunities are afforded by our stature and our status, what we should avoid, what we should pursue, in what ways, and with what indices of success...or of concern.

But to forestall an immediate and appropriate reservation: I don’t intend to be homo sapiens centric. Rather, I want us to focus on our species as part of the wider world, indeed the wider universe. That universe includes the biological and geological worlds that are known to us.

Psychologist-turned-educator (and my teacher) Jerome Bruner inspired me. His curriculum for middle school children, developed nearly sixty years ago, centered on three questions:

Bruner in the Chanticleer 1936, Duke University (Source: Wikipedia)

  • What makes human beings human?

  • How did they get to be that way?

  • How can they be made more so?

To approach these framing topics intelligently, we need disciplinary knowledge, rigor, and tools. We may not need to completely scuttle earlier curricular frameworks (e.g., those posed in the United States in the 1890s by the “Committee of Ten” or the more recent “Common Core”); but we need to rethink how they can be taught, modelled, and activated to address such over-arching questions.

We need to understand our human nature—biologically, psychologically, culturally, historically, and pre-historically. That’s the way to preserve the planet and all of us on it. It’s also the optimal way to launch joint human-computational ventures—ranging from robots that construct or re-construct environments to programs dedicated (as examples) to economic planning, political positioning, and military strategies and decisions.

To emphasize: this approach is not intended to glorify our species; homo sapiens has done much that is regrettable and lamentable. Rather, it is to explain and to understand—so that, as a species, we can do better as we move forward in a human-computer era.


Against this background, how have I re-considered or re-conceptualized the three issues that, as a scholar, I’ve long pondered?

  1. Synthesizing is the most straightforward. Anything that can be laid out and formulated—by humans or machines—will be synthesized well by ChatGPT and its ilk. It’s hard to imagine that a human being—or even a large team of well-educated human beings—will do better synthesis than ChatGPT4, 5, or n.

    We could imagine a “Howard Gardner ChatGPT”—one that synthesizes the way that I do, only better—it would be like an ever-improving chess program in that way. Whether ChatGPT-HG is a dream or a nightmare I leave to your (human) judgment.

  2. Good work and good citizenship pose different challenges. Our aspirational conceptions of work and of membership in a community have emerged in the course of human history over the last several thousand years—within and across hundreds of cultures. Looking ahead, these aspirations indicate what we are likely to have to do if we want to survive as a planet and as a species.

    All cultures have views, conceptions, of these “goods,” but of course—and understandably—these views are not the same. What is good—and what is bad, or evil, or neutral—in 2023 is not the same as in 1723. What is valued today in China is not necessarily what is admired in Scandinavia or Brazil. And there are different versions of “the good” in the US—just think of the Deep South compared to the East and West coasts.

    ChatGPT could synthesize different senses of “good,” in the realms of both “work” and “citizenship.” But there’s little reason to think that human beings will necessarily abide by such syntheses. The League of Nations, the United Nations, the Universal Declaration of Human Rights, and the Geneva Conventions were certainly created with good will by human beings—but they have been honored as much in the breach as in the observance.

A Personal Perspective

We won’t survive as a planet unless we institute and subscribe to some kind of world belief system. It needs the kind of prevalence that Christianity had in the Occident a millennium ago, or that Confucianism or Buddhism have had over the centuries in Asia, and it should incorporate tactics like “peaceful disobedience” in the spirit of Mahatma Gandhi, Martin Luther King, or Nelson Mandela. This form of faith needs to be constructed so as to enable the survival and thriving of the planet and the entities on it, including plants, non-human animals, and the range of chemical elements and compounds.

Personally, I do not have reservations about terming this a “world religion”—so long as it does not posit a specific view of an Almighty Figure and require allegiance to that entity. But a better analogy might be a “world language”—one that could be like Esperanto or a string of bits 00010101111….

And if such a school of thought is akin to a religion, it can’t be one that favors one culture over others—it needs to be catholic, rather than Catholic, judicious rather than Jewish. Such a belief-and-action system needs to center on the recognition and the resolution of challenges—in the spirit of controlling climate change, or conquering illness, or combatting a comet directed at earth from outer space, or a variety of ChatGPT that threatens to “do us in” from within…. Of the philosophical or epistemological choices known to me, I resonate most to humanism—as described well by Sarah Bakewell in her recent book Humanly Possible.

Multiple Intelligences (MI)

And, finally, I turn to MI. Without question, any work by any intelligence, or combination of intelligences, that can be specified with clarity will soon be mastered by Large Language Instruments—indeed, such performances by now constitute a trivial achievement with respect to linguistic, logical, musical, spatial intelligences—at least as we know them, via their human instantiations.

How—or even whether—such computational instruments can display bodily intelligences or the personal intelligences is a different matter. The answer depends on how broad a formulation one is willing to accept.

To be specific:

Taylor Swift at 2019 American Music Awards (Source: Wikipedia)

  • Does a robotic version of ChatGPT need to be able to perform ballet à la Rudolf Nureyev and Margot Fonteyn? And must it also show how these performers might dance in 2023 rather than in 1963?

  • Does it need to inspire people, the way Joan of Arc or Martin Luther King did?

  • Should it be able to conduct successful psychotherapy in the manner of Erik Erikson or Carl Rogers?

  • Or are non-human attempts to instantiate these intelligences seen as category errors—the way that we would likely dismiss a chimpanzee that purported to create poetry on a keyboard?

The answers, in turn, are determined by what we mean by a human intelligence—is it certain behavioral outputs alone (the proverbial monkey that types out Shakespeare, or the songbird that can emulate Maria Callas or Luciano Pavarotti, Mick Jagger or Taylor Swift)? Or is it what a human or group of humans can express through that intelligence to other human beings—the meanings that can be created, conveyed, and comprehended among members of the species?

I’m reminded of Thomas Nagel’s question: “What is it like to be a bat?” ChatGPT can certainly simulate human beings. But perhaps only human beings can realize—feel, experience, dream—what it’s like to be a human being. And perhaps only human beings can and will care—existentially—about that question. And this is what I believe education in our post-ChatGPT world should focus on.


For comments on earlier versions of this far-ranging blog, I am grateful to Shinri Furuzawa, Jay Gardner, Annie Stachura, and Ellen Winner.

REFERENCES:

Bakewell, S. (2024). Humanly possible: Seven hundred years of humanist freethinking, inquiry, and hope. Vintage Canada.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review. https://doi.org/10.4159/harvard.9780674594623.c15

Wikimedia Foundation. (2023, August 21). Man: A course of study. Wikipedia. https://en.wikipedia.org/wiki/Man:_A_Course_of_Study

Remembering Bob Asher (1929-2023)

by Howard Gardner

Robert (Bob) Asher was a “good worker.” Indeed, he exemplified the three attributes of that praiseworthy descriptor. He was excellent at his work; he was completely engaged in his work; and he carried out his work in a moral and ethical way.

Robert Asher, left, receives the Herzl Award from Lester Crown (2008)

We came to know Bob because of his founding role in the Israel Arts and Science Academy (IASA). Working with the incomparable Raffi Amram, Bob played a major role in launching the school, and he remained its steadfast supporter for several decades.

When one carries out educational research (as we have done for half a century), those who participate often express a polite interest in learning what was found. And almost always, that’s it.

Bob was totally different. When we approached him about studying IASA, he was extremely helpful, making the necessary introductions and connections. He followed the work throughout the course of our study. And once the study had been completed, he gently prodded us to share the results so he could help bring about changes and improvements in the school. Again, this has rarely happened on our watch.

Usually, the philanthropist, the founder, and the one who follows through are three different people in education. But just as Bob Asher embodied “the three E’s of good work,” he also synthesized these three crucial roles.

He will be missed. We hope that all in education, wherever they are, can learn from his inspiring example.

Ethics and American Colleges: A Troubled Saga—or Our Humpty Dumpty Problem

by Howard Gardner

Sometimes, you know something—or think that you know something—and then you confront the limits of your knowledge. Or, to put it less kindly, then you have an experience that reveals your ignorance.

As someone with some knowledge about the history of higher education in the United States, I knew that nearly all colleges had begun as religious institutions. I was also aware that in the last century or so, the religious mission had waned and that, indeed, the overwhelming majority of colleges and universities are now essentially, or primarily, secular.

If you had asked me a decade ago for my views about this situation, I would have been quite accepting. I am secular myself; Harvard, the school with which I have long been associated, shed its religious ties many years ago.

But as a result of a ten-year study of American higher education, carried out with my long-time colleague Wendy Fischman, I now think quite differently about this situation. I now believe that there’s a lot to be said in favor of colleges and universities that have a stated mission. Moreover, that mission might well be religious—though it could also have other aspirations: for example, training members of the military (West Point and Annapolis) or foregrounding certain demographic groups, as historically Black institutions do.

I’ve come to this conclusion because—to put it sharply—too many of our students do not understand the major reason(s) why we have non-vocational institutions of higher learning. Many students are inertial (“Well, after you finish high school, you go to college”) or transactional (“you go to college so that you can get a good job”). Of course, some institutions describe themselves as primarily vocational—whether that vocation is engineering, or pharmacy, or nursing—and that’s fine. Truth in advertising! But if you call yourself a liberal arts school or a general education school, you have taken on the obligation to survey a wide swathe of knowledge and expose students to many ways of thinking: in our terminology, to get students to explore and to be open to transformation of how they think of themselves and how they make sense of the world.

Of course, many viable missions are non-sectarian and worth making central to one’s education. For example, a school might organize itself around democracy/civics; or community service; or global understanding. Indeed, the recently launched London Interdisciplinary School is directed toward understanding and solving global problems while San Francisco-headquartered Minerva University seeks to expose students to global knowledge and experience.

Not so for most schools!

In the course of our research, Wendy Fischman and I have made a discovery—one related to the quickly-sketched history of higher education in this country. Our interviews with over 1000 students drawn from ten different schools revealed an ethical void: even when asked directly, most students do not recognize any experiences that they would consider ethical dilemmas. And so they give no indication of how they think about such dilemmas, reflect on them, or attempt to take concrete steps toward constructive solutions and resolutions. Accordingly, in our current work, we strive to make ethical understanding and decision making central in the experience of college students.

Back to my recently discovered area of ignorance:

I have long known, and admired from afar, Julie Reuben’s 1996 book The Making of the Modern University. Drawing particularly on documents from eight major American colleges and universities, this elegant historical study reviews a century of dramatic change in the teaching, curricula, and over-arching conception of higher education in the United States.

I can’t presume to capture the highlights of a 300-page book—one based on careful study of numerous academic and topical sources and documented in hundreds of footnotes. But I can assert that over the course of a century, after many attempts at compromise, most institutions of higher education in the United States became essentially secular; they dropped religious study from their teaching and their curricula and, at the same time, dropped any focus on ethical issues from the school’s explicit (or even tacit) mission.

So at the risk of caricature, here’s the rough set of stages (no doubt overlapping) through which American higher education passed:

  1. Most schools are religious in orientation, students take religious courses, the faculty and the president take on responsibility for religious “formation”: many students are training for the ministry; truth is seen as indissociable from the good. A concern with ethics is subsumed under the religious focus.

  2. American colleges are deeply affected by the examples of major universities in Europe: flagship American campuses add doctoral studies, professional degrees, technically trained faculty across the disciplinary terrain, but these institutions still seek to maintain a religious formative creed; accordingly Darwinian ideas are highly controversial.

  3. Curricula offer more choices; sciences play an ever-larger role (focus on method as well as findings)—Darwinian ideas are increasingly accepted; with increasing competition for outstanding faculty, the role of the president becomes less ethically-centered, less involved in curricula, more political, administrative, fund-raising.

  4. Explicitly religious courses and curricula wane (students also show less interest in these topics); there is tension between religious and intellectual orientations; efforts are made to foster ethical and moral conduct and behavior without explicit ties to specific religion(s); morality is seen as a secular, not just a religious preoccupation.

  5. Science is increasingly seen as value-free; educators look toward social sciences and humanities for the understanding of ethical and moral issues, and their inculcation (as appropriate) in students; morality is seen increasingly in behavioral rather than belief terms.

  6. The pursuit of the true, long a primary educational goal, is now separated—quite decisively—from the inculcation of a sense of beauty or of morality (the good)—and schools aspire to cultivate these latter virtues, which can be acquired both in class and via extra-curricular activities (also via dormitory life); faculty are held accountable for their own ethical behavior.

  7. Faculty and curricula are no longer seen as primary vehicles for a sense of morality and ethics; accordingly, ethically-oriented curricula are either actively removed or simply wane from the offerings of secular schools.

  8. Behold—the modern, secular university.

All of this happens over—roughly—a century.

In this country, we are now left with a higher education system where ethics and morality are seen as “someone else’s concerns”. Moreover, we have students—and (as our study documents) other constituencies as well—whose ethical antennae are not stimulated, and may even have been allowed to atrophy.

Hence the Humpty-Dumpty challenge: can these values, these virtues, be re-integrated in our system of higher education?

Were we to live in a society where ethics and morality were well handled by religious and/or civic institutions, the situation ascribed to higher education would not be lamentable. Alas, that’s not the case! And while it is impractical and perhaps even wrong-headed to expect our colleges and universities to pick up all the slack, they certainly need to do their part.

And that includes us!

For helpful comments on an earlier draft, I thank Shinri Furuzawa and Ellen Winner. For support of our current work, we thank the Kern Family Foundation.

References

Fischman, W., & Gardner, H. (2022). The real world of college: What higher education is and what it can be. MIT Press.

Reuben, J. A. (1996). The making of the modern university: Intellectual transformation and the marginalization of morality. University of Chicago Press.

Does a Research Oath for Doctoral Students Make Sense?

by Howard Gardner

The French Office for Research Integrity recently announced a new policy. Going forward, all students who receive—as well as all who expect to receive—a doctorate in any field will be required to take an ethical oath. The wording: “I pledge to the greatest of my ability, to continue to maintain integrity in my relationship to knowledge, to my methods, and to my results.” On two occasions, these individuals need to affirm that, as holders of a doctoral degree, they will adhere in their work to the highest ethical standards.

The case for such a requirement is straightforward. In recent years, across the broad range of physical, natural, and social sciences, there have been numerous cases in which holders of doctorates have behaved in ways that disgrace their profession and may also damage human beings. Two cases that have recently received publicity:

  1. Widespread claims that amyloid deposits cause dementia—and hence that dementia can be addressed by palliative drugs—have been based on faulty or ambiguous evidence.

  2. Widespread claims that the blood thinner Xarelto can help to heal cardiac damage—it can actually have deleterious effects—have also been withdrawn because of data manipulation.

Moving beyond the medical sector, in my own field of psychology, the haphazard collection, misinterpretation, and fudging of data have been widespread. In response, all sorts of new requirements and checkpoints have been introduced—to what avail, remains to be seen. In light of such accumulating evidence of malfeasance, an oath is, so to speak, a no-brainer.

But it is almost as easy to make the case against such oaths. Numerous fields—ranging from those dating back to the time of Hippocrates to new areas of work whose claim to be a profession is debatable—have ethical principles and/or oaths. These are easily accessible and sometimes administered solemnly. And yet, rarely does one hear of severe consequences for those who have clearly violated these precepts. Indeed (and this is not meant as a judgment), practitioners nowadays are far more likely to be penalized or chastised if they misbehave toward a colleague or make injudicious remarks than if they fail to honor the core strictures of their profession. And those whose malpractice has been confirmed at one institution all too often find a comparable position at other (though perhaps less prestigious) institutions.

As one who has held a doctorate for over a half century, I have a clear perspective on this matter. Far more important than any kind of oath, whenever and however administered, are the practices and norms that students witness in the course of their training. This immersion begins early in education (dating back well before matriculation at college or university) and reaches its apogee in the years of doctoral training. Particularly crucial are the standards, models, words, and deeds of teachers, especially doctoral advisers; the values and ambitions of peers—other doctoral students in the cohort; and the atmosphere among young and senior professionals who work alongside the candidate in the lab, at the library, in class, or in the lunchroom.

Of course, there will always be exceptions. There will be graduates who, despite the positive models readily visible in their training, proceed to violate their professional oaths and norms. (I can think of colleagues who, lamentably, failed to learn from estimable role models). There will also be graduates who, despite a flawed adviser, lab atmosphere, and/or peer group, hold the highest standards for themselves and others. Bravo for them!

But we cannot and should not wait for outliers (or, if you prefer, out-liars!). We cannot count on physicians healing themselves or on researchers reading and re-reading the oath that they have sworn to uphold. Instead, as teachers and mentors, we need to apply a critical lens to our own practices and models; and, if they are flawed in any way, we must strive to correct them. If future doctoral recipients encounter positive models, we can rest assured that most of them will follow in the footsteps of their mentors. And then, should such an oath be required, it will serve not as a prayer but as a celebration.

 

For helpful suggestions, I thank Courtney Bither and Ellen Winner.