Howard Gardner

CHATGPT: FIRST MUSINGS

Howard Gardner © 2023

How will ChatGPT—and other Large Language Instruments—affect our educational system—and our broader society? How should they?

I’m frequently asked questions like these—and they are much on my mind.

Something akin to ChatGPT—human or super-human levels of performance—has long been portrayed in science fiction: I’m familiar with the American, British, French, and Russian varieties. But few observers expected such excellent performance to arrive so soon—so impressive, so threatening (or so enabling), depending on your stance.

As suggested by historian Yuval Noah Harari, we may be approaching the end of the Anthropocene era.

We can anticipate that large language instruments—like OpenAI’s ChatGPT and DALL-E—will continually improve.

They will be able to do anything that can be described and captured in some kind of notation. Already they are able to conduct psychotherapy with patients, write credible college application essays, and create works of visual art or pieces of music in the style of well-known human creators as well as in newly invented styles. Soon one of their creations may be considered for the Nobel Prize in physics or literature, or the Pulitzer Prize for musical composition or journalism.

Of course, superior AI performance does not—and need not—prevent human beings from engaging in such activities. We humans can still paint, compose music, sculpt, compete in chess, conduct psychotherapy sessions—even if AI systems turn out to outperform us in some or most ways.

OpenAI introduced GPT-3 in 2020 and DALL-E in 2021.

We can also work in conjunction with AI programs. A painter may ask DALL-E to create something, after which the painter may alter what the program has furnished. A researcher may present ChatGPT with a hypothesis and ask the system to come up with ways to test that hypothesis—after which the researcher can carry out one or more of these approaches herself. Such activities can alternate, going back and forth between what the human provides and what the program produces.

We fear what could go wrong—and rightly so. AI systems like ChatGPT have not undergone a million-plus years of evolutionary history (including near extinction or sudden vaults in skill); such recently devised systems do not inhabit our planet in the same way that the hominid species has. They are not necessarily—and certainly not existentially—afraid of cataclysmic climate change, or nuclear war, or viruses that prove fatal to Homo sapiens. Indeed, such systems could spread misinformation rapidly and thereby contribute to destructive climate change and to the probability of nuclear war (recall “The Doomsday Machine” featured in the dystopic movie Dr. Strangelove). These destructive outcomes are certainly possible, although (admittedly) such calamities might happen even had there been no digital revolution.

And what about the effects of Large Language Instruments on our schools, our broader educational system?

Many fear that systems like ChatGPT will make it unnecessary for students to learn anything, since ChatGPT can tell them everything they might want or need to know—almost instantaneously and almost always accurately (or at least as accurately as a 20th-century encyclopedia or today’s “edition” of Wikipedia!). I think that AI will have a huge impact on education, but not in that way.

Now that machines are rivalling or even surpassing us in so many ways, I have an ambitious and perhaps radical recommendation. What the education of members of our species should do—increasingly and thoughtfully—is focus on the human condition: what it means to be human, what our strengths and frailties are, what we have accomplished (for good or evil) over many centuries of biological and cultural evolution, what opportunities are afforded by our stature and our status, what we should avoid, what we should pursue, in what ways, and with what indices of success...or of concern.

But to forestall an immediate and appropriate reservation: I don’t intend to be Homo sapiens-centric. Rather, I want us to focus on our species as part of the wider world, indeed the wider universe. That universe includes the biological and geological worlds that are known to us.

Psychologist-turned-educator (and my teacher) Jerome Bruner inspired me. His curriculum for middle school children, developed nearly sixty years ago, centered on three questions:

Bruner in the Chanticleer 1936, Duke University (Source: Wikipedia)

  • What makes human beings human?

  • How did they get to be that way?

  • How can they be made more so?

To approach these framing topics intelligently, we need disciplinary knowledge, rigor, and tools. We may not need to completely scuttle earlier curricular frameworks (e.g., those posed in the United States in the 1890s by the “Committee of Ten” or the more recent “Common Core”); but we need to rethink how they can be taught, modelled, and activated to address such over-arching questions.

We need to understand our human nature—biologically, psychologically, culturally, historically, and pre-historically. That’s the way to preserve the planet and all of us on it. It’s also the optimal way to launch joint human-computational ventures—ranging from robots that construct or re-construct environments to programs dedicated (as examples) to economic planning, political positioning, and military strategies and decisions.

To emphasize: this approach is not intended to glorify our species; Homo sapiens has done much that is regrettable and lamentable. Rather, it is to explain and to understand—so that, as a species, we can do better as we move forward in a human-computer era.


Against this background, how have I re-considered or re-conceptualized the three issues that, as a scholar, I’ve long pondered?

  1. Synthesizing is the most straightforward. Anything that can be laid out and formulated—by humans or machines—will be synthesized well by ChatGPT and its ilk. It’s hard to imagine that a human being—or even a large team of well-educated human beings—will do better synthesis than ChatGPT 4, 5, or n.

    We could imagine a “Howard Gardner ChatGPT”—one that synthesizes the way that I do, only better; it would be like an ever-improving chess program in that way. Whether ChatGPT-HG is a dream or a nightmare I leave to your (human) judgment.

  2. Good work and good citizenship pose different challenges. Our aspirational conceptions of work and of membership in a community have emerged in the course of human history over the last several thousand years—within and across hundreds of cultures. Looking ahead, these aspirations indicate what we are likely to have to do if we want to survive as a planet and as a species.

    All cultures have views, or conceptions, of these “goods,” but of course—and understandably—these views are not the same. What is good—and what is bad, or evil, or neutral—in 2023 is not the same as in 1723. What is valued today in China is not necessarily what is admired in Scandinavia or Brazil. And there are different versions of “the good” in the US—just think of the Deep South compared to the East and West coasts.

    ChatGPT could synthesize different senses of “good,” in the realms of both “work” and “citizenship.” But there’s little reason to think that human beings will necessarily abide by such syntheses. The League of Nations, the United Nations, the Universal Declaration of Human Rights, and the Geneva Conventions were certainly created with good will by human beings—but they have been honored as much in the breach as in the observance.

A Personal Perspective

We won’t survive as a planet unless we institute and subscribe to some kind of world belief system. It would need the prevalence that Christianity had in the Occident a millennium ago, or that Confucianism and Buddhism have had over the centuries in Asia, and it should incorporate tactics like “peaceful disobedience” in the spirit of Mahatma Gandhi, Martin Luther King, or Nelson Mandela. This form of faith needs to be constructed so as to enable the survival and thriving of the planet and the entities on it, including plants, non-human animals, and the range of chemical elements and compounds.

Personally, I do not have reservations about terming this a “world religion”—so long as it does not posit a specific view of an Almighty Figure or require allegiance to that entity. But a better analogy might be a “world language”—one that could be like Esperanto or a string of bits 00010101111….

And if such a school of thought is akin to a religion, it can’t be one that favors one culture over others—it needs to be catholic, rather than Catholic, judicious rather than Jewish. Such a belief-and-action system needs to center on the recognition and the resolution of challenges—in the spirit of controlling climate change, or conquering illness, or combatting a comet directed at Earth from outer space, or a variety of ChatGPT that threatens to “do us in” from within…. Of the philosophical or epistemological choices known to me, I resonate most with humanism—as described well by Sarah Bakewell in her recent book Humanly Possible.

Multiple Intelligences (MI)

And, finally, I turn to MI. Without question, any work by any intelligence, or combination of intelligences, that can be specified with clarity will soon be mastered by Large Language Instruments—indeed, such performances by now constitute a trivial achievement with respect to the linguistic, logical, musical, and spatial intelligences—at least as we know them, via their human instantiations.

How—or even whether—such computational instruments can display the bodily intelligences or the personal intelligences is a different matter. The answer depends on how broad a formulation one is willing to accept.

To be specific:

Taylor Swift at 2019 American Music Awards (Source: Wikipedia)

  • Does a robotic version of ChatGPT need to be able to perform ballet à la Rudolf Nureyev and Margot Fonteyn? And must it also show how these performers might dance in 2023 rather than in 1963?

  • Does it need to inspire people, the way Joan of Arc or Martin Luther King did?

  • Should it be able to conduct successful psychotherapy in the manner of Erik Erikson or Carl Rogers?

  • Or are non-human attempts to instantiate these intelligences seen as category errors—the way that we would likely dismiss a chimpanzee that purported to create poetry on a keyboard?

The answers, in turn, are determined by what we mean by a human intelligence—is it certain behavioral outputs alone (the proverbial monkey that types out Shakespeare, or the songbird that can emulate Maria Callas or Luciano Pavarotti, Mick Jagger or Taylor Swift)? Or is it what a human or group of humans can express through that intelligence to other human beings—the meanings that can be created, conveyed, and comprehended among members of the species?

I’m reminded of Thomas Nagel’s question: “What is it like to be a bat?” ChatGPT can certainly simulate human beings. But perhaps only human beings can realize—feel, experience, dream—what it’s like to be a human being. And perhaps only human beings can and will care—existentially—about that question. And this is what I believe education in our post-ChatGPT world should focus on.


For comments on earlier versions of this far-ranging blog, I am grateful to Shinri Furuzawa, Jay Gardner, Annie Stachura, and Ellen Winner.

REFERENCES:

Bakewell, S. (2024). Humanly possible: Seven hundred years of humanist freethinking, inquiry, and hope. Vintage Canada.

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.4159/harvard.9780674594623.c15

Wikimedia Foundation. (2023, August 21). Man: A course of study. Wikipedia. https://en.wikipedia.org/wiki/Man:_A_Course_of_Study

Remembering Bob Asher (1929-2023)

by Howard Gardner

Robert (Bob) Asher was a “good worker.” Indeed, he exemplified the three attributes of that praiseworthy descriptor. He was excellent at his work; he was completely engaged in his work; and he carried out his work in a moral and ethical way.

Robert Asher, left, receives the Herzl Award from Lester Crown (2008)

We came to know Bob because of his founding role in the Israel Arts and Science Academy (IASA). Working with the incomparable Raffi Amram, Bob played a major role in launching the school, and he remained its steadfast supporter for several decades.

When one carries out educational research (as we have done for half a century), those who participate often express a polite interest in learning what was found. And almost always, that’s it.

Bob was totally different. When we approached him about studying IASA, he was extremely helpful, making the necessary introductions and connections. He followed the work throughout the course of our study. And once the study had been completed, he gently prodded us to share the results so he could help bring about changes and improvements in the school. Again, this has rarely happened on our watch.

Usually, the philanthropist, the founder, and the one who follows through are three different roles in education, played by three different people. But just as Bob Asher embodied “the three E’s of good work,” he also synthesized these three crucial roles.

He will be missed. We hope that all in education, wherever they are, can learn from his inspiring example.

Ethics and American Colleges: A Troubled Saga—or Our Humpty Dumpty Problem

by Howard Gardner

Sometimes, you know something—or think that you know something—and then you confront the limits of your knowledge. Or, to put it less kindly, then you have an experience that reveals your ignorance.

As someone with some knowledge about the history of higher education in the United States, I knew that nearly all colleges had begun as religious institutions. I was also aware that in the last century or so, the religious mission had waned and that, indeed, the overwhelming majority of colleges and universities are now essentially, or primarily, secular.

If you had asked me a decade ago for my views about this situation, I would have been quite accepting. I am secular myself; Harvard, the school with which I have long been associated, shed its religious ties many years ago.

But as a result of a ten-year study of American higher education, carried out with my long-time colleague Wendy Fischman, I now think quite differently about this situation. I now believe that there’s a lot to be said in favor of colleges and universities that have a stated mission. Moreover, that mission might well be religious—though it could also have other aspirations: for example, training members of the military (West Point and Annapolis) or foregrounding certain demographic groups, as historically Black institutions do.

I’ve come to this conclusion because—to put it sharply—too many of our students do not understand the major reason(s) why we have non-vocational institutions of higher learning. Many students are inertial (“Well, after you finish high school, you go to college”) or transactional (“you go to college so that you can get a good job”). Of course, some institutions describe themselves as primarily vocational—whether that vocation is engineering, or pharmacy, or nursing—and that’s fine. Truth in advertising! But if you call yourself a liberal arts school or a general education school, you have taken on the obligation to survey a wide swathe of knowledge and expose students to many ways of thinking: in our terminology, to get students to explore and to be open to transformation of how they think of themselves and how they make sense of the world.

Of course, many viable missions are non-sectarian and worth making central to one’s education. For example, a school might organize itself around democracy and civics, community service, or global understanding. Indeed, the recently launched London Interdisciplinary School is directed toward understanding and solving global problems, while San Francisco-headquartered Minerva University seeks to expose students to global knowledge and experience.

Not so for most schools!

In the course of our research, Wendy Fischman and I have made a discovery—one related to the quickly-sketched history of higher education in this country. Our interviews with over 1000 students drawn from ten different schools revealed an ethical void: even when asked directly, most students do not recognize any experiences that they would consider ethical dilemmas. Accordingly, they give no indication of how they think about such dilemmas, reflect on them, or attempt to take concrete steps toward constructive solutions and resolutions. In our current work, therefore, we strive to make ethical understanding and decision-making central in the experience of college students.

Back to my recently discovered area of ignorance:

I have long known, and admired from afar, Julie Reuben’s 1996 book The Making of the Modern University. Drawing particularly on documents from eight major American colleges/universities, this elegant historical study reviews the century of dramatic change in the teaching, curricula, and over-arching conception of higher education in the United States.

I can’t presume to capture the highlights of a 300-page book—one based on careful study of numerous academic and topical sources and documented in hundreds of footnotes. But I can assert that over the course of a century, after many attempts at compromise, most institutions of higher education in the United States became essentially secular; they dropped explicit religious study from their teaching and their curricula, and at the same time they dropped any focus on ethical issues from the school’s explicit (or even tacit) mission.

So at the risk of caricature, here’s the rough set of stages (no doubt overlapping) through which American higher education passed:

  1. Most schools are religious in orientation; students take religious courses; the faculty and the president take on responsibility for religious “formation”; many students are training for the ministry; truth is seen as indissociable from the good. A concern with ethics is subsumed under the religious focus.

  2. American colleges are deeply affected by the examples of major universities in Europe: flagship American campuses add doctoral studies, professional degrees, and technically trained faculty across the disciplinary terrain, but these institutions still seek to maintain a religious formative creed; accordingly, Darwinian ideas are highly controversial.

  3. Curricula offer more choices; sciences play an ever-larger role (focus on method as well as findings)—Darwinian ideas are increasingly accepted; with increasing competition for outstanding faculty, the role of the president becomes less ethically centered and less involved in curricula, and more political, administrative, and focused on fund-raising.

  4. Explicitly religious courses and curricula wane (students also show less interest in these topics); there is tension between religious and intellectual orientations; efforts are made to foster ethical and moral conduct and behavior without explicit ties to specific religion(s); morality is seen as a secular, not just a religious preoccupation.

  5. Science is increasingly seen as value-free; educators look toward social sciences and humanities for the understanding of ethical and moral issues, and their inculcation (as appropriate) in students; morality is seen increasingly in behavioral rather than belief terms.

  6. The pursuit of the true, long a primary educational goal, is now separated—quite decisively—from the inculcation of a sense of beauty or of morality (the good)—though schools still aspire to cultivate these latter virtues; these virtues can be acquired both in class and via extra-curricular activities (also via dormitory life); faculty are held accountable for their own ethical behavior.

  7. Faculty and curricula are no longer seen as primary vehicles for a sense of morality and ethics; accordingly, ethically oriented curricula are either actively removed or simply fade from the offerings of secular schools.

  8. Behold—the modern, secular university.

All of this happens over—roughly—a century.

In this country, we are now left with a higher education system where ethics and morality are seen as “someone else’s concerns”. As well, we have students—and (as our study documents) other constituencies as well—whose ethical antennae are not stimulated, and may even have been allowed to atrophy.

Hence the Humpty-Dumpty challenge: can these values, these virtues, be re-integrated in our system of higher education?

Were we to live in a society where ethics and morality were well handled by religious and/or civic institutions, the situation ascribed to higher education would not be lamentable. Alas, that’s not the case! And while it is impractical and perhaps even wrong-headed to expect our colleges and universities to pick up all the slack, they certainly need to do their part.

And that includes us!

For helpful comments on an earlier draft, I thank Shinri Furuzawa and Ellen Winner. For support of our current work, we thank the Kern Family Foundation.

References

Fischman, W., & Gardner, H. (2022). The real world of college: What higher education is and what it can be. MIT Press.

Reuben, J. A. (1996). The making of the modern university: Intellectual transformation and the marginalization of morality. University of Chicago Press.

Does a Research Oath for Doctoral Students Make Sense?

by Howard Gardner

The French Office for Research Integrity recently announced a new policy. Going forward, all students who receive—as well as all who expect to receive—a doctorate in any field will be required to take an ethical oath. The wording: “I pledge, to the greatest of my ability, to continue to maintain integrity in my relationship to knowledge, to my methods, and to my results.” On two occasions, these individuals need to affirm that, as holders of a doctoral degree, they will adhere in their work to the highest ethical standards.

The case for such a requirement is straightforward. In recent years, across the broad range of physical, natural, and social sciences, there have been numerous cases in which holders of doctorates have behaved in ways that disgrace their profession and may also damage human beings. Two cases that have recently received publicity:

  1. Widespread claims that amyloid deposits cause dementia—and hence that dementia can be addressed by palliative drugs—have been based on faulty or ambiguous evidence.

  2. Widespread claims that the blood thinner Xarelto can help to heal cardiac damage—it can actually have deleterious effects—have also been withdrawn because of data manipulation.

Moving beyond the medical sector: in my own field of psychology, the haphazard collection, misinterpretation, and fudging of data have been widespread. In response, all sorts of new requirements and checkpoints have been introduced—to what avail remains to be seen. In light of such accumulating evidence of malfeasance, an oath is, so to speak, a no-brainer.

But it is almost as easy to make the case against such oaths. Numerous fields—ranging from those dating back to the time of Hippocrates to new areas of work whose claims to be professions are debatable—have ethical principles and/or oaths. These are easily accessible and sometimes administered solemnly. And yet, rarely does one hear of severe consequences for those who have clearly violated these precepts. Indeed (and this is not meant as a judgment), practitioners nowadays are far more likely to be penalized or chastised if they misbehave toward a colleague or make injudicious remarks than if they fail to honor the core strictures of their profession. And those whose malpractice has been confirmed at one institution all too often find a comparable position at other (though perhaps less prestigious) institutions.

As one who has held a doctorate for over half a century, I have a clear perspective on this matter. Far more important than any kind of oath, whenever and however administered, are the practices and norms that students witness in the course of their training. This immersion begins early in education (dating back well before matriculation at college or university) and reaches its apogee in the years of doctoral training. Particularly crucial are the standards, models, words, and deeds of teachers, especially doctoral advisers; the values and ambitions of peers—other doctoral students in the cohort; and the atmosphere among young and senior professionals who work alongside the candidate in the lab, at the library, in class, or in the lunchroom.

Of course, there will always be exceptions. There will be graduates who, despite the positive models readily visible in their training, proceed to violate their professional oaths and norms. (I can think of colleagues who, lamentably, failed to learn from estimable role models.) There will also be graduates who, despite a flawed adviser, lab atmosphere, and/or peer group, hold the highest standards for themselves and others. Bravo for them!

But we cannot and should not wait for outliers (or, if you prefer, out-liars!). We cannot count on physicians healing themselves or on researchers reading and re-reading the oath that they have sworn to uphold. Instead, as teachers and mentors, we need to apply a critical lens to our own practices and models; and, if they are flawed in any way, we must strive to correct them. If future holders of doctorates encounter positive models, we can rest assured that most of them will follow in the footsteps of their mentors. And then, should such an oath be required, it will serve—not as a prayer but as a celebration.

 

For helpful suggestions, I thank Courtney Bither and Ellen Winner.

The Difference an “n” Makes: A Good Project Puzzle

The other day, I was speaking to a friend about what we, as older scholars, should be saying and doing at this time—a historical moment fraught with political, pandemic, and personal challenges, and, perhaps, with opportunities.

He began speaking and said, “As XX said…” As soon as he had enunciated XX, I had in mind one French person, but he was actually about to quote another one.

Person 1: Jean Monnet was a French economist and political leader. He is one of my heroes; in fact, in my book Leading Minds, I devote a chapter to him.

The First World War convinced Monnet that the centuries-long pattern of European nations (and then states) entering into predictable combat was extremely destructive; it ought not be allowed to continue. And so, for the next half century, he led the efforts to create a united Europe. And though Europe experienced another, even more devastating war, and though there continued to be opposition, Monnet was able to witness the creation of the European Union and to observe a peace that has held for more than half a century… alas, whether Europe will remain at peace is uncertain at this fraught moment.

Two quotations capture the thinking and the program of action of Jean Monnet:

“Europe has never existed. One must genuinely create Europe.”

“There will be no peace in Europe if the States rebuild themselves on the basis of national sovereignty, with its implications of prestige politics and economic protection... The countries of Europe are not strong enough individually to be able to guarantee prosperity and social development for their peoples. The States of Europe must therefore form a federation or a European entity that would make them into a common economic unit.”

But my friend actually had in mind Claude Monet, quite likely better known to the public-at-large. The great painter epitomizes the breakthrough that was Impressionism; he is probably more valued than any of his artistic colleagues. Like his younger namesake, Claude Monet was also tremendously upset by the warring countries. But—a generation older—Monet felt that his greatest contributions could be made by continuing to paint the way that he had, even when he became frail. And indeed, for well over a century, his artistry has given pleasure—and sometimes inspiration—to millions of viewers around the world. As Claude Monet put it,

“Yesterday I resumed work. It’s the best way to avoid thinking of these sad times…if these savages must kill me, it will be in the middle of my canvases, in front of all my life’s work.”

Those of us who are involved in the study and the stimulation of the good—good persons, good workers, good citizens—face a similar choice. We cannot delude ourselves into thinking that our efforts alone will have an effect on the larger scene—though that does not mean that we should not try. But it is legitimate for us to wonder: should we continue to carry out research and writing; should we drop our work and become political activists; or should we attempt to lead the life that we admire and respect, and hope that our example will also influence others?

To make it more vivid, we can personify this dilemma: Does it make sense to devote ourselves to a life of political activism, like Senators John Kerry or John McCain; to the creation of powerful art, like novelist Toni Morrison or painter Helen Frankenthaler; to the improvement of world health, like the recently deceased Paul Farmer; or to an attempt to straddle art and public life, like cellist Yo-Yo Ma?

We need to understand the world; but we also need to know ourselves.

Afterthought:

I felt a bit foolish that I was thinking of one Monnet, while my friend had in mind the other large personality—Monet. But I am not the only one who has been confused. Decades ago, Pamela Harriman was being considered for the post of Ambassador to France. Senator Jesse Helms opposed the nomination because he thought that Harriman belonged to a political society—the Jean Monnet Society, which supported a united Europe; in fact, Harriman belonged to an artistic society that honored the painter Claude Monet. She was confirmed.

One “n”—or, in social science terms, an “N” of one—can make a big difference!

Reference

Gardner, H. (1995). Leading minds: An anatomy of leadership. Basic Books.