The poor, the dead, and God are easily forgotten

Peter Brown’s “Remembering the Poor and the Aesthetic of Society” (Journal of Interdisciplinary History) presents a wonderful analysis of charity through a lens of history and society:

Looking at the medieval and (largely) early modern societies described herein with more ancient eyes reveals patterns of expectations that are familiar from the longer history of the three major religions studied in this collection. First and foremost, those who founded and administered the charitable institutions of early modern Europe and the Middle East plainly carried in the back of their minds what might be called a particular “aesthetic of society,” the outlines of which might be blurred by the quotidian routines of administration. This “aesthetic of society” amounted to a sharp sense of what constituted a good society and what constituted an ugly society, namely, one that neglected the poor or treated them inappropriately.

Europeans and Ottomans alike instantly noticed when charitable institutions were absent. Of the great imarets of the Ottoman empire, Evliya, the seventeenth-century traveler, wrote, “I, this poor one, have traveled 51 years and in the territories of 18 rulers, and there was nothing like our enviable institution.”

The article delves into comparisons of social norms of charity, from which I have quoted before:

Divided as European Protestants and Catholics were in their ideas about the good society, the differences between Christian Europe and the Ottoman Empire were even more decisive, subtle though they sometimes could be. Christian Europe concentrated on a quality of mercy that was essentially asymmetrical. It strove to integrate those who, otherwise, would have no place in society. As the founder of Christ’s Hospital wrote in the sixteenth century, “Christ has lain too long abroad . . . in the streets of London.” To him, those deserving of mercy were “lesser folk,” and those who “raised them up” were “like a God.” In Catholic countries, much charity was “redemptive,” directed to tainted groups who might yet come to be absorbed more fully into the Christian fold—including Jews, some of whom might yet be converted, and prostitutes, some of whom might yet be reformed. In the more bracing air of Protestant Hadleigh, however, “reform” meant making sure that those who were “badly governed in their bodies” (delinquent male beggars) were brought back to the labor force from which they had lapsed. For both Catholics and Protestants, the “reform” of errant groups was a dominant concern.

By contrast, in Ottoman society, receiving charity brought no shame. To go to an imaret was not to be “brought in from the cold.” Rich and poor were sustained by the carefully graded bounty of the sultan: “Hand in hand with the imperial generosity is that of a strictly run establishment, carefully regulating the movements of its clients and the sustenance each received.” The meals at the Ottoman imaret are reminiscent of the Roman convivium, great public banquets of the Roman emperors, in their judicious combination of hierarchy and outreach to all citizens. Nothing like it existed in Christian Europe.

So who cares? (This is always a good question to throw at the dewy-eyed young’uns):

One issue concerning the “aesthetic of society” that deserves to be stressed is often taken for granted in studies of poverty: Why should the poor matter in the first place? The heirs to centuries of concerted charitable effort by conscientious Jews, Christians, and Muslims are liable to forget that concern for the poor is, in many ways, a relatively recent development in the history of Europe and the Middle East, not necessarily shared by many non-European and non-Middle Eastern societies.

The Greco-Roman world had no place whatsoever for the poor in its “aesthetic of society.” But ancient Greeks and Romans were not thereby hardhearted or ungenerous. They were aware of the misery that surrounded them and often prepared to spend large sums on their fellows. But the beneficiaries of their acts of kindness were never defined as “the poor,” largely because the city stood at the center of the social imagination. The misery that touched them most acutely was the potential misery of their city. If Leland Stanford had lived in ancient Greece or in ancient Rome, his philanthropic activities would not have been directed toward “humanity,” even less toward “the poor,” but toward improving the amenities of San Francisco and the aesthetics of the citizen body as a whole. It would not have gone to the homeless or to the reform of prostitutes. Those who happened, economically, to be poor might have benefited from such philanthropy, but only insofar as they were members of the city, the great man’s “fellow-citizens.”

The emergence of the poor as a separate category and object of concern within the general population involved a slow and hesitant revolution in the entire “aesthetic” of ancient society, which was connected primarily with the rise of Christianity in the Roman world. But it also coincided with profound modifications in the image of the city itself. The self-image of a classical, city-bound society had to change before the “poor” became visible as a separate group within it.

Similarly, in the context of the Chinese empire’s governmental tradition, the victims of famine were not so much “the poor” as they were “subjects” who happened to need food, the better to be controlled and educated like everyone else. This state-centered image had to weaken considerably before Buddhist notions of “compassion” to “the poor” could spread in China. Until at least the eleventh century, acts of charity to the poor ranked low in the hierarchy of official values, dismissed as “little acts” and endowed with little public resonance. They were overshadowed by a robust state ideology of responsibility for famine relief, which put its trust, not on anything as frail as “compassion,” but on great state warehouses controlled (it was hoped) by public-spirited provincial governors.

If the phrase “aesthetic of society” connotes a view of the poor deemed fitting for a society, one implicit aspect of it notably absent from the ancient world and China was the intense feeling—shared by Jews, Christians, and Muslims—that outright neglect of the poor was ugly, and that charity was not only prudent but also beautiful. Despite the traditional limitations of charitable institutions—their perpetual shortfall in meeting widespread misery, their inward-looking quality, and the overbearing manner in which they frequently operated—they were undeniably worthwhile ventures. The officials who ran them and the rich who funded them could think of themselves as engaged in “a profoundly integrative activity.” This widespread feeling of contributing to a “beautiful” rather than an “ugly” society still needs to be explained.

Why remember the poor? There are many obvious answers to this question, most of which have been fully spelled out in recent scholarship. Jews, Christians, and Muslims were guardians of sacred scriptures that enjoined compassion for the poor and promised future rewards for it. Furthermore, in early modern Europe, in particular, charity to the poor came to mean more than merely pleasing God; it represented the solution to a pressing social problem. To provide for the poor and to police their movements was a prudent reaction to what scholars have revealed as an objective crisis caused by headlong demographic growth and a decline in the real value of wages.

Yet even this “objective” crisis had its “subjective” side. Contemporaries perceived the extent of the crisis in, say, Britain as amplified, subjectively, by a subtle change in the “aesthetic of society.” The poor had not only become more dangerous; their poverty had become, in itself, more shocking. As Wrightson recently showed, forms of poverty that had once been accepted as part of the human condition, about which little could be done, became much more challenging wherever larger sections of a community became accustomed to higher levels of comfort. When poverty could no longer be taken for granted, to overlook the poor appeared, increasingly, to be the mark of an “ugly” society. Moreover, that the potentially “forgettable” segments of society were usually articulate and well educated, able to plead their cause to their more hardhearted contemporaries, had something to do with how indecorous, if not cruel, forgetting them would be.

Paul’s injunction to “remember the poor” (Galatians 2:10) and its equivalents in Jewish and Muslim societies warned about far more than a lapse of memory. It pointed to a brutal act of social excision the reverberations of which would not be confined to the narrow corridor where rich and poor met through the working of charitable institutions. The charitable institutions of the time present the poor, primarily, as persons in search of elemental needs—food, clothing, and work. But hunger and exposure were only the “presenting symptoms” of a deeper misery. Put bluntly, the heart of the problem was that the poor were eminently forgettable persons. In many different ways, they lost access to the networks that had lodged them in the memory of their fellows. Lacking the support of family and neighbors, the poor were on their own, floating into the vast world of the unremembered. This slippage into oblivion is strikingly evident in Jewish Midrash of the book of Proverbs, in which statements on the need to respect the poor are attached to the need to respect the dead. Ultimately helpless, the dead also depended entirely on the capacity of others to remember them. The dead represented the furthest pole of oblivion toward which the poor already drifted.

Fortunately for the poor, however, Jews, Christians, and Muslims not only had the example of their own dead—whom it was both shameful and inhuman to forget—but also that of God Himself, who was invisible, at least for the time being. Of all the eminently forgettable persons who ringed the fringes of a medieval and early modern society, God was the one most liable to be forgotten by comfortable and confident worldlings. The Qur’an equated those who denied the Day of Judgment with those who rejected orphans and neglected the feeding of the poor (Ma’un 107:1–3). The pious person, by contrast, forgot neither relatives nor strangers who were impoverished. Even though he might have had every reason to wish that they had never existed, he went out of his way to “feed them . . . and to speak kindly to them” (Nisa’ 4.36, 86).

The poor challenged the memory like God. They were scarcely visible creatures who, nonetheless, should not be forgotten. As Michael Bonner shows, the poor, the masakin of the Qur’an and of its early medieval interpreters, are “unsettling, ambiguous [persons] . . . whom we may or may not know.” In all three religions, charity to the easily forgotten poor was locked into an entire social pedagogy that supported the memory of a God who, also, was all-too-easily forgotten.

The poor were not the only persons in a medieval or an early modern society who might become victims of forgetfulness. Many other members of Jewish, Christian, and Islamic societies—and often the most vocal members—found themselves in a position strangely homologous to, or overlapping, that of the poor, and they often proved to be most articulate in pressing the claims of the poor. They also demanded to be remembered even if, by the normal standards of society, they did nothing particularly memorable.

Seen with the hard eyes of those who exercised real power in their societies, the religious leaders of all three religions were eminently “forgettable” persons. They contributed nothing of obvious importance to society.

And of course, I respect any scholar who manages to connect their paper to their ability to continue drawing a salary:

The manner in which a society remembers its forgettable persons and characterizes the failure to do so is a sensitive indicator of its tolerance for a certain amount of apparently unnecessary, even irrelevant, cultural and religious activity. What is at stake is more than generosity and compassion. It is the necessary heedlessness by which any complex society can find a place for the less conspicuous elements of its cultural differentiation and social health. Scholars owe much to the ancient injunction to “remember the poor.”


American Press Subsidies

A brief history of the United States’ subsidies to journalism and the press, from The Nation’s “How to Save Journalism” by John Nichols and Robert McChesney:

Even those sympathetic to subsidies do not grasp just how prevalent they have been in American history. From the days of Washington, Jefferson and Madison through those of Andrew Jackson to the mid-nineteenth century, enormous printing and postal subsidies were the order of the day. The need for them was rarely questioned, which is perhaps one reason they have been so easily overlooked. They were developed with the intention of expanding the quantity, quality and range of journalism–and they were astronomical by today’s standards. If, for example, the United States had devoted the same percentage of its GDP to journalism subsidies in 2009 as it did in the 1840s, we calculate that the allocation would have been $30 billion. In contrast, the federal subsidy last year for all of public broadcasting, not just journalism, was around $400 million.

The experience of America’s first century demonstrates that subsidies of the sort we suggest pose no threat to democratic discourse; in fact, they foster it. Postal subsidies historically applied to all newspapers, regardless of viewpoint. Printing subsidies were spread among all major parties and factions. Of course, some papers were rabidly partisan, even irresponsible. But serious historians of the era are unanimous in holding that the extraordinary and diverse print culture that resulted from these subsidies built a foundation for the growth and consolidation of American democracy. Subsidies made possible much of the abolitionist press that led the fight against slavery.

Our research suggests that press subsidies may well have been the second greatest expense of the federal budget of the early Republic, following the military. This commitment to nurturing and sustaining a free press was what was truly distinctive about America compared with European nations that had little press subsidy, fewer newspapers and magazines per capita, and far less democracy. This history was forgotten by the late nineteenth century, when commercial interests realized that newspaper publishing bankrolled by advertising was a goldmine, especially in monopolistic markets. Huge subsidies continued to the present, albeit at lower rates than during the first few generations of the Republic. But today’s direct and indirect subsidies–which include postal subsidies, business tax deductions for advertising, subsidies for journalism education, legal notices in papers, free monopoly licenses to scarce and lucrative radio and TV channels, and lax enforcement of anti-trust laws–have been pocketed by commercial interests even as they and their minions have lectured us on the importance of keeping the hands of government off the press. It was the hypocrisy of the current system–with subsidies and government policies made ostensibly in the public interest but actually carved out behind closed doors to benefit powerful commercial interests–that fueled the extraordinary growth of the media reform movement over the past decade.
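
That $30 billion figure is easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal sketch; the 2009 GDP value of roughly $14 trillion is my own assumption, and only the $30 billion and $400 million figures come from the article:

```python
# Back-of-the-envelope check of the Nichols/McChesney comparison above.
# Assumption (mine, not the article's): 2009 US GDP of roughly $14 trillion.
modern_gdp = 14e12

claimed_equivalent = 30e9    # the article's $30 billion figure
public_broadcasting = 400e6  # the article's ~$400 million federal subsidy

# The subsidy share of GDP the article's 1840s comparison implies
print(f"implied 1840s share: {claimed_equivalent / modern_gdp:.2%}")   # ~0.21%
print(f"2009 actual share:   {public_broadcasting / modern_gdp:.4%}")  # ~0.0029%
print(f"ratio: {claimed_equivalent / public_broadcasting:.0f}x")       # ~75x
```

In other words, if the assumed GDP figure is roughly right, the article is claiming the early Republic spent on the order of a fifth of a percent of GDP on the press, about seventy-five times the relative size of the 2009 public-broadcasting subsidy.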


Two tales of Island 94

The title promises two tales; both are actually the same story, told with different levels of detail and suspense. The first is from _The Earthquake that America Forgot_ by Dr. David Stewart and Dr. Ray Knox. (These authors have a whole series of books about the New Madrid earthquakes, all of which seem to include the tongue-in-cheek warranty above.)

One hundred miles south of the epicenter of the first great 8.6 quake was Island #94. Vicksburg, Mississippi is near that location today. It was known as “Stack Island” but also as “Crow’s Nest” and “Rogue’s Rest.” It was inhabited by pirates. Crow’s Nest was well-suited for staging an ambush because approaching boats could be spotted several miles away in both directions and because boats had to thread a long narrow channel called “Nine-Mile Reach” on one side of the island.

It just so happened that on Sunday night, December 15, a Captain Paul Sarpy from St. Louis had tied up his boat for the night on the north end of Island #94. He was accompanied by his crew and family.

In those days the river was too treacherous to ply by night. There were shifting sand bars and many snags and stumps that could damage, sink or capsize a boat. There was no U.S. Army Corps of Engineers, as we have today, to dredge the river, mark it with buoys and stabilize the channel with jetties and dikes. Neither were there any lights on the banks for pilots to sight in making their turns through the many bends of the river. River maps were also poor and unreliable. Zadok Cramer’s Navigator was the newest and the best, but it was only updated every year or two, while the river changed constantly. Therefore, it was the custom for boatmen to tie up at night and travel only by daylight.

Captain Sarpy didn’t know about the pirates. When he landed, he and several crew members went ashore to get some exercise and stretch their legs. As they strolled through the trees, they came upon an encampment of pirates. They had not been seen. Hiding where they stood, they listened, overhearing the pirates’ plans to ambush a boat with a valuable cargo and “a considerable sum of money” that was supposed to pass by any day now on its way from St. Louis to New Orleans. As they eavesdropped with intense concentration, crouching behind the bushes, they overheard a name: “Sarpy.” It was Sarpy’s boat they were after!

Slipping away without being discovered, Sarpy and his men went back to their boat. Late that night, under cover of darkness, they drifted around the island undetected by the pirates. They tied up downstream just far enough to make a quick getaway when daylight came.

During the night strong vibrations shook their boat. At first they were afraid the pirates had found them and were boarding for plunder. But no one came aboard. The tremors continued, accompanied by the agitation of large waves that rocked their boat.

That morning, in the dim dawn light as the fog and mist began to clear, Sarpy’s sailors looked upstream. Island #94 had completely disappeared—pirates and all!

For a hundred years there was no Stack Island, no Island #94. Since then the river has redeposited another body of sand in that location which carries the same name and number today. But it is not the same isle where Captain Sarpy so narrowly escaped a brutal ambush and was saved by an earthquake.

Further to the north, at the end of Long Reach, near the present-day town of Osceola, was Island #32. You won’t find it on any modern navigation maps. There is an Island #31 and an Island #33 but no Island #32. It, too, was to disappear in the darkness of an early December morning—but not yet. As of December 16, 1811, Island #32 was still there, but its days were numbered. In less than a week, it, too, would follow the fate of Island #94.

I suspect the previous story is quite close to the St. Louis Globe-Democrat’s “The Last Night of Island Ninety-Four,” which is mentioned in our next story, coming from Jay Feldman’s When the Mississippi Ran Backwards:

A tale published in the St. Louis Globe-Democrat in 1902 purported to tell the story of “The Last Night of Island Ninety-Four.” According to this account, on the evening of December 15, a Captain Sarpy was en route from St. Louis to New Orleans in his keelboat, the Belle Heloise, with his wife and daughter and a large sum of money. At nightfall, the keelboat tied up at Island 94. This island had been a long-standing lair for river denizens of every stripe, including Samuel Mason, the notorious river pirate who had been apprehended in Little Prairie a decade earlier, only to escape while being transported on the river. Two years before Sarpy’s trip, however, a force of 150 keelboatmen had invaded the island and cleaned out the den of thieves, after which the island became a safe haven, and now Sarpy thought to use the island’s abandoned blockhouse to lodge his family and crew for the night.

As Sarpy and two of his men explored the island, however, they overheard talking in the blockhouse and, peering in the windows, listened as a group of fifteen river pirates discussed plans to fall upon the Belle Heloise the following morning. Sarpy and his crewmen hurried back to the boat and quietly pushed off, tying up at a hidden place in the willows on the west bank about a mile below Island 94.

The following morning, after weathering a night of earthquakes, Sarpy looked upstream to see that Island 94 had disintegrated—the entire landmass was gone, and presumably, its criminal inhabitants along with it.

Whether or not the story is true, Island 94 did indeed disappear.


Notes of the first water

Above is from the addendum of Zadok Cramer’s The Navigator, from which I have quoted previously. Written buoyantly, it makes jokes of specie (‘new notes of the “first water”’ refers to the breadth of bank notes available at the time) and law (“club law” is the lynch mob). The text:

STACK ISLAND, not long since, was famed for a band of counterfeiters, horse thieves, robbers, murderers, &c. who made this part of the Mississippi a place of manufacture and deposit. From hence they would sally forth, stop boats, buy horses, flour, whiskey, &c. and pay for all in fine new notes of the “first water.” Their villainies, after many severe losses sustained by innocent, good men, unsuspecting the cheat, became notorious, and after several years search and pursuit of the civil, and in some cases the club law, against this band of monsters, they have at length disappeared.


Radical volunteerism, or not

From the NY Times:

Teach for America, a corps of recent college graduates who sign up to teach in some of the nation’s most troubled schools, has become a campus phenomenon, drawing huge numbers of applicants willing to commit two years of their lives.

But a new study has found that their dedication to improving society at large does not necessarily extend beyond their Teach for America service.

In areas like voting, charitable giving and civic engagement, graduates of the program lag behind those who were accepted but declined and those who dropped out before completing their two years, according to Doug McAdam, a sociologist at Stanford University, who conducted the study with a colleague, Cynthia Brandt.

The reasons for the lower rates of civic involvement, Professor McAdam said, include not only exhaustion and burnout, but also disillusionment with Teach for America’s approach to the issue of educational inequity, among other factors.

Third paragraph, “those who were accepted but declined”: that’s me.

Also, as someone who promotes the “service makes you a better citizen” line, I am really intrigued to read this:

Professor McAdam’s findings that nearly all of Freedom Summer’s participants were still engaged in progressive activism when he tracked them down 20 years later have contributed to the widely held notion that civic advocacy and service among the young make for better citizens.

“Back in the ’60s, if you signed up for Freedom Summer, it was perceived to be countercultural,” said Professor Reich, who taught sixth grade in Houston as a member of the Teach for America corps. “But unlike doing Freedom Summer, joining Teach for America is part of climbing up the elite ladder — it’s part of joining the system, the meritocracy.”


Umberto Eco on Modernism

Widely quoted; from _The Name of the Rose_ (paragraph breaks are mine for readability):

The postmodern reply to the modern consists of recognizing that the past, since it cannot really be destroyed, because its destruction leads to silence, must be revisited: but with irony, not innocently.

I think of the postmodern attitude as that of a man who loves a very cultivated woman and knows that he cannot say to her ‘I love you madly’, because he knows that she knows (and that she knows he knows) that these words have already been written by Barbara Cartland.

Still, there is a solution. He can say ‘As Barbara Cartland would put it, I love you madly’. At this point, having avoided false innocence, having said clearly that it is no longer possible to speak innocently, he will nevertheless have said what he wanted to say to the woman: that he loves her in an age of lost innocence.

If the woman goes along with this, she will have received a declaration of love all the same. Neither of the two speakers will feel innocent, both will have accepted the challenge of the past, of the already said, which cannot be eliminated; both will consciously and with pleasure play the game of irony… But both will have succeeded, once again, in speaking of love.

Or you can have mumblecore (relation: Bujalski lives in JP).


Modern and post-modern science

A question from the LibraryThing discussion boards:

Early Modern Science - 17th century - is fairly easy to label. But when did science become “modern”? And is there such a thing as “post-modern” science either to sociologists or scientists, or both? Latour has written a bit on this, I guess, but I’m not sure if many practising scientists would see themselves as practising constructivism rather than scientific objectivity.

This was my response:

Breaking things into “modern” and “post-modern” is always difficult. If I were to break out the cusps of science, I would put them as:

  • Formalized logic (the Greeks)

  • Formalized logic and scientific methodology rediscovered (15th–17th centuries, depending on how rigorous you want to be)

  • Scientific tools that augmented our senses (like the microscope): 17th century

  • Application of “science” to human society itself (sociology, scientific management, eugenics): late 19th century

  • Automated scientific tools (computers, DNA sequencers, etc.) and logic machines: mid-to-late 20th century

If I were to pick a “modern” science moment, I would probably pick Carl Friedrich Gauss in the late 18th and early 19th centuries. More than anyone, I think Gauss grasped both the power and the limits of science (he could be “critical” of science, which I think is the defining part of modernity): “There are problems to whose solution I would attach an infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and our future; but their solution lies wholly beyond us and completely outside the province of science.”

As for post-modern science, I would place that somewhere in the mid-1950s (or maybe the 1960s with the radical technology movement) with the realization that we now have the power to destroy ourselves (or reinvent ourselves with genetic alteration). I would add the development of environmental science, systems thinking (cybernetics), and the creation of logical machines to that, too. The defining piece of post-modernism is to integrate the critique into the process. This has had positive social aspects: environmental science (especially ecosystem science) and systems thinking that seek to balance scientific knowledge with its technological application. And negative ones: techno-utopianism that seeks to divorce information from biology (pushing people towards acting like rational machines rather than humans).

I’m not perfectly happy with my response: instead of “criticism” as the defining part of modernity I should have written “self-criticism” (or “self-consciousness”). Also, talking about science is always difficult since it’s not clear whether it’s the philosophy, the process, the practice, or the artifacts (technology) being referred to.


Criticism of Actor-Network Theory

Once you have a basic definition, I find it easiest to learn about something through its criticisms: what is criticized is usually what is unique. This is Wikipedia’s criticism of Actor-Network Theory, which was developed by the previously mentioned Bruno Latour:

Actor-network theory insists on the agency of nonhumans. Critics maintain that such properties as intentionality fundamentally distinguish humans from animals or from “things”. ANT scholars respond that (a) they do not attribute intentionality and similar properties to nonhumans; (b) their conception of agency does not presuppose intentionality; (c) they locate agency neither in human “subjects” nor in non-human “objects,” but in heterogeneous associations of humans and nonhumans.

ANT has been criticized as amoral. Wiebe Bijker has responded to this criticism by stating that the amorality of ANT is not a necessity. Moral and political positions are possible, but one must first describe the network before taking up such positions.

Other critics have argued that ANT may imply that all actors are of equal importance in the network. This critique holds that ANT does not account for pre-existing structures, such as power, but rather sees these structures as emerging from the actions of actors within the network and their ability to align in pursuit of their interests. For this reason, ANT is sometimes seen as an attempt to re-introduce Whig history into science and technology studies; like the myth of the heroic inventor, ANT can be seen as an attempt to explain successful innovators by saying only that they were successful. In a similar vein ANT has been criticised as overly managerial in focus.

Some critics have argued that research based on ANT perspectives remains entirely descriptive and fails to provide explanations for social processes. ANT – like comparable social scientific methods – requires judgment calls from the researcher as to which actors are important within a network and which are not. Critics argue that the importance of particular actors cannot be determined in the absence of ‘out-of-network’ criteria. Similarly, others argue that actor-networks risk degenerating into endless chains of association (six degrees of separation: we are all networked to one another). Other research perspectives, such as social constructionism, social network theory, Normalization Process Theory, and Diffusion of Innovations theory, are held to be important alternatives to ANT approaches.

In a workshop called “Actor Network and After”, Bruno Latour stated that there are four things wrong with actor-network theory: “actor”, “network”, “theory” and the hyphen. In a later book however (Reassembling the Social: An Introduction to Actor-Network-Theory), Latour reversed himself, accepting the wide use of the term, “including the hyphen” (Latour 2005:9). He also remarked how he had been helpfully reminded that the ANT acronym “was perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler” (the ant, Latour 2005:9) – qualitative hallmarks of actor-network epistemology.

The last one is my favorite.


Critical Thinking contextualized

Below is an excerpt from my final paper for the Critical and Creative Thinking course I took this Fall. I based my paper on a quote from Arthur Costa, the author of our textbook, who wrote:

Most authors and developers of major cognitive curriculum projects agree that direct instruction in thinking skills is imperative. Edward de Bono, Barry Beyer, Reuven Feuerstein, Arthur Whimbey, and Matthew Lipman would agree on at least this point: That the teaching of thinking requires that teachers instruct students directly in the processes of thinking. Even David Perkins believes that creativity can be taught—by design.

The question I investigated was: if these major authors and developers agree that critical thinking can be taught, on what aspects of teaching do they disagree? To write the paper, I investigated the backgrounds of those six individuals and then placed their major works into an analytical framework. It actually turned out easier than I first expected because in my research I discovered a paper that had already created such a framework: Yoram Harpaz’s “Approaches to Critical Thinking: Toward a Conceptual Mapping of the Field”. So my work was mostly to synthesize and connect around my question.

The three major approaches to Critical Thinking are:

The skills approach: thinking tools are used efficiently—quickly and precisely—in given circumstances. Rather than having knowledge imparted to them, students should be trained in proper thinking skills. These thinking skills can include strategies, heuristics, and algorithms, as well as seeking precision or efficiency when thinking.

De Bono (Physician) – CoRT

Beyer (Education/Pedagogy) – Direct teaching of thinking

Feuerstein (Cognitive Psychologist) – Instrumental Enrichment

Whimbey (Instructional Designer) – Problem Solving

Lipman (Philosophy) – Philosophy for Children

The dispositions approach: motivations for good thinking are formed by reasonable choices. The focus is not upon a student’s ability to think, but rather upon the motivation or decision to think critically about a particular situation or action.

Lipman (Philosophy) – Philosophy for Children

Perkins (Mathematics & AI) – Dispositions theory of thinking

Costa (included because his quote is the inspiration for this paper) – Habits of mind

The understanding approach: the ability to locate a concept in the context of other concepts or implement a concept within another context. “Thinking is not a pure activity but activity with knowledge; and when this knowledge is understood, thinking activity is more generative (creates better solutions, decisions and ideas). Understanding, therefore, is not (only) a product of good thinking but (also) its source.” [Harpaz]

Lipman (Philosophy) – Philosophy for Children

Perkins (Mathematics & AI) – Understanding performances

Here’s my conclusion:

Interestingly, the majority of theories and works cited fall under the skills approach. In addition, both Lipman and Perkins fall under more than one approach; Lipman falls under all three. Lipman’s and Perkins’s wider frames could explain this: both approach the teaching of critical thinking from a frame of philosophy rather than psychology or cognition.

Though it was not known when this paper was first conceived, this paper subscribes to the understanding approach. By placing these individuals’ theories and works within the context of critical thinking pedagogy, and relating them to each other, these theories and works can be both better understood and applied.

I couldn’t have said it better myself.


Transactions vs. Relationships

From John Byrne (former editor-in-chief at BusinessWeek):

Many incumbents resent that most efforts to find information on the Web no longer start with a brand. They start with Google, which is largely brand agnostic. So, in effect, Google has become this massive transaction machine, and as everyone knows, transactions are the antithesis of relationships. If a brand wants a relationship with its audience, Google is getting in the way. It’s how Google was able to siphon nearly $22 billion last year in advertising from traditional media. And it’s the most obvious proof that media brands have diminished in value. People are more routinely turning to Google to get information, rather than to a brand known for its expertise in a given area. They’ll google (yes, I’m using Google as a verb) leadership before going to The Wall Street Journal, Fortune, BusinessWeek, or Harvard Business Review. They’ll google President Clinton before going to The New York Times, Time, or Newsweek. Why? Because they trust Google to serve up unbiased results; because they want to see what is generally available out there and not tied to a brand; and because most brands no longer wield the power and influence they did years ago.