Umberto Eco on Modernism

Widely quoted; from _The Name of the Rose_ (paragraph breaks are mine for readability):

The postmodern reply to the modern consists of recognizing that the past, since it cannot really be destroyed, because its destruction leads to silence, must be revisited: but with irony, not innocently.

I think of the postmodern attitude as that of a man who loves a very cultivated woman and knows that he cannot say to her ‘I love you madly’, because he knows that she knows (and that she knows he knows) that these words have already been written by Barbara Cartland.

Still, there is a solution. He can say ‘As Barbara Cartland would put it, I love you madly’. At this point, having avoided false innocence, having said clearly that it is no longer possible to speak innocently, he will nevertheless have said what he wanted to say to the woman: that he loves her in an age of lost innocence.

If the woman goes along with this, she will have received a declaration of love all the same. Neither of the two speakers will feel innocent, both will have accepted the challenge of the past, of the already said, which cannot be eliminated; both will consciously and with pleasure play the game of irony… But both will have succeeded, once again, in speaking of love.

Or you can have mumblecore (relation: Bujalski lives in JP).


Modern and post-modern science

A question from the LibraryThing discussion boards:

Early Modern Science - 17th century - is fairly easy to label. But when did science become “modern”? And is there such a thing as “post-modern” science either to sociologists or scientists, or both? Latour has written a bit on this, I guess, but I’m not sure if many practising scientists would see themselves as practising constructivism rather than scientific objectivity.

This was my response:

Breaking things into “modern” and “post-modern” is always difficult. If I were to break out the cusps of science, I would put them as:

  • Formalized logic (the Greeks)

  • Formalized logic and scientific methodology rediscovered (15th-17th century, depending on how rigorous you want to be)

  • Scientific tools that augmented our senses (like the microscope): 17th century

  • Application of “science” to human society itself (sociology, scientific management, eugenics): late 19th century

  • Automated scientific tools (computers, DNA sequencers, etc.) and logic machines: mid-late 20th century

If I were to pick a “modern” science moment, I would probably pick Carl Friedrich Gauss in the late 18th and early 19th centuries. More than anyone, I think Gauss really grasped both the power and the limits of science (he could be “critical” of science, which I think is the defining part of modernity): “There are problems to whose solution I would attach an infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and our future; but their solution lies wholly beyond us and completely outside the province of science.”

As for post-modern science, I would place that somewhere in the mid-1950s (or maybe the 1960s, with the radical technology movement), with the realization that we now have the power to destroy ourselves (or reinvent ourselves through genetic alteration). I would add the development of environmental science, systems thinking (cybernetics), and the creation of logical machines to that, too. The defining piece of post-modernism is to integrate the critique into the process. This has had positive social aspects: environmental science (especially ecosystem science) and systems thinking, which seek to balance scientific knowledge with its technological application. And negative ones: techno-utopianism that seeks to divorce information from biology (pushing people towards acting like rational machines rather than humans).

I’m not perfectly happy with my response: instead of “criticism” as the defining part of modernity I should have written “self-criticism” (or “self-consciousness”).  Also, talking about science is always difficult since it’s not clear whether it’s the philosophy, the process, the practice, or the artifacts (technology) being referred to.


Criticism of Actor-Network Theory

Once you have a basic definition, I find it easiest to learn about something through its criticisms: what is criticised is usually what is unique. This is Wikipedia’s criticism of Actor-Network Theory, which was developed by the previously mentioned Bruno Latour:

Actor-network theory insists on the agency of nonhumans. Critics maintain that such properties as intentionality fundamentally distinguish humans from animals or from “things”. ANT scholars respond that (a) they do not attribute intentionality and similar properties to nonhumans; (b) their conception of agency does not presuppose intentionality; (c) they locate agency neither in human “subjects” nor in non-human “objects,” but in heterogeneous associations of humans and nonhumans.

ANT has been criticized as amoral. Wiebe Bijker has responded to this criticism by stating that the amorality of ANT is not a necessity. Moral and political positions are possible, but one must first describe the network before taking up such positions.

Other critics have argued that ANT may imply that all actors are of equal importance in the network. This critique holds that ANT does not account for pre-existing structures, such as power, but rather sees these structures as emerging from the actions of actors within the network and their ability to align in pursuit of their interests. For this reason, ANT is sometimes seen as an attempt to re-introduce Whig history into science and technology studies; like the myth of the heroic inventor, ANT can be seen as an attempt to explain successful innovators by saying only that they were successful. In a similar vein ANT has been criticised as overly managerial in focus.

Some critics have argued that research based on ANT perspectives remains entirely descriptive and fails to provide explanations for social processes. ANT - like comparable social scientific methods - requires judgment calls from the researcher as to which actors are important within a network and which are not. Critics argue that the importance of particular actors cannot be determined in the absence of ‘out-of-network’ criteria. Similarly, others argue that Actor-Networks risk degenerating into endless chains of association (six degrees of separation - we are all networked to one another). Other research perspectives such as social constructionism, social network theory, Normalization Process Theory, Diffusion of Innovations theory are held to be important alternatives to ANT approaches.

In a workshop called “Actor Network and After”, Bruno Latour stated that there are four things wrong with actor-network theory: “actor”, “network”, “theory” and the hyphen. In a later book however (Reassembling the Social: An Introduction to Actor-Network-Theory), Latour reversed himself, accepting the wide use of the term, “including the hyphen” (Latour 2005:9). He also remarked how he had been helpfully reminded that the ANT acronym “was perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler” (the ant, Latour 2005:9) – qualitative hallmarks of actor-network epistemology.

The last one is my favorite.


Critical Thinking contextualized

Below is an excerpt from my final paper for the Critical and Creative Thinking course I took this Fall. I based my paper on a quote from Arthur Costa, the author of our textbook, who wrote:

Most authors and developers of major cognitive curriculum projects agree that direct instruction in thinking skills is imperative. Edward de Bono, Barry Beyer, Reuven Feuerstein, Arthur Whimbey, and Matthew Lipman would agree on at least this point: That the teaching of thinking requires that teachers instruct students directly in the processes of thinking. Even David Perkins believes that creativity can be taught—by design.

The question I investigated was: if these major authors and developers agree that critical thinking can be taught, on what aspects of teaching do they disagree? To write the paper, I investigated the backgrounds of those six individuals and then placed their major works into an analytical framework. It turned out to be easier than I first expected because, in my research, I discovered a paper that had already created such a framework: Yoram Harpaz’s “Approaches to Critical Thinking: Toward a Conceptual Mapping of the Field”. So my work was mostly to synthesize and connect around my question.

The three major approaches to Critical Thinking are:

The skills approach: thinking tools are used efficiently—quickly and precisely—in given circumstances. Rather than being taught knowledge, students should be trained in proper thinking skills. These thinking skills can include strategies, heuristics, and algorithms, as well as seeking precision or efficiency when thinking.

De Bono (Physician) – CoRT

Beyer (Education/Pedagogy) – Direct teaching of thinking

Feuerstein (Cognitive Psychologist) – Instrumental Enrichment

Whimbey (Instructional Designer) – Problem Solving

Lipman (Philosophy) – Philosophy for Children

The dispositions approach: motivations for good thinking are formed by reasonable choices. The focus is not upon a student’s ability to think, but rather upon the motivation or decision to think critically about a particular situation or action.

Lipman (Philosophy) – Philosophy for Children

Perkins (Mathematics & AI) – Dispositions theory of thinking

Costa (included because his quote is the inspiration for this paper) – Habits of mind

The understanding approach: the ability to locate a concept in the context of other concepts or implement a concept within another context. “Thinking is not a pure activity but activity with knowledge; and when this knowledge is understood, thinking activity is more generative (creates better solutions, decisions and ideas). Understanding therefore, is not (only) a product of good thinking but (also) its source.” [Harpaz]

Lipman (Philosophy) – Philosophy for Children

Perkins (Mathematics & AI) – Understanding performances

Here’s my conclusion:

Interestingly, the majority of the theories and works cited fall under the skills approach. In addition, both Lipman and Perkins fall under more than one approach; Lipman falls under all three. Lipman’s and Perkins’s wider frame could explain this: both approach the teaching of critical thinking from philosophy rather than from psychology or cognition.

Though it was not apparent when this paper was first conceived, this paper subscribes to the understanding approach. By placing these individuals’ theories and works within the context of critical thinking pedagogy, and relating them to each other, these theories and works can be both better understood and applied.

I couldn’t have said it better myself.


Transactions vs. Relationships

From John Byrne (former editor-in-chief at BusinessWeek):

Many incumbents resent that most efforts to find information on the Web no longer starts with a brand. It starts with Google which is largely brand agnostic. So, in effect, Google has become this massive transaction machine, and as everyone knows, transactions are the antithesis of relationships. If a brand wants a relationship with its audience, Google is getting in the way. It’s how Google was able to siphon nearly $22 billion last year in advertising from traditional media. And it’s the most obvious proof that media brands have diminished in value. People are more routinely turning to Google to get information, rather than a brand known for its expertise in a given area. They’ll google (yes, I’m using Google as a verb) leadership before going to The Wall Street Journal, Fortune, BusinessWeek, or Harvard Business Review. They’ll google President Clinton before going to The New York Times, Time, or Newsweek. Why? Because they trust Google to serve up unbiased results; because they want to see what is generally available out there and not tied to a brand, and because most brands no longer wield the power and influence they did years ago.



Academia on the experience of poverty

It takes a lot of words for academia to say “We can’t describe the experience of poverty”. This is from “Using a sustainable livelihoods approach to assessing the impact of ICTs in development” by Sarah Parkinson and Ricardo Ramírez:

…the way development professionals conceptualise development and poverty is very different from how poor people themselves view these. Poor people perceive poverty in a much more complex manner than do development professionals and they employ a range of strategies, not only to maximize income, but also to minimise risk and to protect or increase other things that they value. Poor people’s priorities are often different from those imputed to them by development experts, and their strategies are often more complex, both in terms of activity and motivation. Thus, it is argued, the sustainable livelihoods framework [above] provides a conceptualisation that is more appropriate to the perspectives and realities of poor people. (Chambers 1995).

And more:

The focus of “livelihood” in sustainable livelihoods (SL) frameworks is an attempt to move away from narrow definitions of poverty, and as such reframes the broad aim of development as an effort to improve people’s livelihood options. “Livelihood” refers broadly to a means of making a living, and includes the assets, access to institutions and processes, and strategies that a person utilizes to achieve livelihood outcomes (Ashley and Carney, 1999). The term “sustainable” refers both to the characteristic of a livelihood to endure the various shocks and uncertainties likely to be encountered in the environment, and to avoid contributing to long-term depletion of natural resources (Chambers 1987).

For the record, I’m much more comfortable with this conceptualization than the standard “poverty is the absence of money”…even though this one involves a flow chart. Via Peter Miller’s dissertation.


Social Assistance Cynicism

As someone who, for a living, congratulates people on their commitment to serve their country and then tells them how to sign up for public assistance, I am not surprised by this NY Times article saying that 1 in 8 Americans now use Food Stamps. Though unsurprised, I can still appreciate the cynical responses:

I have an idea. Let’s expect people to get food insurance if they want to eat in times of need, instead of giving them food stamps. They can buy food insurance from private, for-profit companies. Those companies can deny benefits for misstatements on their food insurance applications, regardless of how minor the misstatement, whether it was done with fraudulent intent, or how long ago the application was taken. We’ll rely on the food insurance companies’ discretion as to what foods they cover at what percentage rate of prevailing market prices, which is part of the insurance coverage, but which is never actually disclosed to the people buying food insurance, even though it amounts to a substantial portion of the insurance contract and significantly affects the utility of the insurance to the purchaser. The whole thing will be implemented through a big bureaucratic claims process, with some food insurers refusing to provide benefits unless you get your food at the company store. And, get this, any time the food insurance company refuses to cover someone’s food, that person’s only effective recourse is to take the insurance company to court, a process that takes months or years and does not in any way ensure useful contractual enforcement to someone who needs to eat RIGHT NOW.

Actually this problem could be solved by having Food Savings Accounts (FSA). If poor people would just put money into an FSA (pre-tax!) then they could build up an account balance that would carry them through those times when they get hungry and need some food.

Food stamps just promote the consumption of food. If you subsidize something, you get more of it. We should be discouraging that kind of behavior.


Like Wikipedia, but before

This is how the emerging internet is described in The Axemaker’s Gift, published in 1995. The sections most interesting to me are highlighted:

The new systems can present data to the user in the form of a “web” on which all the information contained in a database is interlinked. For example, a simple chain of web data-links might go: “toilet roll, invented in response to sanitation ceramics, resulting from nineteenth-century sewage developments, triggered by a cholera epidemic, whose social effects generated public health legislation, that established pathology labs, able to function due to tissue-staining techniques, that used aniline dyes, discovered during a search for artificial quinine, in coal-tar that was a by-product of the manufacture of gaslight, that illuminated early workers’ evening classes, in factories spinning cotton from America, processed by Eli Whitney’s gin, after he developed interchangeable musket parts, that made possible the manufacture of machine tools, for production lines that introduced continuous-process techniques, that one day would make toilet rolls.”

Any individual link in this loop of related innovations and events could also provide the start-point for other loops, in which any link could initiate yet other loops and so on.

There are two main attractions to this way of accessing information. First, it is easy to operate because the user can join the web at an entry point matching their level of knowledge, which might therefore be something as complex as a quantum physics equation or as simple as a toilet roll. Second is the interconnected nature of the web that makes it possible to move from the entry point to anywhere else on the web by a large choice of routes, one of which will best suit the user’s own idiosyncratic interests and level of ability.

At each stage of the journey, any link prepares the user for the next link because of the way in which all links relate. Also, at any link there are a number of alternate routes to take, and it is here that the user can make choices based on personal interest or experience. So it is not inconceivable that a journey might begin with the toilet roll and eventually lead to all the data required for understanding quantum physics, or pottery making, or medieval Latin.

Since there would be no “correct” way to arrive at target data designated, say, by curriculum needs, in the kind of educational process that the web might make possible, the web would offer the user a means to “learn” the target information by arriving at it in their own way. “Knowledge” would then be the experience of having traveled on the web, like the knowledge of a city’s streets. The journey, therefore, would be more valuable than the destination and relationships between data more valuable than the data. It might be that we would eventually come to value intelligence no longer solely by information-retrieval but by the imaginative way a student constructed such a journey.

The attraction of the web is that the user needs no qualifications to enter, and the process of exploring the web is as easy or complex as the user chooses. The web contains the sum of knowledge, so the experience of a journey links every user in some way to every other user. The number of ways in which a web might be accessed, linked, or restructured could be as many as its users decided.

Use of the web would above all accustom people to become gradually more familiar with the way in which knowledge is not made up of isolated, unconnected “facts,” but is part of a dynamic whole. Experience on the web might also bring greater awareness of the social effects of the introduction of any innovation, thanks to the way the result of interrelating data on the web mirrored that of the way innovation affected the community at large. So each time a user journeyed on the web and made new links between data, the new connections would restructure the web in much the same way they might have rearranged society if they had been applied in real terms. In this sense, the web could become a microcosm for society itself. It could serve as a means to play out scenarios for knowledge manufacture and its potential social effects. Eventually, of course, the web might become the general mode of involvement in all social processes, either in person or through the use of personal electronic “agents.” **The power of the individual is greatly magnified.**
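
Reading that passage now, the “web” it describes maps neatly onto a plain directed graph. Below is a minimal sketch in Python (mine, not the book’s): the nodes and links paraphrase the toilet-roll chain quoted above, a breadth-first search stands in for one of the many possible “journeys,” and the shortcut link added at the end is invented purely to show how a new connection “restructures the web.”

```python
from collections import deque

# A toy version of the interlinked "web": each entry points to related entries.
# The node names paraphrase the toilet-roll chain from the quoted passage.
web = {
    "toilet roll": ["sanitation ceramics"],
    "sanitation ceramics": ["19th-century sewage works"],
    "19th-century sewage works": ["cholera epidemic"],
    "cholera epidemic": ["public health legislation"],
    "public health legislation": ["pathology labs"],
    "pathology labs": ["tissue-staining techniques"],
    "tissue-staining techniques": ["aniline dyes"],
    "aniline dyes": ["search for artificial quinine"],
    "search for artificial quinine": ["coal-tar"],
    "coal-tar": ["gaslight"],
    "gaslight": ["workers' evening classes"],
    "workers' evening classes": ["cotton factories"],
    "cotton factories": ["Eli Whitney's cotton gin"],
    "Eli Whitney's cotton gin": ["interchangeable musket parts"],
    "interchangeable musket parts": ["machine tools"],
    "machine tools": ["production lines"],
    "production lines": ["toilet roll"],  # the chain loops back on itself
}

def journey(web, start, goal):
    """Breadth-first search: returns one of the many possible routes from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in web.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# A user adding a new link "restructures the web" (this shortcut is invented).
web["aniline dyes"].append("quantum physics")
web["quantum physics"] = []

print(" -> ".join(journey(web, "toilet roll", "quantum physics")))
```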


Gatekeeper Code

From a thread on whether BASIC is relevant:

Really, what I think is happening is that BASIC is so easy to learn that even people who shouldn’t be programming can use it. There are people who can’t program very well at all, but they DID learn to program in BASIC, so they think they can program, learn a little PHP maybe, and then come work at your workplace and start fucking shit up. Similar people who are forced into more structured languages right off the bat never get the idea they can program at all, so they never even apply, much less get hired, and you never have to work with them. The complaint is really that BASIC doesn’t act as an effective gatekeeper in weeding out bad candidates.

Full disclosure: I learned to program using BASIC on a TRS-80.

Also, this is related to an argument Steven Pinker makes when ripping apart Malcolm Gladwell’s new book:

Another example of an inherent trade-off in decision-making is the one that pits the accuracy of predictive information against the cost and complexity of acquiring it. Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”

But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).
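
To make Pinker’s accuracy-versus-cost point concrete, here is a toy sketch with entirely invented numbers (nothing below comes from Pinker or Gladwell): each predictor of teaching success is reduced to what it costs to run per candidate and how often it gets the call right. The “first year on the job” predictor wins on accuracy and loses badly on cost, which is exactly the trade-off the review is pointing at.

```python
# Toy illustration of the accuracy-vs-cost trade-off among predictors of
# teaching success. Every figure below is invented for the example.

candidates = 100  # hypothetical applicant pool

# predictor: (cost to evaluate one candidate in dollars, P(correct call))
predictors = {
    "a pulse and a college degree": (0, 0.50),              # no real screening
    "credentials plus structured interview": (500, 0.70),
    "a full first year in the classroom": (60_000, 0.95),   # salary plus poorly taught students
}

for name, (cost_per_candidate, accuracy) in predictors.items():
    total_cost = cost_per_candidate * candidates
    expected_wrong_calls = candidates * (1 - accuracy)
    print(f"{name:<40} ${total_cost:>9,} total, ~{expected_wrong_calls:.0f} wrong calls out of {candidates}")
```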