Modern and post-modern science

A question from the LibraryThing discussion boards:

Early Modern Science - 17th century - is fairly easy to label. But when did science become “modern”? And is there such a thing as “post-modern” science either to sociologists or scientists, or both? Latour has written a bit on this, I guess, but I’m not sure if many practising scientists would see themselves as practising constructivism rather than scientific objectivity.

This was my response:

Breaking things into “modern” and “post-modern” is always difficult. If I were to break out the cusps of science, I would put them as:

  • Formalized logic (the Greeks)

  • Formalized logic and scientific methodology rediscovered (15th–17th century, depending on how rigorous you want to be)

  • Scientific tools that augmented our senses (like the microscope): 17th century

  • Application of “science” to human society itself (sociology, scientific management, eugenics): late 19th century

  • Automated scientific tools (computers, DNA sequencers, etc.) and logic machines: mid-to-late 20th century

If I were to pick a “modern” science moment, I would probably pick Carl Friedrich Gauss in the early 19th century. More than anyone, I think, Gauss understood both the power and the limits of science (he could be “critical” of science, which I think is the defining part of modernity): “There are problems to whose solution I would attach an infinitely greater importance than to those of mathematics, for example touching ethics, or our relation to God, or concerning our destiny and our future; but their solution lies wholly beyond us and completely outside the province of science.”

As for post-modern science, I would place that somewhere in the mid-1950s (or maybe the 1960s, with the radical technology movement), with the realization that we now have the power to destroy ourselves (or reinvent ourselves through genetic alteration). I would add the development of environmental science, systems thinking (cybernetics), and the creation of logical machines to that, too. The defining piece of post-modernism is to integrate the critique into the process. This has had positive social aspects: environmental science (especially ecosystem science) and systems thinking that seek to balance scientific knowledge with its technological application. And negative ones: techno-utopianism that seeks to divorce information from biology (pushing people toward acting like rational machines rather than humans).

I’m not perfectly happy with my response: instead of “criticism” as the defining part of modernity, I should have written “self-criticism” (or “self-consciousness”). Also, talking about science is always difficult, since it’s not clear whether the philosophy, the process, the practice, or the artifacts (technology) are being referred to.

Criticism of Actor-Network Theory

Once you have a basic definition, I find it easiest to learn about something through its criticisms: what is criticized is usually what is unique. This is Wikipedia’s criticism of Actor-Network Theory, which was developed by the previously mentioned Bruno Latour:

Actor-network theory insists on the agency of nonhumans. Critics maintain that such properties as intentionality fundamentally distinguish humans from animals or from “things”. ANT scholars respond that (a) they do not attribute intentionality and similar properties to nonhumans; (b) their conception of agency does not presuppose intentionality; (c) they locate agency neither in human “subjects” nor in non-human “objects,” but in heterogeneous associations of humans and nonhumans.

ANT has been criticized as amoral. Wiebe Bijker has responded to this criticism by stating that the amorality of ANT is not a necessity. Moral and political positions are possible, but one must first describe the network before taking up such positions.

Other critics have argued that ANT may imply that all actors are of equal importance in the network. This critique holds that ANT does not account for pre-existing structures, such as power, but rather sees these structures as emerging from the actions of actors within the network and their ability to align in pursuit of their interests. For this reason, ANT is sometimes seen as an attempt to re-introduce Whig history into science and technology studies; like the myth of the heroic inventor, ANT can be seen as an attempt to explain successful innovators by saying only that they were successful. In a similar vein ANT has been criticised as overly managerial in focus.

Some critics have argued that research based on ANT perspectives remains entirely descriptive and fails to provide explanations for social processes. ANT - like comparable social scientific methods - requires judgment calls from the researcher as to which actors are important within a network and which are not. Critics argue that the importance of particular actors cannot be determined in the absence of ‘out-of-network’ criteria. Similarly, others argue that Actor-Networks risk degenerating into endless chains of association (six degrees of separation - we are all networked to one another). Other research perspectives such as social constructionism, social network theory, Normalization Process Theory, and Diffusion of Innovations theory are held to be important alternatives to ANT approaches.

In a workshop called “Actor Network and After”, Bruno Latour stated that there are four things wrong with actor-network theory: “actor”, “network”, “theory” and the hyphen. In a later book however (Reassembling the Social: An Introduction to Actor-Network-Theory), Latour reversed himself, accepting the wide use of the term, “including the hyphen” (Latour 2005:9). He also remarked how he had been helpfully reminded that the ANT acronym “was perfectly fit for a blind, myopic, workaholic, trail-sniffing, and collective traveler” (the ant, Latour 2005:9) – qualitative hallmarks of actor-network epistemology.

The last one is my favorite.

Critical Thinking contextualized

Below is an excerpt from my final paper for the Critical and Creative Thinking course I took this Fall. I based my paper on a quote from Arthur Costa, the author of our textbook, who wrote:

Most authors and developers of major cognitive curriculum projects agree that direct instruction in thinking skills is imperative. Edward de Bono, Barry Beyer, Reuven Feuerstein, Arthur Whimbey, and Matthew Lipman would agree on at least this point: that the teaching of thinking requires that teachers instruct students directly in the processes of thinking. Even David Perkins believes that creativity can be taught—by design.

The question I investigated was: if these major authors and developers agree that critical thinking can be taught, on what aspects of teaching do they disagree? To write the paper, I investigated the backgrounds of those six individuals and then placed their major works into an analytical framework. It turned out to be easier than I first expected because, in my research, I discovered a paper that had already created such a framework: Yoram Harpaz’s “Approaches to Critical Thinking: Toward a Conceptual Mapping of the Field”. So my work was mostly to synthesize and connect around my question.

The 3 major approaches to Critical Thinking are:

The skills approach: thinking tools are used efficiently—quickly and precisely—in given circumstances. Rather than having knowledge imparted to them, students should be trained in proper thinking skills. These thinking skills can include strategies, heuristics, and algorithms, as well as seeking precision or efficiency when thinking.

  • De Bono (Physician) – CoRT

  • Beyer (Education/Pedagogy) – Direct teaching of thinking

  • Feuerstein (Cognitive Psychologist) – Instrumental Enrichment

  • Whimbey (Instructional Designer) – Problem Solving

  • Lipman (Philosophy) – Philosophy for Children

The dispositions approach: motivations for good thinking are formed by reasonable choices. The focus is not upon a student’s ability to think, but rather upon the motivation or decision to think critically about a particular situation or action.

  • Lipman (Philosophy) – Philosophy for Children

  • Perkins (Mathematics & AI) – Dispositions theory of thinking

  • Costa (included because his quote is the inspiration for this paper) – Habits of mind

The understanding approach: the ability to locate a concept in the context of other concepts or implement a concept within another context. “Thinking is not a pure activity but activity with knowledge; and when this knowledge is understood, thinking activity is more generative (creates better solutions, decisions and ideas). Understanding, therefore, is not (only) a product of good thinking but (also) its source.” [Harpaz]

  • Lipman (Philosophy) – Philosophy for Children

  • Perkins (Mathematics & AI) – Understanding performances

Here’s my conclusion:

Interestingly, the majority of the theories and works cited fall under the skills approach. In addition, both Lipman and Perkins fall under more than one approach; Lipman falls under all three. Lipman’s and Perkins’s wider frames could explain this: both approach the teaching of critical thinking from a frame of philosophy rather than psychology or cognition.

Though it was not known when this paper was first conceived, this paper subscribes to the understanding approach. By placing these individuals’ theories and works within the context of critical thinking pedagogy, and relating them to each other, these theories and works can be both better understood and applied.

I couldn’t have said it better myself.

Transactions vs. Relationships

From John Byrne (former editor-in-chief at BusinessWeek):

Many incumbents resent that most efforts to find information on the Web no longer start with a brand. They start with Google, which is largely brand-agnostic. So, in effect, Google has become this massive transaction machine, and as everyone knows, transactions are the antithesis of relationships. If a brand wants a relationship with its audience, Google is getting in the way. It’s how Google was able to siphon nearly $22 billion last year in advertising from traditional media. And it’s the most obvious proof that media brands have diminished in value. People are more routinely turning to Google to get information, rather than a brand known for its expertise in a given area. They’ll google (yes, I’m using Google as a verb) leadership before going to The Wall Street Journal, Fortune, BusinessWeek, or Harvard Business Review. They’ll google President Clinton before going to The New York Times, Time, or Newsweek. Why? Because they trust Google to serve up unbiased results; because they want to see what is generally available out there and not tied to a brand; and because most brands no longer wield the power and influence they did years ago.

Easier drawn than said

Visualizing the demand curve is left as an exercise for the reader.
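For anyone who does take up the exercise, here is a minimal sketch in Python of what a demand curve looks like. The linear coefficients are made up purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# A hypothetical linear demand schedule: quantity demanded falls as price rises.
# The coefficients are invented for illustration, not taken from real data.
price = np.linspace(1, 10, 100)
quantity = 100 - 8 * price

plt.plot(quantity, price)  # by convention, quantity goes on the x-axis, price on the y-axis
plt.xlabel("Quantity demanded")
plt.ylabel("Price")
plt.title("A demand curve")
plt.show()
```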

Academia on the experience of poverty

It takes a lot of words for academia to say “We can’t describe the experience of poverty”. This is from “Using a sustainable livelihoods approach to assessing the impact of ICTs in development” by Sarah Parkinson and Ricardo Ramírez:

…the way development professionals conceptualise development and poverty is very different from how poor people themselves view these. Poor people perceive poverty in a much more complex manner than do development professionals and they employ a range of strategies, not only to maximize income, but also to minimise risk and to protect or increase other things that they value. Poor people’s priorities are often different from those imputed to them by development experts, and their strategies are often more complex, both in terms of activity and motivation. Thus, it is argued, the sustainable livelihoods framework [above] provides a conceptualisation that is more appropriate to the perspectives and realities of poor people. (Chambers 1995)

And more:

The focus of “livelihood” in sustainable livelihoods (SL) frameworks is an attempt to move away from narrow definitions of poverty, and as such reframes the broad aim of development as an effort to improve people’s livelihood options. “Livelihood” refers broadly to a means of making a living, and includes the assets, access to institutions and processes, and strategies that a person utilizes to achieve livelihood outcomes (Ashley and Carney, 1999). The term “sustainable” refers both to the characteristic of a livelihood to endure the various shocks and uncertainties likely to be encountered in the environment, and to avoid contributing to long-term depletion of natural resources (Chambers 1987).

For the record, I’m much more comfortable with this conceptualization than the standard “poverty is the absence of money”…and this involves a flow chart. Via Peter Miller’s dissertation.

Social Assistance Cynicism

As someone who, for a living, congratulates people on their commitment to serve their country and then tells them how to sign up for public assistance, I am not surprised by this NY Times article saying that 1 in 8 Americans now use food stamps. Though unsurprised, I can still appreciate the cynical responses:

I have an idea. Let’s expect people to get food insurance if they want to eat in times of need, instead of giving them food stamps. They can buy food insurance from private, for-profit companies. Those companies can deny benefits for misstatements on their food insurance applications, regardless of how minor the misstatement, whether it was done with fraudulent intent, or how long ago the application was taken. We’ll rely on the food insurance companies’ discretion as to what foods they cover at what percentage rate of prevailing market prices, which is part of the insurance coverage, but which is never actually disclosed to the people buying food insurance, even though it amounts to a substantial portion of the insurance contract and significantly affects the utility of the insurance to the purchaser. The whole thing will be implemented through a big bureaucratic claims process, with some food insurers refusing to provide benefits unless you get your food at the company store. And, get this, any time the food insurance company refuses to cover someone’s food, that person’s only effective recourse is to take the insurance company to court, a process that takes months or years and does not in any way ensure useful contractual enforcement to someone who needs to eat RIGHT NOW.

Actually this problem could be solved by having Food Savings Accounts (FSA). If poor people would just put money into an FSA (pre-tax!) then they could build up an account balance that would carry them through those times when they get hungry and need some food.

Food stamps just promote the consumption of food. If you subsidize something, you get more of it. We should be discouraging that kind of behavior.

Like Wikipedia, but before

This is how the emerging internet is described in The Axemaker’s Gift, published in 1995. The sections most interesting to me are highlighted:

The new systems can present data to the user in the form of a “web” on which all the information contained in a database is interlinked. For example, a simple chain of web data-links might go: “toilet roll, invented in response to sanitation ceramics, resulting from nineteenth-century sewage developments, triggered by a cholera epidemic, whose social effects generated public health legislation, that established pathology labs, able to function due to tissue-staining techniques, that used aniline dyes, discovered during a search for artificial quinine, in coal-tar that was a by-product of the manufacture of gaslight, that illuminated early workers’ evening classes, in factories spinning cotton from America, processed by Eli Whitney’s gin, after he developed interchangeable musket parts, that made possible the manufacture of machine tools, for production lines that introduced continuous-process techniques, that one day would make toilet rolls.”

Any individual link in this loop of related innovations and events could also provide the start-point for other loops, in which any link could initiate yet other loops and so on.

There are two main attractions to this way of accessing information. First, it is easy to operate because the user can join the web at an entry point matching their level of knowledge, which might therefore be something as complex as a quantum physics equation or as simple as a toilet roll. Second is the interconnected nature of the web that makes it possible to move from the entry point to anywhere else on the web by a large choice of routes, one of which will best suit the user’s own idiosyncratic interests and level of ability.

At each stage of the journey, any link prepares the user for the next link because of the way in which all links relate. Also, at any link there are a number of alternate routes to take, and it is here that the user can make choices based on personal interest or experience. So it is not inconceivable that a journey might begin with the toilet roll and eventually lead to all the data required for understanding quantum physics, or pottery making, or medieval Latin.

Since there would be no “correct” way to arrive at target data designated, say, by curriculum needs, in the kind of educational process that the web might make possible, the web would offer the user a means to “learn” the target information by arriving at it in their own way. “Knowledge” would then be the experience of having traveled on the web, like the knowledge of a city’s streets. The journey, therefore, would be more valuable than the destination and relationships between data more valuable than the data. It might be that we would eventually come to value intelligence no longer solely by information-retrieval but by the imaginative way a student constructed such a journey.

The attraction of the web is that the user needs no qualifications to enter, and the process of exploring the web is as easy or complex as the user chooses. The web contains the sum of knowledge, so the experience of a journey links every user in some way to every other user. The number of ways in which a web might be accessed, linked, or restructured could be as many as its users decided.

Use of the web would above all accustom people to become gradually more familiar with the way in which knowledge is not made up of isolated, unconnected “facts,” but is part of a dynamic whole. Experience on the web might also bring greater awareness of the social effects of the introduction of any innovation, thanks to the way the result of interrelating data on the web mirrored that of the way innovation affected the community at large. So each time a user journeyed on the web and made new links between data, the new connections would restructure the web in much the same way they might have rearranged society if they had been applied in real terms. In this sense, the web could become a microcosm for society itself. It could serve as a means to play out scenarios for knowledge manufacture and its potential social effects. Eventually, of course, the web might become the general mode of involvement in all social processes, either in person or through the use of personal electronic “agents.” **The power of the individual is greatly magnified.**
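What Burke describes is, in today’s terms, a graph of concepts where a “journey” is just one of many possible paths between nodes. Here is a minimal sketch of the idea in Python, using a made-up fragment of his toilet-roll chain as the data (breadth-first search is only one way to pick a route; a real user would wander by interest):

```python
from collections import deque

# A toy slice of Burke's innovation web: each concept links to related ones.
# These links are a hypothetical fragment, not Burke's actual chain.
web = {
    "toilet roll": ["sanitation ceramics"],
    "sanitation ceramics": ["sewage systems", "toilet roll"],
    "sewage systems": ["cholera epidemic", "sanitation ceramics"],
    "cholera epidemic": ["public health law", "sewage systems"],
    "public health law": ["pathology labs", "cholera epidemic"],
    "pathology labs": ["aniline dyes", "public health law"],
    "aniline dyes": ["gaslight", "pathology labs"],
    "gaslight": ["toilet roll", "aniline dyes"],  # the chain loops back on itself
}

def journey(start, goal):
    """Breadth-first search from start to goal; returns one route, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in web.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(journey("toilet roll", "aniline dyes")))
```

Because every node links onward, any entry point eventually reaches any other, which is exactly the property Burke is counting on.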

Gatekeeper Code

From a thread on whether BASIC is relevant:

Really, what I think is happening is that BASIC is so easy to learn that even people who shouldn’t be programming can use it. There are people who can’t program very well at all, but they DID learn to program in BASIC, so they think they can program, learn a little PHP maybe, and then come work at your workplace and start fucking shit up. Similar people who are forced into more structured languages right off the bat never get the idea they can program at all, so they never even apply, much less get hired, and you never have to work with them. The complaint is really that BASIC doesn’t act as an effective gatekeeper in weeding out bad candidates.

Full disclosure: I learned to program using BASIC on a TRS-80.

Also, this is related to an argument Steven Pinker makes when ripping apart Malcolm Gladwell’s new book:

Another example of an inherent trade-off in decision-making is the one that pits the accuracy of predictive information against the cost and complexity of acquiring it. Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”

But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).
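Pinker’s accuracy-cost trade-off is easy to make concrete. Below is a toy simulation (my own illustration, not Pinker’s; every number is arbitrary) comparing a cheap, noisy screen against an expensive, near-perfect one when the goal is to weed out the bottom 15 percent:

```python
import random

random.seed(0)

def hire(n_candidates, predictor_noise, cost_per_screen):
    """Screen candidates with a noisy predictor and keep the top 85%.
    Returns (mean true ability of the hires, total screening cost)."""
    candidates = [random.gauss(0, 1) for _ in range(n_candidates)]
    scored = sorted(candidates,
                    key=lambda ability: ability + random.gauss(0, predictor_noise),
                    reverse=True)
    hires = scored[: int(0.85 * n_candidates)]  # weed out the bottom 15 percent
    return sum(hires) / len(hires), n_candidates * cost_per_screen

# A cheap, weak predictor (say, a certificate) vs. an expensive, accurate one
# (say, a full probationary year). Cost units are arbitrary.
for label, noise, cost in [("cheap screen", 2.0, 1), ("trial year", 0.2, 50)]:
    quality, total = hire(10_000, noise, cost)
    print(f"{label}: mean hire ability {quality:+.3f}, cost {total}")
```

In runs like this the accurate predictor yields a noticeably better pool, but at fifty times the screening cost; picking a point on that curve, not clairvoyance, is what assessment is for.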

Axemaker conclusions

The following is from the conclusion of The Axemaker’s Gift by James Burke and Robert Ornstein:

The first step may be to recognize that we can use our technology as it has been used time and again through history. We can use it to change minds, but this time for our own reasons in our own terms and at our own pace, if we use the coming technologies for what they could be: instruments of freedom. The very interactive nature of the modern world makes it less easy to block such an act and to continue with the old ways of hierarchy and division. But in any case, all that ever kept us in thrall of institutions was our ignorance of the kind of knowledge that could soon be so easily accessible and understandable that it will be a waste of time to know it. When Gutenberg printed his books, he greatly lessened the power of memory and tradition. The new technologies will lessen the power of arcane, specialist knowledge. And when they do, we will all, in one sense, return to what we were before the first axe.

The culture we live in, based on the sequential influence of language on thought and operating according to the rationalist rules of Greek philosophy and reductionist practice, has wielded tremendous power. It has given us the wonders of the modern world on a plate. But it has also fostered beliefs that have tied us to centralized institutions and powerful individuals for centuries, which we must shuck off if we are to adapt to the world we’ve made: that unabated extraction of planetary resources is possible, that the most valuable members of society are specialists, that people cannot survive without leaders, that the body is mechanistic and can only be healed with knives and drugs, that there is only one superior truth, that the only important human abilities lie in the sequential and analytic mode of thought, and that the mind works like an axemaker’s gift.

Above all (and most recently) we have also been persuaded to think that it is unacceptable to be different or even to acknowledge that differences in abilities exist between us. But our survival may depend on the realization and expression of humanity’s immense diversity. Only if we use what may be the ultimate of the many axemaker’s gifts—the coming information systems—to nurture this individual and cultural diversity, only if we celebrate our differences rather than suppressing them, will we stand a chance of harnessing the wealth of human talent that has been ignored for millennia and that is now eager, all around the world, for release.

I greatly enjoyed the book, but I understand where the only 1-star reviewer is coming from:

the suggested solution of a “web supported” world full of small democratic communities is such hairy-armpit, dope-smoking, hippy rubbish I found myself laughing out loud. I’m fascinated to know who is going to design and construct and distribute the servers to enable this web-supported world, let alone who is going to host and maintain them

Burke got the wealth and attention that enabled this book through the medium of television, and i bet he tours the world for book launches on jumbo-jets. i wonder if the irony of that is lost on him.

It is very difficult to build jumbos or LSI processor chips as a cottage industry

pure twaddle

