P, NP and Panlexicon

This week I updated Panlexicon’s Word of the Day for Twitter’s new authentication API requirements, squeaking in just under the August 30 deadline. Panlexicon has now been tweeting a unique word every day for nearly a year and a half.

Also contemporary is the question of whether P ≠ NP, and generating Panlexicon’s daily tweet is an example of both Polynomial (P) and Nondeterministic Polynomial (NP) Time operations. Panlexicon’s uniqueness comes from exploration and discovery; when thinking about how to bring Panlexicon to Twitter, ensuring those values came through led to an interesting computational problem.

For every Word of the Day Panlexicon tweets, it also includes a number of related words. Any word of the day could have hundreds of related words, but the difficulty is in maximizing the number of related words to share in the tweet while still staying within Twitter’s 140 character limit. The more related words I can fit into the tweet, and the more diverse those words are, the more exploratory it is and the more likely you are to discover something interesting or fun by reading it.

Generating Panlexicon’s Word of the Day Twitter message is an NP-type problem. There are millions of potential solutions that stay within the 140 character limit, but only a few of them are optimal: fitting as many related words into the tweet as possible, while still having a diverse distribution of word lengths. There is no easy way to find those optimal solutions without a lot of computational muscle. At the same time, checking whether the tweet as a whole is less than 140 characters is a P-type problem: it’s just a simple matter of counting up the characters.
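The asymmetry can be sketched in a few lines of Python. The word list and tweet format here are invented for illustration, not Panlexicon’s actual code: checking a candidate tweet is a single character count, while finding the largest set of related words that fits naively means trying every subset.

```python
from itertools import combinations

LIMIT = 140

def fits(word_of_day, related):
    """P-style check: build the tweet and count its characters once."""
    tweet = word_of_day + ": " + ", ".join(related)
    return len(tweet) <= LIMIT

def best_subset(word_of_day, related):
    """NP-style search: brute force over subsets, largest first.
    Exponential in the number of related words."""
    for r in range(len(related), 0, -1):
        for combo in combinations(related, r):
            if fits(word_of_day, list(combo)):
                return list(combo)
    return []
```

With a handful of short words the brute force is instant, but every additional candidate doubles the search space, which is exactly why an exhaustive search stops being practical.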

So that’s just one example of the mathematical problems Panlexicon faces. Of course, the algorithm I actually use to write Panlexicon’s Word of the Day on Twitter is by no means fully optimizing, but by keeping the broader computational context in mind, I can fake it reasonably well.
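One way to fake it reasonably well is a greedy heuristic: a minimal sketch, and not necessarily the algorithm Panlexicon actually uses. It keeps adding candidate words while the tweet still fits, which runs in polynomial time but carries no guarantee of optimality.

```python
def greedy_tweet(word_of_day, related, limit=140):
    """Greedy heuristic: try the shortest candidates first and keep
    each word only if the whole tweet still fits.
    Polynomial time, no guarantee of optimality."""
    chosen = []
    for word in sorted(related, key=len):
        candidate = word_of_day + ": " + ", ".join(chosen + [word])
        if len(candidate) <= limit:
            chosen.append(word)
    return word_of_day + ": " + ", ".join(chosen)
```

Sorting shortest-first maximizes the word count but works against the diverse spread of word lengths mentioned above; a fancier heuristic might alternate between short and long candidates.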

Social Media Strategy and the Dodo Bird Effect

This month’s Harper’s Magazine had an article on the smorgasbord of cognitive behavioral therapies: “The War on Unhappiness: Goodbye Freud, Hello Positive Thinking” by Gary Greenberg:

…all these paths lead to the mountaintop, a miracle known to my profession as the Dodo Bird Effect: psychologist Saul Rosenzweig’s discovery, in 1936, that therapeutic orientation doesn’t matter because all orientations work. (Rosenzweig subtitled his paper “Everyone Has Won and All Must Have Prizes,” the verdict pronounced by the dodo in Alice’s Adventures in Wonderland.) The Dodo Bird Effect has been borne out by numerous studies since, with one elaboration.

Which made me think of strategy advice from the website “What the Fuck is My Social Media ‘Strategy’?”, with gems like:

Expose new users to the brand through organic conversations


Harness social currency to drive buzz

The commitment, communications and critical thought necessary to meaningfully adopt any strategy will help more than whatever is contained in the plan itself. That, and as the Harper’s article continues:

The single factor that makes a difference in outcome is faith: the patient must believe in the therapist, and the therapist must believe in the orientation. For therapy to work, both parties must have faith, sometimes against all reason, that their expedition will succeed.

Wealth Parade

From The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life, by Michael Blastland and Andrew Dilnot; on wealth and averages:

A Dutch economist, Jan Pen, famously imagined a procession of the world’s population where people were as tall as they were rich, everyone’s height proportional to their wealth (note wealth, not income). A person of average wealth would be of average height. The procession starts with the poorest (and shortest) person first and ends, one hour later, with the richest (and tallest). Not until twenty minutes into the procession do we see anyone at all. So far, they’ve had either negative net worth (owing more than they own) or no wealth at all, and so have no height. It’s a full thirty minutes before we begin to see dwarfs about six inches tall. And the dwarfs keep coming. It is not until forty-eight minutes have passed that we see the first person of average height and average wealth, when more than three quarters of the world’s population has already gone by.

What delays the average so long after the majority have passed? The answer lies in the effect of those who come next. “In the last few minutes,” wrote Pen, “giants loom up… a lawyer, not exceptionally successful, eighteen feet tall.” As the hour approaches, the very last people in the procession are so tall we can’t see their heads. Last of all, said Pen (at a time before the fully formed fortunes of Bill Gates and Warren Buffett), we see John Paul Getty. His height is breathtaking, perhaps ten miles, perhaps twice as much.

One millionaire can shift the average more than many hundreds of poor people, one billionaire a thousand times more. Such is their effect that 80 percent of the world’s population has less than the average.

In everyday speech, “average” is a word meaning low or disdained. With incomes, the average is high. The colloquial use, being blunt, thoughtless, and bordering on a term of abuse, distorts the statistical one, which might, according to the distribution, be high, or low, or in the middle, or altogether irrelevant. It is worth knowing which. If only one thought survives about averages, let it be that they are not necessarily anywhere near the middle, nor representative of what’s typical, and that places often called “the middle” by politicians or the media may be far removed. These ideas have been lazily hitched together for too long. It is time for a divorce.

A more narrative explanation is available from The Atlantic.
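Pen’s parade is a picture of a heavily right-skewed distribution, where the mean sits far above the median. A tiny, hypothetical illustration of the same effect:

```python
from statistics import mean, median

# A hypothetical ten-person parade: nine modest net worths and one tycoon.
wealth = [0, 1_000, 2_000, 5_000, 10_000, 15_000, 20_000, 30_000, 50_000, 10_000_000]

avg = mean(wealth)    # dragged upward by the one giant
mid = median(wealth)  # the person at the middle of the parade
below_average = sum(1 for w in wealth if w < avg)

print(f"mean: {avg:,.0f}   median: {mid:,.0f}   below the mean: {below_average} of 10")
```

Here the mean is over a million while the median is 12,500, and nine of the ten people are “below average”: the average person arrives long after the middle of the parade has passed.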

Art, I hardly knew ye

Viktor Shklovsky on art, via Art Spiegelman’s “Portrait of the Artist as a Young %@?*!” in The Best American Comics, 2009:

The purpose of art is to impart the sensation of things as they are perceived and not as they are known. The technique of art is to make objects ‘unfamiliar,’ to make forms difficult to increase the difficulty and length of perception because the process of perception is an aesthetic end in itself and must be prolonged.

What does your computer symbolize?

The introduction to Fred Turner’s From Counterculture to Cyberculture:

In the mid-1990s, as first the Internet and then the World Wide Web swung into public view, talk of revolution filled the air. Politics, economics, the nature of the self—all seemed to teeter on the edge of transformation. The Internet was about to “flatten organizations, globalize society, decentralize control, and help harmonize people,” as MIT’s Nicholas Negroponte put it. The stodgy men in gray flannel suits who had so confidently roamed the corridors of industry would shortly disappear, and so too would the chains of command on which their authority depended. In their place, wrote Negroponte and dozens of others, the Internet would bring about the rise of a new “digital generation”—playful, self-sufficient, psychologically whole—and it would see that generation gather, like the Net itself, into collaborative networks of independent peers. States too would melt away, their citizens lured back from archaic party-based politics to the “natural” agora of the digitized marketplace. Even the individual self, so long trapped in the human body, would finally be free to step outside its fleshy confines, explore its authentic interests, and find others with whom it might achieve communion. Ubiquitous networked computing had arrived, and in its shiny array of interlinked devices, pundits, scholars, and investors alike saw the image of an ideal society: decentralized, egalitarian, harmonious, and free.

But how did this happen? Only thirty years earlier, computers had been the tools and emblems of the same unfeeling industrial-era social machine whose collapse they now seemed ready to bring about. In the winter of 1964, for instance, students marching for free speech at the University of California at Berkeley feared that America’s political leaders were treating them as if they were bits of abstract data. One after another, they took up blank computer cards, punched them through with new patterns of holes—“FSM” and “STRIKE”—and hung them around their necks. One student even pinned a sign to his chest that parroted the cards’ user instructions: “I am a UC student. Please do not fold, bend, spindle or mutilate me.” For the marchers of the Free Speech Movement, as for many other Americans throughout the 1960s, computers loomed as technologies of dehumanization, of centralized bureaucracy and the rationalization of social life, and, ultimately, of the Vietnam War. Yet, in the 1990s, the same machines that had served as the defining devices of cold war technocracy emerged as the symbols of its transformation. Two decades after the end of the Vietnam War and the fading of the American counterculture, computers somehow seemed poised to bring to life the countercultural dream of empowered individualism, collaborative community, and spiritual communion. How did the cultural meaning of information technology shift so drastically?

As a number of journalists and historians have suggested, part of the answer is technological. By the 1990s, the room-sized, stand-alone calculating machines of the cold war era had largely disappeared. So too had the armored rooms in which they were housed and the army of technicians that supported them. Now Americans had taken up microcomputers, some the size of notebooks, all of them available to the individual user, regardless of his or her institutional standing. These new machines could perform a range of tasks that far exceeded even the complex calculations for which digital computers had first been built. They became communication devices and were used to prepare novels and spreadsheets, pictures and graphs. Linked over telephone wires and fiber-optic cables, they allowed their users to send messages to one another, to download reams of information from libraries around the world, and to publish their own thoughts on the World Wide Web. In all of these ways, changes in computer technology expanded the range of uses to which computers could be put and the types of social relations they were able to facilitate.

As dramatic as they were, however, these changes alone do not account for the particular utopian visions to which computers became attached. The fact that a computer can be put on a desktop, for instance, and that it can be used by an individual, does not make it a “personal” technology. Nor does the fact that individuals can come together by means of computer networks necessarily require that their gatherings become “virtual communities.” On the contrary, as Shoshana Zuboff has pointed out, in the office, desktop computers and computer networks can become powerful tools for integrating the individual ever more closely into the corporation. At home, those same machines not only allow schoolchildren to download citations from the public library, they also turn the living room into a digital shopping mall. For retailers, the computer in the home becomes an opportunity to harvest all sorts of information about potential customers. For all the utopian claims surrounding the emergence of the Internet, there is nothing about a computer or a computer network that necessarily requires that it level organizational structures, render the individual more psychologically whole, or drive the establishment of intimate, though geographically distributed, communities.

How was it, then, that computers and computer networks became linked to visions of peer-to-peer ad-hocracy, a leveled marketplace, and a more authentic self? Where did these visions come from? And who enlisted computing machines to represent them?

If that hanging question doesn’t make you want to read the book, I don’t know what will.

I just bought a copy of this book for a coworker. I used to frequently give out copies of Richard Bach’s Illusions to friends, but this is a little heavier reading.

Zilch for a nonprofit

Last week I was contacted through LinkedIn by a stranger asking for help in forming a nonprofit organization. I get these types of requests not infrequently—whether directly through this blog, LinkedIn or Aardvark—or on mailing lists like Mission Based Massachusetts. My response is usually “Why does your cause necessitate its own 501(c)(3)? Have you considered Fiscal Sponsorship?”

I subscribe to the belief that when you’re working within a formal organization, 50% of your time goes towards maintaining organizational function and only the remainder actually goes towards achieving your external mission. Bringing a mission to scale may require a formal organization eventually, but if you’re trying to **fail faster**, is incorporation necessary now?

The nonprofit sector is already rich with existing organizations and platforms from which you can act. While I don’t share in the delusion that it’s one big lovefest, there are structures in place to incubate unincorporated projects, give tax-exempt status, and even provide administrative, finance, legal and payroll support. Sure, Fiscal Sponsorship usually carries with it an administration fee (shop around), but even at 20% it could be less than the opportunity cost of you doing all that yourself while trying to achieve your mission.

So that’s how the nonprofit sector lets you do more with less: you don’t even need your own nonprofit to participate.

This post is created in conjunction with other members of the Nonprofit Millennial Bloggers Alliance. Our posts this week (all with “Zilch” in the title) explore perspectives on how nonprofits can do more with less. Check out other members’ posts and get in on Twitter conversations regarding these posts by using the hashtag #NMBA.

The medium is the method

Jeff Hawkins, founder of Palm Computing, from Designing Interactions by Bill Moggridge:

I think paper is just this wonderful medium; it’s been honed for a thousand years. It’s really great! To try to do what you do with paper, with just drawing—line width, sketching, and so on—it’s very hard to get a good experience. Where the tablet type of computer really shines is where you’re not trying to capture the paperness of paper, but you’re trying to get the electronic or the back end of it. Form filling is a great example, because actually forms are pretty hard to do on paper. You never have the right space, it’s hard to know where to put things, there’s not enough room for instructions. So, there’s an example where an electronic version of a paper equivalent would be better. But in terms of the general idea that I’m going to sketch, draw and have a free-flowing paperlike experience—I’m skeptical about that.

The move towards empowerment

The New York Review of Books on When Everything Changed: The Amazing Journey of American Women from 1960 to the Present by Gail Collins:

And then came Betty Friedan. Her book, Collins writes, hit in 1963 “like an earthquake.” The shameful, confusing malaise felt by many women after the war now had a legitimate source, and the source had a name: The Feminine Mystique. Friedan busted the myth of the happy housewife so thoroughly that it took decades before women who were happy housewives dared to say anything about it. Women, Friedan said, “were being duped into believing homemaking was their natural destiny.” The dueling desires of motherhood and selfhood were articulated at last, and the feminist movement turned from the clear-cut demands of suffragism and equal pay to the less-defined realm of empowerment.

Accordingly, criticism of Third Wave Feminism is quite similar to that of self-actualization (from Rushkoff’s Life, Inc.):

Instead of fueling people to do something about the world, as the Weathermen and Yippies had hoped, spirituality became a way of changing one’s own perspectives, one’s own experiences and one’s own self. By pushing through to the other side of personal liberation, the descendants of Reich once again found self-adjustment the surest path to happiness.

Task identification and completion

Where do your strengths lie? I’d put myself in the upper half rather than the lower: I’m better at Task Identification than Task Completion. And by “completion”, I mean 1 of the 3 Ds: Do, Delegate or Dump.

Typology versus taxonomy

From “Typologies, taxonomies, and the benefits of policy classification” by Kevin B. Smith (Policy Studies Journal, Sep 2002):

There are two basic approaches to classification. The first is typology, which conceptually separates a given set of items multidimensionally… The key characteristic of a typology is that its dimensions represent concepts rather than empirical cases. The dimensions are based on the notion of an ideal type, a mental construct that deliberately accentuates certain characteristics and not necessarily something that is found in empirical reality (Weber, 1949). As such, typologies create useful heuristics and provide a systematic basis for comparison. Their central drawbacks are categories that are neither exhaustive nor mutually exclusive, are often based on arbitrary or ad hoc criteria, are descriptive rather than explanatory or predictive, and are frequently subject to the problem of reification (Bailey, 1994).

A second approach to classification is taxonomy. Taxonomies differ from typologies in that they classify items on the basis of empirically observable and measurable characteristics (Bailey, 1994, p. 6). Although associated more with the biological than the social sciences (Sokal & Sneath, 1964), taxonomic methods–essentially a family of methods generically referred to as cluster analysis–are usefully employed in numerous disciplines that face the need for classification schemes (Lorr, 1983; Mezzich & Solomon, 1980).
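The distinction can be caricatured in a few lines of code (all data, category names, and thresholds here are invented for illustration): a typology slots each case into predefined conceptual cells, while a taxonomy lets groups emerge from a measured attribute, as cluster analysis does.

```python
def typologize(policy):
    """Typology: assign a case to a cell of a conceptual 2x2
    (coercive vs. non-coercive) x (individual vs. group targets).
    The dimensions are ideal types chosen in advance."""
    coercion = "coercive" if policy["sanctions"] else "non-coercive"
    target = "individual" if policy["targets_individuals"] else "group"
    return (coercion, target)

def taxonomize(values, threshold):
    """Taxonomy: minimal 1-D single-link clustering. Cases closer than
    `threshold` on a measured attribute fall into the same group;
    the groups are not chosen in advance."""
    groups = []
    for v in sorted(values):
        if groups and v - groups[-1][-1] <= threshold:
            groups[-1].append(v)
        else:
            groups.append([v])
    return groups
```

The typology’s cells exist before any case is examined, so some cells may stay empty and some cases may fit badly; the taxonomy’s groups depend entirely on what was measured, which is exactly the difficulty the article turns to next.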

The article then goes on to explain the difficulty of applying the more strict taxonomic classifications to, in this case, policy:

…the empirical qualities of many policies are not immediately apparent. Scholars such as Steinberger (1980), T. A. Smith (1982), and, especially, Schneider and Ingram (1997) make a persuasive case that the very concept of a policy category is a social construction, something rooted in individual perceptions. What distinguishes a redistributive from a regulatory policy is an individual judgment, not an observable, policy-specific equivalent to height or length. This argument is at the heart of critiques of Lowi’s work, and it creates obvious difficulties in making the shift from a typology to a taxonomy.

I also like this lecture outline from the University of Illinois at Urbana-Champaign entitled “What Isn’t in a Name?: Terminological Misapprehensions Between 20th-Century Linguistics” that explains why the terms “taxonomy” and “typology” are valued so differently by linguists and biologists:

II. CASE-STUDY 1: TYPOLOGY vs. TAXONOMY — positively- vs. negatively- valued by linguists; negatively- vs. positively-valued by biologists

  1. Typology as a laudable goal in linguistics:

a. From the Research Centre for Linguistic Typology (RCLT, La Trobe University) mission statement: “putting forward inductive generalisations about human language”.

b. From Association for Linguistic Typology mission statement: “the scientific study of … cross-linguistic diversity and the patterns underlying it”.

c. Existence of societies like the Association for Linguistic Typology, journals like Linguistic Typology or Sprachtypologie und Universalienforschung, and research centers devoted to typology (RCLT, some of the Max Planck institutes (e.g., at Nijmegen and at Leipzig), etc.)

  2. Typology as a tainted term (and concept) in modern biology.

a. In most 20th- (and 21st-) century biology, typology invokes the typological species-concept, an essentialist notion that, along with many other scholars, Mayr (1982) holds responsible for delaying the proposal, defence, and acceptance of legitimate evolutionary ideas prior to Darwin’s 1859 Origin of Species.

b. Mayr 1982:256: In “the essentialist species-concept, … each species is characterized by its unchanging essence (eidos) and separated from all other species by a sharp discontinuity. Essentialism assumes that the diversity of inanimate as well as of organic nature is the reflection of a limited number of unchanging universals (…[cf.] Hull 1975). This concept ultimately goes back to Plato’s concept of the eidos, and this is what later authors had in mind when they spoke of the essence, or ‘nature’, of some object or organism. All those objects [that] belong to the same species … share the same essence”.

c. The link from essence to type is made as follows; cf. Mayr 1982: 256: “The presence of the same essence is inferred on the basis of similarity. Species, thus, were [once] simply defined as groups of similar individuals that are different from individuals belonging to other species. Species, thus conceived, represent different ‘types’ of organisms. Individuals… do not stand in any special relation to each other; they are merely expressions of the same eidos. Variation is the result of imperfect manifestations of the eidos”.

  3. Taxonomy/taxonomic as a frequent term of reprobation in linguistics.

a. Recall Chomsky’s 1962, 1964 attacks on Post-Bloomfieldian American structuralist phonemics as involving, not (usually) the classical or autonomous phonemic level, but the taxonomic phonemic level. Here, the intended criticism is rather explicit.

b. Only implicit, though, are criticisms like those that we both heard from our own (ca. 1975) linguistics-professors, exhorting us not to act like Post-Bloomfieldian American structuralists; e.g.: “Make generalizations going beyond the original set of facts that you were given; don’t just rearrange the data!” — recall that Greek taxo-nom-ía originally involved, literally speaking, the ‘arrangement-law…’, or ‘law of arrangement…‘….

  4. Yet taxonomy has long been an extremely positive term in modern biology (and the one positively evaluated use of type in biology involves type specimens, which are employed taxonomically!).

a. Taxonomy is often employed synonymously (e.g., by Mayr) with systematics (and/or classification): “The terms systematics and taxonomy are considered by me as approximately synonymous…[; i]n America…[,] the term taxonomy seems to be preferred…[; i]n the rest of the world…[,] the term systematics seems to be more widely used” (Mayr 1942/1982: 6n.1).

b. And, as for the importance of systematics: “It is the basic task of the systematist to break up the almost unlimited and confusing diversity of individuals in nature into easily recognizable groups, to work out the significant characters of these units, and to find constant differences between similar ones. Furthermore, [(s)]he must provide these units with ‘scientific’ names which will facilitate their subsequent recognition by workers throughout the world…. Even this ‘lowest’ task of the systematist is of tremendous scientific importance. The entire geological chronology hinges on the correct identification of the fossil key species. No scientific ecological survey should be carried out without the most painstaking identification of all the species of ecological significance. Even the experimental biologist has learned to appreciate the necessity for sound, solid identification work” (Mayr 1942/1982: 9).