Automatic analysis

Karl Fogel on The Open Source Report Card:

So, approximately every year or so, someone launches Yet Another Fully Automated Statistical Tool Dashboard Thing that tries to show the skills & activity of a given open source contributor, or the “health” of a given open source project.

The next thing that happens then follows as surely as night follows day, or as unrequited nostalgia follows the latest James Bond release:

People try the tool, and say “Hmm, well, it’s wildly inaccurate about *me* (or about the projects I have expertise in), but maybe it’s useful to someone else.”

And maybe it is. But what’s really going on, I think, is that the developers of these tools are trying to solve too much of the problem.

Investigating the activity of a particular developer, or the health of an open source project, inevitably involves human judgement calls informed by experience and by out-of-band knowledge. These judgements can be tremendously improved by good tool support – one could imagine certain dashboard-like tools that would make such investigations orders of magnitude more convenient and accurate than they currently are.

But because that kind of tool deliberately doesn’t go the last few inches, it’s a lot harder to demo it. There’s no one-size-fits-all final screen displaying conclusions, because in reality that final screen can only be generated through a process of interaction with the human investigator, who tunes various queries leading up to the final screen, based on knowledge and on specific questions and concerns.

And because it’s harder to demo, people are less likely to write it, because, hey, it’s not going to be as easy to post about it on Slashdot or Hacker News :-). Well, also because it’s a lot more work: interactive tools are simply more complex, in absolute terms, than single-algorithm full-service dashboards that load a known data query and treat it the same way every time.

So what I’m saying is: Any tool that tries to do this, but where you just have to enter a person’s name or a project’s name and click “Go”, is going to be useless. If you didn’t shape the query in partnership with the tool, then whatever question the tool is answering is probably not the question you were interested in.

Added: June 10, 2016 - From The Daily WTF’s “The Oracle Effect”:

Work against the Oracle Effect by building software systems that do not provide conclusions, but arguments. Resist throwing some KPIs on a dashboard without working with your users to understand how this feeds into their decision-making. Keep in mind how your software is going to be used, and make sure its value, its reasoning is transparent, even if your code is proprietary. And make sure the software products you use can also answer the important question about every step of their process: Why?


A gentle & conceptual introduction to Node.js

This is the text of a skillshare I delivered at The Sourcery, an awesome commission-less recruiting service. Not only is their model nifty, but they care about actually knowing what they’re recruiting for—so hopefully there isn’t anything too wrong below.

You probably don’t know JavaScript. Like, really know JavaScript.

JavaScript is a real programming language: functions, lambdas, closures, objects, inheritance, passing by reference, etc.

Somewhere long, long ago, an arbitrary decision was made that we would “program” a web browser with JavaScript. JavaScript is good at this (we’ll soon learn why), but there are experimental browsers that use Python (and other languages) to manipulate a web browser window too. There is no inherent reason that JavaScript has to be tied to the web browser (or that web browsers have to be tied to JavaScript).

Unfortunately, because the primary context in which we experience JavaScript is the web browser, we more strongly associate JavaScript with its browser-specific functions/extensions (for manipulating the DOM and listening for UI clicks) than with its core language, which can exist completely separately from the web browser. Just like we can use Ruby for general-purpose programming without using Rails.

The JavaScript language

If you only think of JavaScript in the context of the browser, you’re really missing out; JavaScript as a language is badass: lambdas, closures, inheritance, passing by reference.

Why did JavaScript come to dominate the web browser? Because, as a language, it easily supports event-based and asynchronous programming.

  • Event-based: when programming a web browser, most of the actions you want to wire up are of the form “when the user clicks on this, perform a different action than when they click on that.” You declare which event(s) you want to listen for and which action/function should be called when you “hear” that event take place.

  • Asynchronous: when you’re interacting with a web page, you’re downloading new data (or images, or video) that might take a second or two (or more for video!) to arrive over the wire. You don’t want to freeze the browser while you wait for it to load; instead, you want it to load in the background and then kick off some action when it’s complete (“completed loading” is another example of an event).

(Remember, we’re talking about these properties as being inherent to the JavaScript language itself, not just the functions/extensions that help it interact with the web browser while it’s doing these things.)
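Here’s a minimal browser-side sketch of those two properties together; the element id and URL are made up for illustration, and it uses the modern fetch API for brevity:

```javascript
// Event-based: declare the event you care about and the function to call.
document.querySelector('#load-photos').addEventListener('click', () => {
  // Asynchronous: kick off the download and keep going; the callbacks run
  // later, when the "finished loading" events fire.
  fetch('/photos.json') // hypothetical URL
    .then((response) => response.json())
    .then((photos) => console.log(`Loaded ${photos.length} photos`));

  console.log('The page is still responsive while the download happens...');
});
```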

[Image: js on browser and server]

It turns out that the same properties that make JavaScript work well for interacting with a web page are also what’s needed for building a good web server.

  • Event-based: a server is constantly getting requests (at different URLs, on different ports) that need to be responded to in different ways when those events take place.

  • Asynchronous: in dealing with a request, a server will need to load other data (from a file, a database, another server like Twitter), and you don’t want your entire server to just lock up while it waits for that external service (we call this I/O for “Input/Output”) to finish. For a typical web request, the majority of time will be spent waiting on I/O.

So how are languages that aren’t (for the most part) event-based/asynchronous, like Ruby, Python, and PHP, used on web servers? The synchronous code runs on top of a web server or interface layer (Apache, Rack, WSGI) that creates a “thread” to run your Ruby/Python/PHP code. If another request comes in while that first thread is still processing, the web server creates a new thread. Unfortunately, those threads are resource-intensive, which means you can only create a limited number of them (based on how powerful your server is). If all your threads are in use, a new request has to wait until a previous request finishes and a thread becomes available again.

[Image: typical vs node server]

So what happens when you build the entire web server in JavaScript? You get Node.js! Instead of adding web-browser functions/extensions to the core JavaScript language, Node.js adds server functions/extensions: managing HTTP requests, filesystem operations, talking to databases, etc. While Node.js runs everything on one single “thread”, because JavaScript is event-based/asynchronous it can serve hundreds (if not thousands) of simultaneous requests: Node.js doesn’t have to freeze/lock while waiting for I/O (database, filesystem, external-service calls) to finish. It can just answer another request (and another) while waiting for the first request’s (and subsequent requests’) I/O to complete, then come back and do the next step in responding to each request. Node.js is a master delegator.
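A minimal sketch of that delegation, using Node’s built-in http and fs modules (the file path is made up): while the slow filesystem read is in flight, the single thread is free to accept other requests.

```javascript
const http = require('http');
const fs = require('fs');

http.createServer((request, response) => {
  // Kick off the (slow) I/O and immediately return to the event loop,
  // so other requests can be handled in the meantime.
  fs.readFile('./greeting.txt', 'utf8', (err, contents) => {
    // This callback runs later, when the filesystem read finishes.
    if (err) {
      response.writeHead(500);
      return response.end('Something went wrong');
    }
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end(contents);
  });
}).listen(3000);
```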

So what can you do when you’re able to quickly serve hundreds/thousands of simultaneous connections?

  1. Proxy servers / Service “Glue” / Data Plumbing: routing/piping data between different servers, services or processes

  2. Telemetry / Analytics: catch and analyze data as events take place in your system

  3. Real-time connections to the web browser: traditional/threaded systems try to keep their connections brief (because they can only handle a few at a time). If you don’t have that few-at-a-time constraint, you can leave the connection open much longer and thus easily send data back and forth in real time. Socket.io is a library for doing this (there’s a small sketch of it below).

Example of all three: visualizing traffic going through a Node.js load balancer by geolocating the requesting IP addresses and sending them to a map in the web browser in real time via Socket.io: http://vimeo.com/48470307

[Image: maptail]
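A minimal sketch of the real-time piece (item 3 above), assuming a Socket.io server and a page that includes the Socket.io client script; the event name, payload fields, and map.addPoint call are made up for illustration:

```javascript
// Server: push an event to every connected browser whenever a request is geolocated.
const io = require('socket.io')(3000);

// Called by whatever geolocates each proxied request.
function broadcastRequest(ipAddress, latitude, longitude) {
  io.emit('request:geolocated', { ip: ipAddress, lat: latitude, lng: longitude });
}

// Browser (in a <script> tag, after loading the Socket.io client):
//   const socket = io();
//   socket.on('request:geolocated', (data) => map.addPoint(data.lat, data.lng));
```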

Alternatives to Node.js: EventMachine (Ruby) or Twisted (Python). Unfortunately, the majority of Ruby/Python libraries aren’t written to be evented/asynchronous, which means you can’t use them in an asynchronous environment (because they will lock it up), whereas the majority of Node.js/JavaScript code *is* written to be evented/asynchronous.

So if Node.js is so badass, why not use it for everything?

  1. CPU blocking: because Node.js runs on only a single thread, any local processing you do (i.e. NOT database/service calls) locks the thread, for example heavy numerical/algorithmic work or generating complicated HTML templates (there’s a small sketch of this after the list). Node.js works best when you do that data processing somewhere else (for example, in a SQL or map/reduce database) and just send along raw data (like JSON). This is why you’ll often see Node.js powering a thin API (calling out to a database and serving up some JSON) rather than a full-stack MVC implementation (like Rails/Django). This is also why Node.js (backend server API) and single-page web apps like Backbone (frontend client-generated UI) are a powerful combination.

  2. JavaScript as a language can be a pain in the ass: because JavaScript has spent so much time solely in the browser, it hasn’t gotten the love it deserves. It’s tough to fix things because of backwards compatibility (there are so many different browsers that would need to be updated, and web compatibility is already hard enough). ECMAScript (the official JavaScript “standard”) is evolving. Also, there are many cross-compilers that let you write your code in another language and then convert it to JavaScript; examples: CoffeeScript, ClojureScript, Dart.
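A minimal sketch of the CPU-blocking problem from item 1 (the loop is an arbitrary stand-in for heavy computation): while the synchronous loop runs, the single thread can’t answer any other request.

```javascript
const http = require('http');

http.createServer((request, response) => {
  if (request.url === '/block') {
    // Synchronous, CPU-bound work: nothing else is served until this finishes.
    let total = 0;
    for (let i = 0; i < 1e9; i += 1) total += i;
    return response.end(`Blocked everyone to compute ${total}`);
  }
  // Normally this responds instantly, but not while /block is busy.
  response.end('hello');
}).listen(3000);
```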

Still, the opportunities that Node.js creates are worth it. Other fun stuff/opportunities for Node:

  1. Sharing code between server and browser: Node.js being JavaScript (like the browser) creates the opportunity to share code between your server and client (keeping things DRY), which makes it easier to persist server-like functionality on the client (for example, if you’re on a mobile phone and your connection drops, you can still use the web app until it reconnects). There’s a small sketch of shared code after this list. Meteor.js provides a framework for this (and much more; it entirely muddles the distinction between server and client).

  2. Pre-rendering browser content on the server: typically you don’t want to do heavy CPU processing on the Node server, but maybe you’re working with really lightweight clients and you want to “emulate a web browser” on your more powerful server. Example: Famo.us pre-renders DOM translations in their tech demo so it will run on devices like the Apple TV.
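A minimal sketch of item 1: one validation function shared by the Node.js server and the browser. The module name and rule are made up; the wrapper is the usual CommonJS-or-browser-global dance.

```javascript
// validators.js: loadable by both the Node.js server and the browser.
(function (exports) {
  // One shared business rule, defined in exactly one place (DRY).
  exports.isValidUsername = function (username) {
    return /^[a-z0-9_]{3,20}$/.test(username);
  };
})(typeof module !== 'undefined' ? module.exports : (window.validators = {}));

// Server: var validators = require('./validators');
// Browser: <script src="validators.js"></script> then validators.isValidUsername('...')
```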

Follow-up Questions:

So if Node.js + Backbone is a “powerful combination”, why don’t we just ditch Rails entirely?

The Rails ecosystem is more mature than Node’s: there are more engineers, more libraries, stronger conventions, and a more complete development and deployment pipeline (scaffolding, testing, continuous integration, monitoring, etc.). If you have a startup with a limited development window and a typical product design (“users create accounts, post content, favorite other users’ content, see the most-favorited content”) that you need to iterate on quickly, Rails has that problem solved (this is a strength of Rails over pretty much everything, not a weakness of Node specifically). If you’re looking at a cost curve, the starting cost for Rails will be way lower for a vanilla product. Now, if you’re doing something non-typical (real-time interaction) or operating at huge scale (where you can swap infrastructure costs for engineering costs), Node is enabling (there are some things you just can’t, or don’t want to, do without it), if not outright more affordable. Also, you can use Node in a heterogeneous environment (running the load balancer or the analytics server) or integrate a Node-powered service into a more traditional product (for example, real-time chat is powered by Node, but user accounts, relationships, and chat history are managed by Rails).

[Image: nodejs-cost]


Reimagining Chicago’s 311 activity with Super Mayor Emanuel

Super Mayor Emanuel is one of the goofier applications I’ve built at Code for America: supermayor.io

Boring explanation: Using data from Chicago’s Open311 API, the app lists, in near real-time, service requests within the City of Chicago’s 311 system that have recently been created, updated, or closed.

Awesome explanation: The mayor runs through the streets of Chicago, leaping over newly-reported civic issues and collecting coins as those civic problems are fixed.

I really like this application, not only because of its visual style, but because it lets you engage with the 311 data in a completely novel way: aurally. Turn up those speakers and let the website run in the background for about 30 minutes. Spinies (new requests) come in waves, and coin blocks (updated/closed requests) are disappointingly rare. Sure, I could have just created a chart of statistics, but I think actually experiencing requests as they come in makes you think differently about both the people who submitted a request and the 311 operators and city staff who are receiving them (just think about what caused those restaurant complaints… or maybe don’t).

The application is built with Node.js, which fetches and caches 311 requests, and a Backbone-based web app, which manages all of the interface and animation; the two communicate via Socket.io. The source is on Github.
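A minimal sketch of that wiring (not the actual source): the endpoint URL, polling interval, and event name are placeholders, and the global fetch assumes a recent Node.js.

```javascript
const io = require('socket.io')(3000);
const seen = new Set(); // cache of request updates we've already broadcast

// Poll the Open311 endpoint and push anything new or updated to the browsers.
setInterval(async () => {
  const response = await fetch('https://example-chicago.gov/open311/v2/requests.json'); // placeholder URL
  const requests = await response.json();
  for (const request of requests) {
    const key = `${request.service_request_id}:${request.updated_datetime}`;
    if (!seen.has(key)) {
      seen.add(key);
      io.emit('request', request); // the Backbone app listens for this and animates it
    }
  }
}, 60 * 1000);
```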

 


Put Your Civics Where Your Houseplant Is

The core assumption of engagement applications is that people will do an activity consistently and repeatedly if you just structure the experience and incentivize it correctly — even if it’s asinine.  The justification for civic engagement apps can be similarly foolish: people will perform a potentially beneficial activity that they aren’t currently doing if we give them the ability to do it on the internet (or via SMS, or iPhone, etc.). That’s why I built Civics Garden.

A few months ago a Code for America email thread came around asking, “If you could tell the story of how government works, what would you say?” I pushed back with the idea that one cannot know government without participating in it, and since we are a government of, by, and for the people, the best place to start would be reflecting on one’s own civic life and civic actions.

A tried and true method of reflection is journaling. Hundreds of millions of people keep a journal called Twitter, reflecting and writing on their day’s experiences, tribulations, meals, and cat sightings (this is not a comprehensive list). If people naturally do this, why not ask them to reflect specifically on their civic actions—voting, volunteering, checking out a library book, picking up a piece of trash, smiling at a stranger—and write down that reflection (however brief) on a regular basis?

Projecting from my own nature, people are fickle, lazy, unreliable creatures. That’s why gamification is the hotness. “How can I get you to look at my ads every day? I’ll make it a game.” Sure, this is no different than historical incentives (“How can I get you to work in my coal mine? I’ll pay you money.”), but now it’s on the internet, where advertising is easier to place than coal mines. One form of gamification I really enjoy is virtual pets: like Tamagotchis you feed with unit tests, or flowers you water with foreign-language vocabulary. They encourage you to perform an activity because you instinctively (unless you’re a sociopath) feel good when they’re healthy and bad when they’re sickly… despite the fact that you know they aren’t actually alive.

Civics Garden combines these concepts: by signing up, users adopt a virtual plant that they keep healthy by regularly “watering” it, i.e. writing down their civic deeds. If they go too long without a journal entry, the plant will wither and eventually die. Just like owning real houseplants, it has exhilarating potential.

I built Civics Garden with Node.js and MongoDB, using Twitter for authentication. Each new user receives a healthy bamboo plant to caretake by writing short journal entries. To keep people reflecting regularly, one’s plant will wither after two days and die after four (though users can replant it as many times as necessary). To keep them coming back to the site, users receive a tweet from @civicsgarden to let them know when their plant needs refreshing. As a minimum viable product (MVP), it’s high on concept and low on looks, but Diana and Emily helped with the graphics.
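A minimal sketch of that withering rule (not the actual source): the two- and four-day thresholds come from the description above; everything else is made up.

```javascript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Given when the user last "watered" (journaled), decide how their plant looks.
function plantState(lastEntryAt, now = new Date()) {
  const daysSince = (now - lastEntryAt) / MS_PER_DAY;
  if (daysSince >= 4) return 'dead';     // dies after four days
  if (daysSince >= 2) return 'withered'; // withers after two days
  return 'healthy';
}

// e.g. plantState(new Date('2012-04-01'), new Date('2012-04-04')) === 'withered'
```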

I’ve tested it internally with Code for America fellows, and it should be a rousing success. All of us, as active civic participants performing important civic deeds, should be able to briefly but consistently reflect upon and record our actions, right? See for yourself on Civics Garden and test your ability to reflect on a healthy civic commitment.

…or maybe people just don’t like bamboo.


Hard data on the status of Open311

With the recent announcement of 311 Labs and Code for America’s podcast featuring me talking about my perspective on Open311, this write-up about Open311Status is probably long overdue.

Open311Status (still a work in progress) polls Open311 servers in 30 cities and aggregates configuration, performance, and utilization statistics about those Open311 endpoints. I built Open311Status for two reasons: first, to provide a high-level view of how different endpoints are performing, as a developer sanity check (“is my app broken, or the server?”); and second, to provide some hard data for the anecdotes and experiences I’ve used in advising the City of Chicago and others on how to improve Open311’s potential for positive impact in their cities and beyond.

In designing Open311Status I took advantage of the huge benefit of Open311: interoperability. By adopting the Open311 API for exposing civic data, cities enable civic developers like myself to build reusable tools and apps. To access data from 30 cities, Open311Status doesn’t need an adapter for each individual city, only a server URL, which it can expect to interact with and deliver data in the same way as described by the Open311 API documentation. Sure, there are some minor interoperability issues (for example, Toronto doesn’t like datetimes submitted with milliseconds), but these have been minor speed bumps for development, not deal breakers.
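A minimal sketch of what that interoperability buys you, assuming each city’s endpoint follows the Open311 GeoReport v2 convention of GET [base]/requests.json; the base URLs here are placeholders, and the global fetch assumes a recent Node.js.

```javascript
// The same code works for every city: only the base URL changes.
const endpoints = {
  chicago: 'https://example-chicago.gov/open311/v2', // placeholder URLs
  boston: 'https://example-boston.gov/open311/v2',
};

async function recentRequests(city) {
  const response = await fetch(`${endpoints[city]}/requests.json`);
  return response.json(); // the same shape everywhere: service_request_id, status, etc.
}
```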

A major limiting factor in the utility of Open311 isn’t these minor technical issues, but how the Open311 servers are configured. If Open311 is supposed to provide a rich API for developers to interact with, there should be a broad set of categories (“types”) of requests that can be submitted, as well as a comprehensive listing of previously submitted requests that can be analyzed or dashboarded. Compare the Boston and Washington, D.C. Open311 implementations: Washington, D.C. currently provides 83 categories of service requests (from “Tree Inspection” to “Roadway Marking Maintenance”) while Boston provides only six. Washington, D.C. provides access to all requests regardless of whether they were entered via Open311, the 311 call center, or city work crews or departments; Boston only displays service requests made via the Open311 API. These configuration decisions have a huge impact on the comprehensiveness of the data and thus on the potential applications third-party developers can build on top of Open311.

If server configuration determines the potential for an innovative application, server performance determines whether such an app will be usable. Because a potential Open311 application will be heavily reliant upon a city’s Open311 server, the speed and robustness of that server will determine the responsiveness of the application. At the obvious extreme, if the server goes down, that app will probably be useless. If the server is functioning yet takes several seconds to respond to every request, the application and its users will experience significant slow-downs and frustration. As I’ve witnessed through Open311Status, at any given time about 10 percent of the servers take longer than one second to respond. That’s not a stable infrastructure for third-party developers to build upon.

Open311Status helps measure the distance between the potential of Open311 and the current reality. It’s cause to celebrate that there are 30 cities (and more to add to Open311Status) that have adopted both open data and open standards; this is hugely important progress! But there is still a lot of work to be done beyond the specification for Open311 to enable a rich and robust third-party application developer ecosystem.

Some technical details: Open311Status is built on Node.js with the Express web framework and aggregates data in MongoDB through the Mongoose ODM. The application is a single process and uses the cron and async modules to poll the cities’ Open311 servers every five minutes. A fun/dumb feature is that individual service requests are streamed to the browser using Socket.io; but because many servers publish service requests an hour (or more!) after they’ve been submitted, Open311Status streams the previous day’s service requests in “real time” as if they were today’s (or rather, “real time minus one day”). Tests are done with Mocha in BDD style with Should, using Sinon for mocks and stubs (though coverage could always be better). Open311Status is hosted on Heroku.
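A minimal sketch of the “real time minus one day” trick (not the actual source): take yesterday’s requests and emit each one when the clock reaches the same time today. The event name is made up; requested_datetime is the Open311 field name for when a request was submitted.

```javascript
const io = require('socket.io')(3000);
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Replay yesterday's service requests as if they were happening right now.
function replay(requestsFromYesterday) {
  requestsFromYesterday.forEach((request) => {
    const requestedAt = new Date(request.requested_datetime).getTime();
    const delay = requestedAt + MS_PER_DAY - Date.now();
    if (delay > 0) {
      setTimeout(() => io.emit('service_request', request), delay);
    }
  });
}
```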

This post is cross-posted from the Code for America blog.


A Commonplace Book

From Steven Johnson’s Where Good Ideas Come From: The Natural History of Innovation:

Darwin’s notebooks lie at the tail end of a long and fruitful tradition that peaked in Enlightenment-era Europe, particularly in England: the practice of maintaining a “commonplace” book. Scholars, amateur scientists, aspiring men of letters—just about anyone with intellectual ambition in the seventeenth and eighteenth centuries was likely to keep a commonplace book. The great minds of the period—Milton, Bacon, Locke—were zealous believers in the memory-enhancing powers of the commonplace book. In its most customary form, “commonplacing,” as it was called, involved transcribing interesting or inspirational passages from one’s reading, assembling a personalized encyclopedia of quotations. There is a distinct self-help quality to the early descriptions of commonplacing’s virtues: maintaining the books enabled one to “lay up a fund of knowledge, from which we may at all times select what is useful in the several pursuits of life.”

John Locke first began maintaining a commonplace book in 1652, during his first year at Oxford. Over the next decade he developed and refined an elaborate system for indexing the book’s content. Locke thought his method important enough that he appended it to a printing of his canonical work, An Essay Concerning Human Understanding. Locke’s approach seems almost comical in its intricacy, but it was a response to a specific set of design constraints: creating a functional index in only two pages that could be expanded as the commonplace book accumulated more quotes and observations:

When I meet with any thing, that I think fit to put into my common-place-book, I first find a proper head. Suppose for example that the head be EPISTOLA, I look unto the index for the first letter and the following vowel which in this instance are E. i. if in the space marked E. i. there is any number that directs me to the page designed for words that begin with an E and whose first vowel after the initial letter is I, I must then write under the word Epistola in that page what I have to remark.

Locke’s method proved so popular that a century later, an enterprising publisher named John Bell printed a notebook entitled “Bell’s Common-Place Book, Formed generally upon the Principles Recommended and Practised by Mr Locke.” The book included eight pages of instructions on Locke’s indexing method, a system which not only made it easier to find passages, but also served the higher purpose of “facilitat[ing] reflexive thought.” Bell’s volume would be the basis for one of the most famous commonplace books of the late eighteenth century, maintained from 1776 to 1787 by Erasmus Darwin, Charles’s grandfather. At the very end of his life, while working on a biography of his grandfather, Charles obtained what he called “the great book” from his cousin Reginald. In the biography, the younger Darwin captures the book’s marvelous diversity: “There are schemes and sketches for an improved lamp, like our present moderators; candlesticks with telescope stands so as to be raised at pleasure to any required height; a manifold writer; a knitting loom for stockings; a weighing machine; a surveying machine; a flying bird, with an ingenious escapement for the movement of the wings, and he suggests gunpowder or compressed air as the motive power.”

The tradition of the commonplace book contains a central tension between order and chaos, between the desire for methodical arrangement, and the desire for surprising new links of association. For some Enlightenment-era advocates, the systematic indexing of the commonplace book became an aspirational metaphor for one’s own mental life. The dissenting preacher John Mason wrote in 1745:

Think it not enough to furnish this Store-house of the Mind with good Thoughts, but lay them up there in Order, digested or ranged under proper Subjects or Classes. That whatever Subject you have Occasion to think or talk upon you may have recourse immediately to a good Thought, which you heretofore laid up there under that Subject. So that the very Mention of the Subject may bring the Thought to hand; by which means you will carry a regular Common Place-Book in your Memory.

Others, including Priestley and both Darwins, used their commonplace books as a repository for a vast miscellany of hunches. The historian Robert Darnton describes this tangled mix of writing and reading:

Unlike modern readers, who follow the flow of a narrative from beginning to end, early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your personality.

Each rereading of the commonplace book becomes a new kind of revelation. You see the evolutionary paths of all your past hunches: the ones that turned out to be red herrings; the ones that turned out to be too obvious to write; even the ones that turned into entire books. But each encounter holds the promise that some long-forgotten hunch will connect in a new way with some emerging obsession. The beauty of Locke’s scheme was that it provided just enough order to find snippets when you were looking for them, but at the same time it allowed the main body of the commonplace book to have its own unruly, unplanned meanderings. Imposing too much order runs the risk of orphaning a promising hunch in a larger project that has died, and it makes it difficult for those ideas to mingle and breed when you revisit them. You need a system for capturing hunches, but not necessarily categorizing them, because categories can build barriers between disparate ideas, restrict them to their own conceptual islands. This is one way in which the human history of innovation deviates from the natural history. New ideas do not thrive on archipelagos.


State of the Shirt, March 2012

Day of the Shirt continues to delight me. I added a fun but subtle feature back in January: updating @dayoftheshirt’s Twitter avatar to today’s date. Day of the Shirt makes a point of being “refreshed every day,” so since it’s tweeting out a daily list of shirts, it makes sense to have a fresh daily avatar too.

In addition to delight, Day of the Shirt is moderately successful too: while not hockey-sticks, I’m seeing 50-100% unique visitor growth month-over-month, and ~80% of my daily traffic is returning visits. Day of the Shirt went from an average of about 50 unique daily visitors in October 2011 to an average of 1,100 in March. Adding the “Like on Facebook” widget at the beginning of February boosted new visitors too: in two months, Day of the Shirt has nearly twice as many Facebook “Likes” as it does Twitter followers, and Facebook drives ~150% more traffic than Twitter.

In terms of the future, Day of the Shirt is reaching the limit of its current architecture—rewriting a static HTML file—which was cute originally. Day of the Shirt launched with 5 daily shirts and now it aggregates 13 daily (and a few semi-weekly) shirts. Putting it on an actual framework would make adding new shirts and testing/updating the templates much simpler and more reliable. I’m a bit caught up on what I’ll move to (Django is in the lead), but I expect the experience to stay the same.


Methodological Belief

From Peter Elbow’s “The Believing Game”:

The doubting game represents the kind of thinking most widely honored and taught in our culture. It’s sometimes called “critical thinking.” It’s the disciplined practice of trying to be as skeptical and analytic as possible with every idea we encounter. By trying hard to doubt ideas, we can discover hidden contradictions, bad reasoning, or other weaknesses in them–especially in the case of ideas that seem true or attractive. We are using doubting as a tool in order to scrutinize and test.

In contrast, the believing game is the disciplined practice of trying to be as welcoming or accepting as possible to every idea we encounter: not just listening to views different from our own and holding back from arguing with them; not just trying to restate them without bias; but actually trying to believe them. We are using believing as a tool to scrutinize and test. But instead of scrutinizing fashionable or widely accepted ideas for hidden flaws, the believing game asks us to scrutinize unfashionable or even repellent ideas for hidden virtues. Often we cannot see what’s good in someone else’s idea (or in our own!) till we work at believing it. When an idea goes against current assumptions and beliefs–or if it seems alien, dangerous, or poorly formulated—we often cannot see any merit in it.*

And from the asterisk:

* I’m on slippery ground when I equate the doubting game with critical thinking, since critical thinking has come to mean almost any and every kind of thinking felt to be good. Consider the opening definition at the website of the Foundation for Critical Thinking:

Critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action. In its exemplary form, it is based on universal intellectual values that transcend subject matter divisions: clarity, accuracy, precision, consistency, relevance, sound evidence, good reasons, depth, breadth, and fairness.

It entails the examination of those structures or elements of thought implicit in all reasoning: purpose, problem, or question-at-issue; assumptions; concepts; empirical grounding; reasoning leading to conclusions; implications and consequences; objections from alternative viewpoints; and frame of reference. Critical thinking — in being responsive to variable subject matter, issues, and purposes — is incorporated in a family of interwoven modes of thinking, among them: scientific thinking, mathematical thinking, historical thinking, anthropological thinking, economic thinking, moral thinking, and philosophical thinking.

Critical thinking can be seen as having two components: 1) a set of information and belief generating and processing skills, and 2) the habit, based on intellectual commitment, of using those skills to guide behavior. ….People who think critically consistently attempt to live rationally, reasonably, empathically. (Scriven and Paul)

Who could ever resist anything here (except the prose)?

I’d argue, however, that despite all attempts to de-fuse the word “critical,” it nevertheless carries a connotation of criticism. The word still does that work for many fields that use it for a label. For example, in “critical theory,” “critical literacy,” and “critical legal theory,” the word still actively signals a critique, in this case a critique of what is more generally accepted as “theory” or “literacy” or “legal theory”. The OED’s first meaning for critical is “Given to judging; esp. given to adverse or unfavourable criticism; fault-finding, censorious.” Not till the sixth meaning do we get past a censorious meaning to a sense of merely “decisive” or “crucial.”

In the simple fact that “critical thinking” has become a “god term” that means any kind of good thinking, I see striking evidence of the monopoly of the doubting game in our culture’s conception of thinking itself. (“Burke refers to a word like honor as a god-term, because it represents an aspiration towards a kind of perfection. The ultimate term, of course, is God himself.” [Goodhart])


Protest shirts

Regular readers of this blog are aware that posts rarely reference the present, let alone the contemporary. But on Day of the Shirt, I felt compunction: a daily t-shirt aggregator is nothing but contemporary—it’s a 1-page website. So I took down Day of the Shirt today to protest the SOPA/PIPA legislation.

And, as any organizer can tell you, going on strike takes more time and effort than not striking—cron scripts don’t just turn themselves off.


2011 in review

2011 was a year of transitions: plenty of new starts and sad endings.

Shuttering the Transmission Project: in August our funding for the Digital Arts Service Corps ended. Despite the disappearance of our funding (and a pointless federal audit to boot), it was also one of the Transmission Project’s best years in terms of our staff, Corps members, and the work we did in our twilight. I can’t not mention, though, the frustration of seeing the Digital Arts Service Corps end at the same time that other media/technology-based national service programs came online.

Building 549 Columbus: with 4 months of unemployment, I had time to help build a cooperative coworking space in the South End.

Sundowned two websites: Both MeetAmeriCorps and MappingAccess were taken down in 2011; it felt good to recognize that they were long neglected.

Many new websites: DrunkenStumble, Print & Share, and Day of the Shirt (it launched in Oct 2010, but I spent a lot of time building traffic in 2011).

Goodbye Boston: I moved to San Francisco in November.

Founding the Boston Cyclists Union: following Danielle Martin’s 1st Principle of Community Organizing, “Keep showing up”, I am proud to be a founding board member of the BCU.

BetterBio: that didn’t go so well, but I learned a lot.

Other stuff: I did the layout again for my second edition of Survival News. I made my first serious pitches for a project (“All the Right People”, which was the thoughtchild of my coworkers Howie and Billy). I did a bunch of contract work for WGBH, Utile, LexCreative and SocialContxt. I had the pleasure of filing for unemployment insurance. I presented at the SF Wordpress Meetup (speedgeeking on “wp_filter”). Angelina’s friend Anna and Greg were married in Iowa. I attended the Code for America Summit, Nonprofit Software Developers Conference, and DrupalCon.

Places I’ve slept

  • York, ME (I woke up there on the 1st)

  • Jamaica Plain, MA

  • Seattle, WA

  • Chicago, IL

  • Okoboji, IA

  • Lenox, MA

  • San Francisco, CA

  • Poway, CA