The pursuit of excellence

As someone with perfectionist tendencies, I enjoyed reading this article on Excellence vs. Perfection. The former is defined by its focus on process, while the latter focuses on results.

While the article highlights perfectionism’s negative impact on self-esteem, I couldn’t help also noticing that the excellence-focused path just seemed a lot less stressful.

“The pursuer of excellence sets realistic but challenging goals that are clear and specific whereas the perfectionist set[s] unreasonable demands or expectations.”

This goes beyond setting realistic goals for a single task; it extends to time management and prioritization: how many tasks can you realistically accept? If you are constantly striving to do or complete more than you realistically can, and remain unsatisfied until it is all done “properly”, are you setting yourself up for unremitting stress and, ultimately, burnout?

“The pursuer of excellence would examine the situation and make decisions about what is most important to do, where they can set limits by saying “no,” and when they can delegate.”

An excellence-seeker can accept criticism and suggestions, since they aid in the process of improvement. A perfectionist instead sees them as challenges or reminders that they have not reached their goal.

The article also encourages us to slow down and to be patient in our pursuit of excellence.

“The pursuer of excellence finds enjoyment and satisfaction in the pursuit of goals whereas the perfectionist is usually unhappy or dissatisfied. When goals or risks are challenging and achievable and are not attached to the self-concept they can be fun to pursue.”

Overall, this was a welcome perspective. I’m already predisposed to value the process of learning or acquiring skills over the end result. (So far this has worked quite well with my violin practicing, possibly because I’m not aiming for any kind of recital or skill level but instead simply to improve over time.) But it’s easy to get sidetracked by evaluations and grades in a school setting, or deliverables and publications at work. Here then is a nice reminder!

Graded by your peers

I’ve been experimenting with some of the new massively multiplayer online course offerings from Coursera. In the spring, I took Cryptography, and I am now taking Fantasy and Science Fiction: The Human Mind, Our Modern World. These courses are offered for free, for anyone who wants to take them. I’ve been curious about the (eventual) business model, since there will have to be some way to recoup the investment in web site architecture and content. The lectures do seem to be “record once, replay forever”, but it’s still a big effort to do up front.

One way they’ve kept the ongoing costs reasonable, though, is by offloading one of the biggest time consumers in traditional education: grading. The Cryptography course was conducted in an entirely auto-graded mode. The homeworks each week were a series of multiple-choice or fill-in-the-blank questions. The feedback from these exercises was actually quite good — if you got something wrong, there might be a clue as to where to look, and if you got it right, there was usually an explanation, which you could learn from if you’d just gotten lucky with your choice. Further, you could attempt each homework 4 times, a process designed to encourage “mastery” (progressive learning). I know what you’re thinking. Four tries on a multiple-choice test should basically ensure you get 100%, since you could explore all possible options. Not so! They’ve made the process more sophisticated, producing a new mixture of answers for each question each time you attempt it. You really do have to think through the problems each time. I approve.
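The randomized-variant scheme could be sketched something like this (everything here — the data layout, the function names, the sample question — is my own invention, not Coursera’s actual implementation). The key idea is that each attempt draws a fresh mix of distractors and shuffles the order, so memorizing “the answer was (b)” from a previous attempt gets you nowhere:

```python
import random

def build_attempt(question, n_choices=4, rng=None):
    """Assemble one attempt's version of a multiple-choice question.

    The correct answer is always included; the distractors are a fresh
    random sample from a larger pool, and the order is shuffled, so a
    remembered letter position from a prior attempt is useless.
    """
    rng = rng or random.Random()
    choices = [question["correct"]] + rng.sample(question["distractors"], n_choices - 1)
    rng.shuffle(choices)
    return choices

def grade(question, selected_text):
    """Grade by answer content, not by position in the list."""
    return selected_text == question["correct"]

# A made-up question in the spirit of the Cryptography course.
q = {
    "correct": "CBC mode requires an unpredictable IV",
    "distractors": [
        "CBC mode requires a secret IV",
        "ECB mode is semantically secure",
        "CTR mode cannot be parallelized",
        "A MAC provides confidentiality",
    ],
}

attempt1 = build_attempt(q, rng=random.Random(1))
attempt2 = build_attempt(q, rng=random.Random(2))
# Different attempts can present different distractor mixes and orders,
# but the correct statement is always present.
assert q["correct"] in attempt1 and q["correct"] in attempt2
assert grade(q, q["correct"])
```

Grading by content rather than position is what makes the shuffling safe: a student who truly understands the material answers correctly every time, while a guesser can’t exploit the four attempts.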

The F&SF class is different. Our assignments consist of 300-word essays, which can’t be auto-graded (with any real reliability). First I must note that I found WRITING a 300-word essay to be particularly challenging. How can you say anything of substance in 300 words? How can you call out something of interest in a 400-page book using only 300 words? But, as in haiku, the limitations of the medium are themselves a spur to inspiration. So then, how to grade them? Coursera has adopted a peer grading strategy, in which you are assigned to grade a random set of your classmates’ essays, and your essay is correspondingly sent to a random set of peers to be graded. In this class, we’re required to grade four peers, but allowed to grade more. The grading itself is very coarse: you assign one score for Form (grammar, organization, etc.) and one for Content. Each can be given a score of 1 (poor), 2 (average), or 3 (exceptional). You are also required to provide some text feedback.
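A peer-grading assignment like the one described has two natural constraints: every essay should receive the same number of grades, and nobody should grade their own work. Here is a minimal sketch of one way to satisfy both (my own construction, not Coursera’s algorithm), using a shuffled circular scheme:

```python
import random

def assign_peer_grading(students, k=4, rng=None):
    """Assign each student k distinct classmates' essays to grade.

    After one random permutation, student i grades the essays of the
    next k students around the circle. Every essay gets exactly k
    graders, and nobody is assigned their own essay (as long as
    k < len(students)).
    """
    assert 0 < k < len(students)
    rng = rng or random.Random()
    order = students[:]
    rng.shuffle(order)
    n = len(order)
    return {
        order[i]: [order[(i + r) % n] for r in range(1, k + 1)]
        for i in range(n)
    }

assignments = assign_peer_grading(["ana", "ben", "chloe", "dev", "ema", "finn"], k=4)
for grader, essays in assignments.items():
    assert grader not in essays      # no self-grading
    assert len(set(essays)) == 4     # four distinct essays each
```

The circular structure is what guarantees the counts balance; a naive independent random draw per student would need rejection or repair steps to avoid giving one essay six graders and another two.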

So far, I have found the grades I received from my peers to be fair, but I don’t think I’ve learned much from them. Most of the feedback consisted of compliments, with a few rather surface-level critiques, rather than the kind of feedback you’d get from a professor or TA. But one reason for this is the bizarre organization of this particular class. You are required to do the reading, write an essay blind (on no suggested topic, simply something that “will enrich the reading of an intelligent, attentive fellow student”), and only THEN are you permitted to view the professor’s videos with his analysis of the readings. Perhaps this is intended to reduce “bias” from the instructor, but ultimately all it does is set you up to be evaluated tabula rasa (with respect to the course content), so I don’t see how the assessment has anything to do with what you have learned. These essays would make more sense as pre-tests than as the entirety of the grade. With the current scheme, the lectures themselves unfortunately become less of a priority, because by the time they’re available, you should already be moving on to read and plan an essay on the next reading. That’s a shame, because Dr. Rabkin is clearly a thoughtful and knowledgeable source. I’ve found most of his lectures to be interesting and thought-provoking (even though I disagree violently with some of his analysis of Grimm’s Fairy Tales! Ugh!). So, two weeks in, I’m not very enamored of this kind of peer grading. I hope Coursera continues to experiment with new strategies.

You can check out Coursera’s statement of pedagogy, in which they explain their design choices and include references to some external work on their efficacy. It’s mostly reasonable argumentation. I’m on board with the mastery learning comment, for example. However, I found the argument for peer grading to be weaker. The main motivation (never articulated) has got to be the challenge of providing feedback for thousands (or hundreds of thousands) of students, which is a scaling issue. Instead they cite research on the benefits of peer review, which are real but, I think, were never meant to make peers the SOLE source of feedback for students, and the strengths of crowd-sourcing, which depends on large numbers for reliability that four random grades from classmates don’t provide. I’m not asserting that this is an invalid method of instruction, but I’m not convinced by the evidence they’ve offered.

In working through these courses, I’ve already gotten ideas for how I would experiment creatively with this new teaching medium. Watching slides is boring. Watching a talking head is boring. I love, however, the occasional pauses that require you to answer a question (pop quiz!) to proceed. They’re great for capturing attention that may have been wandering. The Cryptography class made good use of these. The F&SF class doesn’t use them at all. If I were teaching, I’d also bring in props or direct students to relevant websites or otherwise increase the level of activity and interactivity as much as possible. Right now, the only interaction in the F&SF course is through the essays (anonymized) and the discussion forums (which no one can keep up with). I’d like to foster more interaction with the professor, without inundating that person. I think well-crafted video lectures can improve on this front.

A cookbook that teaches!

Most cookbooks tell you what to do, but not why. Not so “The New Best Recipe”, in which the superlative is not advertising-speak but instead quite literal: here you will find the recipes that produced the best results in a professional test kitchen.

Initially, the idea of using this mighty tome to create a meal felt like being asked to write an essay based on an encyclopedia. Where even to start? What’s good? Then I started flipping through it, and realized that the point of this book is that it’s ALL good. Unlike most cookbooks, here each item is preceded by a short discussion of what the ideal properties of that item are (“Gingerbread should be tender, moist, and several inches thick. It should be easy enough to assemble just before dinner so squares of warm gingerbread can be enjoyed for dessert.”), followed by a summary of a battery of test experiments that home in on what’s needed to achieve that ideal (akin to my own experiments with how much baking powder to use in biscuits, but far more extensive). Then comes the final, polished, optimized recipe.

This means that, in addition to getting a really great recipe for gingerbread, you also learn a smattering of fundamental cooking and food science principles in the process. Further, by the time you get to the recipe, you now understand why they made the choices they did (milk over water, molasses over honey, white sugar over brown, etc.). I LOVE IT!

“We start the process of testing a recipe with a complete lack of conviction, which means that we accept no claim, no theory, no technique, and no recipe at face value. We simply assemble as many variations as possible, test a half-dozen of the most promising, and taste the results blind. We then construct our own hybrid recipe and continue to test it, varying ingredients, techniques, and cooking times until we reach a consensus.”

The basic philosophy behind this book (an assumption that good cooking is definable, testable, repeatable, and achievable) is wonderfully comforting to my fundamental personality type. Cooking is art, and skill, but (here) it can also be science. Here’s the book’s phrasing: “All of this would not be possible without a belief that good cooking, much like good music, is indeed based on a foundation of objective technique. Some people like spicy foods and others don’t, but there is a right way to saute, there is a best way to cook a pot roast, and there are measurable scientific principles involved in producing perfectly beaten, stable egg whites.”

The book also includes hand-drawn illustrations of cooking techniques (like how to measure different kinds of ingredients and what style of measuring cups works best) and pictures of failed outcomes (like five blueberry muffins that do not qualify as “best”).

Now I can’t wait to actually try out one of these best-recipes. I think I see some “Chicken and Rice with Saffron and Peas” in my future tonight. Thanks to my friend Elizabeth for a fantastic gift!

What is Io’s lava made of?

Jupiter’s moon Io is very active volcanically:

“A Giant plume from Io’s Tvashtar volcano composed of a sequence of five images taken by NASA’s New Horizons probe on March 1st 2007, over the course of eight minutes from 23:50 UT. The plume is 330 km high, though only its uppermost half is visible in this image, as its source lies over the moon’s limb on its far side.” (Robert Wright and Mary C. Bourke)

But what is that lava made of? What materials lie inside the moon that are being spewed out? We can’t (yet) land on Io and test its lava directly. But we can make some inferences based on remote sensing observations of the lava’s temperature. The temperature carries information about how mafic (magnesium and iron-rich) or felsic (silicon-rich) the lava may be.

The best way to test our ability to deduce composition from orbit is to do it here on Earth, where we do have the opportunity to determine the true composition by sampling the lava on the ground. Scientists Robert Wright, Lori Glaze, and Stephen M. Baloga recently reported a positive correlation between temperature observations from Earth orbit (using the Hyperion spectrometer) and ground composition observations of 13 volcanoes: “Constraints on determining the eruption style and composition of terrestrial lavas from space”. The conclusion for Io is that the lava is so hot that it is likely ultramafic: very high magnesium/iron content.
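The inference chain here — eruption temperature as a proxy for composition — can be summarized as a rough classification. The temperature thresholds below are illustrative textbook ranges of my own choosing, not values from the Wright, Glaze, and Baloga paper; the point is just the direction of the relationship (hotter lava tends to be more mafic, cooler lava more felsic):

```python
def classify_lava(temp_c):
    """Rough eruption-temperature -> composition heuristic.

    Thresholds are illustrative, approximate textbook values, NOT
    taken from the paper discussed above. Hotter lavas are generally
    more mafic (Mg/Fe-rich); cooler ones more felsic (Si-rich).
    """
    if temp_c >= 1300:
        return "ultramafic"
    if temp_c >= 1000:
        return "mafic (basaltic)"
    if temp_c >= 800:
        return "intermediate (andesitic)"
    return "felsic (rhyolitic)"

# Io's hottest observed eruptions imply temperatures well above
# typical terrestrial basalts, hence the ultramafic inference.
assert classify_lava(1500) == "ultramafic"
assert classify_lava(1100) == "mafic (basaltic)"
```

Real remote-sensing work is of course messier than a threshold table — observed radiance mixes hot and cool surfaces within a pixel — which is exactly why validating the method against ground-truthed terrestrial volcanoes matters.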

You can read more about this endeavor (and view more pictures).

See our Earth transit, too!

Perhaps you saw the recent Venus transit of the Sun. But what about an Earth transit?

Obviously we can’t see such a phenomenon while sitting on the Earth itself. But some clever astronomers have done calculations to work out when the Earth would transit the Sun from the perspective of other bodies in the solar system, including the Moon and Jupiter.

“In January 2014, Jupiter will witness a transit of Earth. And we can see it too, the astronomers say, by training NASA’s Hubble Space Telescope on the huge planet and studying the sunlight it reflects.”
(From NBCNews, June 4, 2012)

Using Jupiter as a mirror seems a curious strategy, since the reflected light will also be influenced by the chemical makeup of Jupiter’s atmosphere. However, just as with the hunt for exoplanets, if we can stare at Jupiter for long enough before the transit occurs, we can build a good enough model of Jupiter’s contribution that it can be subtracted from the combined Earth+Jupiter signal during the transit. Scientists first plan to test this strategy with a Venus transit that Jupiter will see (and Earth won’t) in September of this year. I’ve also seen mention that the Moon was used as a mirror to observe the recent Venus transit from the Earth’s vicinity — but I haven’t been able to find any images of the result yet.
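The subtract-out-the-mirror idea can be sketched numerically. Everything below is invented for illustration (the wavelength grid, the spectral shapes, the noise levels — none of it comes from the article): average many pre-transit observations of Jupiter alone to build a baseline, then subtract that baseline from the in-transit signal to expose the faint Earth imprint.

```python
import numpy as np

rng = np.random.default_rng(42)
wavelengths = np.linspace(0.4, 2.5, 200)  # microns; grid is made up

# "True" Jupiter reflectance: a smooth continuum with one absorption dip.
jupiter = 1.0 - 0.3 * np.exp(-((wavelengths - 1.6) ** 2) / 0.01)

# Baseline model: average many noisy pre-transit observations of Jupiter.
pre_transit = jupiter + rng.normal(0, 0.01, (50, wavelengths.size))
baseline = pre_transit.mean(axis=0)

# During transit: Jupiter plus a faint Earth imprint (a dip near 0.76 um,
# loosely evoking the O2 A-band; depth exaggerated for the sketch).
earth_signal = -0.05 * np.exp(-((wavelengths - 0.76) ** 2) / 0.001)
during_transit = jupiter + earth_signal + rng.normal(0, 0.01, wavelengths.size)

# Subtracting the baseline removes Jupiter's own spectral features
# (including the 1.6 um dip) and leaves the Earth contribution plus noise.
residual = during_transit - baseline
print(f"strongest residual dip near {wavelengths[np.argmin(residual)]:.2f} um")
```

Averaging many baseline exposures is what makes this work: the noise on the baseline shrinks with the square root of the number of observations, so the residual is dominated by the transit signal rather than by errors in the Jupiter model.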
