Tuesday, November 29, 2005

Finally, a good Mosquito

A device that gladdens the heart of the old curmudgeon in me. I wonder whether they make portable units so you can sweep the area clean of teenagers wherever you go ...

Monday, November 28, 2005

Marvin the Martian catches a cold

We watched War of the Worlds yesterday, which, overall, was actually a lot better than I'd expected. However, despite the appeal of the notion that an unspecified "wee animalcule" was more powerful than all the king's horses and all the king's men, and the whole "suspension of disbelief" thing, there's something that stuck in my craw about it. Specifically: how could "intellects far superior to ours" [or something to that effect], with the technology to build some seriously kick-ass fighting machines thousands of years before humans developed anything similar, have missed something as glaringly obvious as the fact that they might be allergic to Earth's bacteria and viruses? Even NASA, an agency that hasn't exactly inspired confidence lately, thinks about stuff like that, and they don't even have the cool "ride the lightning" transport that the Martians had.

Then again, maybe there was pressure among the Martians to rush to war and not quite enough thought about what to do once they got to Earth, and now there's a Martian president uttering sentences along the lines of "We're not interested in playing the blame game, or pointing appendages ... Marvin did a heck of a job ...". In other words, maybe people are people everywhere, so to speak.


Wednesday, November 23, 2005

The synthetic biology of Nature

This week's edition of Nature magazine "celebrates the emerging field of synthetic biology" -- there are a couple of review and commentary articles [which unfortunately require a subscription to read], including one by Drew, but what's even cooler is that the magazine cover is a synthetic biology cartoon, probably a first for Nature.

In general, this field is trailblazing in a bunch of areas other than the "pure" science and engineering aspects of the work, in terms of trying to influence how science is done. There is a pervasive attitude very much informed by the open-source ethos, which is all about sharing information as widely as possible and, just as importantly, making it easy for everybody to participate in generating and updating the information. There's also a lot of emphasis on getting away from the current practice of experimental know-how being handed down from initiate to novice in a mostly oral fashion, which makes biology seem like an arcane science where some people just have "the hands" and can get experiments to work whereas newbies just have to stumble around for a while, paying their dues until they figure out the little detail that makes the difference. Examples of efforts to avoid re-inventing the wheel and make biology more approachable include the OpenWetWare wiki, the Registry of Standard Biological Parts, the International Genetically Engineered Machine competition etc.

In general, it's fascinating to be in the middle of watching this approach to science emerge, and to see the social responses to it from both laypeople and scientists [such as recent comments in an issue of Science that seem prompted more by professional jealousy than by any higher motives, as far as I'm concerned].

PS: If you're interested in following what's going on in this field, there's a Synthetic Biology website that is updated as new events of interest arise [which is reasonably often], as well as a couple of [currently low-volume] synthetic biology mailing lists that are open to everybody.

Tuesday, November 22, 2005

Lessons from AI, maybe

Disclaimer: My knowledge of the history and practice of AI is pretty sketchy, so all that follows may be totally wrong.

In thinking about the current state of computational biology, specifically its application to the real world, it seems like there are certain parallels to be drawn with the development of artificial intelligence [AI].

Lots [all?] of the first AI systems were based on formal/logical reasoning of the type "A is a bird; birds can fly; therefore A can fly" ie they relied on having very tightly specified knowledge, and rules about how to apply that knowledge, to allow them to make decisions/predictions. This approach ran into problems for a number of reasons.

For one, there are always exceptions to rules; for example, penguins are birds, but they can't fly. So any rule like the one above had to be modified to say something like "A is a bird; birds, except for penguins, can fly; if A is not a penguin, A can fly". But that way lies madness -- there are lots of other flightless birds, and so you have to keep making the rule more and more complicated.
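
Just to make the flavor of this concrete, here's a toy sketch in Python of what that style of rule-based reasoning ends up looking like [the rules and the species list are mine, invented for illustration, not taken from any actual GOFAI system]:

    # A toy rule-based reasoner in the GOFAI spirit: all knowledge is
    # hard-coded, and every exception forces another special case.
    FLIGHTLESS = {"penguin", "ostrich", "kiwi", "emu"}  # ... and the list keeps growing

    def can_fly(animal_class, species):
        if animal_class != "bird":
            return False   # the rule only covers birds
        if species in FLIGHTLESS:
            return False   # an exception bolted onto the rule
        return True        # "birds can fly" ... mostly

    print(can_fly("bird", "house sparrow"))  # True
    print(can_fly("bird", "penguin"))        # False, but only because we remembered to add it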

Another problem was the sheer number of rules and amount of knowledge required to reason about even the simplest things. For example, in order to make a rule about birds, you first have to specify what is and isn't a bird, and to know that a "house sparrow" is the same as "an animal of species Passer domesticus", that a Kentucky Fried Chicken isn't actually a bird, etc, or else you won't be able to reason correctly in all cases.

The upshot of all this is that these formal systems [also known as "Good Old-Fashioned AI"] were pretty much only usable for "toy problems", the kind that are easy to find in a research lab but non-existent in the real world, and led to some disenchantment with AI. [That hasn't stopped some people from continuing down that path, via the brute-force approach of building a database with lots and lots of rules ...].

Then, in the 1990s, statistical/machine learning approaches to AI started to become popular. These approaches don't have all their knowledge rigidly encoded; instead, they analyze past data to come up with fuzzier notions like "What's the probability that X will happen, given what has happened in the past?" eg "In the past, this coin came up heads 90% of the time; what are the chances that it'll come up heads again on the next toss?". This approach more accurately reflects the messiness of the real world -- you can't know everything in advance and freak occurrences do happen, so the best thing to do is make [almost] no assumptions, learn from history, never say something can't happen and make your best guess based on what you've learned. Software systems using these sorts of algorithms have enjoyed some spectacular successes, like the completion of the 2005 DARPA Grand Challenge, a contest to build a vehicle that could successfully navigate 132 miles of desert terrain on its own ie without a driver.
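
A minimal sketch of that "learn from history, never say never" idea, again in Python [the smoothing trick is standard Laplace/add-one smoothing; the numbers are made up]:

    # Estimate P(heads) from observed tosses, with add-one [Laplace] smoothing
    # so that no outcome we haven't seen yet ever gets probability zero.
    def estimate_heads(tosses):
        heads = sum(1 for t in tosses if t == "H")
        return (heads + 1) / (len(tosses) + 2)

    history = ["H"] * 9 + ["T"]     # the coin came up heads 90% of the time
    print(estimate_heads(history))  # ~0.83: heads is a good bet, but tails stays possible
    print(estimate_heads([]))       # 0.5: no data, no assumptions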

So, to summarize what AI's history suggests about handling the real world: formal/fully-specified systems, not so good; "informal"/statistically-based systems, pretty good.

In computational biology, I think of mechanistic, differential-equation based models of biological processes as the analogue of formal AI systems. You have to know all the interacting proteins, specify which ones interact and how strongly etc. What makes this difficult is that, in general, you really don't know all the proteins involved, you don't know all the interactions, you have only a few [not very accurate] measurements of what the reaction rates are etc. Basically, there's a whole bunch of stuff you don't know and so can't build into the model, and each time something new is discovered, you have to go back and update your model. And measuring some of the stuff you'd need to refine your model is generally lots of drudgery and so nobody does it. The end result is that you can only build these sorts of models for small, very well-understood biological systems and even then the models aren't very good at capturing what's actually going on. In other words, you're restricted to mostly toy problems, which, while interesting, are unlikely to be useful to anybody who wants to do something like predict how cells will respond to a particular drug.
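
For concreteness, here's roughly what the mechanistic approach looks like for a toy two-protein system [the equations, rate constants and protein names are all invented for illustration; real models have many more species and many more unknown constants]:

    # A toy mechanistic model: protein X is produced at a constant rate and
    # activates production of protein Y; both degrade. Every constant below
    # would have to be measured experimentally -- and usually isn't known.
    import numpy as np
    from scipy.integrate import odeint

    def model(state, t, k_prod, k_act, k_deg_x, k_deg_y):
        x, y = state
        dx = k_prod - k_deg_x * x      # production and degradation of X
        dy = k_act * x - k_deg_y * y   # X drives production of Y
        return [dx, dy]

    t = np.linspace(0, 100, 1000)
    trajectory = odeint(model, [0.0, 0.0], t, args=(1.0, 0.5, 0.1, 0.2))
    print(trajectory[-1])  # near-steady-state levels of X and Y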

On the other side, there are statistical models of biological systems, which don't make any specific statements about whether protein X interacts with protein Y; instead, they examine data that's relatively easy to generate and make more general statements, like "When there's lots of protein X, there isn't much protein Y but lots of protein Z", about lots of proteins at once. Because you're looking at so much more data, and aren't constrained by trying to figure out how all the pieces interact in detail, you end up with a much broader picture of what's going on in a cell. And, arguably, that broader picture is more useful when you have real-world applications in mind.
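
And a sketch of the statistical flavor: given a pile of measurements, just look at which proteins co-vary, without any mechanistic story [the "data" here is randomly generated so the example is self-contained]:

    # The statistical flavor: no mechanism, just co-variation across many
    # measurements. Rows are samples/conditions, columns are proteins X, Y, Z.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=50)
    y = -x + rng.normal(scale=0.3, size=50)  # Y runs opposite to X
    z = x + rng.normal(scale=0.3, size=50)   # Z tracks X
    data = np.column_stack([x, y, z])

    corr = np.corrcoef(data, rowvar=False)
    print(corr.round(2))  # read off: lots of X -> little Y, lots of Z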

So, similar to what happened with AI, my guess is that mechanistic models in computational biology will remain an academia-only topic for the foreseeable future, whereas the more statistically-based models will be [and already are, I suppose] the most useful "real-world" ones for the next 10-20 years [and maybe forever, depending on how much more effort is expended on the more mechanistic models ...].

Tuesday, November 15, 2005

Bling out your dead

Between watching lots of "Six Feet Under", Christina taking various death-themed pictures [like this and that] and listening to bits of "Stiff", Christina and I have had a couple of discussions about what we'd like done with our remains if [that's right, "if", not "when" ;-)] we die. My take on it has been pretty pedestrian [namely, cremation], whereas Christina has come up with some unusual ideas, like wanting her skeleton to be on display in our house, or plastinated as an anatomy exhibit [both ideas that I firmly vetoed, because they're just creepy].

However, we think we've found a better idea: get turned into a LifeGem. The LifeGem company takes cremation ashes and turns them into diamonds. Granted, you can only get blue and yellow diamonds, the best clarity they offer is VVS, and it's kinda pricey, but it's still a pretty cool idea [and presumably their process will get better and cheaper as time goes on]. Finally, men can have that diamond pinky ring they always wanted and women can get jewelry from their husband even after he's dead ;-)

A few other thoughts come to mind:

- It's probably only a matter of time before a rapper gets shot in some sort of rap beef and turned into a bunch of diamonds that are then proudly displayed in the teeth of his bereft posse. Or maybe his LifeGem ends up in the dentition of the guy who shot him, kind of a rap version of a hunting trophy.
- If you lose a limb, presumably you could also have it cremated and turned into a diamond. At least that way you'd still sort of have it ...
- There's the potential for repartee that terminates all further conversation: "Oh, what a lovely ring! Where did you get it?" "That's, literally, my dead husband, Sid ..."
- It'd be kind of a creepy heirloom to pass on to your kids: "Hey kids, here's a bit of your dead mother. Treasure it."
- It seems like you'd feel obliged to wear it, or display it somewhere, all the time. Sticking it into a drawer somewhere would just be ... disrespectful.

Actually, hold on, shouldn't it be called a DeathGem? That name probably wouldn't test well in a market survey ...

Friday, November 11, 2005

An African Iron Lady

Looks like Liberia is going to have a female president, which is pretty cool. Hopefully she can do a better job than the "big men" [including Nkrumah and Kenyatta, both listed as "heroes" in the NYT article] who ran the continent into the ground after independence, and won't have to deal with some macho army upstart deciding that he doesn't want his country run by a woman and staging another coup.

Tuesday, November 08, 2005

The curse of unknown dimensionality

Maybe my professor read this Notional Slurry post, or at least the bit that reads:

"Some of these problems can be answered with a quick and simple Google search and some writing. Some would make good Masters Thesis projects. Some have one right answer; some have no right answer; some have many. Some require explanation, some require programming, some require mathematics, some require historical background, some require number crunching, some require experimentation, some require intuition, some require asking the right person, some require advanced domain skills from outside our department. Some are trick questions; some are so obvious you’ll imagine they’re trick questions; some are inherently time-consuming; some have hard and easy ways to solve them. Many are ill-posed, and need clarification. Some are problems you should already know how to answer. Some are problems you might not be able to answer by yourself when we arrive at the final exam."

... and, inspired, decided to try it out and had it backfire.

I say this because last night the following little note from my professor, regarding the latest homework assignment, appeared in my inbox:

Problem 2.1 turns out to be harder than expected (i.e., none of us knows the answer). So write down what you understand about it, and don't worry if you don't get it. Extra points to anyone who does!

In other words, this problem apparently breaks the contract that's generally implicit in homework problems, namely that a) the professor knows the answer and b) students should be able to figure it out within the time allotted, without requiring them to be von Neumann.

I can't say I was all that surprised because several of us [students] spent two hours talking to one of the TAs for the class and it became clear that nobody, including the TA, knew definitively what the answer was [or at least how to prove it]. At least the prof had the decency to tell us this a couple of days before the problem set is due.

[For those of you so inclined, the question is: "What is the VC dimension of a single rectangle in the plane, with the freedom to decide whether the inside is positive or negative, and without requiring the rectangle to be aligned with the axes?". The current money is on "at least 7, and less than 10".]
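
[For anyone who hasn't run into the term: the VC dimension of a hypothesis class H is the size of the largest set of points that H can "shatter", ie label in every possible way. This is the textbook definition, not anything specific to the problem set:

    \mathrm{VC}(H) = \max \bigl\{\, n : \exists\, x_1, \dots, x_n \ \text{such that}\ \forall\, (y_1, \dots, y_n) \in \{0,1\}^n \ \exists\, h \in H : h(x_i) = y_i \ \text{for all } i \,\bigr\}

So the "at least 7" claim amounts to exhibiting 7 points that a rectangle [with a sign choice for the inside] can label in all 2^7 possible ways.]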

Friday, November 04, 2005

Watch out, pro photographer coming through ...

All of Christina's hard work is starting to pay off: she just landed a contract with a stock photo agency, the first of many, I'm sure. Suh-weet! Incontrovertible proof that my wife kicks ass and takes pictures, not just names [though she does need names for her model releases].

Thursday, November 03, 2005

Internet-based file synchronization? What a silly idea!

Microsoft buys FolderShare, a product that allows you to keep files synchronized across various computers, across the Internet, not just on your local network or behind your corporate firewall. I guess they figured that trying to build that functionality themselves would put them several years behind the curve. What I find ... ironic/infuriating about this is that I spent the last couple of years of my career at MS trying to get various teams to work together to build exactly this sort of functionality and failed miserably because [among other things] nobody thought it was important enough and there were too many squabbling cooks in the kitchen arguing about what to build and who should build it.

Apparently somebody finally pulled their head out of their @$$, recognized this actually was important, and was willing to put their money where their mouth is. The fact that it wasn't anybody in the core Windows division [the org I was in] doesn't surprise me. I wonder how they're going to integrate this acquisition with the similar capabilities offered by Groove [also purchased by MS in the last year] and other content distribution technologies being cooked up inside MS.

Better late than never, I guess. At least it's not my problem anymore.