Wednesday, April 29, 2009

Tree and Vine: An Allegory of Attenuated Parasitism.

The town of Forrest has been around for centuries. It’s the kind of place where sons inherit their fathers’ businesses and nobody can remember when things were much different. The town has always been so small that it supported only a single shopkeeper; the current proprietor of this humble store is a tall and stable fellow named Woody, the descendant of a long line of tall and stable men just like himself who have worked hard to build and maintain what’s always been a social focus of Forrest.

Small towns like Forrest might seem peaceful to visitors, but internally there are the inevitable gripes, grievances, and grudges. For example, a recent family feud over the inheritance of their grandfather’s property has split Woody from his cousin Trey. As a consequence, Trey has recently opened a competing store directly across the street from Woody’s, thus ending Forrest’s long-established one-shop monopoly. This, as you’d suspect, has been terrible for Woody.

Forrest is not entirely full of hard-working, capable souls – for example, consider Vinnie the thief. Vinnie, like Woody, is descended from an ancient line of Forrest inhabitants. Vinnie, like Woody, pursues the same occupation as his father and his father’s father. But Vinnie, unlike Woody, isn’t exactly a clone of his ancestors.

Vinnie’s father was a notorious scoundrel. An aggressive thief and burglar, he was nevertheless as dimwitted as he was ruthless. It doesn’t take a genius to know that if you steal from the same store over and over, there might eventually be nothing left to steal. This concept seemed totally lost on Vinnie’s father, and as a result he almost caused Woody’s father to close the only store in town.

But, as suggested, Vinnie was not cut out of the same aggressive yet witless stock as his father. Indeed, Vinnie is more bargainer than terrorist -- a theme established early in his life. When Vinnie’s father began to push him into the family business, his father told him: “Go into Woody’s store, show him who you are, break a few things then take what you want and stroll out like you own the place. That’s how it works for guys like us. That’s how it has always worked.”

Young Vinnie tried. He walked into Woody’s store and looked around. He picked up a few items that looked breakable and considered tossing them to the ground. But soon he became aware of Woody’s suspicious gaze following him around and found himself placing the stock back on the shelf and averting his eyes. Finally, Vinnie decided just to come clean.

“Do you know whose son I am?” Vinnie asked Woody naively.

“Of course.”

“Then how about you just give me a hundred bucks?”

Woody thought about this. A hundred dollars was actually quite a small price to pay compared to the usual cost in damage and theft. But, a hundred dollars for what exactly? A hundred dollars just to make some kid walk away? All things being equal, Woody would just as soon have had neither Vinnie’s small-time extortions nor his father’s grand theft, but that really wasn’t one of the available options, and so the proposed agreement was the lesser of two evils.

“I’ll tell you what,” said Woody, pulling out the cash, “I’ll give you one hundred dollars a week for doing absolutely nothing as long as you don’t make the same deal with my cousin Trey across the street. This will be your territory, but the store across the street stays your father’s territory. Deal?”

“Deal,” Vinnie said, shaking Woody’s hand three times.

And with that simple verbal contract, an arrangement was made. Each week Vinnie would come in, shake Woody’s hand three times, and earn a hundred dollars.

Over time, their relationship became, if not exactly friendly, at least routine. Little by little they forgot about the initial circumstances of the arrangement and found themselves acting like civil gentlemen considering the issues of the day.

One day, a small force of bandits from a nearby town attempted to invade, seeking to steal supplies and animals. Obviously, both Woody and Vinnie were desperate to repel this invasion, and during the crisis all past discord was forgotten. Not surprisingly, between the two of them, Vinnie was the better fighter owing to the weapons and viciousness inherited from his violent family. That’s not to say that Woody didn’t engage the enemy, but violence was clearly Vinnie's comparative advantage.

A few months later, a fire broke out. As before, both Vinnie and Woody had a mutual interest in stopping this mortal threat. While Vinnie pitched in to fight the fire, this time it was Woody – with his access to buckets and hoses – who played the comparatively larger role in extinguishing this mutual threat.

And so it went. As the relationship normalized, they found that their common needs were greater than their distrust and consequently they found more and more ways that it was profitable to depend on each other’s specializations. Vinnie became not only the defender of the neighborhood but also the store’s out-of-town sales representative and Woody paid him a commission on his sales. Meanwhile, Woody’s freed resources meant that he was able to invest more in a nicer shop with more stock to the profitable benefit of both.

Generation after generation inherited the agreement, and the benefit of specializing and working together turned out to be great. The paltry hundred dollars became not so much an extortion as just one part of a complex set of mutual exchanges of goods and services. In fact, Vinnie and Woody’s sons didn’t even know why they engaged in this weekly routine of three handshakes and an exchange of cash -- maybe it was some sort of ritual of friendship; maybe it had to do with some old debt now long since irrelevant; whatever it was, it seemed a quaint part of their past. To outsiders, it was hard to imagine the shop running without two employees, and most assumed that it had always been that way and always would be.

Monday, April 27, 2009

Geometry of Biological Time, Chapter 2.

Co-tidal map from NASA via Wikicommons. The points of intersection are the "phase singularities" where the tidal phase is undefined.

Slowly making my way through this book. Chapter 2 is about phase singularities -- places where the phase of some oscillation is undefined. The coolest example is the earth's tides. The surface of the earth is a sphere ("S2" in topology speak) and the tides are defined by a phase (S1). So for each point on earth at any given moment there's a tidal phase. But S2->S1 mappings (with certain continuity assumptions) must contain phase singularities -- there must be places where you can't define the phase. Above is a map from NASA showing these places as the intersections of the co-tidal lines. You can think of the tides as sloshing around those points where the sea level doesn't change.

The chapter is mostly about biological versions of such phase singularities. Detailed examples are given from fruit fly circadian rhythms, but the technical details of the experiments were overwhelming so I didn't fully follow and decided, perhaps unwisely, to plod forward without complete understanding.

Thursday, April 23, 2009

BSTQ - Bull Shit Tolerability Quotient

There are many traits that determine someone's performance in various social settings such as school, work, the military, etc. A popular metric for correlation to "success" in such social systems is the "Intelligence Quotient" which purports to measure elements of abstract intelligence. Another metric that has gained popularity is the "Emotional Intelligence Quotient" which purports to measure the ability to perceive and manage emotions in oneself and others. Both of these metrics claim a high correlation to success in the aforementioned social institutions.

I submit that success in roles within said social systems -- student, factory worker, warrior, etc. -- requires a high tolerance of activities such as: implementing poorly articulated tasks, engaging in inane conversations, attending pointless engagements, and other time-wasting activities known informally as "Bull Shit" (BS). The ability to tolerate such BS is a very important trait that is not normally rigorously evaluated.

I propose a simple test to measure an individual's tolerance for BS: a list of increasingly inane questions and pointless tasks is given to the test taker. For example, the test might begin with questions like: "Fill in the blank: Apples are __ed" and end with stupendously pointless tasks such as "Sort these numbers from least to greatest" followed by several hundred ~20-digit numbers, with the next task saying: "Now randomize those same numbers". The Bull Shit Tolerability Quotient (BSTQ) would ignore the given answers and simply count the number of questions the test taker was willing to consider before handing the test back in frustration and declaring: "This is Bull Shit!"

If a formal BSTQ test is not available, most standardized academic tests can be used as a reasonable substitute. However, the dynamic range of such generic academic tests to measure BSTQ is low. In other words, only extreme low-scorers of a proper BSTQ test will be measurable via the number of unanswered questions on a standard academic test used as a BSTQ surrogate. Extreme caution must be used when interpreting an academic test as a BSTQ analog -- the test giver may misinterpret the number of unanswered questions as the result of the test taker's low knowledge of the test's subject matter instead of as a spectacularly low BSTQ score.

BSTQ tests can easily be made age independent. For pre-verbal children the test would involve increasingly inane tasks such as matching sets of colored blocks to colored holes and so forth. The test would simply measure how many of these tasks the pre-verbal child could engage in before he or she became irritated or upset with the examiner.

Like the IQ and EIQ, I suspect that the BSTQ will be correlated to the degree of success within many social endeavors, in particular school; however, I also suspect that there is a substantial fraction of the population with an inverse correlation between their IQ and their BSTQ scores. Of these, of particular interest are those with high IQ and low BSTQ. I would not be surprised if the population of people rated by their co-workers as "indispensable" is significantly enriched for individuals with a high IQ / low BSTQ score. Finally, I submit that these individuals are severely under-served by the educational system, which demands -- indeed glorifies -- extremely high BSTQ, especially among those with high IQ.

A BSTQ evaluation of pre-academic children might suggest which students would excel in a non-traditional educational environment where the student is allowed to select their own agendas and tasks. A very low BSTQ coupled with a very high IQ would seem to almost guarantee rebellion if a traditional educational approach is applied. Identifying individuals with exceptionally high IQ scores and exceptionally low BSTQ scores may be a valuable tool to prevent the mis-classification of such students as "trouble makers" and instead correctly classify them as "potential indispensable iconoclasts".

(This idea evolved from lunch discussion with Marvin today, so thanks Marvin!)

Monday, April 20, 2009

Belief in torture's efficacy = Belief in witchcraft

This piece on Slate about the history of witch hysteria demonstrates to me the absolute absurdity of torture. Anyone who thinks that torture techniques such as waterboarding are effective tools of interrogation must also believe in witches. Why? Because throughout history (and into the present day) people have confessed to being witches under torture. Therefore, if you believe that torture works to "extract the truth" then all those people who confessed must really have been witches!

This demonstrates the insidious evil nature of torture. Not only can the torturer come to a false conclusion -- the one they want -- but even the tortured can come to hold the same false ideas. In other words, torture isn't merely morally reprehensible, but it doesn't even work!

Indeed, suppose you were "the Devil" and your goal was to explicitly foil legitimate interrogations because, as the devil, you had a sick desire to ensure chaos reigns throughout the world. As such, you couldn't come up with a "better" interrogation technique than torture. The questioner ends up reinforcing the ideas they started with and thereby ignores possibly valid alternative leads and the suspect may end up believing the planted ideas thereby reinforcing the incorrect assumptions of the torturer. If it weren't horrific, it would be the plot of a goofball comedy where two characters engage in a circular conversation convincing themselves of something absurd like up is down or love is hate. A "real" malevolent Devil would watch humans engaged in such cruel pointless floundering and be amused to no end. Will we stupid humans ever stop entertaining "the Devil" by engaging in this ghastly charade given the obvious pointlessness and immorality of it? Signs are not hopeful.

Saturday, April 18, 2009

Shopping in the Science Supermarket

"Can you tell me where the mustard is?", I asked the nerdy looking storekeeper.

"It's next to the mayonnaise."

"Um okay....... But where is the mayonnaise?", I replied peevishly.

"Near both the ketchup and the soup."

"Again, this isn't really helping me. Maybe some sort of landmark independent of the foodstuffs themselves would be helpful?"


"I mean, really? All you can give me is the location of everything in terms of other things! I want mustard and I'm standing next to radishes; what am I supposed to do?!"

"Radishes are near the soup!"


"Soups..." he directed me like the slow child I was, "... are... near... the... mayonnaise."

And so I headed towards the soup. Turns out something called "onions" are also near the soup and the smell of these caught my attention: so pungent yet sweet. I peeled one back to see what was inside and what I found was... another onion! Onions are made of onions?! How can that be? So I tore open the onion and found onions all the way down.

That was 30 years ago. Someone just asked me where the mustard is. I don't know; I never did find it. But I told him, "The mayonnaise is near the bread."

Friday, April 17, 2009

Tree logic

The pecan in front of my house is slow. I think it might be, you know, one of the thicker trunks in the forest. The tree in the back yard tells me that it's time to blossom, flower, leaf out, spread its tree-semen with abandon. I say delicately to the front tree, "Look, I don't want to criticize, but, you know, the tree in the back..."

The front tree is having none of this; and, frankly, it resents being judged. "Look, just stop right there monkey," it says to me "I don't need to hear your thoughts on this. I was planted here 100 years ago. I didn't ask to be put here. I'm doing the best I can. I'm from Illinois, I know about snow. You ever had snow on your new leaves? No, you haven't because you're an ape. Trust me, you don't want to get caught out in that. I'm not going to get caught out in that."

"But in the 100 years you've been here has it ever snowed in April?" I queried cautiously.

"I got my ways. I've never been caught out in the snow."

"But it doesn't snow here in spring."

"And I've never been caught out in it."

"But if you don't get a move on, you're going to lose your chance to pollinate the other trees. I mean, don't you care about your legacy?"

"I'm not interested in having children that are so dumb as to leaf out too early and get caught in the snow. I don't want to breed with those premature blossomers, like your friend back there -- that's reckless risk taking. Rather not have children than have stupid children," the tree sulked.

"But it doesn't snow here in April." I repeated.

"And I've never been caught out in it."

Thursday, April 16, 2009


The first of a few new rugs has arrived. Thanks to Amberlee for all the help in finding these. I especially like the runner in the entrance.

Wednesday, April 15, 2009

Molecular computers -- A historical perspective. Part 2

We left off last time discussing the precision of an analog signal.

Consider a rising analog signal that looks like the following ramp.

Notice that there's noise polluting this signal. Clearly, this analog signal is not as precise as it would be without noise. How do we quantify this precision? The answer was described in the early 20th century and is known as the Shannon-Hartley theorem. When the receiver decodes this analog variable, what is heard is not just the intended signal but the intended signal plus the noise (S+N); this value can be compared to the level of pure noise (N). Therefore the ratio (S+N)/N describes how many discrete levels are available in the encoding.

The encoding on the left is very noisy and therefore only 4 discrete levels can be discerned without confusion; the one in the middle is less noisy and permits 8 levels; on the right, the low noise permits 16 levels. The number of discrete encodable levels is the precision of the signal and is conveniently measured in bits -- the number of binary digits it would take to encode this many discrete states. The number of binary digits needed is given by the log base 2 of the number of states, so we have log2( (S+N)/N ), which is usually algebraically simplified to log2( 1+S/N ).
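As a quick sanity check, the level counting above is easy to reproduce in a few lines of Python (a sketch I've added; the function name and the S/N values are mine, picked to match the three hypothetical encodings):

```python
import math

def precision_bits(signal, noise):
    # Distinguishable levels = (S+N)/N, so precision in bits = log2(1 + S/N).
    return math.log2(1 + signal / noise)

# Three encodings with shrinking noise, as in the figure: 4, 8, and 16 levels.
for s, n in [(3.0, 1.0), (7.0, 1.0), (15.0, 1.0)]:
    levels = (s + n) / n
    print(f"S/N = {s/n:4.1f} -> {levels:2.0f} levels = {precision_bits(s, n):.0f} bits")
```

Note that doubling the number of distinguishable levels buys exactly one more bit.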

It is important to note that although Shannon and Hartley (working separately) developed this model in the context of electrical communication equipment, there is nothing in this formulation that speaks of electronics. The formula is a statement about information in the abstract -- independent of any particular implementation technology. The formula is just as useful for characterizing the information content represented by the concentration of a chemically-encoded biological signal as it is for the voltage driving an audio speaker or the precision of a gear-work device.

We're not quite done yet with this formulation. The log2(1+S/N) formula speaks of the maximum possible information content in a channel at any given moment. But signals in a channel change; channels with no variation are very dull!

(A signal with no variation is very dull. Adapted from Flickr user blinky5.)

To determine the capacity of a channel one must also consider the rate at which it can change state. If, for example, I used the 2 bit channel from above I could vary the signal at some speed as illustrated below.

(A 2-bit channel changing state 16 times in 1 second.)

This signal is thus sending 2 bits * 16 per second = 32 bits per second.
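In code, that arithmetic is just bits-per-state times state changes per second (a trivial sketch with the numbers from the figure):

```python
import math

levels = 4                                  # the 2-bit channel from the noise example
bits_per_state = math.log2(levels)          # 2.0 bits per state
changes_per_second = 16                     # how fast the channel can switch state
print(bits_per_state * changes_per_second)  # -> 32.0 bits per second
```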

All channels -- be they transmembrane kinases, hydraulic actuators, or telegraph wires -- have a limited ability to change state. This capacity is generically called "bandwidth", but that term is a bit oversimplified, so let's look at it more carefully.

It is intuitive that real-world devices can not instantaneously change their state. Imagine, for example, inflating a balloon. Call the inflated balloon "state one". Deflate it and call this "state zero". Obviously there is a limited rate at which you can cycle the balloon from one state to the other. You can try to inflate the balloon extremely quickly by hitting it with a lot of air pressure but there's a limit -- at some point the pressure is so high that the balloon explodes during the inflation due to stress.

(A catastrophic failure of a pneumatic signalling device from over-powering it.)

Most systems are like the balloon example -- they respond well to slow changes and poorly to fast changes. Also like the balloon, most systems fail catastrophically when driven to the point where the energy flux is too high -- usually by melting.

(A device melted from overpowering it. Adapted from flickr user djuggler.)

Consider a simple experiment to measure the rate at which you can switch the state of a balloon. Connect the balloon to a bicycle pump and drive the pump with a spinning wheel. Turn the wheel slowly and write down the maximum volume the balloon obtains. Repeat this experiment for faster and faster rates of spinning the wheel. You'll get a graph as follows.

(Experimental apparatus to measure the cycling response of a pneumatic signal.)

(The results from the balloon experiment where we systematically increased the speed of cycling the inflation state.)

On the left side of the graph, the balloon responds fully to the cycling and thus has a good signal (S). But at these slow speeds very few bits can be transmitted, so not a lot of information is sent despite the balloon's good response. Further to the right, the balloon still responds well and bits are now being sent much more rapidly, so a lot of information flows at these speeds. But at the far right of the graph, when the cycling is extremely quick, the balloon's response falls off and finally hits zero when it pops -- and that defines the frequency limit.

The total channel capacity of our balloon device is an integral along this experimentally sampled frequency axis: at each frequency we weight by log2( 1+S/N ), where S is now the measured response from our experiment, which we'll call S(f) = "the signal at frequency f". We didn't bother to measure noise as a function of frequency in our thought experiment, but we'll imagine we can do that just as easily, giving a second graph N(f) = "the noise at frequency f". The total information capacity (C) of the channel is then the integral of log2( 1+S(f)/N(f) ) across the frequency samples we took, from zero up to the bandwidth limit (B) where the balloon popped.
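Numerically, that capacity integral looks like the following Python sketch. The S(f) and N(f) curves here are invented stand-ins for the balloon measurements (a linearly falling response and a flat noise floor), not real data:

```python
import math

def channel_capacity(S, N, B, steps=10_000):
    # Approximate C = integral from 0 to B of log2(1 + S(f)/N(f)) df
    # by the midpoint rule over `steps` slices.
    df = B / steps
    return sum(math.log2(1 + S(df * (i + 0.5)) / N(df * (i + 0.5))) * df
               for i in range(steps))

B = 2.0                           # cycles/second at which the balloon pops
S = lambda f: 15.0 * (1 - f / B)  # response falls off as cycling speeds up
N = lambda f: 1.0                 # flat noise floor

print(f"Capacity ~ {channel_capacity(S, N, B):.2f} bits/second")
```

With these made-up curves the integral comes out to roughly 5.6 bits per second; swap in your own measured S(f) and N(f) and the same routine applies.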

If you want to characterize the computational/communication aspects of any system you have to perform the equivalent of this balloon thought experiment. Electrical engineers all know this by heart as they've had it beaten into them since the beginning of their studies. But, unfortunately, most biochemists, molecular biologists, and synthetic biologists have never even thought about it. Hopefully that will start to change. As we learn more about biological pathways and become more sophisticated engineers of those pathways, our understanding will remain unnecessarily shallow until we come to universally appreciate the importance of these characteristics.

Next, amplifiers and digital devices. To be continued...

Tuesday, April 14, 2009

Molecular computers -- A historical perspective. Part 1

I've been having discussions lately with Andy regarding biological/molecular computers and these discussions have frequently turned to the history of analog and digital computers as a reference -- a history not well-known by biologists and chemists. I find writing blog entries to be a convenient way to develop bite-sized pieces of big ideas and therefore what follows is the first (of many?) entries on this topic.

In order to understand molecular computers -- be they biological or engineered -- it is valuable to understand the history of human-built computers. We begin with analog computers -- devices that are in many ways directly analogous to most biological processes.

Analog computers are ancient. The earliest surviving example is the astonishing Antikythera Mechanism (watch this excellent Nature video about it). Probably built by the descendants of Archimedes' school, this device is a marvel of engineering that computed astronomical values such as the phase of the moon. The device predated equivalent devices by at least a thousand years -- thus furthering Archimedes' already incredible reputation. Mechanical analog computers all work by the now familiar idea of inter-meshed gear-work -- input dials are turned and the whirring gears compute the output function by mechanical transformation.

(The Antikythera Mechanism via WikiCommons.)

Mechanical analog computers are particularly fiddly to "program", especially to "re-program". Each program -- as we would call it now -- is hard-coded into the mechanism; indeed, it is the mechanism. Rearranging the gear-work to represent a new function requires retooling each gear, not only to change the gears' relative sizes but also because the wheels will tend to collide with one another if not arranged just so.

Despite these problems, mechanical analog computers advanced significantly over the centuries and by the 1930s sophisticated devices were in use. For example, shown below is the Cambridge Differential Analyzer that had eight integrators and appears to be easily programmable by nerds with appropriately bad hair and inappropriately clean desks. (See this page for more diff. analyzers including modern reconstructions).

(The Cambridge differential analyzer. Image from University of Cambridge via WikiCommons).

There's nothing special about using mechanical devices as a means of analog computation; other sorts of energy transfer are equally well suited to building such computers. For example, MONIAC, built in 1949, was a hydraulic analog computer that simulated an economy by moving water from container to container via carefully calibrated valves.

(MONIAC. Image by Paul Downey via WikiCommons)

By the 1930s electrical amplifiers were being used for such analog computations. An example is the 1933 Mallock machine, which solved simultaneous linear equations.

(Image by University of Cambridge via WikiCommons)

Electronics have several advantages over mechanical implementations: speed, precision, and ease of arrangement. For example, unlike gear-work, electrical computers can have easily re-configurable functional components. Because the interconnecting wires have small capacitance and resistance compared to the functional parts, the operational components can be conveniently rewired without having to redesign the physical aspects of the mechanism; i.e., unlike gear-work, wires can easily avoid collision.

Analog computers are defined by the fact that the variables are encoded by the position or energy level of something -- be it the rotation of a gear, the amount of water in a reservoir, or the charge across a capacitor. Such simple analog encoding is very intuitive: more of the "stuff" (rotation, water, charge, etc.) encodes more of the represented variable. For all its simplicity, however, such analog encoding has serious limitations: range, precision, and serial amplification.

All real analog devices have limited range. For example, a water-encoded variable will overflow when the volume of its container is exceeded.

(An overflowing water-encoded analog variable. Image from Flickr user jordandouglas.)

In order to expand the range of variables encoded by such means, all of the containers -- be they cups, gears, or electrical capacitors -- must be enlarged. Building every variable for the worst-case scenario has obvious cost and size implications. Furthermore, such simple-minded containers only encode positive numbers. Encoding negative values requires a sign flag or a second complementary container; either way, encoding negative numbers significantly reduces the elegance of such methods.

Analog variables also suffer from hard-to-control precision problems. It might seem that an analog encoding is nearly perfect -- for example, the water level in a container varies with exquisite precision, right? While it is true that the molecular resolution of the water in the cup is incredibly precise, an encoding is only as good as the decoding. For example, a water-encoded variable might use a small pipe to feed the next computational stage, and as the last drop leaves the source reservoir a meniscus will form due to water's surface tension; therefore the quantity of water passed to the next stage will differ from what was stored in the prior stage. This is but one example of many such real-world complications. For instance, electrical devices suffer from thermal effects that limit precision by adding noise. Indeed, the faster one runs an electrical analog computer, the more heat is generated and the more noise pollutes the variables.

(The meniscus of water in a container -- one example of the complications that limit the precision of real-world analog devices. Image via WikiCommons).

Owing to such effects, the precision of all analog devices is usually much less than one might intuit. The theoretical limit of the precision is given by Shannon's formula. Precision (the amount of information encoded by the variable, measured in bits) is log2( 1+S/N ). It is worth understanding this formula in detail as it applies to any sort of information storage and is therefore just as relevant to a molecular biologist studying a kinase as it is to an electrical engineer studying a telephone.
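To see just how much less than intuition suggests, here's a quick Python illustration (the S/N values are arbitrary round numbers of my choosing):

```python
import math

# Even an enormous signal-to-noise ratio yields surprisingly few bits.
for snr in [10, 100, 1000, 1_000_000]:
    print(f"S/N = {snr:>9} -> about {math.log2(1 + snr):5.1f} bits of precision")
```

A million-to-one S/N -- far better than any real water level or voltage -- still encodes fewer than 20 bits.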

.... to be continued.

Utility yard fence

In the last few days I've finished up the fence line that separates the backyard from the utility yard. This involved staining more boards with Pinofin which is as malodorous as it is beautiful. Thanks to Jules for the help with staining! Fortunately she is hard-of-smelling so didn't notice how bad it was!

Saturday, April 11, 2009

Finished workshop drawers

Today I finished attaching the hardware to my new tool drawers. I'm stupidly excited about them as I can put away all my tools and clear out a lot of clutter from my shop.

We ordered the boxes from Drawer Connection. They really did a great job; they are perfectly square, dovetail joined, glued, sanded, and polyed. As Bruce said, "I'll never build another box again." It's a demonstration to me of how custom web-based CNC construction is the future of a lot of products. We ordered about 30 boxes of all different sizes and the total was only about $1100 including shipping. There's no possible way we could have made them for that.

Thursday, April 9, 2009

Finished utility yard

Finished up the utility yard today, which involved raising the AC units and changing the grade a little bit. This weekend I'm going to stain the pickets and rebuild the rear fence line.

Tuesday, April 7, 2009

The 21st Century Chemical / Biological Lab.

White Paper: The 21st Century Chemical / Biological Lab.

Electronic and computer engineering professionals take for granted that circuits can be designed, built, tested, and improved in a very cheap and efficient manner. Today, the electrical engineer or computer scientist can write a script in a domain specific language, use a compiler to create the circuit, use layout tools to generate the masks, simulate it, fabricate it, and characterize it all without picking up a soldering iron. This was not always the case. The phenomenal tool-stack that permits these high-throughput experiments is fundamental to the remarkable improvements of the electronics industry: from 50-pound AM tube-radios to iPhones in less than 100 years!

Many have observed that chemical (i.e. nanotech) and biological engineering are to the 21st century what electronics was to the 20th. That said, chem/bio labs – be they in academia or industry – are still in their “soldering iron” epoch. Walk into any lab and one will see every experiment conducted by hand, transferring micro-liter volumes of fluid in and out of thousands of small ad-hoc containers using pipettes. This sight is analogous to what one would have seen in electronics labs in the 1930s – engineers sitting at benches with soldering iron in hand. For the 21st century promise of chem/nano/bio engineering to manifest
itself, the automation that made large-scale electronics possible must similarly occur in chem/bio labs.

The optimization of basic lab techniques is critical to every related larger-scale goal, be it curing cancer or developing bio-fuels. All such application-specific research depends on experiments, and therefore reducing the price and duration of such experiments by large factors will not only improve efficiency but also make possible work that was previously impossible. While such core tool paths are not necessarily “sexy”, they are critical. Furthermore, a grand vision of chem/bio automation is one that no single commercial company can tackle, as such a vision requires both a very long time commitment and a very wide view of technology. It is uniquely suited to the academic environment as it both depends upon and affords cross-disciplinary research towards a common, if loosely defined, goal.

Let me elucidate this vision with a science-fiction narrative:

Mary has a theory about the effect of a certain nucleic acid on a cancer cell line. Her latest experiment involves transforming a previously created cell line by adding newly purchased reagents, an experiment that involves numerous controlled mixing steps and several purifications. In the old days, she would have begun her experiment by pulling out a pipette, gathering reagents from the freezer, from her bench, and from her friend's lab, and then performing her experiment in an ad hoc series of pipette operations. But today, all that is irrelevant; today, she never leaves her computer.

She begins the experiment by writing a protocol in a chemical programming language. Like the high-level languages used by electrical and software engineers for decades, this language has variables and routines that allow her to easily and systematically describe the set of chemical transformations (i.e. “chemical algorithms”) that will transpire during the experiment. Many of the subroutines of this experiment are well-established protocols such as PCR or antibody separation, and for those Mary need not rewrite the code but merely link in the subroutines for these procedures just as a software engineer would. When Mary is finished writing her script, she compiles it. The compiler generates a set of fluidic gates that are then laid out using algorithms borrowed from integrated-circuit design. Before realizing the chip, she runs a simulator and validates the design before any reagents are wasted – just as her friends in EE would do before they sent their designs to “tape out.” Because she can print the chip on a local printer for pennies, she is able to print many identical copies for replicate experiments. Furthermore, because the design is entirely in a script, it can be reproduced next week, next year, or by someone in another lab. The detailed script means that Mary’s successors won’t have to interpret a 10-page hand-waving explanation of her protocol translated from her messy lab notes in the supplementary methods section of the paper she publishes – her script *is* the experimental protocol. Indeed, this abstraction means that, unlike in the past, her experiments can be copyrighted or published under an open-source license just as code from software or chip design can be.
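To make the idea concrete, here is a minimal sketch of what such a chemical programming language might look like if embedded in Python. Every name here (Protocol, mix, purify, pcr) is hypothetical, invented purely for illustration; no such language exists yet.

```python
# Hypothetical embedded DSL: a Protocol records abstract fluidic steps
# that a compiler could later map onto fluidic gates.

class Protocol:
    """Records a sequence of abstract fluidic operations."""
    def __init__(self, name):
        self.name = name
        self.steps = []

    def mix(self, a, b, ratio=1.0):
        self.steps.append(("mix", a, b, ratio))
        return f"mix({a},{b})"

    def purify(self, sample, method):
        self.steps.append(("purify", sample, method))
        return f"{method}({sample})"

def pcr(p, template, primers):
    """A 'linked-in' subroutine, analogous to a library call."""
    product = p.mix(template, primers)
    p.steps.append(("thermocycle", product, 30))  # 30 cycles
    return product

p = Protocol("marys_experiment")
amplicon = pcr(p, "cell_line_dna", "primer_set_7")
cleaned = p.purify(amplicon, "antibody_separation")
print(len(p.steps))  # 3 recorded steps: mix, thermocycle, purify
```

The point of the sketch is the shape of the workflow, not the chemistry: established procedures like PCR become ordinary subroutines, and the resulting step list is exactly the kind of intermediate representation a compiler could lay out as fluidic gates.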

Designing and printing the chip is only the first step. Tiny quantities of specific fluids need to be moved into and out of this chip – the “I/O” problem. But Mary’s lab, like any, both requires and generates thousands of isolated chemical and biological reagents, each of which has to be stored separately in a controlled environment and manipulated without risking cross-contamination. In the old days, Mary would have used hundreds of costly sterilized pipette
tips as she laboriously transferred tiny quantities of fluid from container to container. Each tip would be wastefully disposed of despite the fact that only a tiny portion of it was actually contaminated – such was the cost when everything had to be large enough to be manipulated by hand. In the old days, each of the target containers – from large flasks to tiny plastic vials – would have had to be hand-labeled, resulting in benches piled with tiny cryptic scribbled notes and all of the confusion and inefficiency that comes from such clutter. Fortunately for Mary, today all of the stored fluids for her entire lab are maintained in a single fluidic database; she never touches any of them. In this fluidic database, a robotic pipette machine addresses thousands of individual fluids. These fluids are stored inside tubes that are spooled off of a single supply and cut to length and end-welded by the machine as needed. Essentially, this fluidic database has merged the concepts of “container” and “pipette” – it simply partitions out a perfectly sized container on demand, and therefore the consumables are cheaper and less wasteful. Also, the storage of these tube-containers is extremely compact in comparison to the endless bottles (mostly filled with air) that one would have seen in the old days. The fluid-filled tubes are simply wrapped around temperature-controlled spindles and, just like an electronic database or disk drive, the system can optimize itself by “defragmenting” its storage spindles, ensuring there’s always efficient usage of the space. Furthermore, because the fluidic
database knows the manifest of its contents, all reagent accounting can be automated and optimized.
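The analogy between the fluidic database and a disk drive can be made literal with a toy allocator. The sketch below is purely illustrative (the class and method names are my own inventions): segments of tubing are "cut" off one spool on demand, and freeing a reagent leaves a gap that a defragmentation pass repacks, just as described above.

```python
# Toy model of the fluidic database: one spool of tubing, segments
# allocated on demand, and a defrag pass that repacks live segments.

class TubeSpool:
    def __init__(self, length_mm):
        self.length_mm = length_mm
        self.segments = {}          # reagent name -> (offset, size)
        self.next_free = 0

    def allocate(self, reagent, size_mm):
        """Cut and end-weld a segment just big enough for this reagent."""
        if self.next_free + size_mm > self.length_mm:
            raise MemoryError("spool exhausted")
        self.segments[reagent] = (self.next_free, size_mm)
        self.next_free += size_mm

    def free(self, reagent):
        del self.segments[reagent]   # leaves a gap on the spool

    def defragment(self):
        """Repack live segments contiguously, like a disk defrag."""
        offset = 0
        for reagent, (_, size) in sorted(self.segments.items(),
                                         key=lambda kv: kv[1][0]):
            self.segments[reagent] = (offset, size)
            offset += size
        self.next_free = offset

spool = TubeSpool(1000)
spool.allocate("taq_polymerase", 50)
spool.allocate("primer_set_7", 20)
spool.allocate("buffer", 100)
spool.free("primer_set_7")
spool.defragment()
print(spool.next_free)  # 150: the gap left by the freed segment is gone
```

Because every allocation and free passes through the spool object, the manifest of contents falls out for free, which is exactly what makes automated reagent accounting possible.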

Mary has her experiment running. But moving all these fluids around is just a means to an end. Ultimately she needs to collect data about the performance of her new reagent on the cancer line in question. In the old days, she would have run a gel, used a fluorescent microscope, or applied any number of other visualization techniques to quantify her results – and any of these measurements would have required a large and expensive machine. But today, most of these measurements are printed directly on the same chip as the fluidics using printable chemical/electronic sensors, and those that can’t be printed are interfaced to a standardized re-usable sensor array. The development of those standards was crucial to the low capital cost of her equipment. Before far-sighted university engineering departments set those standards, each diagnostic had its own proprietary interface and therefore the industry was dominated by an oligopoly of a few companies. But now, the standards have promoted competition, and thus the price and capabilities of all the diagnostics have improved.

As Mary’s chemical program executes on her newly minted chip, she gets fluorescent read-outs on one channel and antibody detection on another – all such diagnostics were written into her experimental program in the same way that a “debug” or “trace” statement is placed into a software program. After her experiment runs, the raw sensor data is uploaded to the same terminal where she wrote the program, and she begins her analysis without getting out of her chair.
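The "diagnostics as trace statements" idea can be sketched in a few lines. As before, every name here (trace, run_step, the channel names) is hypothetical: the point is only that a readout is an instrumentation call interleaved with the chemical steps, exactly like a log statement in software.

```python
# Readouts as trace calls: each one requests a measurement on a sample
# and tags it with the channel the data should arrive on.

readouts = []

def trace(channel, sample):
    """Request a measurement on `sample`; data lands on `channel`."""
    readouts.append((channel, sample))

def run_step(name):
    """Stand-in for an actual chemical step; returns its product."""
    return f"{name}_product"

s1 = run_step("transform")
trace("fluorescence", s1)        # like a debug print after step 1
s2 = run_step("incubate")
trace("antibody_detect", s2)     # and another after step 2

print([ch for ch, _ in readouts])  # ['fluorescence', 'antibody_detect']
```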

After the experiment, the disposable chip and the temporary plumbing that connected to it are all safely incinerated to avoid any external contamination. In the old days, such safety protocols would have had to be known by every lab member, and this would have required a time-consuming certification process. But today, all of these safety requirements are enforced by the equipment itself, and therefore there’s much less risk of human error. Furthermore, because of the
enhanced safety and lower volumes, some procedures that were once classified as bio-safety level 3 are now BSL 2 and some that were 2 are now 1, meaning that more labs are available to work on important problems.

Mary’s entire experiment from design to data acquisition took her under an hour – compared to roughly a week with the old manual techniques. Thanks to all of this automation, Mary has evaluated her experiment and moved on to her next great discovery much faster than would have been possible before. Moreover, because so little fluid was used in these experiments, her reagents last longer and therefore the cost has also fallen. Mary can contemplate larger-scale experiments than anybody dreamed of just a decade ago. Mary also makes far fewer costly mistakes because of the rigor imposed by writing and validating the entire experimental script instead of relying on ad hoc procedures. Finally, the capital cost of the equipment itself has fallen due to standardization, competition, and economies of scale. The combined result of these effects is to make the acquisition of chemical and biological knowledge orders of magnitude faster than was possible just decades ago.

Monday, April 6, 2009

Macro-scale examples of chemical principles

I like macro-scale examples of chemical principles. Here are two I've noticed recently.

I was very slowly pouring popcorn into a pot with a little bit of oil. The kernels did not distribute themselves randomly but instead formed long-chain aggregations because, apparently, the oil made them more likely to stick to each other than to stand alone. This kind of aggregation occurs frequently at the molecular scale when a molecule has an affinity for itself.

This is wheelbarrow chromatography. During a rain, water and leaves fell into this wheelbarrow. Notice that the leaves and the stems separated; apparently the stems are lighter than water and the leaves are heavier. This sort of "phase separation" trick is frequently used by chemists to isolate one type of molecule from another in a complex mixture. Sometimes the gradient of separation might be variable density as in this example, but other times it might be hydrophobicity or affinity to an antibody or many other types of clever chemical separations known generically as "chromatography". Note that the stems clustered. Like the popcorn above, apparently there is some inter-stem cohesion force that results in aggregation as occurs in many chemical solutions.

Porch branches

Bruce and I finished up the porch branches on Friday; they have not yet been stained, so the color is different. It's funny -- this is one of the very first details I thought of for the house design and one of the very last to be implemented, so for me this small detail is very important: it collapses some sort of psychic "todo" stack and thereby provides the relief that one feels in crossing out a complicated set of tasks (never mind that the list has grown substantially since then! :-)

Geometry of Biological Time, Chapt 1.

I've started reading "Geometry of Biological Time" by A. T. Winfree (father of Erik). Sometimes one finds just the right book that fills in the gaps of one's knowledge; this book is just right for me at this moment, as if I were fated to read it.

It begins with an excellent introduction to topological mappings. I had picked up some of the ideas by osmosis, but the first 20 pages were an excellent and helpful series of discussions that solidified my understanding of the subject. He lucidly expands on about 15 topological mappings of increasing complexity. For each, he provides intuitive examples with lovely side discussions, such as relating the S1 -> I1 mapping to the international-date-line problem and the astonishment of Magellan's expedition at the loss of a day upon the first round-the-world trip. (I first heard of this idea as the climax of the plot of "Around the World in Eighty Days".) He introduces the idea that all such mapping problems arise as singularities in the mapping functions. Again, this was something I half-understood intuitively, and it was very helpful to have it articulated clearly.
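The date-line example can be stated compactly. The following is my own reconstruction of the argument, not a quotation from the book:

```latex
% Longitude lives on the circle $S^1$; local clock time lives on the
% interval $I^1 = [0, 24)$ hours. The natural assignment is
\[
  f : S^1 \to I^1, \qquad f(\phi) = \frac{24\,\phi}{2\pi}\ \text{hours},
\]
% but following $f$ continuously once around the globe accumulates
\[
  \Delta f = \oint \mathrm{d}f = 24\ \text{hours} \neq 0,
\]
% so any single-valued $f$ must jump somewhere. That jump is the
% International Date Line, and Magellan's crew ``lost'' a day by
% carrying the continuous branch of $f$ all the way around.
```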

I now realize that in previous amorphous computing experiments described in this blog, I had been exploring S1 x S1 -> S1 mappings (circular space by oscillator space mapping to a visible phase). This S1xS1->S1 mapping is exactly where he heads after his introduction as a place of interest. In other words, I had ended up by intuition exactly where he did.

It's a very long and dense book; if I can make my way through it, it may generate a lot of blog entries!

Wednesday, April 1, 2009

House projects

Arch cut for the handrail on the main stairs.

Kitchen nearly complete.

Bruce gets out the Gallagher Saw!

Bruce and I start mounting branches on the front porch.

This one branch was a bitch; we gnawed away at it with probably 30 cuts before we got it to fit right!

We wrapped up many of the recent house projects. The upstairs porch is screened in and has a pane of glass on one side which significantly reduced the noise from the neighbor's AC unit -- now I can have my bedroom windows open!

The rear kitchen is done except for drawer pulls and a couple of electric outlets.

We cut an arch into one of the supports alongside the staircase that had been bothering me for a long time because it didn't leave room for your hand to slide along the handrail. Bruce and I came up with this cut-arch solution, which I also think looks really cool -- it opened up the space a lot.

We started adding branches to the front porch beams, which was part of the original plan but we had never gotten around to. One of these branches was much harder than the others; we ended up making many small cuts until we got it positioned just right.