Saturday, December 6, 2008

Planter progress


A little progress on the top of the planter this morning.

Friday, December 5, 2008

More AC experiments





I'm starting to have fun exploring the possibilities of this AC simulator. Above are spatially stable patterns using a bi-stable latch and small random initial conditions (approximating the noisy conditions of uninitialized amps). In the first picture there is no diffusion, so each parcel of space commits to one of the two states randomly. In the second picture, with diffusion, larger areas that by chance share a state tend to recruit their neighbors into that state. But all of this recruitment must happen early, because the gain on the latches eventually wins, at which point there's no changing anyone's state (like an election). Thus, by dialing the ratio of diffusion to latch gain, you can choose the mean size of the features, which is a cool phenotype all by itself. For example, imagine that this were a self-organized filter -- that one parameter could allow the construction of different kinds of mechanical particle filters.
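The feature-size dialing can be sketched with a generic bistable reaction-diffusion model -- an Allen-Cahn-style stand-in, not the actual AC simulator. Every parameter and the domain-wall-counting metric here are illustrative assumptions: each point relaxes toward one of two states at a rate set by the latch gain, while diffusion lets neighbors recruit each other.

```python
import numpy as np

def latch_field(D, gain=1.0, n=256, dt=0.05, steps=4000, seed=0):
    """1D bistable 'latch' field: du/dt = gain*(u - u^3) + D * laplacian(u).
    Small random initial conditions stand in for uninitialized-amp noise."""
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(n)
    for _ in range(steps):
        lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)  # periodic boundary
        u = u + dt * (gain * (u - u**3) + D * lap)
    return u

def domain_walls(u):
    """Count sign changes = boundaries between the two latched states."""
    s = np.sign(u)
    return int(np.sum(s != np.roll(s, 1)))

# More diffusion relative to gain -> larger features -> fewer walls.
fine   = domain_walls(latch_field(D=0.05))
coarse = domain_walls(latch_field(D=5.0))
```

With the gain fixed, sweeping D is the one knob that sets the mean feature size, which is the "election" effect in the post: recruitment only matters before the cubic term locks everyone in.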



In this picture I've started to combine features. The left and center are two independent ring-oscillators with noisy initial conditions which create these interesting patterns, as I've shown previously. (Although I'm still not positive they aren't artifacts, I'm starting to get a theory about how they form, and I'm going to test those ideas with controlled experiments tomorrow.) On the right is the product of the two oscillators, which results in interesting spatio-temporal patterns. Like the latches above, these patterns are uncontrollable in all but their gross properties because the pattern's position is the result of what amounts to "fossilized noise". In other words, the asymmetries at t=0 are amplified/converted into patterns at later times. That said, the form of the patterns is inspirational -- it hints at what is possible with more information-rich initial conditions. For example, I now have an inkling of how to partition space into integer sub-divisions (like fingers on a hand) without explicitly putting them there -- I'll be trying that soon.

Wednesday, December 3, 2008

Oscillator + Diffusion + Noise = Pattern


(Ring-oscillator with diffusion; x-axis: space, y-axis: time)

After an incredible multi-day pain-in-the-ass getting Matlab installed, I'm able to start exploring some of the amorphous computations possible with this toy model I've been playing with. (Previous results came from running Matlab over X, which was painfully slow.) The above image is a simple ring-oscillator with diffusion, initialized with small random values. Random initial values seem likely in a molecular implementation: the inputs to the molecular amplifiers would be un-initialized, so small stochastic deviations would dominate.

I know that simple processes can produce complicated structures as Wolfram is wont to repeat, but it's still astonishing when you see it. I mean, this thing has no clock, no memory, no boundaries, no initial conditions (just background noise) and a very simple oscillator; it doesn't get much simpler than that. I think the result is kind of beautiful, sinuous, like a tree made of waves. Maybe I'll do my next door panel like this.

All that said, I'm not positive that the patterns aren't an artifact of the integrator. Since I partition space up uniformly, they might be a result of that. I need to run a test where I reduce the spatial step and proportionately reduce the concentrations, but my code isn't set up for that yet.
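The standard way to rule out a grid artifact is a refinement test: run the same problem at dx and dx/2 and check that the answers agree. Here's a minimal sketch on a toy diffusion problem (illustrative only -- not the actual simulator code, and the grid sizes and tolerance are arbitrary choices):

```python
import numpy as np

def diffuse(n, D=0.01, T=1.0):
    """Forward-Euler diffusion on a periodic grid of n cells over [0, 1)."""
    dx = 1.0 / n
    dt = 0.25 * dx * dx / D        # well inside the explicit stability limit
    steps = int(round(T / dt))
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)      # smooth initial condition
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / (dx * dx)
        u = u + dt * D * lap
    return u

coarse = diffuse(64)
fine = diffuse(128)[::2]           # sample the fine grid at the coarse points
err = np.max(np.abs(coarse - fine))
# If err stayed large as dx shrank, the "pattern" would be a grid artifact.
```

A pattern that survives refinement (err shrinking as dx does) is physics; one whose wavelength tracks the grid spacing is the integrator talking.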

Sunday, November 30, 2008

Door panels 2nd panel and stain


It's taking more than an hour per panel, but it feels like I could get it under an hour once I get good at it. There are nearly 100 panels in the house, so this is obviously not going to be something I do all myself. I think I'll finish up this door as a prototype and then wait until I buy a big mill or have a mill shop do the rest. Either that, or hire some hourly labor for about 100 hours.

Friday, November 28, 2008

Molecular and Cellular Videos (External Link)

http://www.molecularmovies.com/showcase/index.html

OK, I thought I'd keep my blog mostly about my own projects, but sometimes one runs across something really cool, and blogging about it increases its Google score. My friend Eric Siegel at the NY Hall of Science sent me this link to a large collection of nice molecular and cellular animation videos.

I love videos like this. That said, I do have a very big complaint about the non-simulations (most of them) -- they make molecules appear to be intelligent agents. Molecules do not make deliberate choices; they do not see a complex forming and then think to themselves: "Hey, I think I'll whiz over there and insert myself into that growing structure!" For example, see the microtubule growth in Inner Life.

It is completely understandable that the animators of these videos have a hard time capturing the reality of molecules: the velocities at which things happen at the nano-scale are extremely difficult to comprehend, so it is hard to create these animations without resorting to the "cheat" of "deliberateness". Unfortunately this cheat creates a major confusion -- I know because I remember being confused! In Sagan's wonderful Cosmos series, there was an animation of DNA polymerase with its reagents all flying across the screen to assemble themselves into a growing polymer. I distinctly remember as a nine-year-old thinking: "How do the parts know where to go?" No one told me that 1) that's a great question and 2) they don't.

Here's the way animators create these effects. They place the pieces of the model together in their final configuration and then tell the animation program to fling all the pieces away in random directions with random tumbles. Then they simply play the animation backwards to create the effect of the individual molecules assembling themselves into the formation (that's the easy way to do it, anyway). It creates the lovely assembling effect, but it is a lie -- a very, very interesting lie.

Think about it -- in order for the animators to make it look like the molecules know what they're doing they have to run time backwards. That isn't merely a statement about animation -- it affords a deep insight into thermodynamics. Things which "know what they're doing" are, in effect, "running time backwards". Getting your head around this idea is the key to understanding what life is, why perpetual motion is impossible, and failing to understand it is central to many misconceptions especially among creationists.

Molecules don't know where they are going. They just thrash around randomly due to collisions. The sum of all that motion is what we call "heat" -- more heat, more violent thrashing around. If you were to put some molecules in a little pile they would bounce off each other, spreading out into a more diffuse pile. Why should they spread out and not stay put, or even compact themselves tighter? Because, as long as they aren't interacting with each other (we'll come back to this case), there are a lot more ways to be spread out than there are to be compact. Scientists call this by the weird name "entropy" -- it's the second law of thermodynamics: entropy (spread-out-ness) is always increasing. It's an idea that's so simple and yet so profound. Why is it true? Nobody knows; that said, try to imagine what the world would be like if it were false.
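The "more ways to be spread out" claim can be made concrete by counting. A toy count (the numbers are purely illustrative): put 5 non-interacting particles into 20 cells and compare how many arrangements are "compact" (all crammed into the leftmost 5 cells) versus unrestricted.

```python
from math import comb

cells, particles = 20, 5
total = comb(cells, particles)    # every way to place 5 particles in 20 cells
compact = comb(5, particles)      # all 5 particles in the leftmost 5 cells
print(total, compact)             # 15504 vs 1
# The spread-out arrangements outnumber the single compact one ~15,000 to 1,
# and the gap grows astronomically with more particles and cells.
```

Nothing pushes the particles apart; a random shuffle simply almost never lands on the one compact arrangement, which is all "entropy increases" means.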

Suppose that molecules spontaneously created little ordered piles without interacting (again, we'll come back to the interacting case). Those little piles are information. In other words, you could look at them and say: "Hey, there's a little pile there that shouldn't be -- since they aren't interacting they should have spread out, thus something must have put them there." And then what? What are these little piles of spontaneous information forming? Are they spelling out Shakespeare? Or drawing a picture of a cat? Or writing out a cryptic secret that we can't read? See, it's nonsense; you can't turn it around. When you try to imagine a world that doesn't spread out spontaneously, you end up with a world where information spontaneously appears out of nowhere, and such a world would be indistinguishable from one where time was running backwards. In other words, the concepts of time and increasing entropy are the same concept.

Here's another way to think about it. Suppose that you had a tiny ball in a tube trap. Say the ball can be on either side of the tube: left or right. If the ball and tube are not interacting in some biased way, then there's just as much chance that you'll find the ball on the left as on the right. Say you tried to use this tube as a memory device, with the position of the ball meaning different things. You reach in and move the ball to the left side, then shut the trap and hand it over to a friend who examines it. You shouldn't be surprised that when they open it, they are just as likely to see the ball on the right as the left. This is a terrible memory device! The reader of the information might as well have just flipped a coin instead of relying on this thing to remember what you entered. How would you fix this? You'd have to glue the ball in place somehow to prevent it from moving. So, how would you glue it? There are lots of ways: you could introduce a chemical bond that stuck the ball and tube together, or you could jam in a plug, or lots of other clever contraptions. But every way of "gluing" will have the same requirement: it will need an investment of energy. In other words, an investment of energy is the same thing as information. If you see a pile of energy lying around somewhere, then you know that such a pile potentially holds information (what that information encodes or means, that's a totally different question). And vice-versa, if you know some information, then it must be the case that energy was invested to make it known. The two concepts -- information and free-energy -- are the same concept! And this explains why you can't build a perpetual motion machine. If you could, then it would be creating information out of nowhere, which is the same thing as time running backwards. Or, to put it another way, if you do build a perpetual motion machine then (just try to) stay the hell away from it, because that thing is running time backwards!

And this gets us back to life. If it is the case that things can't spontaneously assemble, then how can there be living things which are made from spontaneously assembled molecules? The fact that life is so information-rich, is this evidence that something made the investment of free-energy? Yes. Shall we call this investor of free energy some sort of god or spirit or vitalistic force? That's a reasonable question, and I've seen this argument in creationist literature, but the answer is: no.

This gets us back to the videos and what's wrong with them. The videos make it appear that molecules "know" what they are doing. They seem to "know" that they should fly through space and attach themselves to some cool growing nano-machine. But they don't. What they do instead is much more interesting. They bounce all over the place without knowing squat. Why don't they spread out? They do, but they are held inside of a bag -- the cell -- which keeps them contained. When they bounce around they accidentally find molecular partners with whom they interact. This is very different from what I described before with the ball in the trap, where we assumed that there was no interaction. Now there is interaction -- they stick like glue. As described, such gluing requires energy. Where does the energy come from? It is pumped into the cell from the outside. And when the interactions break, that energy is released at higher entropy (time moving forward), and that entropy is pumped outside of the cell to keep it from poisoning the inside. Living things are devices that invest free-energy from their environment to temporarily increase the information inside of the cell. This is only possible because they have access to the free-energy; no free-energy, no life. By the way, there are lots of things that do this, not just life. For example, a whirlpool is a pretty clearly defined "thing" that is possible because free-energy in the form of rushing water gets trapped into a shape that then dissipates the entropy out the bottom. Whirlpools, and living things, are not "things" in the sense that they are persistent collections of molecules -- they are things in the sense that they are persistent patterns of molecules -- the molecules themselves just pass right through.

What makes life really interesting and different from a whirlpool is that it is a self-contained computational device that stores the changeable instructions to copy itself. A whirlpool's pattern is created by the external circumstances around it -- the pattern of the rocks and the waterfall. In contrast, living things internalize the "circumstances" that build them (the DNA, the proteins, etc) thus living things can be viewed as a single package that makes decisions and evolves as a computational whole. The magic of living things is that no individual part (the molecules) "knows" what it's doing (my problem with these videos) yet the ensemble does "know" what it's doing! When we casually look at a living thing we can't easily track the energy flux in and the entropy flux out and thus living things appear unique, as if they were running time backwards -- exactly the trick the animators use to make the (wrong) animations. Ha!

Thursday, November 27, 2008

Amorphous computing experiments in matlab



I've been playing with what I hope will be an interesting formulation of amorphous computing simulations involving randomly generated logic networks. I first prototyped these in C in my zlab framework but have decided to move them to Matlab, both to make it easier for others to work on and because, as I move from 1D to 2D, I'll need a fancier integrator than RK45. Matlab offers a lot more ODE solvers than my current C framework, where I would inevitably have to port in Fortran solvers.

The above figures show the first test results from the Matlab code. A three-node ring oscillator (that's 3 "not" gates connected in a cycle) is arrayed across space (x-axis). In both figures, the oscillators are randomly initialized (the same ICs in both images) and thus begin to oscillate through time (y-axis). In the first image, there is no communication between the spatial machines, so each vertical stripe oscillates in its own arbitrary phase. In the second figure, the exact same machine and ICs are now allowed to exchange information through space by diffusion, and you can see that there is a rapid phase alignment between the vertical stripes. Think of it like this: each machine is trying to recruit its neighbors into its phase. At the start, by chance, some neighbors will happen to have similar phases, and thus they will be able to dominate their neighbors and bring them over to their phase, resulting in a larger dominating force which makes it easier to dominate even more neighbors, and so on, until the whole space phase-synchronizes.
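The recruitment story can be sketched with a generic three-node ring oscillator replicated across space. This is a repressilator-style ODE standing in for the "not" gates -- the equations, parameters, and sync metric are my illustrative assumptions, not the actual Matlab model:

```python
import numpy as np

def ring_oscillator_field(D, sites=20, beta=10.0, n=3, dt=0.01, steps=10000, seed=1):
    """Three inverting stages per site: da/dt = beta/(1+c^n) - a, etc.
    D diffusively couples each species to its spatial neighbors (periodic).
    Returns the spatial std of one species, averaged over the last 500 steps:
    small = the sites have phase-locked, large = independent phases."""
    rng = np.random.default_rng(seed)
    a = 0.1 * rng.random(sites)   # noisy "uninitialized amp" starting state
    b = 0.1 * rng.random(sites)
    c = 0.1 * rng.random(sites)

    def lap(u):
        return np.roll(u, 1) - 2 * u + np.roll(u, -1)

    spread = []
    for step in range(steps):
        na = a + dt * (beta / (1 + c**n) - a + D * lap(a))
        nb = b + dt * (beta / (1 + a**n) - b + D * lap(b))
        nc = c + dt * (beta / (1 + b**n) - c + D * lap(c))
        a, b, c = na, nb, nc
        if step >= steps - 500:
            spread.append(np.std(a))
    return float(np.mean(spread))

uncoupled = ring_oscillator_field(D=0.0)  # every site keeps its own phase
coupled = ring_oscillator_field(D=1.0)    # neighbors recruit each other
# coupled << uncoupled: diffusion phase-locks the whole line of oscillators
```

With D=0 the vertical stripes of the first figure keep their arbitrary phases; turning on diffusion reproduces the second figure's rapid alignment, Huygens' wall in ODE form.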

This effect has been known for centuries -- it was described by Huygens in 1665 when he noticed pendulum clocks hung on the same wall phase-synchronizing because they could communicate by vibrating the wall. Here's an article about a nanomachine that does the same thing.

Lots more of these results to come now that I have the basic matlab framework built. Early indications are that some interesting things are possible.

Tuesday, November 25, 2008

Kinetic Explorer v2.0 Released



http://kintek-corp.com/kinetic_explorer/

This is a reaction simulator and data-fitting project that I started years ago with Ken Johnson and Thomas Blom. We have just released version 2.0, which includes substantial improvements in the integrator and a nice tool for viewing the parametric fit space. After playing with this for years now, I'm convinced that the major problem with fitting tools is that it is incredibly easy to fool yourself into believing that you have a well-constrained system when you don't. In this and the upcoming version we've put enormous effort into a UI that can demonstrate whether a system is well constrained and, if not, why. Thanks to a lot of effort by my bestest-nerd-buddy John Davis, v3.0 will have a brand new super-optimized fitter that uses singular value decomposition to dramatically improve the fit descent and also provide instant feedback on the system's condition rank in signal-to-noise units.