Back in March of 2012, I wrote some code that disturbed me immensely. The experience was profound and a bit frightening.
It was a code generator — specifically, it was a program that could emit random abstract syntax trees for a language that a friend and I are writing. Our idea was to emit random ASTs from our types and, in turn, convert those random ASTs into pretty-printed code. With this randomly generated code, we could verify that our parser could parse everything we expected it to.
The approach worked great. We used it to verify a subset of our in-the-works parser. Specifically, we used it to test the correct operation of the parsers responsible for interpreting the textual representation of types.
While I’m a huge fan of this approach and hopefully will write more about it later, I’m far more interested in what happened in my mind while trying to get the random-AST generator to work.
I was unable to write this tool quickly. In theory, it was simple. I used the [Arbitrary](http://hackage.haskell.org/package/QuickCheck/docs/Test-QuickCheck-Arbitrary.html#t:Arbitrary) typeclass in the [QuickCheck](http://hackage.haskell.org/package/QuickCheck) library to define how a random AST could be generated. While the program compiled and ran, it would never terminate. I couldn’t figure out why. “I’m not missing anything,” I thought.
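In hindsight, the bug is easy to demonstrate. Here’s a minimal sketch of the kind of instance I had written; the `Ty` type and its constructors are hypothetical stand-ins for our real AST, which was much larger, but the shape of the mistake is the same: a naive `Arbitrary` instance that picks every constructor with equal probability.

```haskell
import Test.QuickCheck

-- A hypothetical, much-simplified type AST.
data Ty
  = TyInt           -- a fixed (leaf) type
  | TyFun Ty Ty     -- a function type: two recursive children
  | TyApp Ty Ty     -- a type application: two more recursive children
  deriving Show

-- The naive instance: every constructor is equally likely.
instance Arbitrary Ty where
  arbitrary = oneof
    [ pure TyInt
    , TyFun <$> arbitrary <*> arbitrary
    , TyApp <$> arbitrary <*> arbitrary
    ]

-- A helper to measure how deep a tree is.
depth :: Ty -> Int
depth TyInt       = 1
depth (TyFun a b) = 1 + max (depth a) (depth b)
depth (TyApp a b) = 1 + max (depth a) (depth b)
```

With probability 2/3, each node spawns two children, so the expected number of children per node is 4/3. Since that’s greater than 1, the expected size of a generated tree is infinite, and generation will sometimes run until memory is exhausted, which is exactly the behavior we saw.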
Being unable to figure out what was happening, I turned to Job and asked for some help. “Job, can you see what I’ve done wrong?” He thoughtfully looked at my code and shared my confusion for a second — until suddenly, the still-running program exhausted the available memory on the system and crashed.
It struck us both at the same time. The program was generating enormous (but finite, given enough RAM) structures.
“It’s generating random types that _cannot_ fit into system memory!” I exclaimed. This was amazing. I had never even considered that such types _could exist_ before. Normally a type is a small, simple statement; even large ones take only a few lines.
These were types that managed to exhaust hundreds of megabytes of RAM. We tweaked the parameters of the system so that the random generation selected fixed types more frequently than the [parametric types](http://en.wikipedia.org/wiki/Parametric_polymorphism). This change caused the system to halt much more quickly. What it output still shocked me. Here, [see for yourself](https://gist.github.com/raw/2017131/2e7a950271415caf54a9415c40e984402a27969c/gistfile1.txt).
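The tweak can be sketched in the same hypothetical terms as before: use `frequency` to weight the fixed types more heavily than the parametric ones, and use `sized` to shrink a size budget on every recursive call so generation is guaranteed to halt. The specific weights and the halving strategy here are illustrative, not our exact parameters.

```haskell
import Test.QuickCheck

-- The same hypothetical, simplified type AST as before.
data Ty
  = TyInt
  | TyBool
  | TyFun Ty Ty
  | TyApp Ty Ty
  deriving Show

instance Arbitrary Ty where
  arbitrary = sized genTy

-- Weight the fixed (leaf) types 3-to-1 over each parametric
-- constructor, and halve the size budget at every recursive call.
genTy :: Int -> Gen Ty
genTy 0 = elements [TyInt, TyBool]
genTy n = frequency
  [ (3, elements [TyInt, TyBool])   -- fixed types, chosen often
  , (1, TyFun <$> sub <*> sub)      -- parametric, chosen rarely
  , (1, TyApp <$> sub <*> sub)
  ]
  where
    sub = genTy (n `div` 2)

-- A helper to measure how deep a tree is.
depth :: Ty -> Int
depth TyInt       = 1
depth TyBool      = 1
depth (TyFun a b) = 1 + max (depth a) (depth b)
depth (TyApp a b) = 1 + max (depth a) (depth b)
```

Because the budget halves at every level, a tree generated with size *n* can be at most about log₂ *n* levels deep. The trees it emits are still surprising, but they fit in memory.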
That looks like gunk, and it is! But it’s a valid type signature that our AST could represent. It took me a while to recognize it as such; I had to run the program several times before the output finally registered as valid code.
After I got over how cool it was that this technique was going to be useful, I got uncomfortable. “There’s a giant space of programs that this computer can emit and consume that I have no way of ever understanding,” I thought. That’s cool, but it’s also scary. I made something that caused me to feel deeply insignificant.
Programming became scary. The potential of the machine suddenly felt limitless.
After understanding why these huge trees were emitted, I recognized the blowup as an obvious property of such trees, one that’s usually masked by how we humans normally use them. Even though I could have written that randomly generated code, I didn’t. I wouldn’t. No one would. This experience tangibly showed me the stuff of abstraction and has since made me question how it is used.
The programs that were generated were correct; that is, they would compile and run without encountering errors. Once this property is achieved, the rest (it seems to me) is entirely human. Most of the work we put into these machines is for our own benefit. I fear that we’ve been holding this technology back by depending too much on our understanding of each specific piece of implementation. Here, go [watch this presentation from StrangeLoop 2012](http://www.infoq.com/presentations/miniKanren). By the end of it, the presenters, Daniel P. Friedman and William E. Byrd, are using a program to emit other programs that have the property of being [quines](http://en.wikipedia.org/wiki/Quine_(computing)). Do they care how the quines work? No! They only care that they are correct and have the desired properties. They only care _that they are quines_.
That perspective, I think, is the future of programming.