3 Mind-Blowing Paradigms from Strange Loop

I had a long-time programmer tell me: “Once you know a few programming languages, you pretty much know them all. At the end of the day, programming is all just variables, conditionals, loops, and so on.” If you’re nodding your head in agreement, it’s time to seek out more interesting programming paradigms!

Several of my fellow Atoms and I attended Strange Loop 2013 last month. There were a huge number of awesome talks on a wide variety of programming topics, including several that challenged my assumptions about what concepts are really essential in computer programming.

Here are three programming paradigms featured at Strange Loop 2013 and the Emerging Languages Camp that blew my mind. These paradigms aren’t new to computer science, but I found them fascinating, and I hope you will too!

3. Logic Programming

As seen in: Celf (slides), Axiomatic (slides). (See also: Prolog, miniKanren.)

Why it’s mind-blowing: Instead of giving the computer a step-by-step list of instructions to perform, you declare a set of logical rules or relations, then let the system determine how to apply them to solve for a goal.

As a simple example, you might declare a relation (A is an ancestor of C) which is true if either (A is a parent of C), or (A is a parent of B) and (B is an ancestor of C). If you then state that (Alice is a parent of Bob), (Bob is a parent of Cindy), and (Cindy is a parent of Dan), then the system can determine that the statement (Alice is an ancestor of Dan) is logically true.
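The ancestor relation above can be sketched in plain Python. This is only an illustration of the rule, not of a real logic engine (which would search and derive facts for you rather than just check a goal); the names mirror the example in the text.

```python
# Facts: (parent, child) pairs from the example above.
parents = {("Alice", "Bob"), ("Bob", "Cindy"), ("Cindy", "Dan")}

def is_ancestor(a, c):
    # A is an ancestor of C if A is a parent of C, or A is a parent
    # of some B who is in turn an ancestor of C.
    if (a, c) in parents:
        return True
    return any(is_ancestor(b, c) for (p, b) in parents if p == a)

print(is_ancestor("Alice", "Dan"))  # True
```

In a real logic language you would state only the facts and the rule; the system's search strategy replaces the hand-written recursion.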

There are many different flavors of logic programming, each of which specializes in a certain kind of logic. The language Celf, for example, is geared toward linear logic, which lets you easily describe situations involving resource consumption or local state change.

For example, suppose I have $5, and I can buy snacks for $1 each and drinks for $2 each. I could use linear logic programming to tell me all my possible courses of action: buy a snack and two drinks, buy three snacks and one drink, buy just a snack and keep $4, buy nothing and keep all $5, and so on.
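For a sense of what the system is enumerating, here is the same shopping problem brute-forced in Python. (The linear-logic version would express this as resources and rules rather than nested loops; this sketch just lists the outcomes.)

```python
# With $5, snacks at $1 each, and drinks at $2 each, enumerate every
# way to spend (or keep) the money.
budget, snack_price, drink_price = 5, 1, 2

options = []
for snacks in range(budget // snack_price + 1):
    for drinks in range(budget // drink_price + 1):
        spent = snacks * snack_price + drinks * drink_price
        if spent <= budget:
            options.append((snacks, drinks, budget - spent))

for snacks, drinks, change in options:
    print(f"{snacks} snack(s), {drinks} drink(s), ${change} left over")
```

Among the twelve possible outcomes are the ones mentioned above: one snack and two drinks, three snacks and one drink, one snack with $4 left, and buying nothing at all.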

You could also use linear logic programming to determine all possible series of moves in a chess game (and which ones lead to a win), generate a story with a random (but sensible) chain of events, or find solutions for many other complex scenarios that would be impractical to program using other paradigms.

2. Stack-Based Concatenative Programming

As seen in: Gershwin (slides). (See also: Factor, Joy.)

Why it’s mind-blowing: Instead of moving data around with variables and function arguments, computation is performed by cleverly manipulating a global stack of data, much like in Reverse Polish Notation (RPN) calculation. Function composition and partial application become much simpler and more natural to express than in most other languages.

As a simple example, the expression 5 3 - pushes the value 5 onto the stack, then the value 3, then pops the top two values (3 and 5) off the stack, performs subtraction on them (subtract 3 from 5), and pushes the result (2) onto the stack.
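The evaluation model described above fits in a few lines. Here is a minimal sketch of a stack-based evaluator in Python; a real concatenative language would also have stack-shuffling words, conditionals, and so on.

```python
# Each token either pushes a value onto the stack, or pops its
# operands off the stack and pushes the result.
def evaluate(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for token in tokens.split():
        if token in ops:
            b = stack.pop()  # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(int(token))
    return stack

print(evaluate("5 3 -"))  # [2], matching the example in the text
```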

But arithmetic is just the tip of the iceberg. There are also logical conditionals (if, and, or, etc.), stack manipulation (dup, pop, swap, etc.), and more, depending on the particular language.

Low-level languages like colorForth have operations for storing and retrieving data in registers, communicating with other processors, and so on, directly mirroring the machine instructions of the hardware they were designed for. High-level languages like Gershwin provide things like data structures, higher-order operations, and quotations (executable lists of operations). Quotations are a particularly interesting feature, because they provide the same kind of flexibility and extensibility that you would get from blocks, lambdas, and macros in other languages, but all wrapped up into one remarkably simple concept.

1. Array Programming

As seen in: J (slides). (See also: APL.)

We’ve written about the J programming language before, but it’s so mind-blowing that it deserves another mention.

Why it’s mind-blowing: Well, for starters, just look at some J code:

|: 8(+/%#)\ |: 8(+/%#)\ Z

At first glance, it looks like a cat walked across a keyboard. But as you become more experienced with J, you learn to recognize and create patterns that never would have occurred to you in other languages. This sample code contains two uses of (+/%#)\, which performs a rolling average over the input array. In the example I borrowed this code from, the rolling average is used to perform an 8×8 blur effect over a two-dimensional array of grayscale image data.

This example also shows how J code can be concatenative (aka tacit, or point-free), much like Forth and Gershwin. The rolling average operation is made up of a few basic operators composed in a useful order:

  • + (plus): plain old arithmetic addition.
  • / (insert): a higher-order operation, which inserts a + between each value in the array, to calculate the sum. E.g. +/ (1 2 3) is the same as (1 + 2 + 3).
  • % (divide): arithmetic division. The sum of an array divided by the size of the array will yield the average.
  • # (tally): gives the size of the array.
  • \ (infix): another higher-order operation, which applies the (+/%#) group of operators to rolling sub-arrays of the input array.
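
Spelled out in Python, the composition (+/%#)\ amounts to this: for each window (infix) of the input, take the sum divided by the tally. The sketch below uses a window of 3 for readability, where the article's example uses 8.

```python
# A rolling average: the sum of each window divided by its size,
# i.e. what J expresses as (+/%#)\ applied to infixes.
def rolling_average(window, values):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(rolling_average(3, [1, 2, 3, 4, 5]))  # [2.0, 3.0, 4.0]
```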

You could take those same basic operators and compose them in a different order to get a different, potentially useful, result.

Another key characteristic of array programming languages is, of course, the easy and natural way in which they manipulate arrays of any dimensionality. The same set of basic operators is equally applicable to a 1-dimensional vector and a 9-dimensional hypermatrix.

Final Thoughts: The Value of Constraints

One interesting thing to note about these different paradigms is that they each put certain constraints on the programmer, and thus suggest different ways of approaching problems. This can lead to surprisingly elegant and natural solutions that you may not have thought of in other programming paradigms with different constraints.

  • Logic programming constrains you to describing the problem in terms of rules, relations, and constraints, instead of a step-by-step process for the computer to execute. This frees you from worrying about the loops and branches and boilerplate code that you would have to write if you were using another paradigm. Instead, you can focus your energy on solving the problem at hand.
     
  • Stack-based concatenative programming constrains you to working with data on a global stack, instead of passing it around via function calls. In a way, this is the opposite of functional programming, where you typically pass immutable values through multiple levels of function calls. And yet, stack-based concatenative programming can yield concise and elegant solutions, whether you are programming at a high level or a low level.
     
  • Array programming (mostly) constrains you to a single type of data structure, and a set of ultra-simple, efficient, composable operators tailored to that data structure. Each of the available operators is so basic on its own that you are led to think about how they can be combined to achieve the desired result. Since the operators often apply to arrays of any dimensionality, the patterns of composition that you create or recognize are often applicable in many other cases.

Of course, logic programming, stack-based concatenative programming, and array programming aren’t the only interesting paradigms featured at Strange Loop, let alone in computer science as a whole.

What programming paradigms do you find most interesting? What kinds of constraints do they impose, and how do those constraints affect the way you solve problems?