# Unifying Programming and Math – The Dependent Type Revolution

The way that programmers and mathematicians work seems to be very different. Programming is about making computers do something, whereas math is about describing things. Programming involves state that changes over time, whereas math deals with Platonic theorems that are always true, even if not proven yet.

But, in fact, the Curry-Howard isomorphism states that there is a very direct correspondence between basic logical concepts (namely, propositions) and basic programming concepts (namely, types). This correspondence has been known since the mid-twentieth century, but only recently has it begun to creep into the actual practices of programmers and mathematicians alike.

Due to a recent advance in mathematics called Homotopy Type Theory, programming and math are about to be unified on a practical level in a way that will fortify their strengths and overcome some of their respective weaknesses.

## The Curry-Howard Isomorphism and Martin-Löf Type Theory

The Curry-Howard isomorphism states that:

In logic, every proposition can be viewed as a type whose members are the proofs of that proposition.

Empty types are then identified with false propositions, because they have no “proof” (i.e., no member). What does a proof of the proposition (A and B) consist of? A proof of A together with a proof of B. Thus, when A and B are viewed as types, (A and B) is simply the product type A x B, which consists of pairs (x, y) where x is of type A and y is of type B. The other logical operations translate similarly: OR corresponds to a sum type, and IMPLIES to a function type. Types can also be viewed as sets, and support many of the same operations.
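This dictionary can be sketched in Haskell (with the caveat that Haskell is not a total language, so the correspondence is only approximate; the names `And`, `Not`, and `andElimLeft` are illustrative, not standard library functions):

```haskell
import Data.Void (Void)  -- the empty type, playing the role of "False"

-- "A and B" is the product type: a proof is a pair of proofs.
type And a b = (a, b)

-- "not A" is "A implies False": a function turning any proof of A into absurdity.
type Not a = a -> Void

-- From a proof of (A and B) we can always extract a proof of A;
-- as a program, this is just the first projection.
andElimLeft :: And a b -> a
andElimLeft (x, _) = x
```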

Whenever two different things have basically the same structure, the natural question is, are they really the same thing? Can we simply say that types are propositions?

It turns out that we can. Although the basic idea of the Curry-Howard isomorphism had been known for a long time, it was only in the early 1970s that Per Martin-Löf fully succeeded in creating a formal system where types and propositions are the same thing: Martin-Löf type theory. The remarkable thing about Martin-Löf type theory is that it implements the Curry-Howard isomorphism not only at the level of propositional logic (variables A, P, Q, etc. and the operations AND, OR, NOT, IMPLIES), but also at the level of predicate logic, where we have statements like “for some x in Q, P(x)”, with P some property of x. As a type, this means the type of x’s in Q for which P(x) is true.

This operation immediately allows for an immensely more expressive type system. We can define a type of odd natural numbers, for instance: “the type of x’s in Nat such that x % 2 == 1.” Modern type systems like Haskell’s are a step towards this kind of power, but they lack full dependent types, in which type definitions can depend on values (such as the constant 2 and the operation % in the example above).
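In a dependently typed language such as Lean, that definition can be written down directly as a subtype (a sketch; the name `OddNat` is mine):

```lean
-- The type of odd natural numbers: a value of this type is a number n
-- together with a proof that n % 2 = 1.
def OddNat : Type := { n : Nat // n % 2 = 1 }

-- Constructing an element means supplying the proof; here 5 % 2 = 1
-- holds by computation, so `rfl` suffices.
def five : OddNat := ⟨5, rfl⟩
```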

## Implications

So when it comes down to it, programmers and mathematicians are really doing the same thing, whether it’s constructing an element of a type or constructing a proof for a proposition. Benefits for programmers who adopt dependently-typed languages include:

### Spending Much Less Time Writing Complex, Potentially Buggy Algorithms

Even languages that don’t have the full power of the Martin-Löf type system can benefit from the type theory paradigm. For example, Djinn is a Haskell program which, given a type signature for a Haskell function, uses proof-based techniques to search for a function with that type signature.
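To get a feel for why such a search can succeed: a sufficiently polymorphic type signature often has essentially one total implementation, which a Djinn-like tool can find mechanically. A small illustration (the names here are mine, not Djinn's output):

```haskell
-- Up to equivalence, the only total function of this type swaps the pair:
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- Likewise, the only thing a function of this type can do is apply f to x:
apply :: a -> (a -> b) -> b
apply x f = f x
```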

If there are only a few, strictly defined ways to construct functions, it’s possible to derive algorithms from their types instead of writing them tediously by hand. For example, we could specify that matrix multiplication is a function of the type Mat -> Mat -> Mat (or Mat x Mat -> Mat, for those who aren’t familiar with Haskell-style curried functions), where Mat is the type of matrices. Given both function types and dependent types, we can also specify properties of the function as part of the type. So if we wanted matrix multiplication to be associative, we could ask a Djinn-like program to construct a function of this type: {f: Mat -> Mat -> Mat such that f(x, f(y, z)) = f(f(x, y), z)}.
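In Lean-style notation, the specification in that last sentence is itself a type (a sketch; `Mat` is left abstract here, whereas a real development would fix a concrete matrix type):

```lean
-- An abstract stand-in for the type of matrices.
axiom Mat : Type

-- An element of this type is a binary operation on Mat *together with*
-- a proof that the operation is associative. Asking a Djinn-like tool
-- to inhabit this type is asking it to synthesize a verified multiplication.
def AssocMul : Type :=
  { f : Mat → Mat → Mat // ∀ x y z, f x (f y z) = f (f x y) z }
```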

### Provably Correct Code

If one is using a full-fledged proof system to write programs, it automatically becomes possible to prove properties of functions. I think in the future, as proofs and programs become closer, people will see unit testing as a very primitive way to check program correctness, necessitated only by the limitations of formally incoherent programming languages. Really, we might still be in the Stone Age of programming practices. Given a Martin-Löf type system you would probably spend more of your time proving properties of functions than specifying the algorithm for computing them.
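As a small illustration of the contrast, here is a property established by proof for every input at once, where a unit test could only sample a few cases (a Lean sketch; the function and theorem names are mine):

```lean
def double (n : Nat) : Nat := n + n

-- Not a test on sample inputs, but a proof covering every natural number:
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  simp [double, Nat.two_mul]
```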

### Universal Datatypes

One pain point in programming is the incompatibility of datatypes between languages. An “integer” in JavaScript is very different from an “integer” in Ruby, which is different again from an “integer” in C. What type theory lets you do is formally specify which operations are allowed on a type and which laws they are assumed to obey. Once you can verify that the basic operations behave as expected in another language, the more complex ones can be constructed from them. Thus, we could use datatypes directly from language to language instead of converting through an intermediate format such as text. Formal specifications could replace the often overly complex, ad hoc standards that are fashionable nowadays.
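In Haskell, the nearest approximation to such a formal specification is a type class whose laws are stated alongside it. The laws here live in comments; in a dependently typed language they could live in the types themselves. (The class name is illustrative, a sketch rather than a proposed standard.)

```haskell
-- Operations a conforming datatype must provide, plus the laws any
-- implementation (in any language) is assumed to satisfy.
class MonoidSpec m where
  unit    :: m
  combine :: m -> m -> m
  -- Laws:
  --   combine unit x          == x
  --   combine x unit          == x
  --   combine x (combine y z) == combine (combine x y) z

-- Machine integers under addition satisfy the laws (modulo overflow),
-- so they can serve as one language's implementation of the spec.
instance MonoidSpec Int where
  unit    = 0
  combine = (+)
```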

## Recent Developments

Like the lambda calculus before it (which it in fact contains), Martin-Löf type theory may be a fruit of mathematics that has a very real and practical impact on how programmers think about programming. It could be the unifying paradigm that eventually, finally puts an end to all the “programming language wars” of our time. Already it has begun to transform the mathematical community.

A Russian mathematician named Vladimir Voevodsky has put forward a new interpretation of type theory in which types are viewed not just as sets, but as spaces, considered up to continuous deformation (what topologists call homotopy types). While a set merely contains things, a space contains both points and paths connecting those points; under this reading, an element of a type is a point, and a proof that two elements are equal is a path between them. The new theory was christened Homotopy Type Theory, or HoTT. In a nutshell, HoTT, through Voevodsky’s univalence axiom, gives a more sophisticated way to tell when two different types are “the same.”

HoTT development is taking place not primarily in research papers but in code, in the proof assistants Coq and Agda, which are based on dependent type theory. In light of this, Urs Schreiber jokingly refers to the “pseudocode formerly known as traditional mathematics.” Given that mathematicians are adopting the practices of programmers, I don’t think it will be long before we see the reverse happening too.