What is a Function?


Functions are the essence of all programming. In the early days of computer science they represented a naive dream. The idea was that we could reduce computation into these smaller and smaller bits of re-usable code. Given enough time, and a proper structure for libraries, eventually we would have written code required for all computational needs. No longer would people have to write their own functions, and programming would consist of an easy job of stitching together components.

This dream hasn’t come true yet, but it persists, no matter how flawed. Each new programming technique or paradigm that comes along shakes up this idea a little. They promise better re-use of code, better abstractions, and an easier life for all.

Bananaphone • Another naive dream.

In reality what each paradigm delivers is simply different abstractions. There has always been a trade-off. For each higher level of thinking about programming, some piece is thrown away. And this means, no matter how well you decide what to keep and what to leave, occasionally someone will need that piece that has been lost. But through all of this, one way or the other, functions have always persisted, and have continually proven to be effective.

We’ve used functions in C, and we know what they look like, but we don’t know exactly what they are. Here are a few ways to think about them.

One way to think about functions is as a description of some computation you want to be performed later. When you define a function it is like saying “when I use this name I want that sort of thing to happen”. This is a very practical idea of a function. It is intuitive, and mirrors how we use language: it is how you would command a human or animal. Another thing I like about this is that it captures the delayed nature of functions. Functions are defined once, but can be called on repeatedly afterwards.
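
We’ve already seen this in C. As a small reminder (the function name here is just for illustration), the definition describes the computation once, and every later call performs it again.

  #include <stdio.h>

  /* Defined once: a description of a computation to be performed later */
  int add_together(int x, int y) {
    return x + y;
  }

  int main(void) {
    /* Called on repeatedly afterwards, whenever we need it */
    printf("%d\n", add_together(10, 20));
    printf("%d\n", add_together(1, 2));
    return 0;
  }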

Another way to think about functions is as a black box that takes some input and produces some output. This idea is subtly different from the former. It is more algebraic, and doesn’t talk about computation or commands. This idea is a mathematical concept, and is not tied to any particular machine or language. In some situations this idea is exceptionally useful. It allows us to think about functions without worrying about their internals, or how they are computed exactly. We can then combine and compose functions together without worrying that something subtle will go wrong. This is the core idea behind an abstraction, and is what allows layers of complexity to work together with each other rather than conflict. This idea’s strength can also be its downfall. Because it says nothing about computation, it does not deal with a number of real world concerns. “How long will this function take to run?”, “Is this function efficient?”, “Will it modify the state of my program? If so how?”.
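
As a tiny illustration of why this view is useful, take two made-up functions defined only by what goes in and what comes out, say f(x) = x² and g(x) = x + 1. We can compose them without knowing anything about how they work inside.

  (f ∘ g)(3) = f(g(3)) = f(4) = 16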

Black Box • Your typical function.

A third method is to think of functions as partial computations. Like the mathematical model they can take some inputs. These values are required before the function can complete the computation. This is why it is called partial. But like the computational model, the body of the function consists of a computation specified in some language of commands. These inputs are called unbound variables, and to finish the computation one simply supplies them. Like fitting a cog into a machine which was previously spinning aimlessly, this completes everything needed for the computation to run, and the machine runs. The output of these partial computations is itself a variable with an unknown value. This output can be placed as input to a new function, and so one function relies on another.
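
As a small made-up example, take the computation x + y, which has two unbound variables. Supplying only x = 10 leaves a partial computation that is still waiting on y; supplying y = 20 as well completes it, and the machine can finally run.

  f(x, y)   = x + y
  f(10, y)  = 10 + y
  f(10, 20) = 30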

An advantage of this idea over the mathematical model is that we recognize that functions contain computation. We see that when the computation runs, some physical process is going on in the machine. This means we acknowledge that certain operations take time to complete, that a function might change the program state, or that it might do anything else we’re not sure about.

All of these ideas are explored in the study of functions called the Lambda Calculus. This is a field that combines logic, maths, and computer science. The name comes from the Greek letter lambda, which is used in the notation for binding variables. The Lambda Calculus gives us a way of defining, composing and building functions using a simple mathematical notation.
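
For example, in this notation the function that adds two numbers together is written as a pair of lambdas, one binding each variable. Applying it to its arguments substitutes them in one at a time.

  λx. λy. x + y
  (λx. λy. x + y) 10 20  →  (λy. 10 + y) 20  →  10 + 20  →  30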

We are going to use all of the previous ideas to add user defined functions to our language. Lisp is already well suited to playing around with these concepts, so it won’t take much work for us to implement functions.

The first step will be to write a builtin function that can create user defined functions. Here is one idea as to how it could be specified. The first argument could be a list of symbols, just like our def function. These symbols we call the formal arguments, also known as the unbound variables. They act as the inputs to our partial computation. The second argument could be another list. When the function is run, this is what will be evaluated with our builtin eval function.
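
As a very rough sketch of what this builtin might look like in C, reusing the lval and lenv types and helpers such as lval_pop, lval_del and LASSERT from earlier chapters (the lval_lambda constructor is hypothetical here, standing in for whatever we write later to build a function value):

  /* Sketch only: a builtin that constructs a user defined function.
     The first Q-Expression holds the formal arguments, the second the body. */
  lval* builtin_lambda(lenv* e, lval* a) {

    /* Check we were passed exactly two Q-Expressions */
    LASSERT(a, a->count == 2,
      "Function '\\' passed wrong number of arguments.");
    LASSERT(a, a->cell[0]->type == LVAL_QEXPR,
      "Function '\\' passed incorrect type for formal arguments.");
    LASSERT(a, a->cell[1]->type == LVAL_QEXPR,
      "Function '\\' passed incorrect type for body.");

    /* Pop out the formals and the body, and build a function value */
    lval* formals = lval_pop(a, 0);
    lval* body = lval_pop(a, 0);
    lval_del(a);

    return lval_lambda(formals, body);
  }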

We’ll call this function just \ (an homage to the Lambda Calculus, as the \ character looks a little bit like a lambda). To create a function which takes two inputs and adds them together, we would then write something like this.

  \ {x y} {+ x y}

We can call the function by putting it as the first argument in a normal S-Expression.

  (\ {x y} {+ x y}) 10 20

If we want to name this function, we can pass it to our existing builtin def like any other value, and store it in the environment.

  def {add-together} (\ {x y} {+ x y})

Then we can call it by referring to it by name.

  add-together 10 20