This is part 2 in a series about functional programming. The first part, about immutability, showed how the immutable aspects of functional languages can help us handle today’s increasingly concurrent world much more effortlessly.

This time we will look at how functional transformations allow us to transform a set of structured data from its original form into another form, without having to worry about side effects or state.

Business Logic

So, forget about immutability for a second, and let us look at some code that falls into the so-called business logic category:

void Execute() {
  ReadInput();
  ProcessData();
  GenerateOutput();
}

Look at the code above for a few seconds. The code is rather clear about what it does, but it hides even more:

  • ReadInput retrieves input data from somewhere and puts it in some internal structure
  • ProcessData assumes the internal structure is populated, applies some computation to it and stores the result in another internal structure
  • GenerateOutput assumes the result is computed and uses it to generate output data that it writes somewhere
  • Hopefully this is all thread-safe

As you can see, a lot is happening behind the scenes. We have to assume certain things without knowing what actually happens. That is not necessarily a bad thing for someone who just reads the code. Things get worse for someone who needs to do something else using the same methods. For example, suppose somebody needs to do something else with the result of ReadInput. Is that at all possible? And if so, how?

So we can see a problem with the code above: it is obscure. It depends both on internal state and context, and it hides them at the same time.

Maybe this one is better?

void MoreOpenExecute() {
  InputData input = ReadInput();
  ResultData result = ProcessData(input);
  OutputData output = GenerateOutput(result);
}

Or is it actually better? We no longer hide inputs and outputs, but we have now introduced intermediate local state. And because of that there is more code to write.

So what about this syntax?

void FunctionalExecute() {
  GenerateOutput(ProcessData(ReadInput()));
}

Do you like it? I have mixed feelings about this code. What I like is that we are now explicit about the data flow: there is no hidden state, and one function’s output becomes another function’s input. Why is this good? Because exposing methods’ ins and outs makes the code more open for extensibility and composability. It also increases the chances of the code being thread safe. However, the readability now really sucks. We have turned things upside down: the line starts with GenerateOutput and ends with ReadInput. Only programmers are able to understand such logic.

Now hold your breath, and I will show you something. Here is the same code written in the functional language F#:

ReadInput
|> ProcessData
|> GenerateOutput

Or you can write it like this:

ReadInput |> ProcessData |> GenerateOutput

The operator «|>» is the so-called pipelining operator. The concept is very similar to UNIX (and DOS) pipelining, where a set of processes is chained by their standard streams so that the output of each process feeds directly as input to the next one. Pipelining in F# is more complex, because it can be applied to functions with multiple arguments. However, the basic idea is the same: it is about data transformation.
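To illustrate the multiple-argument case, here is a small sketch (the function «scale» is my own example, not from the original code): the value on the left of «|>» is supplied as the last argument of the function on the right.

let scale factor x = factor * x

// 10 is piped in as the last argument, x:
let result = 10 |> scale 3   // same as scale 3 10, i.e. 30

So a function of two arguments can still sit in a pipeline, as long as all but the last argument are already given.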

Functional programming is to a large extent about data transformations. Functional languages encourage developers to write code that transforms immutable data without side effects or dependency on internal state. Such code is easy to write, and once written it looks compact and elegant (like the code above). Mutable and stateful code is harder to write, and it does not look as short and clean afterwards.

What about fluent interfaces? I have to note that a so-called fluent API can bring a functional-language flavour to traditional languages:

void FluentExecute() {
  ReadInput()
    .ProcessData()
    .GenerateOutput();
}

However, I do not consider this to be a fair analogy. First of all, a special context class needs to be built and used behind the scenes to implement a fluent API. Second, this is not a standard language practice in C# and Java; it is more of an advanced technique for extending the language.

Another example: arithmetic calculations

Let us take another very simple example: trivial arithmetic calculations. To make it even more trivial, we will limit arithmetic operations to a single one: Add.

int Add(int x, int y) {
  return x + y;
}

void Main() {
  int a = Add(2, 5);
  int b = Add(a, 4);
  int c = Add(b, 6);
}

The method Main calls Add three times, each time adding a new number to the previous result. We can also write the same set of operations in a single line, without intermediate local variables:

int d = Add(Add(Add(2, 5), 4), 6);

As in the previous example, removing intermediate variables makes code harder to read because the execution sequence is written in reverse order.

Now let us do the same in F#:

let add x y = x + y

add 2 5
|> add 4
|> add 6

What happens here? The line «add 2 5» is self-explanatory, but then? The function «add» is used together with the pipelining operator «|>», and the resulting code is both readable and free of intermediate variables. In fact, intermediate state is discouraged in functional languages, so the code above shows a typical functional data-processing sequence:

  • add 2 and 5
  • to the result above add 4
  • to the result above add 6

What is unusual for us C# and Java guys is that the same function «add» that is declared with two arguments in some cases looks like it takes only one. In fact it always takes two; it just receives one of them via pipelining, as the result of the previous function call.
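To make this explicit, here is a small sketch of the equivalence (using the same «add» function as above): the pipe simply supplies the value on its left as the missing last argument.

let add x y = x + y

// These two lines compute exactly the same thing:
let r1 = add 4 (add 2 5)    // plain nested calls: 11
let r2 = add 2 5 |> add 4   // pipelined form:     11

So «|> add 4» is nothing magical: it is just «add 4» waiting for its second argument to arrive through the pipe.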

However, it is possible to call function «add» with only one argument:

let add3 = add 3

Hmm, looks a bit weird, right? What happens here? In functional programming we do not always invoke a function to compute a scalar or structured result. We apply the function by fully or partially binding its arguments. If the function receives its complete list of expected arguments, it can apply the intended transformation and generate an output. If the arguments are only partially supplied, the output generation is deferred. This technique is known as «partial application» (made possible by currying) and has now come to the object-oriented world, but it has always been essential in functional programming. We can now use this partially applied function, «add3», like this:

add3 4
|> add3
|> add3

This code first applies «add3» to 4 (producing 7), then uses pipelining to send the output to a new call to «add3» (producing 10), and finally repeats the last step one more time (producing 13).

Since functional transformations are so important in functional languages, most of them define special operators to express multiple transformations with a simple syntax. For example, using pipelining we can define a function «quad» in F# like this:

let square x = x * x
let quad x = x |> square |> square

But there is another way to define the «quad» function: by using the function composition operator:

let quad = square >> square

What is nice about such a definition is that we do not even have to specify the function’s arguments: we define a new function by composing others. When we use it, we can either apply its arguments or defer them.
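As a quick sketch of both options, using the «square» and «quad» definitions above (the name «octo» is my own, for illustration):

let square x = x * x
let quad = square >> square

let a = quad 3               // apply the argument immediately: 81
let octo = quad >> square    // or keep composing, deferring the argument
let b = octo 2               // (2 * 2 * 2 * 2) squared: 256

The composed function is a value like any other: we can call it, or feed it into further compositions.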

Functional programming is about stateless transformations

Functional programming is not about classes; it is about stateless transformations of immutable things, where «things» can be both data and functions, or even partially applied functions. Therefore we should not think about these transformations as changes of values. It is rather a matter of mapping input to output, where both sides can contain plain data or functions.
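To tie this together, here is a small sketch reusing the «add» and «add3» definitions from earlier: a partially applied function is itself a value, and it can be handed to another transformation as data.

let add x y = x + y
let add3 = add 3

// The function add3 is passed as data: List.map applies it to each element.
let bumped = [1; 2; 3] |> List.map add3   // [4; 5; 6]

The list is never mutated; the pipeline maps an immutable input to a new immutable output, with the transformation itself supplied as an argument.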


In the next installment of this article series we will look at type inference.

Published 01.07.2013 by

Vagif Abilov