All of Basic Category Theory

Have you ever been puzzled by the suggestion that

data Lens s a = Lens { get :: s -> a, set :: a -> s -> s }

might be in some sense the same as

forall f. Functor f => (a -> f a) -> s -> f s

or, more to the point, how on earth someone ever went about figuring this out in the first place?
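To make the question concrete, here is a sketch of the interconversion in Scala (all names here are my own, not taken from any particular lens library). Instantiating the polymorphic representation at a constant functor recovers get, and at the identity functor recovers set:

  trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }

  // The concrete representation
  case class Lens[S, A](get: S => A, set: A => S => S)

  // The "van Laarhoven" representation: polymorphic in the functor F
  trait VLLens[S, A] {
    def apply[F[_]](f: A => F[A])(s: S)(implicit F: Functor[F]): F[S]
  }

  def toVL[S, A](l: Lens[S, A]): VLLens[S, A] = new VLLens[S, A] {
    def apply[F[_]](f: A => F[A])(s: S)(implicit F: Functor[F]): F[S] =
      F.map(f(l.get(s)))(a => l.set(a)(s))
  }

  case class Const[C, X](value: C)   // ignores its second type argument
  case class Id[X](value: X)

  def fromVL[S, A](l: VLLens[S, A]): Lens[S, A] = {
    type ConstA[X] = Const[A, X]
    implicit val constFunctor: Functor[ConstA] = new Functor[ConstA] {
      def map[X, Y](fa: ConstA[X])(f: X => Y): ConstA[Y] = Const(fa.value)
    }
    implicit val idFunctor: Functor[Id] = new Functor[Id] {
      def map[X, Y](fa: Id[X])(f: X => Y): Id[Y] = Id(f(fa.value))
    }
    Lens(
      get = s => l.apply[ConstA](a => Const(a))(s).value,
      set = a => s => l.apply[Id](_ => Id(a))(s).value
    )
  }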

Don’t know your Kan extensions from your coends, your pushouts from your presheaves?

Join me for a wild adventure tour through the magical land of category theory, stopping off at all the major sights.

We’ll learn the basic notions that form the conceptual backbone of category theory, and how they all fit together.

Category theory has made deep inroads into computer science theory, but in this talk we’ll focus on computer science practice. We’ll explore the advantages category theory brings to programming in terms of

  • providing alternate representations for types
  • classifying solutions, or
  • simply providing a clarifying viewpoint and helping to organise our thinking.

In addition this should provide you with the right mental framework for further exploring category theory, should you so wish.

Conditional contexts

Wouldn’t it be nice if Scala would let you write a parametric function that could do different things depending on whether or not the type parameter was an instance of some type class?

For instance

  • a Set[A](a: A*) function that creates either a TreeSet or a HashSet, depending on whether an Ordering[A] is in scope.
  • a numerical algorithm doCalc[N : Numeric](n: Matrix[N]) that works for all numeric types N, but uses a (more expensive, but numerically stable) algorithm for floating-point numbers and skips it for integers (which don’t need it).
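I won’t spoil the implementations here, but the core Scala trick is small enough to sketch (the names below are my own): two implicit priorities, with the lower one acting as a fallback when no instance is in scope.

  // Evidence that an instance of T may or may not be available
  trait OptionalInstance[T] { def value: Option[T] }

  trait LowPriority {
    // Fallback, chosen only if the more specific implicit below fails
    implicit def absent[T]: OptionalInstance[T] =
      new OptionalInstance[T] { def value = None }
  }

  object OptionalInstance extends LowPriority {
    // Preferred, chosen whenever an implicit T really is in scope
    implicit def present[T](implicit t: T): OptionalInstance[T] =
      new OptionalInstance[T] { def value = Some(t) }
  }

  // Behaves differently depending on whether an Ordering[A] is in scope
  def describe[A](implicit ev: OptionalInstance[Ordering[A]]): String =
    ev.value.fold("no Ordering: use a HashSet")(_ => "Ordering: use a TreeSet")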

I’ll run through the use and implementation of a Haskell library (ifcxt) which provides this functionality, then show the (surprisingly simple) Scala implementation.

Notably, this has to be an extension to the type system in Haskell, but practically comes for free in Scala.

Accompanying Scala code allows you to play with three different implementations.

Monadic matching mishaps

The following Scala code fails to compile:

  // Search for a file by filename glob
  def find(glob: String): Error \/ File

  // retrieve the first and last lines
  def firstLastLine(file: File): Error \/ (String, String)

  // summarize the first text file
  val summary = for {
    file          <- find("*.txt")
    (first, last) <- firstLastLine(file)
  } yield file + ": " + first.take(20) + "..." + last.takeRight(20)

with the error

could not find implicit value for parameter M: scalaz.Monoid[Error]

but it’s not immediately clear why, or why it does work when we replace \/ with Option.

I’ll discuss why this occurs and a simple fix to make it work, compare with similar situations in other languages that combine pattern matching and monads (Haskell, Idris, PureScript), and end by identifying a Scala feature that could perhaps be improved in future compiler versions.
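Without giving the whole game away: roughly speaking, the tuple pattern on the left of <- makes the compiler insert a filtering step into the desugared code, and filtering a \/ requires a Monoid[Error] to manufacture an “empty” error value. One workaround (not necessarily the fix discussed in the talk) is simply to bind to an irrefutable identifier:

  val summary = for {
    file <- find("*.txt")
    pair <- firstLastLine(file)   // no pattern, so no filtering step
  } yield file + ": " + pair._1.take(20) + "..." + pair._2.takeRight(20)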

How do functional programmers do dependency injection?

How should we think about dependency injection?

In the context of imperative languages there are many well-established frameworks for doing dependency injection. Is there a functional programming equivalent?

We often hear that the FP answer is “use a Reader monad”. However, I’ll argue this isn’t a good answer.

A functional programmer would instead turn the question around and ask “What operations would you do if you had these dependencies?”. By reframing the original problem we reveal a more fundamental outlook. Namely, what are the languages of operations these dependencies allow us to express, and what are the interpreters for those languages?

To achieve reuse, both languages and interpreters will need to be modular, i.e. support good notions of composition.

So how do we go about doing this?

I’ll compare and contrast two solutions of this problem. The first, popularised in Wouter Swierstra’s Functional Pearl Datatypes à la Carte (and familiar to some through Rúnar Bjarnason’s talk “Compositional application architecture with reasonably priced free monads”) builds up a data structure of abstract operations, which are then run by an interpreter.

The second approach — popularised by Oleg Kiselyov as Typed tagless final interpreters — takes advantage of Scala’s support for higher-kinded types. Rather than building up and tearing down a data structure, we simply add a type class constraint to our program. “Interpretation” is now a compile-time operation: it’s just specialisation to a type that satisfies the constraints. This is both simpler and more efficient, and sufficient for almost all purposes.
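As a taste of the second approach, here is a minimal sketch (the algebra and all names are invented for illustration): the “language” is a trait parameterised by an effect type F, programs are written against it, and choosing a concrete F is the interpretation.

  trait Monad[F[_]] {
    def pure[A](a: A): F[A]
    def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
  }

  // The language of operations our dependency lets us express
  trait KeyValueStore[F[_]] {
    def put(key: String, value: String): F[Unit]
    def get(key: String): F[Option[String]]
  }

  // A program against the language, knowing nothing of any implementation
  def copyKey[F[_]](from: String, to: String)(kv: KeyValueStore[F])(
      implicit M: Monad[F]): F[Unit] =
    M.flatMap(kv.get(from)) {
      case Some(v) => kv.put(to, v)
      case None    => M.pure(())
    }

  // One interpreter: the identity effect over a mutable map
  type Id[A] = A
  implicit val idMonad: Monad[Id] = new Monad[Id] {
    def pure[A](a: A): A = a
    def flatMap[A, B](fa: A)(f: A => B): B = f(fa)
  }
  class InMemory extends KeyValueStore[Id] {
    private val m = scala.collection.mutable.Map.empty[String, String]
    def put(k: String, v: String): Unit = m(k) = v
    def get(k: String): Option[String] = m.get(k)
  }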

Stop Paying for Free Monads!

Free monads, as used in the Datatypes à la carte pattern, are a useful way to structure code, separating specification from implementation and enabling modularisation.

But they also come with a runtime performance penalty…
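To see where the cost comes from, recall the shape of a free monad (a simplified sketch, not any particular library’s definition): every flatMap allocates a node, and every run of the program walks the structure and tears it back down.

  sealed trait Free[F[_], A] {
    def flatMap[B](k: A => Free[F, B]): Free[F, B] = Bind(this, k)
    def map[B](f: A => B): Free[F, B] = flatMap(a => Pure(f(a)))
  }
  final case class Pure[F[_], A](a: A) extends Free[F, A]
  final case class Suspend[F[_], A](fa: F[A]) extends Free[F, A]
  final case class Bind[F[_], X, A](fa: Free[F, X], k: X => Free[F, A])
      extends Free[F, A]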

The complementary typed tagless final interpreters approach (popularised in the Haskell and OCaml worlds by Oleg Kiselyov) offers a performance boost over free monads.

Yet, frustratingly, available sample code centres around simple expression languages, and “real world” examples that you might be able to put to use in your day job are hard to come by.

In this talk, we’ll compare and contrast the typed tagless approach with datatypes à la carte and explore how the typed tagless approach can be used in real production code — with configuration, state, side effects and error handling.

Haskell code for the associated LambdaJam workshop.

Macro-based smart constructors

Smart constructors are a nice pattern for building data types that need extra validation or calculation.

But when we’re building our data type from statically known values, they leave us with two annoyances:

  • they’re clunkier to use
  • we have a run-time test for something that is known at compile time and should be checked there.

We’ll investigate how Scala macros can help us address these problems and thus achieve both additional safety and greater convenience.
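As a baseline, here is the run-time flavour of the pattern (a generic example of my own, not the one from the talk); the macro version moves the range check on literal arguments to compile time:

  final class Port private (val number: Int) {
    override def toString = s"Port($number)"
  }

  object Port {
    // Smart constructor: the check runs at run time, even for Port.fromInt(8080)
    def fromInt(n: Int): Option[Port] =
      if (n >= 0 && n <= 65535) Some(new Port(n)) else None
  }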

Accompanying Scala code demonstrating the techniques in action.

Seven trees in one

It’s a rather surprising fact that there’s an O(1) bijection between the datatype of unlabelled (planar) binary trees

data Tree = Leaf | Node Tree Tree

and 7-tuples of these trees. This presentation explores the fascinating story of how this bijection comes about, and the interesting connections to notions of isomorphism of types and distributive categories.
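The algebra at the heart of the story fits in a few lines. Writing the datatype as an equation and calculating purely formally (subtraction makes no sense for types, which is exactly why a theorem is needed to justify the conclusion):

  T ≅ 1 + T²          (a tree is a leaf, or a node with two subtrees)
  so T² − T + 1 = 0   (rearranging, formally)
  so T⁶ = 1           (the roots of x² − x + 1 are primitive sixth roots of unity)
  so T⁷ = T           (suggesting a bijection between trees and 7-tuples of trees)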

Haskell code containing a QuickCheck test of the bijection.

Getting higher: higher-kinded and higher rank types in Scala

Some of us may have heard the terms “higher-kinded type” or “higher rank type” and wondered what they meant, or what the difference was.

In this talk I’ll attempt to demystify these terms and give a few examples of how they can be useful in practice.
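As a preview of the distinction (a sketch, using the standard Scala 2 encodings):

  // Higher-kinded: a type parameter that is itself a type constructor, F[_]
  trait Functor[F[_]] {
    def map[A, B](fa: F[A])(f: A => B): F[B]
  }

  // Higher rank: an argument that must itself be polymorphic.
  // Scala encodes "forall A. F[A] => G[A]" as a trait with a polymorphic method.
  trait ~>[F[_], G[_]] {
    def apply[A](fa: F[A]): G[A]
  }

  // For example, a natural transformation from Option to List:
  val optionToList: Option ~> List = new (Option ~> List) {
    def apply[A](fa: Option[A]): List[A] = fa.toList
  }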

Much ado about monoids

I’ll introduce the Monoid and Foldable type classes from Scala’s scalaz library and show how the compositionality property they satisfy means they can be used to do basic data analysis in a single pass over the data.
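As a taste of the compositionality in question (a hand-rolled sketch rather than scalaz’s actual encoding, though scalaz also uses the names zero and append): a pair of monoids is itself a monoid, so independent statistics fuse into a single fold.

  trait Monoid[A] {
    def zero: A
    def append(x: A, y: A): A
  }

  implicit val intSum: Monoid[Int] = new Monoid[Int] {
    def zero = 0
    def append(x: Int, y: Int) = x + y
  }

  // Monoids compose componentwise
  implicit def pairMonoid[A, B](implicit A: Monoid[A], B: Monoid[B]): Monoid[(A, B)] =
    new Monoid[(A, B)] {
      def zero = (A.zero, B.zero)
      def append(x: (A, B), y: (A, B)) =
        (A.append(x._1, y._1), B.append(x._2, y._2))
    }

  def foldMap[A, M](as: List[A])(f: A => M)(implicit M: Monoid[M]): M =
    as.foldLeft(M.zero)((m, a) => M.append(m, f(a)))

  // Count and sum in one pass over the data:
  val (count, total) = foldMap(List(3, 1, 4, 1, 5))(n => (1, n))
  // count = 5, total = 14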

Blog post.


Spot an error or typo? Please consider raising an issue or submitting a PR.