# splonderzoek

Geeky things and other research from a work in progress

## 2009-03-31

### Incremental attributes

I previously wrote about a design pattern I called an incremental fold (or catamorphism). I described it as a design pattern because, as presented, it cannot be factored out into reusable code. That is, it is a pattern for designing part of a program.

The pattern I presented is a useful way to implement functions that can be expressed as catamorphisms such that the result is incrementally computed for each operation on a value of a datatype. Unlike a fold defined directly as a function, which traverses an entire value, the incremental fold only traverses the parts that are updated. For some values, this may provide a performance benefit.

This post shows how we can adapt the above idea to a more general concept that I'm calling incremental attributes. It's more general in that incremental attributes can express the incremental fold as well as other flows of incremental computation.

#### Review of the incremental fold

First, let's review the implementation of the incremental fold.

[Note: This is not a literate Haskell article, because there's too much duplicated code required; however, all source files are available.]

```haskell
module IncrementalAttributes1Synthesized where

data Tree a s
  = Tip s
  | Bin a (Tree a s) (Tree a s) s
  deriving Show

data Alg a s
  = Alg { stip :: s, sbin :: a -> s -> s -> s }

result :: Tree a s -> s
result (Tip s)       = s
result (Bin _ _ _ s) = s

tip :: Alg a s -> Tree a s
tip alg = Tip (stip alg)

bin :: Alg a s -> a -> Tree a s -> Tree a s -> Tree a s
bin alg x lt rt = Bin x lt rt (sbin alg x (result lt) (result rt))

empty :: (Ord a) => Alg a s -> Tree a s
empty = tip

singleton :: (Ord a) => Alg a s -> a -> Tree a s
singleton alg x = bin alg x (tip alg) (tip alg)

insert :: (Ord a) => Alg a s -> a -> Tree a s -> Tree a s
insert alg x t =
  case t of
    Tip _ ->
      singleton alg x
    Bin y lt rt _ ->
      case compare x y of
        LT -> bin alg y (insert alg x lt) rt
        GT -> bin alg y lt (insert alg x rt)
        EQ -> bin alg x lt rt

fromList :: (Ord a) => Alg a s -> [a] -> Tree a s
fromList alg = foldr (insert alg) (empty alg)

heightAlg :: Alg a Integer
heightAlg = Alg 0 (\_ x y -> 1 + max x y)

t1 = fromList heightAlg "azbycx"
```

This will be the starting point for our discussion. We have a basic binary tree with an algebra type that gives the fold functions, `stip` and `sbin`. The application of the algebra is blended into the utility functions just as it was with the incremental fold. I have chosen to keep the representation simple, so the algebra is passed around as an argument to the functions. This could, of course, be done with type classes. Lastly, we have an example algebra that determines the height of a tree. To get the height of the example `t1`, simply type `result t1` after loading this file into GHCi.
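To see that other catamorphisms drop into the same algebra, here is a further synthesized example of my own (not from the article): an algebra for the size of the tree, shown with just enough of the machinery above to run standalone.

```haskell
module IncrementalAttributes1Size where

data Tree a s
  = Tip s
  | Bin a (Tree a s) (Tree a s) s
  deriving Show

data Alg a s
  = Alg { stip :: s, sbin :: a -> s -> s -> s }

result :: Tree a s -> s
result (Tip s)       = s
result (Bin _ _ _ s) = s

tip :: Alg a s -> Tree a s
tip alg = Tip (stip alg)

bin :: Alg a s -> a -> Tree a s -> Tree a s -> Tree a s
bin alg x lt rt = Bin x lt rt (sbin alg x (result lt) (result rt))

insert :: (Ord a) => Alg a s -> a -> Tree a s -> Tree a s
insert alg x t =
  case t of
    Tip _ -> bin alg x (tip alg) (tip alg)
    Bin y lt rt _ ->
      case compare x y of
        LT -> bin alg y (insert alg x lt) rt
        GT -> bin alg y lt (insert alg x rt)
        EQ -> bin alg x lt rt

-- Number of Bin nodes: 0 at a Tip, 1 plus the children's sizes at a Bin.
sizeAlg :: Alg a Integer
sizeAlg = Alg 0 (\_ x y -> 1 + x + y)

t2 :: Tree Char Integer
t2 = foldr (insert sizeAlg) (tip sizeAlg) "azbycx"
```

After loading this, `result t2` gives 6, and each `insert` recomputed the size only along the path it rebuilt.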

#### Incrementally inherited attributes

Suppose that, instead of height, we wanted to incrementally compute the depth of every node. Thus, we would attach a value to each node that stored its distance from the root. We can't do that with the above implementation, because attributes are only fed "upwards" or from the leaves to the root. We have no way of passing information downwards. The solution is to use inherited attributes.

Before presenting the code, here is a little side note. You may notice the use of the words synthesized and inherited here. The terminology comes from the study of attribute grammars, which extend context-free grammars with semantic operations. Wouter Swierstra wrote a great tutorial on attribute grammars in Haskell for The Monad Reader in 2005. In fact, I use an example from there at the end of this article. You can think of synthesized as "produced by the children for the parent" and inherited as "passed down from the parent to the children."

As you can now imagine, inherited attributes will allow us to bring information to the leaves. Many of the changes to the code are trivial, so we ignore them. The relevant changes are the following:

```haskell
data Alg a i
  = Alg { itip :: i -> i, ibin :: a -> i -> i }

tip :: Alg a i -> i -> Tree a i
tip alg i = Tip (itip alg i)

bin :: Alg a i -> i -> a -> Tree a i -> Tree a i -> Tree a i
bin alg i x lt rt = Bin x (update i lt) (update i rt) i
  where
    update i' t =
      case t of
        Tip _ ->
          tip alg i'
        Bin x lt rt _ ->
          let s = ibin alg x i' in
          Bin x (update s lt) (update s rt) s
```

The datatype `Alg` (that I'm still calling the algebra, though that may not be proper use of the category theoretical term) now has functions that take an inherited attribute from a parent and create a new inherited attribute to be stored with the node and passed on to its children. The change to `bin` is the more complicated of the changes, because once a `Bin` constructor is constructed, all of its child nodes must be updated with new inherited values.

To implement an algebra for depth, we do the following:

```haskell
depthAlg :: Alg a Int
depthAlg = Alg (+1) (const (+1))

t1 = fromList depthAlg 0 "azbycx"
```

Load the code and check the result to see for yourself what it looks like.
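For readers who want to run this without the source files, here is one possible self-contained reconstruction of the omitted "trivial" changes. The way the seed `i` is threaded through `insert` and `fromList`, and the `iresult` accessor, are my own guesses, not the article's code.

```haskell
module IncrementalAttributes2Inherited where

data Tree a i
  = Tip i
  | Bin a (Tree a i) (Tree a i) i
  deriving Show

data Alg a i
  = Alg { itip :: i -> i, ibin :: a -> i -> i }

tip :: Alg a i -> i -> Tree a i
tip alg i = Tip (itip alg i)

bin :: Alg a i -> i -> a -> Tree a i -> Tree a i -> Tree a i
bin alg i x lt rt = Bin x (update i lt) (update i rt) i
  where
    update i' t =
      case t of
        Tip _ -> tip alg i'
        Bin y ylt yrt _ ->
          let s = ibin alg y i'
          in  Bin y (update s ylt) (update s yrt) s

-- bin re-propagates the inherited value through both subtrees, so the
-- recursive calls can simply pass the seed along.
insert :: (Ord a) => Alg a i -> i -> a -> Tree a i -> Tree a i
insert alg i x t =
  case t of
    Tip _ -> bin alg i x (tip alg i) (tip alg i)
    Bin y lt rt _ ->
      case compare x y of
        LT -> bin alg i y (insert alg i x lt) rt
        GT -> bin alg i y lt (insert alg i x rt)
        EQ -> bin alg i x lt rt

fromList :: (Ord a) => Alg a i -> i -> [a] -> Tree a i
fromList alg i = foldr (insert alg i) (tip alg i)

-- The attribute stored at the root, for inspection in GHCi.
iresult :: Tree a i -> i
iresult (Tip i)       = i
iresult (Bin _ _ _ i) = i

depthAlg :: Alg a Int
depthAlg = Alg (+1) (const (+1))

t1 :: Tree Char Int
t1 = fromList depthAlg 0 "azbycx"
```

With this version, the root of `t1` stores depth 0 and each child stores one more than its parent.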

#### One is not enough

Now that we have use cases for synthesized and inherited incremental attributes, we're going to want both. Fortunately, that's not too difficult. The new datatypes are simply a product of the two previous:

```haskell
data Tree a i s
  = Tip i s
  | Bin a (Tree a i s) (Tree a i s) i s
  deriving Show

data Alg a i s
  = Alg { itip :: i -> i, ibin :: a -> i -> i
        , stip :: s,      sbin :: a -> s -> s -> s }
```

You can now see why I was using `s` and `i` to distinguish the types of the attributes. Again, most of the code modifications are trivial, and the `bin` function needs special attention.

```haskell
bin :: Alg a i s -> i -> a -> Tree a i s -> Tree a i s -> Tree a i s
bin alg i x lt rt =
  Bin x (update i lt) (update i rt) i (sbin alg x (sresult lt) (sresult rt))
  where
    update i' t =
      case t of
        Tip _ _ ->
          tip alg i'
        Bin y ylt yrt _ s ->
          let j = ibin alg y i' in
          Bin y (update j ylt) (update j yrt) j s
```

Defining an algebra for both depth and height is no more difficult than defining each alone.

```haskell
depthAndHeightAlg :: Alg a Int Int
depthAndHeightAlg = Alg (+1) (const (+1)) 1 (\_ x y -> 1 + max x y)
```

#### Feedback

You probably know where this is going by now. There's that famous saying, "what goes down must come up." We want more than just two separate directions of information flow. We want to utilize the information flowing toward the leaves to help determine that which flows up to the root or vice versa. A simple example of this is a counter that annotates each node with its rank in an in-order traversal. This can't be done with just synthesized or inherited attributes, because it depends on a combination of input from the parent, children, and siblings for each node.

The code is similar to the previous implementation, but the differences in `Alg` are important.

```haskell
data Alg a i s
  = Alg { ftip :: i -> s, fbin :: a -> i -> s -> s -> (i, i, s) }
```

Each node now has a single `i`nherited attribute, because it has a single parent. We use the `s`ynthesized attributes to store a local result, so each constructor only has one as an output. For the `Bin` constructor, we have a pair of incoming synthesized values and a pair of outgoing inherited values. The left component in each pair is associated with the left child, and the right with the right child. This allows us to have information flow up from the synthesized attribute of the left child and down to the inherited attribute of the right or in the opposite direction.
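As a sanity check (my own example, not from the article), the feedback algebra subsumes the earlier separate functions: the `(i, i, s)` result can compute the inherited outputs without looking at the synthesized inputs, and vice versa. Here depth flows down and height flows up, independently.

```haskell
-- The feedback-style algebra from the article.
data Alg a i s
  = Alg { ftip :: i -> s, fbin :: a -> i -> s -> s -> (i, i, s) }

depthHeightAlg :: Alg a Int Int
depthHeightAlg = Alg ft fb
  where
    ft _ = 0                                  -- a Tip has height 0
    -- Both children inherit depth i + 1; the node's synthesized
    -- height ignores i entirely.
    fb _ i hl hr = (i + 1, i + 1, 1 + max hl hr)
```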

The `bin` function is again tricky to write correctly.

```haskell
bin :: Alg a i s -> i -> a -> Tree a i s -> Tree a i s -> Tree a i s
bin alg i x lt rt = update i (Bin x lt rt undefined undefined)
  where
    update j t =
      case t of
        Tip _ _ ->
          tip alg j
        Bin y ylt yrt _ _ ->
          let (li, ri, s) = fbin alg y j (sresult zlt) (sresult zrt)
              zlt = update li ylt
              zrt = update ri yrt
          in  Bin y zlt zrt j s
```

Notice the circular programming here. The definition and uses of, for example, `li` and `zlt` show that we could easily loop infinitely. This depends on how the specific algebra functions are implemented. Here is the "counter example":

```haskell
newtype CounterI = CI { cntI :: Int } deriving Show

data CounterS = CS { size :: Int, cntS :: Int } deriving Show

counterAlg :: Alg a CounterI CounterS
counterAlg = Alg ft fb
  where
    ft :: CounterI -> CounterS
    ft i = CS { size = 1, cntS = cntI i }

    fb :: a -> CounterI -> CounterS -> CounterS -> (CounterI, CounterI, CounterS)
    fb _ i ls rs =
      ( i                                   -- left
      , CI { cntI = 1 + cntI i + size ls }  -- right
      , CS { size = 1 + size ls + size rs
           , cntS = cntI i + size ls }
      )

t1 = fromList counterAlg (CI { cntI = 0 }) "azbycx"
```

I've relied heavily on record syntax to document the flow of information. Notice in `fb` how the `i` is directly inherited by the left child and how the right child inherits the new count that depends on the size of the left subtree and the inherited count of its parent. As shown in this example, the dependency flow must be unidirectional for one desired result. But there's no reason we can't go up, down, and then up again (for example).

#### Revisiting the diff problem

As I mentioned, Wouter wrote a good introduction to attribute grammars in Haskell (which I highly recommend that you read). He focuses on the use of the UUAG system to generate code for solving problems that are harder to solve with traditional functional programming techniques. He describes the problem as follows:

> Suppose we want to write a function `diff :: [Float] -> [Float]` that given a list `xs`, calculates a new list where every element `x` is replaced with the difference between `x` and the average of `xs`. Similar problems pop up in any library for performing statistical calculations.

Great problem! And we can solve it using incremental attributes in Haskell instead of in UUAG's attribute grammar syntax.

```haskell
newtype DiffI = DI { avg :: Float } deriving Show

data DiffS = DS { sumD :: Float, len :: Float, res :: Float } deriving Show

diffAlg :: Alg Float DiffI DiffS
diffAlg = Alg ft fb
  where
    ft :: DiffI -> DiffS
    ft i =
      DS { sumD = 0
         , len = 0
         , res = 0
         }

    fb :: Float -> DiffI -> DiffS -> DiffS -> (DiffI, DiffI, DiffS)
    fb x i ls sr =
      ( i
      , i
      , DS { sumD = x + sumD ls + sumD sr
           , len = 1 + len ls + len sr
           , res = x - avg i
           }
      )
```

The implementation is not too much more difficult than the attribute grammar solution. We don't have the clean separation of concerns, but adding another attribute only means adding another field in `DI` or `DS` depending on whether it's inherited or synthesized.

Oh, but we're not done! Where does the actual average get computed? Ah, right: it's fed in as the root's inherited attribute.

```haskell
t2 = let val = fromList diffAlg (DI { avg = a }) [1,4,1.5,3.5,2,3,2.5]
         s = sresult val
         a = sumD s / len s
     in  val
```

Here's another example of circular programming. Due to the way we implemented the application of the algebra, we can take advantage of lazy evaluation to ensure that the sum and length (and thus average) are incrementally computed and, as a result, the difference (`res`) is determined as needed for each node.
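The same circular trick in miniature (my own sketch, on plain lists rather than the article's trees): one traversal produces the sum, the length, and the differences, with the average fed back lazily.

```haskell
diff :: [Float] -> [Float]
diff xs = ds
  where
    (s, n, ds) = go xs
    a          = s / n          -- depends on the traversal's results ...
    go []     = (0, 0, [])
    go (y:ys) = let (s', n', ds') = go ys
                in  (y + s', 1 + n', (y - a) : ds')
                    -- ... and is mentioned, but not forced, inside it
```

Forcing the spine of the result builds the thunks `y - a`; only when an element is demanded does `a` force the sum and length, which by then are fully defined.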

1. "Why Attribute Grammars Matter" is a good paper, but I never particularly liked the examples. It is useful as a brief tutorial on attribute grammars; however, I felt the examples weren't enough to motivate why attribute grammars matter. (I do think they matter, by the way.)

In short, the corecursive diff is a red herring in my book, because it's quite impossible to perform that computation without touching each number twice.

So yes, it eliminates a traversal of the list, but at the cost of building otherwise unnecessary thunks instead of concrete floats. In a loose sense, one could say that the second pass over the list has merely been made implicit.

However, that paper did very much motivate me to work on better examples of corecursion. You can read a draft of the result, entitled "Lloyd Allison's Corecursive Queues: Why Continuations Matter".

2. By the way, here's a somewhat more thorough look at the reasons I don't like the corecursive diff example.