A central theme of programming is abstraction. Functions abstract over values by replacing concrete values by variables. Types abstract over values, separating them into classes that form the admissible input or output values of functions. Polymorphic definitions abstract over concrete types by enabling functions that work uniformly for a whole range of types. The list goes on and on, and the ability to handle abstractions and to rapidly switch between different levels of abstraction might well be the most important technical skill of a programmer.
In this chapter, we will abstract over functions. Strong support for abstracting over functions, passing them around, and creating new functions on the fly is at the core of the functional programming paradigm.
In Recursion, we identified several different patterns of processing lists. Recall the following function definitions:
allSquares :: Num a => [a] -> [a]
allSquares [] = []
allSquares (x : xs) = x * x : allSquares xs
allToUpper :: String -> String
allToUpper [] = []
allToUpper (chr : restString) = toUpper chr : allToUpper restString
distancesFromPoint :: ColourPoint -> [ColourPoint] -> [Float]
distancesFromPoint point []
= []
distancesFromPoint point (p : ps)
= distance point p : distancesFromPoint point ps
As we observed then, these functions all essentially have the following form:
recursiveFunction [] = []
recursiveFunction (x : xs) = doSomethingWith x : recursiveFunction xs
The difference between the three functions is what they do in place of doSomethingWith
. If these functions differed only in, say, a constant value that they use, we would never write three different functions. Instead, we would abstract over that value and pass it as an additional parameter to the function. So why should we have to write different versions of a function merely because they apply different operations to the elements of a list?
Just as in the case of functions that use different constant values, we want to abstract over doSomethingWith
and pass this functionality as an additional parameter to the traversal function. In Haskell, parameterising over a value or a function is essentially the same. Hence, we rename recursiveFunction
to map
(the standard name for applying a given function to every element of a list) and add a new function argument f
that represents the computation that we called doSomethingWith
above.
import Prelude hiding (map)
map f [] = []
map f (x : xs) = f x : map f xs
The question that remains is what is the type of map
? It can be applied to lists of any element type a
, as long as its first argument, f
, is a function which can be applied to values of type a
and returns some value of type b
. Hence, applying f
to all elements yields a list of element type b
and we get
map :: (a -> b) -> [a] -> [b]
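For example, using toUpper from Data.Char, we might evaluate the following expressions (results shown as comments):
map toUpper "hello"      -- "HELLO"
map negate [1, 2, 3]     -- [-1, -2, -3]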
Functions like map
, which take another function as an argument, as well as functions that return another function as their result, are called higher-order functions.
We can use the map
function (either our own version or the one from the Prelude
) to define the three functions from the beginning of the chapter as follows:
allSquares :: Num a => [a] -> [a]
allSquares xs = map square xs
where
square x = x * x
allToUpper :: String -> String
allToUpper string = map toUpper string
distancesFromPoint :: ColourPoint -> [ColourPoint] -> [Float]
distancesFromPoint point points = map distanceP points
where
distanceP :: ColourPoint -> Float
distanceP p = distance point p
In fact, we can simplify the last definition (and omit the local function distanceP
) by applying a concept from First Steps, namely that of partial applications. We used the function
average :: Float -> Float -> Float
average a b = (a + b) / 2.0
to demonstrate that, by applying a binary function to only a single argument, as in average 5.0
, we get a new function. One that, in this case, is of type Float -> Float
and calculates the average of 5.0
and whatever value the new function is applied to. Similarly, we can use a partial application of distance
to save us the trouble of having to define a local function in the definition of distancesFromPoint
:
distancesFromPoint :: ColourPoint -> [ColourPoint] -> [Float]
distancesFromPoint point points = map (distance point) points
Here (distance point)
results in a function that takes a (single) point p
as argument, and then, calculates the distance between point
and p
. In other words, the result of (distance point)
is the same function that we called distanceP
above. A consequence of that is that we could have defined distanceP
alternatively as
distanceP :: ColourPoint -> Float
distanceP = distance point
This and the use of the parameter f
in map
demonstrate nicely that variables in Haskell are used for data values (e.g., numbers, strings, and lists) in exactly the same way as they are used for functions. Functions are simply values whose type includes the ->
type constructor; otherwise, they are handled like any other value.
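To see partial application in isolation once more, we can give the partially applied average from above a name of its own (the name averageWith5 is chosen purely for illustration):
averageWith5 :: Float -> Float
averageWith5 = average 5.0   -- supplies only the first of average's two arguments
-- averageWith5 3.0 evaluates to 4.0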
A note on syntax. In the definition of distancesFromPoint
that makes use of the partial application of distance
, we need to make sure to bracket the expression (distance point)
). Without the parentheses, we get map distance point points
, which has a very different meaning: it represents the application of map
to three arguments, namely the function distance
, point
, and finally points
. The result would be a type error, as map
expects a list as its second argument and can be applied to at most two arguments.
The function zipWith
is closely related to map
: it takes a binary function and two lists as arguments, and applies the binary function to the first elements of the two lists, then to their second elements, and so on:
zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith op [] _ = []
zipWith op _ [] = []
zipWith op (x : xs) (y : ys) = (x `op` y) : zipWith op xs ys
The two base cases ensure that the length of the result list is the same as the length of the shorter of the two argument lists.
zipWith average [1, 2, 3] [4, 5, 6] = [2.5, 3.5, 4.5]
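If the two lists have different lengths, the surplus elements of the longer list are simply ignored; for example:
zipWith (+) [1, 2, 3] [10, 20, 30, 40]   -- [11, 22, 33]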
The second recursive pattern on lists that we discussed in Recursion was filtering (which removes elements from a list). In particular, we discussed the two functions extractDigits
, which extracts all digits from a string, and inRadius
, which checks for points within a certain distance from another given point:
import Data.Char
extractDigits :: String -> String
extractDigits []
= []
extractDigits (chr : restString)
| isDigit chr = chr : extractDigits restString
| otherwise = extractDigits restString
inRadius :: ColourPoint -> Float -> [ColourPoint] -> [ColourPoint]
inRadius point radius []
= []
inRadius point radius (p : ps)
| distance point p <= radius = p : inRadius point radius ps
| otherwise = inRadius point radius ps
Again, we can see that both functions have the same structure. The only meaningful difference between them is the test that decides whether we take the first recursive case (and keep the currently inspected list element) or whether we select the second recursive case (and drop the currently inspected element). The function inRadius
gets additional arguments, but they are only passed down to the test that distinguishes the two functions. Tests (also called predicates) can be represented as functions from the tested value to a Boolean result; hence, it is natural to represent filtering as a higher-order function that takes a predicate as its function argument.
import Prelude hiding (filter)
filter :: (a -> Bool) -> [a] -> [a]
filter p []
= []
filter p (x : xs)
| p x = x : filter p xs
| otherwise = filter p xs
It has the same structure as extractDigits
and inRadius
, but abstracts the test out as a predicate p
. Hence, we can use filter
to implement both extractDigits
and inRadius
:
extractDigits :: String -> String
extractDigits strings = filter isDigit strings
inRadius :: ColourPoint -> Float -> [ColourPoint] -> [ColourPoint]
inRadius point radius points = filter inRadiusP points
where
inRadiusP :: ColourPoint -> Bool
inRadiusP p = distance point p <= radius
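For example, we can evaluate:
filter isDigit "year 2024"    -- "2024"
filter even [1 .. 10]         -- [2, 4, 6, 8, 10]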
The definition of inRadius
is an example where the use of a higher-order function gets somewhat verbose, as we need to define a local function (here, inRadiusP
) to pass that function as an argument. The first occurrence of the same issue was the definition of allSquares
. (In the case of distancesFromPoint
, we also started out with a where
clause, but we were able to eliminate it using partial application.)
We only use the local function inRadiusP
once. Giving it a name and defining it separately from its use as an argument to filter
seems cumbersome. To simplify cases like this, Haskell supports anonymous functions (sometimes called closures). These are function definitions without a name that can be defined inline; for example, as a function argument. Instead of a function equation of the form
functionName a1 a2 ⋯ an = body
and then passing functionName
as an argument, we simply write the expression
\a1 a2 ⋯ an -> body
which represents the same n-ary function.
This enables us to shorten inRadius
to
inRadius :: ColourPoint -> Float -> [ColourPoint] -> [ColourPoint]
inRadius point radius points = filter (\p -> distance point p <= radius) points
We read \p -> distance point p <= radius
as “lambda p
to distance point p <= radius
” and also call it a lambda abstraction.
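A lambda abstraction is applied just like any named function; for instance:
(\x -> x * x) 5    -- 25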
We can use the same technique with allSquares
:
allSquares :: Num a => [a] -> [a]
allSquares xs = map (\x -> x * x) xs
The definition of allSquares
using map
no longer inspects the argument list, as the entire recursive list traversal is encapsulated in map
. In fact, allSquares
just passes the list through as the last argument and we can regard it simply as the mapping of the anonymous function \x -> x * x
. Hence, we can use the concept of partial application from First Steps, just as we did earlier with distanceP
, and omit that last argument, which leads to
allSquares :: Num a => [a] -> [a]
allSquares = map (\x -> x * x)
Recall the type of map
:
map :: (a -> b) -> [a] -> [b]
It expects two arguments: the mapped function and the list. In the definition of allSquares
, we apply map
to its first argument, \x -> x * x
, which yields a new function of type Num a => [a] -> [a]
that, applied to a list, in turn applies \x -> x * x
to all elements of that list.
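For example:
allSquares [1, 2, 3]    -- [1, 4, 9]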
This style of expressing functions as composition of other functions, without explicitly referring to the data structures that they process is called point-free style.
The following screencast illustrates the derivation of map
and its use step by step.
After mapping and filtering, we considered reductions in Recursion, such as
minList :: [Int] -> Int
minList [] = maxBound
minList (x:xs) = x `min` minList xs
sum :: Num a => [a] -> a
sum [] = 0
sum (x:xs) = x + sum xs
and also the product
functions that closely mirrors sum
. The structural similarity of minList
and sum
is immediately obvious: the main difference is that minList
uses min
and sum
uses (+)
to combine the elements of the consumed list with the result of the recursive call. Hence, we need to pass that combination function as an argument to a higher-order reduction function. However, there is another difference, namely the value returned in the base case. It also varies depending on the concrete reduction, so we make it a second argument of the reduction function — thus, we define
import Prelude hiding (foldr)
foldr op n [] = n
foldr op n (x:xs) = x `op` foldr op n xs
and use it to simplify
minList :: [Int] -> Int
minList xs = foldr min maxBound xs
sum :: Num a => [a] -> a
sum xs = foldr (+) 0 xs
As before, the trailing list argument xs
is merely passed through and we can adopt a point-free style:
minList :: [Int] -> Int
minList = foldr min maxBound
sum :: Num a => [a] -> a
sum = foldr (+) 0
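To see how foldr combines the list elements, we can trace a small example step by step (each line rewrites the previous one):
-- foldr (+) 0 [1, 2, 3]
--   = 1 + foldr (+) 0 [2, 3]
--   = 1 + (2 + foldr (+) 0 [3])
--   = 1 + (2 + (3 + foldr (+) 0 []))
--   = 1 + (2 + (3 + 0))
--   = 6
Note how the parentheses nest to the right.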
The question that we haven't answered yet is what type we should give foldr
? One point is clear: if the argument list is of type [a]
, then op
has to be a binary function which expects as its first argument a value of type a
. From the example reductions that we considered so far, we might think that op
's second argument should be of type a
as well. However, that is not generally the case. Consider the following function that determines whether all elements of a list are even:
allEven :: [Int] -> Bool
allEven [] = True
allEven (x:xs) = even x && allEven xs
We can easily rewrite it such that it has the same structure as foldr
:
allEven :: [Int] -> Bool
allEven [] = True
allEven (x:xs) = x `allE` allEven xs
where
allE :: Int -> Bool -> Bool
allE x b = even x && b
Then, we replace the explicit recursion by a use of foldr
, the local function definition by an inline lambda abstraction, and use point-free style to arrive at a concise definition:
allEven :: [Int] -> Bool
allEven = foldr (\x b -> even x && b) True
The lambda abstraction \x b -> even x && b
is of type Int -> Bool -> Bool
; i.e., its first and second argument have different types, but its result type coincides with the type of the second argument. The latter is necessarily the case for every combination function passed to foldr
, as the higher-order argument op
gets the result of the recursive call to foldr
as its second argument; so, we arrive at
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr op n [] = n
foldr op n (x:xs) = x `op` foldr op n xs
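Tracing allEven on a small input shows how op receives an Int as its first argument and the Bool result of the recursive call as its second (writing op for \x b -> even x && b):
-- foldr op True [1, 2]
--   = 1 `op` foldr op True [2]
--   = 1 `op` (2 `op` foldr op True [])
--   = 1 `op` (2 `op` True)
--   = even 1 && (even 2 && True)
--   = False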
In the previous examples, the result of applying foldr
was always a scalar value, but we have already seen list functions that produce new lists and nevertheless fit the reduction pattern, for example,
concat :: [[a]] -> [a]
concat [] = []
concat (xs:xss) = xs ++ concat xss
reverse :: [a] -> [a]
reverse [] = []
reverse (x:xs) = x `snoc` reverse xs
where
snoc x xs = xs ++ [x]
We can rewrite them to
concat :: [[a]] -> [a]
concat = foldr (++) []
reverse :: [a] -> [a]
reverse = foldr (\x xs -> xs ++ [x]) []
These functions produce lists, like map
, but in contrast to map
, they alter the structure of the consumed list.
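For example:
concat [[1, 2], [3], [4, 5]]    -- [1, 2, 3, 4, 5]
reverse [1, 2, 3]               -- [3, 2, 1]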
Recent Haskell systems based on the Glasgow Haskell Compiler, Version 7.10 or higher, include a new version of the Prelude
, where several list functions have been generalised. In particular, the type of foldr
is
foldr :: Foldable f => (a -> b -> b) -> b -> f a -> b
This more general version still operates on lists as we discussed, but it can also be used with other data structures that are covered by the Foldable
type class. The same goes for other reductions. We will discuss this in more detail in a later chapter. For now, we will stick to the purely list-based reductions that we defined in this chapter.
In Recursion, we already discussed two forms of reduction, depending on whether we reduce the argument list from the right or the left. The function foldr
implements reduction from the right; so, naturally, we need a second higher-order function, foldl
, to capture the pattern of left-associative reduction. In Recursion, we defined the general form of left-associative traversals as follows:
leftReduce acc [] = acc
leftReduce acc (x:xs) = leftReduce (acc `op` x) xs
By adding the combination operator op
as a higher-order argument, we get
foldl :: (b -> a -> b) -> b -> [a] -> b
foldl op acc [] = acc
foldl op acc (x:xs) = foldl op (acc `op` x) xs
The type is subtly different from that of foldr :: (a -> b -> b) -> b -> [a] -> b
, as the accumulator acc
, in foldl
, has to have the same type as the first argument of op
.
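Tracing a small example shows how the parentheses nest to the left, in contrast to foldr:
-- foldl (+) 0 [1, 2, 3]
--   = foldl (+) (0 + 1) [2, 3]
--   = foldl (+) ((0 + 1) + 2) [3]
--   = foldl (+) (((0 + 1) + 2) + 3) []
--   = ((0 + 1) + 2) + 3
--   = 6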
With foldl
, we can easily re-implement the deductFromAccount
example of Recursion:
deductFromAccount :: Int -> [Int] -> Int
deductFromAccount initialBalance = foldl deduct initialBalance
where
balance `deduct` x
| balance < x = error ("Your account balance is " ++ show balance ++
" - cannot deduct " ++ show x ++ " cents")
| otherwise = balance - x
Similarly, the conversion of a string of digits to an integral value (using digitToInt from Data.Char) and the fast list reversal function from Recursion are straightforward to express with higher-order functions:
stringToInt :: String -> Int
stringToInt = foldl (\acc chr -> 10 * acc + digitToInt chr) 0
fastReverse :: [a] -> [a]
fastReverse = foldl (\accList x -> x : accList) []
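For example, stringToInt consumes the digits from the left, multiplying the accumulator by 10 at each step:
-- stringToInt "407"
--   = 10 * (10 * (10 * 0 + 4) + 0) + 7
--   = 407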
These examples highlight the compactness of the code facilitated by the use of higher-order functions. This isn't simply a matter of the convenience of writing less code. The use of standard functions encapsulating common recursive patterns also improves readability (as the recursive pattern is easily identified by way of the name of the higher-order function) and reduces testing effort (as code you don't write cannot go wrong). Picking the appropriate higher-order function, or, more often, the right combination of higher-order functions, requires some practice, but it is well worth the effort.
In Recursion, we discussed two strategies for functions that need to combine multiple recursive traversals: we can either combine them into one explicit recursive traversal
sumOfSquareRoots :: (Ord a, Floating a) => [a] -> a
sumOfSquareRoots []
= 0
sumOfSquareRoots (x:xs)
| x > 0 = sqrt x + sumOfSquareRoots xs
| otherwise = sumOfSquareRoots xs
or separate them into multiple traversals implemented by separate functions:
sumOfSquareRoots :: (Ord a, Floating a) => [a] -> a
sumOfSquareRoots xs = sum (allSquareRoots (filterPositives xs))
where
allSquareRoots [] = []
allSquareRoots (x:xs) = sqrt x : allSquareRoots xs
filterPositives []
= []
filterPositives (x:xs)
| x > 0 = x : filterPositives xs
| otherwise = filterPositives xs
When we can use standard patterns of recursion, the latter approach is much more attractive as it maximises code reuse:
sumOfSquareRoots :: (Ord a, Floating a) => [a] -> a
sumOfSquareRoots xs = sum (map sqrt (filter (> 0) xs))
Here, the expression (> 0)
is a partial application of an infix operator, also called a section. It is equivalent to the more verbose use of a lambda abstraction: \x -> x > 0
. In the same manner, we can supply the left-hand side argument, without the right-hand side argument, to an infix operator to get a unary function.
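For example, the following sections all denote unary functions:
map (* 2) [1, 2, 3]     -- [2, 4, 6];  (* 2) doubles its argument
map (^ 2) [1, 2, 3]     -- [1, 4, 9];  (^ 2) squares its argument
map (2 ^) [1, 2, 3]     -- [2, 4, 8];  (2 ^) raises 2 to the given power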
Parameterised functions, such as map
, filter
, and foldl
, are often called combinators (here, list combinators). Hence, we call the one-line definition of sumOfSquareRoots
combinator-based. Once many combinators are involved, a combinator-based expression tends to require many parentheses. To avoid this, Haskell's Prelude provides some further combinators. The first is an alternative form of function application, written as an infix operator with lower precedence than conventional function application:
infixr 0 $
($) :: (a -> b) -> a -> b
f $ x = f x
The declaration infixr 0 $
states that ($)
is right associative and has precedence level 0
, which is the weakest level of precedence in Haskell — i.e., it has the weakest possible binding power. For example, in
sqrt $ average 60 30
we first evaluate the application of average
to 60
and 30
, and then, apply sqrt
to the result — i.e., it is equivalent to using parentheses as follows: sqrt (average 60 30)
.
We can use the ($)
operator to remove some of the parentheses in sumOfSquareRoots
:
sumOfSquareRoots xs = sum $ map sqrt $ filter (> 0) xs
Whether you prefer the original or this version of the function is a matter of taste and what structure ought to be emphasised in the function definition. However, there is another higher-order function that leads to a clearly superior definition. We are going to discuss it next.
Given that the definition
sumOfSquareRoots xs = sum $ map sqrt $ filter (> 0) xs
does not inspect the argument list xs
, it should be possible to use point-free style. Purely syntactically, we might be tempted to simply drop xs
as a parameter and as an argument to filter
, resulting in the following definition:
-- this code will trigger a compiler error
sumOfSquareRoots = sum $ map sqrt $ filter (> 0)
Unfortunately, this does not work, as we still effectively have three nested function calls (even though we avoided the parentheses that would make the nesting explicit by using the $
operator). However, conceptually, we use the three combinators filter
, map
, and sum
like a pipeline, where the individual stages process the argument list one after the other. Instead of using function application, we can combine the pipeline stages using function composition
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(f . g) x = f (g x)
The composition f . g
of two functions f
and g
produces a new function that given an argument x
first applies g
to x
, and then, applies f
to the result of that first application. Using this, we successfully define
sumOfSquareRoots = sum . map sqrt . filter (> 0)
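Tracing a small input through the pipeline illustrates how the three stages cooperate:
-- sumOfSquareRoots [4, -1, 9]
--   = sum (map sqrt (filter (> 0) [4, -1, 9]))
--   = sum (map sqrt [4, 9])
--   = sum [2.0, 3.0]
--   = 5.0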
The following screencast illustrates the use of higher-order functions to render polygons.
We have seen that the use of higher-order functions can increase code reuse and lead to concise, more readable code. We have yet to discuss that, in addition, higher-order functions can provide flexibility. To illustrate this benefit, let us return to the Pythagorean trees from Spirals, Snowflakes & Trees: Recursion in Pictures. The exercises in that chapter focus on extending the given code, such that multi-coloured trees can be produced (as opposed to creating a single path that represents the entire tree, which only lets us display the tree in a single colour). We might implement this as follows:
fade :: Colour -> Colour
fade col@(redC, greenC, blueC, opacityC)
| opacityC == 0 = col
| otherwise = (redC, greenC, blueC, opacityC - 20)
colouredFTree :: Int -> Float -> Colour -> Line -> Picture
colouredFTree n factor colour line = fT n colour line
where
fT 0 colour line = [(colour, [fst line, snd line])]
fT n colour line = [(colour,[p2, p3])]
++ [(colour,[p4, p1])]
++ fT (n-1) colour' (p5, p3)
++ fT (n-1) colour' (p4, p5)
where
colour' = fade colour
[p1,p2,p3,p4,_] = polygon 4 line
(_, p5) = rotateLine (factor * pi)
$ (\(x,y) -> (y,x))
$ scaleLine 0.5 (p3, p4)
Using this function in
drawPicture 3 (colouredFTree 14 0.55 red line3)
produces the following picture, where we incrementally reduce the opacity of the colour as the recursion progresses.
Now, there are a lot of other, more interesting colour effects than simply reducing the opacity of a colour. One option is to bleach the initial colour:
bleach :: Colour -> Colour
bleach (redC, greenC, blueC, opacityC)
= (min 255 (redC + 18),
min 255 (greenC + 18),
min 255 (blueC + 18),
opacityC)
colouredFTree :: Int -> Float -> Colour -> Line -> Picture
colouredFTree n factor colour line = fT n colour line
where
fT 0 colour line = [(colour, [fst line, snd line])]
fT n colour line = [(colour,[p2, p3])]
++ [(colour,[p4, p1])]
++ fT (n-1) colour' (p5, p3)
++ fT (n-1) colour' (p4, p5)
where
colour' = bleach colour
[p1,p2,p3,p4,_] = polygon 4 line
(_, p5) = rotateLine (factor * pi)
$ (\(x, y) -> (y, x))
$ scaleLine 0.5 (p3, p4)
With this definition, we can generate the following pictures, by using red, green and black, respectively, as initial colours:
Clearly, there are countless other possibilities, and we definitely don't want to rewrite the code, or define a new variant of the existing code, every time we try something new. Instead of committing to a fixed method of calculating the colour for each recursive step, we can pass a function that takes care of this decision as a higher-order parameter to the tree construction function.
What should the type of that function parameter be? In the two examples above, we used a function of type Colour -> Colour
, so this seems a reasonable choice. If we want to avoid having to pass an initial colour in addition to the function, we can alternatively model it as a function from recursion depth to colour, of type Int -> Colour
.
colouredFTree :: Int -> Float -> (Int -> Colour) -> Line -> Picture
colouredFTree n factor colourFun line = fT n line
where
fT 0 line = [(colourFun 0, [fst line, snd line])]
fT n line = [(colourFun n, [p2, p3])]
++ [(colourFun n,[p4, p1])]
++ fT (n-1) (p5, p3)
++ fT (n-1) (p4, p5)
where
[p1,p2,p3,p4,_] = polygon 4 line
(_, p5) = rotateLine (factor * pi)
$ (\(x, y) -> (y, x))
$ scaleLine 0.5 (p3, p4)
For example, by passing these functions as arguments to colouredFTree
:
magentaToWhite, toBlue1, toBlue2 :: Int -> Colour
magentaToWhite n = (127 + (18 - n) * 7, (18 - n) * 15, 255, 255)
toBlue1 n = (255 - (16 - n) * 13, 255 - (16 - n) * 13, 214, 255)
toBlue2 n = (51 + n * 8, 255 + 14 * n, 255, 255)
we generate the following images.
Another way to create more interesting, organic-looking trees is to vary the factor
argument of colouredFTree
, which determines the ratio between the sizes of the left and right subtrees in each recursive step. As with the colour, we replace the constant value by a function that characterises the change of the factor
depending on the recursion depth:
colouredFTree :: Int -> (Int -> Float) -> (Int -> Colour) -> Line -> Picture
colouredFTree n factorFun colourFun line = fT n line
where
fT 0 line = [(colourFun 0, [fst line, snd line])]
fT n line = [(colourFun n, [p2, p3])]
++ [(colourFun n,[p4, p1])]
++ fT (n-1) (p5, p3)
++ fT (n-1) (p4, p5)
where
[p1,p2,p3,p4,_] = polygon 4 line
(_, p5) = rotateLine ((factorFun n) * pi)
$ (\(x, y) -> (y, x))
$ scaleLine 0.5 (p3, p4)
This enables us to generate a rather varied collection of fractal trees with a single definition of colouredFTree
using
toggleFactor2 factor n = if (n `mod` 2) == 0 then factor else (1 - factor)
toggleFactor5 factor n = if (n `mod` 5) == 0 then factor else (1 - factor)
shiftFactor factor n = factor + (fromIntegral (16 - n)) * 0.025
as function parameters partially applied as toggleFactor2 0.7
, toggleFactor5 0.7
, and shiftFactor 0.5
, all with an iteration depth of 16.
What if we don't want to vary one of the two function parameters as the recursion progresses (to get some of our initial fractal trees)? In this case, we simply pass a constant function as an argument — i.e., a function whose result is independent of its argument:
colouredFTree n (\_ -> 0.5) (\_ -> red) line
Constant functions are often useful; hence, the Haskell Prelude provides a combinator to construct them (without an explicit lambda abstraction):
const :: a -> b -> a
const x _ = x
This function is always used with partial application; for example, we can rephrase the previous invocation of colouredFTree
as
colouredFTree n (const 0.5) (const red) line
While the second version is not much more concise, it is considered better style.
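For example:
const red 42              -- red, regardless of the second argument
map (const 0) [1, 2, 3]   -- [0, 0, 0]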
Define our old friend natSum :: Num a => a -> a
(which sums the numbers from 1
up to the given argument) in terms of
enumFromTo n m
| n > m = []
| otherwise = n : enumFromTo (n + 1) m
and one of the list combinators introduced in this chapter.
The map
function is just a special case of foldr
. Can you rewrite the map
definition in terms of foldr
? Complete the following definition:
map :: (a -> b) -> [a] -> [b]
map f = foldr …
What does foldr (:) []
do if applied to a list?
Rewrite the spiralRays
function from Spirals, Snowflakes & Trees: Recursion in Pictures
spiralRays :: Int -> Colour -> Line -> Picture
spiralRays n colour line@(p1, p2)
| n <= 0 = []
| otherwise = (colour, [p1, p2]) : spiralRays (n - 1) newColour newLine
where
newColour = fade colour
newLine = scaleLine 1.02 (rotateLine (pi / 40) line)
to accept a function which calculates the current colour, instead of using the fade
function. In other words, we want it to have the following signature:
spiralRays :: Int -> (Int -> Colour) -> Line -> Picture
Rewrite the function from the previous question, such that it uses list combinators as follows:
spiralRays :: Int -> (Int -> Colour) -> Line -> Picture
spiralRays n colourFun line = map spiralFun $ enumFromTo 1 n
where
spiralFun :: Int -> (Colour, Path)
spiralFun n = …
Otherwise, it should produce the same pictures as the previous version.