## The developer’s guide to Scala Implicit Values (Part I)

Implicit parameters and conversions are powerful Scala tools, increasingly used to develop concise, versatile artifacts such as DSLs, APIs, libraries…

When used correctly, they reduce the verbosity of Scala programs, thus providing easy-to-read code. But their most important feature is that they offer a way to make your libraries' functionality extensible without having to change their code or redistribute it.
With great power comes great responsibility, however. For newcomers, as well as for relatively experienced Scala users, they can become a galaxy of confusion and pitfalls, derived from the fact that the use of implicit values implies the compiler making decisions not obviously described in the code, following a set of rules with some unexpected results.

This post aims to shed some light on the use of implicit values. Its content isn't 100% original; it is just a tourist guide through this marvelous, and sometimes dangerous, code jungle.
Like most of the monstrous things that make us shiver, implicit values are mostly harmless once you get to know them.

# Meeting the implicit family: Implicit Parameters and Implicit Conversions

In essence, implicit values are no more than a tool for providing the compiler with a way to continue when the code it is compiling lacks some element which should have been explicitly specified. These elements can only be:

• A conversion of an element from one type (T) to another (S), when the element is used in a place where an S is required.
• An actual parameter (a value passed to a function in a function call).

These gaps are filled with the implicit values present in the context of the incomplete code.
Therefore, there is one type of implicit value for each kind of element above: implicit conversions and implicit parameters.

# The goal of implicit conversions

The straight line is the shortest path from one point to another; similarly, the shortest path from one type to another is a function. What, then, is the most obvious Scala language tool for describing conversions from one type to another?

Yes, functions! A conversion is just a function from T to S:

def conversion(x: T): S = ???

Ok, but just defining a function doesn't imply that all places where S is expected will also be able to receive a parameter of type T. Let's solve that:

implicit def conversion(x: T): S = ???

Marking the definition as implicit tells the compiler that it should try to apply conversion when S is expected but T is provided.

The provided type is known as the conversion source and the type the conversion produces is the target.

case class A(x: Int)
case class B(m: String, v: Double) {
  def method = println(s"hello world $m$v")
}

val a = A(10)
var b: B = B("what is the meaning of life?", 42.0)

It is obvious that trying something like:

b = a

won't even compile, because of a type mismatch. However, once an implicit conversion from A (source) to B (target) is available (in context; don't worry, this concept will be explained soon), the compiler will try to apply it in order to fix the type error:

implicit def conv(v: A): B = B("Some String", v.x.toDouble)

b = a //Now, this is equivalent to write
b = conv(a) // so it works! (A is the source type and B the target)

This also applies to attribute accesses as well as method calls:

val dval: Double = a.v // A doesn't have a v attribute, but B does...
// so it is translated to: val dval: Double = conv(a).v
val m: String = a.m // m = conv(a).m
a.method // conv(a).method



Finally, A elements become also valid actual parameters for formal parameters of type B:

def f(x: B): Double = x.m.size * x.v
f(a) == f(conv(a))
**Rule of thumb**: Given a context containing an implicit conversion from T to S, you can use any element of type T (source type) as if it were of type S (target type).

**Warning**: Keep in mind that this rule does NOT mean that objects of type T will also be seen as elements of type S, but that the corresponding conversion (implemented as a function from T to S) will be applied.
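To make the warning concrete, here is a minimal sketch (the Meters/Feet types are invented for illustration):

```scala
case class Meters(value: Double)
case class Feet(value: Double)

// With this conversion in scope, Meters can be used where Feet is expected...
implicit def metersToFeet(m: Meters): Feet = Feet(m.value * 3.28084)

val f: Feet = Meters(2.0) // the compiler inserts metersToFeet(Meters(2.0))

// ...but a Meters instance never *becomes* a Feet instance:
val stillNotFeet = !Meters(2.0).isInstanceOf[Feet] // true: no subtyping involved
```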

# And implicit parameters?

The concept behind implicit parameters is simple: A method (or function) parameter which can be omitted when calling the method.

As any Scala developer should know, the language allows the declaration of functions or methods with 0, 1, or more than one parameter list (this last case is, in fact, the way the language enables currying). Implicit parameters must go in the last parameter list, and this list must be marked as implicit. Every parameter within this list is then considered implicit, which means it can potentially be provided by an implicit value within the function call context (again this term! It's claiming its own sub-section here).

Let's see this in action:

def f(implicit x: Int) = x*2

//f can be used as any other function ...
val res = f(42)   //.. and receive  'x' explicitly

//Or, with an implicit 'Int' in context, ...
implicit val v: Int = 42 //Implicit value (implicit actual param)

// Be called without any actual parameter:
val resb = f

As you may have already deduced, regular parameter lists can be present along with an implicit parameter list:

def g(x: Int, y: Int)(implicit base: Int): Int = (x+y)%base

g(41,1) //Will return 0, (41+1)%42, but ...
g(41,1)(100)//will return 42 because of the higher value for 'base'

Implicit values are widely used to express contexts, e.g.:
In a physics simulator for different planets, wouldn't it be uncomfortable having to explicitly pass the gravitational constant to each calculation method? Why not make it an implicit parameter, with one value when working on Earth and another when working on Mars? This observation leads to the concept of implicit context, again!
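A minimal sketch of this planet-context idea (the Gravity and fallTime names are made up for illustration):

```scala
case class Gravity(g: Double) // acceleration in m/s^2

// Time for an object to fall from `height` metres, ignoring drag:
def fallTime(height: Double)(implicit gravity: Gravity): Double =
  math.sqrt(2 * height / gravity.g)

// Choose the context once...
implicit val earth: Gravity = Gravity(9.81)

// ...and every calculation picks it up without extra noise:
val t = fallTime(20.0)
// For Mars, bring `implicit val mars: Gravity = Gravity(3.71)` into scope instead.
```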

It is worth mentioning that the compiler will try those implicit actual parameters (implicit values) which are of the same type as the one required by the formal parameter. The value's identifier, its name in the program, doesn't make any difference to the compiler; however, it is desirable to follow the same naming rules as for any other identifier.

# Implicit Conversions and Implicit Parameters: Two sides of the same coin

Before throwing ourselves into the sea of the implicit context, it would be nice to make a stop and think about the relation between these two, seemingly different, implicit tools.

Scala being a language with such a strong functional programming component, its functions can be considered values. We have also seen that implicit conversions are functions marked implicit. Following this reasoning, one could try to write an implicit conversion as follows:

case class B(x: Int)
case class A(x: String)

implicit val a2b  = (v: A) => B(v.x.toInt)

val b: B = A("12")

And, as expected, that would work. Implicit conversions are functions, therefore values.

On the other hand, actual implicit parameters (the ones used to complete incomplete function calls) are just that: values.

So, at the end of the day, we have that both implicit conversions and implicit actual parameters are values, special values so marked by the implicit keyword. This should be the first clue to understand the concept of implicit context.

# Context?

At last! Let's talk about the infamous implicit context. The last section ended by providing some insight into what this concept, so many times referenced in this post, may be.

The implicit context at a given point of your source code is the set of implicit values available for the compiler to perform type conversions or complete function calls, when that point of code provides an object of an unexpected type or misses some parameter in a function call.

The same way the Hitchhiker's Guide to the Galaxy describes a towel as "…the most massively useful thing an interstellar hitchhiker can have…" – an item you should under no circumstances lose – the knowledge of your current implicit context is the most important thing to keep in mind when developing and working with code that uses implicit values.

**Rule of thumb**: The implicit context is a satchel of values in which the compiler may look for a fitting piece for the gaps generated by the lack of an actual parameter or a type mismatch. The contents of this satchel may be different at different points of your code, that is: different contexts at different places in your code.

At a given point, the implicit context is determined by a fairly concrete set of rules. Instead of enumerating all of them here (you can always use Google for that), I'll try to give a guide to understanding what implicit values are available at each moment.

All implicit values (both conversions and actual parameters) declared in the same scope as a given code snippet are available for the compiler to complete whatever may be missing in that snippet.

The easiest way of understanding this statement is by visualizing implicit elements as regular mutable or immutable variables. If you understand the rules governing the following example:

{
  val x = 3
  var y = 40;
  {
    val x = 7
    y = 42
    println(x*y)
  }
  println(x*y)
}
//Printing:
// 294
// 126

Then you probably know how variable scopes work in Scala, and therefore you also understand how the implicit context evolves in its most general case, and thus, in many practical cases. Understanding this example should be a piece of cake for you:

def f(implicit a: Int, b: String): Unit = println(s"a=$a\tb=$b")

This function is defined in the same scope as the following block:

{
  implicit val x: Int = 3; //Implicit value of type Int
  implicit val y: String = "humans"; /* This implicit value will be
  available at this scope level as well as at any sub-level */
  {
    f; //The implicit actual parameter for strings here is "humans"
    {
      //But this implicit declaration for strings eclipses the latter
      implicit val y: String = "dolphins";
      f
    }
  }
  f
}

The output for the snippet above is:

a=3	b=humans
a=3	b=dolphins
a=3	b=humans
• The first line corresponds to the first call, performed at depth 1. At that point, the implicit values x and y are in the implicit context, as they are declared at depth 0.
• The second is printed by the call at depth 2. The implicit context at this level has changed: the value "humans" for y has been hidden by a new one, which is only available in this scope since it has no more sub-levels.
• Therefore, the last call will print "humans" again: the scope here is the one for level 0.

Yes, I said the compiler wouldn't consider the identifier as a criterion for picking implicit values. Then why does the implicit value at level 2 have the same name as the one at level 0? Easy: if the two identifiers were different, then the implicit context at level 2 would contain two different values for strings. Precisely because the compiler doesn't take the identifiers into consideration when picking the gap fillers, it wouldn't know which of them to use:

{
  implicit val x: Int = 3; //Implicit value of type Int
  implicit val y: String = "humans"; /* This implicit value will be
  available at this scope level as well as at any sub-level */
  {
    f; //The implicit actual parameter for strings here is "humans"
    {
      implicit val z: String = "dolphins";
      /* The implicit context at this point contains 2
      implicit values of type String because they are in scope
      with different identifiers.
      */
      f
    }
  }
  f
}

: error: ambiguous implicit values:
both value z of type String
and value y of type String
match expected type String
**Warning**: The chosen names for implicit values are important after all! The compiler won't choose which one to use based on its identifier name; however, values declared at one scope can hide other values with the same name declared at upper scope levels.

This was just an example for implicit parameters, but the same pattern of rules applies to implicit conversions. So far, implicit parameters behave the same way implicit conversions do. Let's explore their differences.

Formal implicit parameters become implicit values within the function body

Any parameter in a function's implicit parameter list is an implicit value in the scope of the function body. It doesn't matter whether the actual parameter has been filled in by the compiler or explicitly passed: as long as it was declared within the implicit parameter list, it will be implicit within the function scope.

For the following examples, consider the function below:

def implicitly[T](implicit e: T) = e

It expects a parameter of the parameterized type and uses it as its return value. This kind of function is useful to "capture" implicit value references and turn them into explicit references. That isn't only useful for debugging purposes but also increases the flexibility of your code, e.g.:

• What if you need to pass your implicit physics simulation engine context to a method for which that context parameter isn’t declared as implicit?
• What if you are using Java libraries?
• What is the easiest way to check how an implicit value is being resolved?

This function's importance is such that it is included in Scala's Predef object.
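For instance, implicitly makes it painless to feed an implicit value to an API whose parameters are not implicit (the logWithBase function below is a made-up example of such an API):

```scala
// A non-implicit API we cannot change:
def logWithBase(x: Double, base: Double): Double = math.log(x) / math.log(base)

implicit val defaultBase: Double = 2.0

// implicitly captures the implicit Double in scope so it can be passed explicitly:
val r = logWithBase(8.0, implicitly[Double]) // log base 2 of 8
```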

Knowing of the existence of implicitly, we can now use it in the promised examples:

def f(implicit x: Int) = println(
  s"Implicit value for integers within f's body: ${implicitly[Int]}"
)

f(3) /* Despite 3 being explicitly passed to f,
it is an implicit value within f's body */

implicit val vOutsideF = 42
f /* Obviously, that's also true when the actual parameter is
filled by the compiler. */



Implicit conversions defined in the target class' companion object are available for the compiler to use source instances where target instances are expected

When the compiler finds a type mismatch, it will not only try to apply the implicit conversions in scope but also those defined in the companion object of the expected (target) type's class:

case class A(x: Int)

case class B(x: Double)

object B { //B class companion object
  implicit def fromA(o: A): B = B(o.x.toDouble)
}

val b: B = A(3) //B is the target class

//B, formal parameter class, is the target class
def show(x: B) = println(x)
show(A(42))

Implicit conversions defined in the source class' companion object are available for the compiler to use source instances where target instances are expected

Like the case above, the compiler will also try implicit conversions defined in the companion object of the passed instance's class (the source class).

case class A(x: Int)
case class B(x: Double)

object A { //A class companion object
  implicit def toB(o: A): B = B(o.x.toDouble)
}

val b: B = A(3) //B is the target class

//B, formal parameter class, is the target class
def show(x: B) = println(x)
show(A(42))

This subspace of the implicit context is great for extending old libraries without modifying their source code, since it allows you to make your own types compatible with them just by adding the convenient implicit conversions to your classes' companion objects.
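A sketch of this library-extension idea (GeometryLib and GridCell are invented names standing in for a library you cannot modify and for your own type):

```scala
// Imagine this is third-party code we cannot touch:
object GeometryLib {
  case class Point(x: Double, y: Double)
  def distanceToOrigin(p: Point): Double = math.hypot(p.x, p.y)
}

// Our own type, made compatible through its companion object:
case class GridCell(col: Int, row: Int)
object GridCell {
  import GeometryLib.Point
  implicit def toPoint(c: GridCell): Point = Point(c.col, c.row)
}

// The compiler finds toPoint in GridCell's companion object (source class):
val d = GeometryLib.distanceToOrigin(GridCell(3, 4))
```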

Yes, your implicit context is a big satchel. So far we have been exploring its pockets, so it seems a good idea to picture it as a whole:

It isn't that the scope is more important than any other part of the implicit context, but it is drawn a little bit bigger because of its dynamism: it can mutate at any time by means of new declarations or the use of import.

This is just a simplification of the principles determining the current implicit context at a given point of a Scala program. These principles can interact, resulting in not-so-intuitive effects. I find one of them particularly interesting because it highlights the nature of implicit parameters and conversions as different flavours of the same concept: implicit values.

Consider the following pair of functions:

def show(x: B) = println(x) //Expects B instances
def f(x: A)(implicit a2b: A => B) = show(x) //Receives A instances

It may seem strange that the code above compiles, and even more so that the following snippet works:

f(A(42))( v => B(v.x.toDouble))

But there is no magic here once we analyse what the a2b parameter is: it is of type A => B, which means it is a function from A to B, the same type an implicit conversion between these types would have. On the other hand, it is an implicit parameter, which implies a2b is marked as implicit within f's body. Both the type and the implicit mark are the ingredients of an implicit conversion, so when the call to show expects a B instance, the compiler will be able to use the awkwardly declared implicit conversion a2b.
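The parameter can also be filled by the compiler itself: an implicit function value of type A => B in scope satisfies a2b, which then acts as the conversion inside f. A self-contained sketch:

```scala
case class A(x: Int)
case class B(x: Double)

def show(x: B): String = s"B(${x.x})"
def f(x: A)(implicit a2b: A => B): String = show(x) // a2b converts x inside f

// An implicit function value in scope fills the a2b parameter for us:
implicit val conv: A => B = a => B(a.x.toDouble)

val s = f(A(42)) // equivalent to f(A(42))(conv), i.e. show(conv(A(42)))
```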

# To be continued…

So far we have reviewed the basics of implicit parameters and conversions. This is like describing what roads and cars are and how they work, without talking about how to exploit them by asking well-intentioned people to drive you to your destination in exchange for contemplating your shiny and friendly thumb.

The next part (hopefully posted soon) will explain how to use these powerful tools at a higher level of abstraction, as well as warn you about the dangerous singularities and pitfalls you may encounter when using them.

## In Functional Scalaland functions are passed to parameters

This entry is more an annotation than an article. It just aims to describe a practical example of how functional programming offers new roles for common abstract program entities such as values, parameters, and functions.

As programmers, we are used to viewing functions or procedures as sub-programs or logical units, that is, boxes with an input and an output (output values for functions, side effects for procedures).

In this regard, functions tend to play the $1$ role in a $1:n$ relation, where the $n$ part corresponds to the different actual parameter values passed to the function during program execution. Functional languages, however, make functions first-class citizens, at the same level as values. In Scala's particular implementation, where everything is an object, functions become objects too. Quite like the values they receive as actual parameters, they are data (describing relations) with some methods attached.
Specifically, all functions in Scala provide the apply method, enabling invocation with parameter values.
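A minimal illustration of this point:

```scala
// A function literal is an object implementing Function1; calling it is
// syntactic sugar for invoking its apply method:
val double: Int => Int = _ * 2

val sweet = double(21)       // the sugared call...
val plain = double.apply(21) // ...and the explicit method call it stands for
```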

Within this scheme, I have surprised myself inverting the roles of the relation described above: making the parameter the fixed part and using multiple functions to transform this parameter into multiple results.

– How?
– By building a list of functions with the same signature and using map to apply the same parameter to all of them.

Let's see an example:

def isOdd(v: Int) = v % 2 != 0
def isNegative(v: Int) = v < 0
def isBinaryPalindrome(v: Int) = {
  val bin = v.toBinaryString
  bin == bin.reverse
}

val toApply = List[Int => Boolean](
  isOdd,
  isNegative,
  isBinaryPalindrome)

toApply.map(_.apply(33))
>> res0: List[Boolean] = List(true, false, true)

This way, some operations over huge data collections get easily implemented using the logic quantifiers $\exists$ & $\forall$:

/* Select all numbers from 1 to 99999 which
are odd or negative or binary palindromes: */

(1 to 99999).filter(v => toApply.map(_.apply(v)) contains true)

/* Select all numbers from 1 to 99999 which
are odd, negative and binary palindromes: */

(1 to 99999).filter(v => toApply.map(_.apply(v)).forall(b => b))

I have used this approach for several purposes. One of them is the solution to a Hackerrank problem which asked for an algorithm to calculate Sierpinski's fractal in two dimensions:

//Frame is a pair of (column, row) corners; the frameParams, center and
//inFrame helpers are defined elsewhere in the full solution.
type Frame = ((Int, Int), (Int, Int))

def subDarkFrame(frame: Frame): Frame = {
  val (c, (w, h)) = frameParams(frame)
  ((c._1-w/4, frame._1._2+h/2), (c._1+w/4, frame._2._2))
}

def subUpperFrame(frame: Frame): Frame = {
  val (c, (w, h)) = frameParams(frame)
  ((c._1-w/4, frame._1._2), (c._1+w/4, frame._1._2+h/2-1))
}

def subLeftFrame(frame: Frame): Frame = {
  val (c, (w, h)) = frameParams(frame)
  ((frame._1._1, frame._1._2+h/2), (c._1-1, frame._2._2))
}

def subRightFrame(frame: Frame): Frame = {
  val (c, (w, h)) = frameParams(frame)
  ((c._1+1, frame._1._2+h/2), (frame._2._1, frame._2._2))
}

def inTriangle(frame: Frame, pos: (Int, Int), rev: Boolean = false): Boolean = {
  val (col, row) = pos
  val (centerCol, centerRow) = center(frame)
  val h = row - frame._1._2
  val l = if (rev) (frame._2._1 - frame._1._1 + 1) / 2 - h else h
  inFrame(pos, frame) && ((col >= centerCol - l) && (col <= centerCol + l))
}

def inArea(frame: Frame, pos: (Int, Int), n: Int): Boolean = {
  if (n < 1) true
  else {
    val darkCore = subDarkFrame(frame)

    //Same trick as above: a list of frame-generating functions,
    //all applied to the same frame via map
    val unexploredAreas: List[Frame] =
      List[Frame => Frame](
        subUpperFrame,
        subLeftFrame,
        subRightFrame).map(_.apply(frame)).filter(inFrame(pos, _))

    //Recursive step (body reconstructed; the original was lost): the point
    //stays dark only if every sub-frame containing it keeps it dark deeper down
    def exploreInArea(alreadyDark: Boolean, areaFrame: Frame) =
      alreadyDark && inArea(areaFrame, pos, n - 1)

    ((inTriangle(frame, pos) && !inTriangle(darkCore, pos, true))
      /: unexploredAreas)(exploreInArea)
  }
}

## Learning Scala: The unexpected journey from Pi to JavaScript

Right after becoming aware of how useful my repressed love for functional programming could be, I decided to unleash that feeling and get into a mainstream functional programming language.
One great candidate to accomplish this goal is Scala. Not only for its functional programming capabilities, nor just for the easy interaction with Java libraries and modules, but for its huge feature set: name any programming language perk, and Scala may well implement it.

In the process of learning the language, I found myself solving classic programming problems, playing with type covariance, and eager to play with Akka and its actor-based concurrent programming model.

# The initial problem

One of these "training" problems consisted of trying to calculate digits of $\pi$ using the Monte Carlo method. The idea behind this kind of numeric algorithm is to solve mathematical equations by:

1. Generating a huge set of random values within a space of interest.
2. Classifying them according to the mathematical relation given by the equation to solve, and…
3. Using their statistical features to approximate its solutions.

In this concrete case, the strategy applies to the equation $A=\pi\cdot{r}^{2};\pi=\frac{A}{r^{2}}$

This is an equation with two unknown quantities, so a second equation is needed in order to build a system.

$A$ is the area of a circle with radius $r$, which could be inscribed in a square of side $2\cdot{r}$.

We can focus our attention on just a quarter of this square's area and, therefore, on a quarter of the circle's area. In this context we can conclude that:

$\frac{A}{4}=p\cdot{r^{2}};A=4p\cdot{r^{2}}$

Where $p$ is the ratio of the quadrant area which also forms part of the circle's quarter area ($Q$ from now on) to the total quadrant area. Once $p$ is known, so is $A$. Applying this to the first equation:

$\pi=\frac{4\cdot{p\cdot{r^{2}}}}{r^{2}}\underbrace{=}_{r\neq0}4\cdot{p}$

A good approximation to $p$, and therefore to $\pi$, can be easily calculated as $p=\frac{n_{q}}{n}$, given a big enough sample set (of size $n$) of points inside the quadrant, classified into two groups:

• Those forming part of $Q$: $n_{q}$ points.
• Those outside the circle: $n-n_{q}$ points.

A huge set of points in the bi-dimensional range $[0,1]\times[0,1]$ and a classification condition are enough to solve the problem. The classification condition is given by the locus of the points of the quadrant which are at a distance of $r$ or less from the centre of the circle.

This solution is easily coded in Scala as:

def pi(n: Int) = {
  import scala.util.Random.nextDouble
  import scala.math.hypot
  def inQuadrant(a: Double, b: Double): Boolean = hypot(a,b)<=1.0
  def trySum(n: Int, m: Int): Double = n match {
    case 0 => m
    case _ => trySum(n-1, m+{if(inQuadrant(nextDouble, nextDouble)) 1 else 0})
  }
  4.0*trySum(n, 0)/n.toDouble
}

Note that the formal parameter is the sample set size. The set itself is generated by consuming random  numbers.

# What does any of this have to do with JavaScript!?

On the same day I was playing with Scala, Pi, and parallel programming (that story could fill another post), I read a tweet from Martin Odersky announcing to the community the existence of a new technology: Scala.js

With the exception of great and strange minds, almost anyone who has ever been involved in modern web development has become temporarily (maybe permanently) crazy coping with JavaScript. Its interpreter could receive an entire copy of Cervantes's "Don Quijote de la Mancha" without yielding a single syntax error. For this reason I was absolutely compelled to try Scala.js; who knows, I could even begin to like web client-side development. It seemed too good to be true.

I had a beautiful problem whose solution could make a great web animation! Why not try to represent it using this brand-new technology?

First things first: before any concrete change, the pi function needs a way to communicate to the presentation layer which points it is testing. To do that, it just needs a second parameter of function type, receiving two doubles and a boolean and returning Unit (no return value). That is, a probe procedure used only for its side effects.

def pi(n: Int, tracer: ((Double, Double, Boolean) => Unit)) = {
  import scala.util.Random.nextDouble
  import scala.math.hypot
  def inQuadrant(a: Double, b: Double): Boolean = hypot(a,b)<=1.0
  def trySum(n: Int, m: Int): Double = n match {
    case 0 => m
    case _ => {
      val a = nextDouble
      val b = nextDouble
      val incircle = inQuadrant(a, b)
      tracer(a, b, incircle)
      trySum(n-1, m+{if(incircle) 1 else 0})
    }
  }
  4.0*trySum(n, 0)/n.toDouble
}

At this point, the easiest way to get the code compiled to JS and run in a web browser is using Scala-Js-Fiddle. This way it is easy to adapt one of the Scala.js examples, creating a tracer function representing each of pi's iterations:

object ScalaJSExample extends js.JSApp{
  def pi(n: Int, tracer: ((Double, Double, Boolean) => Unit)) = {
    import scala.util.Random.nextDouble
    import scala.math.hypot
    def inQuadrant(a: Double, b: Double): Boolean = hypot(a,b)<=1.0
    def trySum(n: Int, m: Int): Double = n match {
      case 0 => m
      case _ => {
        val a = nextDouble
        val b = nextDouble
        val incircle = inQuadrant(a, b)
        tracer(a, b, incircle)
        trySum(n-1, m+{if(incircle) 1 else 0})
      }
    }
    4.0*trySum(n, 0)/n.toDouble
  }

  val (w, h) = (Page.canvas.width.toDouble, Page.canvas.height.toDouble)

  //This is the tracer procedure
  def pointTracer(a: Double, b: Double, inarea: Boolean) = {
    Page.renderer.fillStyle = if(inarea) "black" else "red"
    Page.renderer.fillRect(a*w, b*(h-25), 1, 1)
  }

  /* Main method, just as in any Scala application but defined for
  an object extending js.JSApp, not Application */
  def main() = {
    /* The next two variables are used to improve previous estimations
    of Pi, thus giving the feeling of animation */
    var currentSum = 0.0;
    var currentN = 0;
    /* Estimations using samples of size 10 are run one after
    another, improving the overall estimation */
    val step = 10;
    dom.setInterval(() => {
      currentSum += pi(step, pointTracer)
      currentN += 1
      //Boilerplate code to print the global estimation
      Page.renderer.fillStyle = "black"
      Page.renderer.fillRect(0, h-20, w, 20)
      Page.renderer.fillStyle = "white"
      Page.renderer.fillText("Pi: " + (currentSum/currentN).toString, 10, h)
    }, 5)
  }
}

The aim of the code above is to run pi using different samples of size 10. Each independent run gives a rough estimation of pi, but the average of several results, except for some precision details, is equivalent to an increase of the sample size by 10 elements per run.

Depending on an external online service to get your Scala code running as JavaScript in your browser doesn't seem like a good idea when you want to deploy your personal or professional projects. However, this was just a first approach, and the step towards independence is short and painless.

The only Scala-Js-Fiddle object being used in the presented code is Page.renderer. This object is just a CanvasRenderingContext2D object handling a previously created canvas. Just a few changes are needed in order to get our own renderer:

• Add code to create and resize a canvas DOM object.
• Add code to obtain a 2d rendering context for the canvas.
• Additionally, and this is not a mandatory change: a reset function along with overall sample size checks, in order to periodically clean the results and drawing area, thus getting a cyclic demonstration of the numerical method.

NOTE: Please forgive me for using your CPU cycles running the simulation above. Did you think it was a GIF? (Demo: http://orionsword.no-ip.org/demos/pi_scalajs/)
package pijs

import scala.scalajs.js.JSApp
import org.scalajs.dom
import dom.{Element, HTMLCanvasElement, CanvasRenderingContext2D, document}

object PiJsApp extends JSApp {

  def addCanvas(h: Int, w: Int): HTMLCanvasElement = {
    val canvas = document.createElement("canvas").asInstanceOf[HTMLCanvasElement]
    canvas.setAttribute("height", h.toString)
    canvas.setAttribute("width", w.toString)
    document.body.appendChild(canvas)
    canvas
  }

  private[this] val (w, h) = (500, 375)

  //Our own renderer: a 2d context obtained from the freshly created canvas
  private[this] val renderer =
    addCanvas(h, w).getContext("2d").asInstanceOf[CanvasRenderingContext2D]

  def pi(n: Int, tracer: ((Double, Double, Boolean) => Unit)) = {
    import scala.util.Random.nextDouble
    import scala.math.hypot
    def inQuadrant(a: Double, b: Double): Boolean = hypot(a,b)<=1.0
    def trySum(n: Int, m: Int): Double = n match {
      case 0 => m
      case _ => {
        val a = nextDouble
        val b = nextDouble
        val incircle = inQuadrant(a, b)
        tracer(a, b, incircle)
        trySum(n-1, m+{if(incircle) 1 else 0})
      }
    }
    4.0*trySum(n, 0)/n.toDouble
  }

  def pointTracer(a: Double, b: Double, inarea: Boolean) = {
    renderer.fillStyle = if(inarea) "black" else "red"
    renderer.fillRect(a*w, b*(h-25), 1, 1)
  }

  def main() = {
    var currentSum = 0.0
    var currentN = 0
    val step = 1000
    val sampleSize = 1000000

    def reset: Unit = {
      currentSum = 0.0
      currentN = 0
      renderer.fillStyle = "white"
      renderer.fillRect(0, 0, w, h)
    }

    reset

    dom.setInterval(() => {
      if(currentN*step >= sampleSize) reset
      currentSum += pi(step, pointTracer)
      currentN += 1
      renderer.fillStyle = "black"
      renderer.fillRect(0, h-20, w, 20)
      renderer.fillStyle = "white"
      renderer.fillText("Pi: " + (currentSum/currentN).toString, 10, h)
    }, 200)
  }
}

## Proof of detecting the start of cycle in linked list

Note: This post is a copy of the one published at this site on 19 March 2013. It was lost, among other articles, when my previous host lost all of this blog's data. Although I had many backups, the last post I could save was from January 2013. Thankfully, I had used my own article to answer a question on StackOverflow, so I can now reproduce it.

Sometimes I like to review my own knowledge of trivial computer programming topics.
Although it is quite commonly known that a loop in a linked list can be easily detected using two pointers iterating over it at different rates, it is not so easy to find a mathematical proof of the correctness of this loop existence test.

That’s why I decided to try to find such a proof by myself.

The algorithm to detect loops is described as follows:

1. Declare two pointers, pFast and pSlow.
2. Make pSlow and pFast point to the head of the list.
3. Until pSlow, pFast, or both point to NULL:
1. Advance pSlow one node and pFast two nodes. If pSlow == pFast, then STOP, as a loop has just been found.
4. If this point has been reached (one or both pointers are NULL), then there are no loops in the list.
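The steps above translate directly into Scala (the Node class here is illustrative):

```scala
final class Node(var next: Node = null)

def hasLoop(head: Node): Boolean = {
  var pSlow = head
  var pFast = head
  // Until pSlow, pFast or both point to null...
  while (pFast != null && pFast.next != null) {
    pSlow = pSlow.next       // advances one node per iteration
    pFast = pFast.next.next  // advances two nodes per iteration
    if (pSlow eq pFast) return true // the pointers met: a loop exists
  }
  false // a pointer fell off the list: no loop
}
```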

Let's assume that this algorithm is correct. In this scheme, a loop situation is represented by the following diagram:

Note how every node, except the one pointing to the beginning of a loop, is labeled with the number of its target minus one. So, if one node is labeled with i and it is not the beginning of a loop, then it is pointed to, as the next element, by the node labeled i-1.

The algorithm explained above can be described as a loop, since its main step (3) is a set of sub-steps repeated until the exit condition is satisfied. That forces us to express pFast and pSlow as functions of the iteration number of the algorithm's execution ($t$).

If the list had no loops, the pointer positions would be described as functions of $t$ as:

$pSlow(t)=t \qquad pFast(t)=2\cdot{t}$

However, there is a node where the loop starts, and past it these functions stop describing the evolution of the pointers. Assuming that this node is tagged with $m$, that the loop contains $k$ nodes (that is, nodes $m, m+1, \dots, m+k-1$), and taking $t$ big enough for both pointers to be inside the loop ($t\ge{m}$ and $2\cdot{t}\ge{m}$), then:

$pSlow(t)=m+\left[(t-m)\bmod{k}\right] \qquad pFast(t)=m+\left[(2\cdot{t}-m)\bmod{k}\right]$

If one slow and one fast pointer are indeed enough to detect loops using the algorithm described, then there must exist at least one solution to the equation $pSlow(t)=pFast(t)$.

This equation is true if and only if there is a value of $t$ that makes:

$(t-m)\equiv(2\cdot{t}-m)\pmod{k} \Leftrightarrow t\equiv{0}\pmod{k}$

This yields a function which generates values of $t$ that are valid solutions to the equation described above:

$t(x)=k\cdot{x} \quad \forall{x}\in\mathbb{N}:k\cdot{x}\ge{m}$

Thus it is proved that one slow pointer and one fast pointer are enough to detect loop conditions in a linked list.