~stew/blog $ _


My thoughts on the LambdaConf 2016 decision

Firstly, as I have said many times and will keep saying: the situation the LambdaConf team has found themselves in is a very tough one. Nobody would envy being in their position, having this bomb dropped on them and having to figure out the right way to navigate it. I think they did a very good job of coming up with a plan for how they would arrive at a decision, and they did an admirable job of being very open about how and why they arrived at the decision they did. There is one part of their process I take issue with. My understanding is that a large part of how they came to a decision was to poll the minority speakers about their opinion on having Curtis Yarvin speak at the conference. This, I think, is a good idea. What is also clear is that some of them said they weren't comfortable with him as a speaker, but the team ultimately went with the majority view that it was okay to have him speak. Going with the majority opinion instead of listening to the vocal minority seems to be the opposite of what they were trying to achieve. Anytime someone is willing to speak up, I think it is likely that there were others who weren't willing, and I question how well you do at getting honest opinions when you are putting someone from an already marginalized group on the spot.

There are lots and lots of things that made this a very tough decision for them to make. It would have been a very easy decision, for example, if the proposed talk had been "I started urbit just as a new blog engine for my 'I want to kill black people because they are ignorant slaves' blog, but I ended up creating a whole new computing framework for fascism". The proposed talk was obviously much more innocuous than that.

You will find lots of people online who claim Curtis Yarvin to be a slavery apologist and a racist. But it's not as if you type "Curtis Yarvin" into google and are served up hit after hit of obvious racist speech by this guy. What you will find is hundreds of thousands of words he has written. Very long blog post after very long blog post, written in a style which is rambling, hard to follow, full of references to things you have never heard of, and you find yourself asleep at the keyboard a hundred times before you find him saying anything straightforward enough to be directly objectionable. The racism I could nail him down on through my own searching is often very veiled. He doesn't talk about "blacks" or "whites" or "jews"; instead he'll talk about "human biodiversity", and it is easy to miss the underlying point he is trying to make about race, because you are already snoozing at the keyboard, because you didn't get half of the references he made leading up to it. The stuff is out there though.

Another thing that will make a decision like this tougher for some people than others is trying to figure out where we should draw the line. If I'm having a conversation with my daughter and she says something racist, this is my problem. It is my responsibility to do something about it; I am personally responsible for teaching her right from wrong. If I see some random dude on the street doing the same thing, now I'm a level of indirection away. It is not directly affecting me, but I'm right there witnessing it. Must I try to stop it? Personally, I think yes. Someone else might decide that it's not their fight. If I'm organizing a conference, and I find out that a guy who wants to participate in my conference says this stuff on his blog, now we've added more levels of indirection, and it becomes less clear when something must be done, should be done, must not be done, etc. Then there is us at Typelevel, trying to decide whether our organization, which is holding an event associated with LambdaConf, needs to do something about a speaker we weren't involved in selecting, who says this stuff on his blog, under a pseudonym (is that even relevant?). Now we are many more levels of indirection away, and where you draw the line has certainly become murkier.

But not for some of us. If this guy's crime was "sometimes he pees on the seat in public bathrooms", I'm probably going to stop at a small number of levels of indirection. For me, with racism, I don't stop. If this guy is putting racist stuff on the blog he writes for the followers of some 'neoreactionary movement' he has started, I'll certainly say shit about it when I'm given the opportunity. I have absolutely zero tolerance for racism.

For me, and for others, the fact that this person is being given a speaker role is a very important element. A speakership at a conference is a great privilege to bestow on someone. This is a conference that a LOT of people pay attention to, and being given a speakership means being given a lot of that attention. We certainly, in our community, attach prestige to people who speak at conferences. People compete for these chances to get in front of an audience. What LambdaConf is saying is "We have a limited number of people that we can put in front of you to present ideas, and this is one of the people we think you should listen to". And since I personally don't want to see racists succeed in life, it matters that being granted a speaker slot is something that helps someone succeed at life. It's something he can put on his resume that will help him get his next nice high-paying job, which will allow him to sit comfortably in his silicon valley home and work on his projects, such as writing more hateful blog posts.

Having said all this, I want to make clear that I understand my feelings on this are just my feelings, and I'll respect that other people won't feel as strongly as I do. I understand that someone else will come to the conclusion that "as long as he's willing to leave that part of his life to his pseudonym and not bring it to the conference, that is fine". I'm not pressuring anyone else into pulling out of the conference, and I'm not going to think negatively of anyone who wants to move forward and just get on with the functional programming.

All of this makes me very sad, because, as others have said, the previous LambdaConf I attended was my favorite conference ever. They did an amazing job of putting together a great group of people to hear a great group of diverse speakers. It was a conference that thought more about being inclusive than any other I've attended to date: day care for kids, unisex bathrooms, paleo / gluten-free / vegan / vegetarian meals available. I fear that this problem that unfortunately stumbled into their laps is going to make it tough for them to get back to that magical place...


The current state of the Cats project, as I understand it

Since Erik Osheim's creation of the cats github repository last week there has been a lot of buzz around the cats project. Many of us had been anticipating such a thing for some time now. Although I expected that this project would get a lot of quick momentum, the amount of attention the project has gotten so quickly, and the contributors that have already flocked to it, have been remarkable.

What is the Cats project?

Like scalaz, it intends to be a library that provides abstractions which aid in functional programming in scala.
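To give a flavor of what "abstractions which aid in functional programming" means in practice, here is a minimal sketch of the kind of typeclass such a library provides. This is my own illustration, not actual cats source; the names here are made up for the example:

```scala
import scala.language.higherKinds

// A minimal Functor typeclass, in the style of the abstractions
// a library like cats or scalaz provides.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// An instance for Option: map transforms the value inside a Some,
// and leaves a None alone.
implicit val optionFunctor: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

// Code written against the abstraction works for any F with an instance.
def double[F[_]](fi: F[Int])(implicit F: Functor[F]): F[Int] =
  F.map(fi)(_ * 2)

val doubled = double(Option(21))  // Some(42)
```

The point of the abstraction is the `double` function: it is written once, against `Functor`, and works for Option, List, or anything else with an instance in scope.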

Why was the Cats project started?

I'm not going to make the mistake of trying to summarize all the events that led up to this. Instead what I'll say is that it became clear that there were fundamental disagreements about how the scalaz project should be structured and run. Because these disagreements were so severe, it was clear to many that there should be separate projects.

The Cats project will differ from the scalaz project in some fundamental ways. Cats will have a strong focus on approachability. We want to make sure there is a smoother onboarding for new users and for new contributors. There is going to be a big focus on documentation, both in the form of in-line comments in the source and external documentation. There will be a strong focus on the approachability of the community to newcomers. We are operating under the Typelevel Code of Conduct, and beyond that, we are trying to craft language in our README file which states clearly that we want to be proactive about ensuring that people trying to participate in or use this project don't feel harmed in any way by others involved in it. If people experience any kind of pain from the project we want to quickly get to the bottom of the cause, whether that is a misunderstanding, a miscommunication, or some bad actor.

Is this a fork of scalaz?

No, this library is being written from the ground up. It will share many of the same abstractions as scalaz. The project will be organized differently, it will be more modular; typeclass definitions will be in a different module than data structures, which will be in a different module than typeclass instances. Some of the typeclasses in Cats have different names than their counterparts in scalaz. Many of the typeclasses and data types that exist in scalaz will not have counterparts in cats.

This is an exciting opportunity to rethink some of the decisions that were made in scalaz. Since we have no users yet, we are able to make breaking changes rapidly in order to experiment. I think this is going to be important in making the experience of using this library more painless.

Is Cats a typelevel project?

Not currently. It might be put under the typelevel umbrella sometime in the future. Currently the canonical location for the cats library is Erik's github account. As we move towards an actual release, this is almost certainly going to move to some other location. This might be a move to typelevel, perhaps that is even likely, but it's an open question now.

So is everything awesome?

No, certainly not. Many are sad that it got to this point. A lot of feelings have been hurt, sometimes unnecessarily, sometimes irreparably. Many other people are feeling discomfort because they don't want to participate in the discussions that have caused this rift in the scala functional programming community at all, but now feel like they are making an implicit statement in lieu of an explicit statement just by choosing which library they use, or which libraries their public projects depend on.

What about scalaz-streams, scalaz-concurrent?

There are many people already contributing to Cats who are asking this question. It seems like there is a lot of interest in these, but it is also clear that these libraries are very tricky, very nuanced, and nowhere near as easy to get right as cats-core will be. Many of us, myself included, are very much tied to scalaz.concurrent.Task and scalaz.stream.Process, and this is going to keep us on scalaz for months or years to come. I think that some form of Task and some form of Process will eventually exist using cats. It's unclear now when that might happen, who might lead the effort, or whether there is some hope of finding the right shared abstractions so that these libraries could be shared between Cats and scalaz.

How can I help?

As it is such a new library, which is hoping to accomplish so much, there is tons of work to be done. In order to achieve our goal of this library being easily approachable, there is much documentation to be written. We are currently working on putting together a contributor guide, but the canonical answer to "how can I help" is going to be to look at the issues on GitHub and see if there are some you can solve. We are additionally trying to tag bugs good for people new to the library with the "low-hanging fruit" tag. You should feel free to grab any bugs in the "Ready" column here.


scalaz actors -- A more reasonable approach to a possibly unreasonable programming model.

Why do I hate akka?

If you have talked to me for more than a few minutes about the current state of the world of scala programming, you have probably learned that at some point I started hating akka. Why? There are many reasons, but the one I will name first is the one that many will name first: a partial function Any => Unit is a horrible type to build a framework around. This is probably not very controversial, and is an easy target for poking a stick at, but I think it is a legitimate complaint. The specific partial function I'm talking about is the receive method of an actor. Here are the relevant snippets from the [akka source](https://github.com/akka/akka/blob/67925cb94eab0c86eecddb1be143477310ddb16c/akka-actor/src/main/scala/akka/actor/Actor.scala):

  /**
   * Type alias representing a Receive-expression for Akka Actors.
   */
  type Receive = PartialFunction[Any, Unit]

  /**
   * This defines the initial actor behavior, it must return a partial function
   * with the actor logic.
   */
  def receive: Actor.Receive

This is a method you must implement when you create an actor. Any means that it can take any type as input, Unit means you get no return value (it is only called for side-effects), and Partial means that it might throw an exception if you send it a message type for which it isn't defined.
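You can see the weakness of this type without any akka at all. A plain PartialFunction[Any, Unit] compiles no matter what you apply it to, and either fails at runtime or, if a framework checks isDefinedAt first, silently does nothing. A small stdlib-only illustration:

```scala
// A receive-style handler: accepts Any, returns Unit, defined only
// for Strings. The compiler cannot tell us which messages it handles.
val receive: PartialFunction[Any, Unit] = {
  case s: String => println(s"got string: $s")
}

// Applying it to a message it handles works...
receive("hello") // prints "got string: hello"

// ...but nothing stops us handing it an Int. Checking isDefinedAt
// is our only (runtime) defense; applying it directly throws.
val handled = receive.isDefinedAt(42) // false
val threw =
  try { receive(42); false }
  catch { case _: MatchError => true } // true
```

Akka does the isDefinedAt-style check for you and routes unhandled messages away, which is exactly why mistakes are silent rather than loud.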

Sending a message to an actor is done with either actor.tell(message, sender) or actor ! message:

  def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit
  final def tell(msg: Any, sender: ActorRef): Unit = this.!(msg)(sender)

The tell method that you use to send a message to an actor is a total function from Any => Unit: you can always send, and you have no idea until runtime whether your message was actually processed. Also, note that this tell method is not a method on an actor itself. You don't send messages directly to actors, you send messages to ActorRefs, which are references to actors that might not exist at all. Maybe the actor existed at some point, but it might be dead now; you can still try to send messages to it, and this will always succeed. In this case you might notice a log message about your message going to deadLetters, or you might just notice that "hmm, things aren't uhh happening".

How did we get here?

So how did we get here, why is akka built around this type? The biggest culprit is a method on akka's actor class: the become method. The become method lets you change the value of receive. This makes an actor act like a state machine, but one where some messages don't make sense for some of the states the actor might happen to be in at the time a message is received. When such a message arrives, it is ignored. This is not something we could expect compile-time verification of with this system, since the compiler can't know what state the actor might be in when a message arrives, especially since it could be queued behind other asynchronously delivered messages, any of which might change the state of the actor.
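The essence of become can be sketched without akka at all: a mutable slot holding the current handler, which the handler itself can swap. This is my own toy illustration, not akka's implementation, but it shows why the compiler can't help; whether a message is understood depends on which handler happens to be installed when it arrives:

```scala
// A toy actor-like state machine: `behavior` is the current handler,
// and `become` swaps it, just as akka's become replaces receive.
var behavior: PartialFunction[Any, Unit] = PartialFunction.empty
def become(b: PartialFunction[Any, Unit]): Unit = behavior = b

var log = List.empty[String]

// A "locked" state only understands "unlock"; an "unlocked" state
// only understands "push". Anything else is silently dropped.
lazy val locked: PartialFunction[Any, Unit] = {
  case "unlock" => log :+= "unlocked"; become(unlocked)
}
lazy val unlocked: PartialFunction[Any, Unit] = {
  case "push" => log :+= "pushed"; become(locked)
}

def send(msg: Any): Unit =
  if (behavior.isDefinedAt(msg)) behavior(msg)
  else log :+= s"dropped: $msg" // akka would route this to unhandled/deadLetters

become(locked)
send("push")   // dropped: we are locked, and the compiler couldn't warn us
send("unlock")
send("push")   // now it works
```

The first "push" compiles exactly like the second one; only the runtime state decides whether it does anything.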

After having worked on several large systems built around akka actors, I just absolutely hate them. The systems tend towards the unmaintainable, because we have given up so much potential for the compiler to do static checking for us. The compiler can't tell us if the message we are sending to an actor would definitely be ignored by the actor. If we think there is a message handler we no longer need in an actor, the compiler won't tell us we were wrong when we try to remove it. Instead your code compiles, at runtime things (perhaps silently) fail, and you get to debug at runtime why things stopped moving.

Here's one possible failure that might show how easily these things can become frustrating:

scala> val s = ActorSystem("stew")
s: akka.actor.ActorSystem = akka://stew

scala> /* lets create an actor which reacts to a "foo" message by side-effecting (natch) */
scala> val a = s.actorOf(Props(new Actor { def receive: Receive = { case "foo" => println("bar") } }))
a: akka.actor.ActorRef = Actor[akka://stew/user/$a#2102040257]

scala> a ! "foo"

And it works! No problems there. But instead of a string message, let's send a case class instead:

scala> val s = ActorSystem("stew")
s: akka.actor.ActorSystem = akka://stew

scala> case class Foo() // the message our actor will react to
defined class Foo

scala> val a = s.actorOf(Props(new Actor { def receive: Receive = { case Foo() => println("bar") } }))
a: akka.actor.ActorRef = Actor[akka://stew/user/$a#2102040257]

scala> a ! Foo
scala> /* nothing happens */

So this all compiles fine, but I don't get my side-effect anymore. Did you spot the error?

Here's the source of the bug:

scala> Foo.isInstanceOf[Foo]
<console>:13: warning: fruitless type test: a value of type Foo.type cannot also be a Foo
res1: Boolean = false

scala> Foo().isInstanceOf[Foo]
res2: Boolean = true

We were sending the Foo companion object to the actor instead of an instance of Foo. But whatever, they are all Any anyway, right?

OK, so let's talk about the problem of "is anyone actually sending Foo to my actor anymore? Can I remove the case Foo() => println("bar") from my actor's receive method? How can I know?" We can grep our codebase for ! Foo() or tell(Foo())? No, that's not exhaustive enough. Here's another repl session to show you why.

scala> import akka.actor._
import akka.actor._

scala> val s = ActorSystem("stew")
s: akka.actor.ActorSystem = akka://stew

scala> trait Bar ; case class Foo() extends Bar
defined trait Bar
defined class Foo

scala> val a = s.actorOf(Props(new Actor { def receive: Receive = { case Foo() => println("bar") } }), "x")
a: akka.actor.ActorRef = Actor[akka://stew/user/x#214311873]

scala> s.eventStream.subscribe(s.actorFor("/user/x"), classOf[Bar])
res0: Boolean = true

scala> s.eventStream.publish(Foo())

So anyone who can get a reference to our actor can subscribe it to receive messages that weren't even sent to us, and which messages are selected is based on a classifier (here classOf[Bar]), which might be hard to discover.

why actors at all?

I think we got here mostly because erlang has some incredible success stories about building resilient systems using the actor model in a dynamically typed language. And this is something that isn't deniable; these success stories are well documented. Does this mean that the right way, or the only way, to build resilient systems is with the actor model and throwing away types? I doubt there are many people who would agree with that.

So why is akka so popular? It's really popular, there is no denying that. Although now I would guess that the thing exerting the most gravity drawing people towards scala is probably spark, last year I would certainly have said akka. Very soon after I started working with scala, I became spellbound by akka. It is certainly a much easier way of thinking about how to protect some mutable state, easier than trying to use the java primitives. You can quickly get something up and running, and it seems kinda fun, messages are being sent around, stuff starts happening... For many of us akka haters, the honeymoon lasts until you start trying to maintain larger systems involving akka and to track down bugs, or trying to have some confidence in a refactor when you don't have the assistance of a type-checker for portions of your code.

Is the actor model good for anything?

Sure, marginally. Actors make for a very easy-to-think-about way of protecting a mutable datum. I can create an actor, have that actor be the only thing that mutates the datum, and just know that my actor will never be processing more than one message at a time, so I don't have to worry about guarding my datum from multi-threaded access. I do, however, agree with Paul Chiusano's post about actors making a bad concurrency model (you should also read his follow-up post).

Do I use them? These days, as little as possible. If I do, it isn't with akka, and it isn't a network of actors or a system of actors, but probably a single actor, protecting a single datum, buried somewhere behind a more sane API, not something I'd expose directly. So if I'm not using akka, what am I using?

scalaz actors

I think the scalaz actors do a fine job at the one thing I want from an actor: letting me easily protect some mutable state without having to think hard about the concurrency story. But scalaz actors do away with a lot of the things in akka that I find to be the sources of big trouble. With scalaz actors, we've thrown away the become method. Actors are no longer a finite state machine, and this allows us to be much more specific about what messages an actor can handle. When you provide a receive method for a scalaz actor, you provide a total function from A => Unit, where A is the ONLY type of message your actor handles, and it should handle any A; commonly A is some sealed trait. Now the compiler can check that any message we send to the actor is actually an A, and in our function handling the message, if we do pattern matching, the compiler can do exhaustiveness checking.

Here's a repl session with a scalaz actor:

scala> import scalaz.concurrent._
import scalaz.concurrent._

scala> case class Foo()
defined class Foo

scala> val a = Actor[Foo](f => println("bar"))
a: scalaz.concurrent.Actor[Foo] = Actor(<function1>,<function1>)

scala> a ! Foo()

scala> a ! Foo
<console>:14: error: type mismatch;
 found   : Foo.type
 required: Foo
              a ! Foo

And look, when we try to send the wrong type, the compiler complains. Let's try another:

scala> import scalaz.concurrent._
import scalaz.concurrent._

scala> sealed trait Bar ; case class Foo() extends Bar ; case class Baz() extends Bar
defined trait Bar
defined class Foo
defined class Baz

scala> val a = Actor[Bar]{ case Foo() => println("foo") }
<console>:13: warning: match may not be exhaustive.
It would fail on the following input: Baz()
       val a = Actor[Bar]{ case Foo() => println("foo") }
a: scalaz.concurrent.Actor[Bar] = Actor(<function1>,<function1>)

Yay! A compiler warning that we might not handle all the types we claim to. (And you DO have warnings turned into errors in your build, right? You should; you should be using whatever Rob Norris recommends here.)

For extra credit, I highly recommend looking at the source for scalaz's actors. I'm amazed at how little code there is, and figuring out how it actually works is a little mind-bending :) Happy hAkking or whatever.


Connecting akka actors to scalaz-streams

I've been using scalaz-stream more and more lately, and I'm finding it quite pleasant to work with. Unfortunately I'm still running into akka actor systems often, as they seem to be ubiquitous. Luckily there is a nice queueing mechanism in the scalaz.stream.async package which can be used to easily create a process which asynchronously receives data from somewhere such as an actor. This asynchronous queueing mechanism has two components: a queue which you can write stream elements to, and a Process[Task,A] which emits the values which were enqueued.

We'll start by creating an unbounded queue of strings, then we can call dequeue on the queue to get a Process[Task,String] which will stream the Strings which were enqueued:

  // create a queue
  val queue: Queue[String] = async.unboundedQueue[String]
  // get the process fed by this queue    
  val strings: Process[Task,String] = queue.dequeue 

Now we can hand our queue to an actor which can asynchronously write to it. Here are the messages we plan to send to our actor:

/** A string to enqueue */
case class Str(s: String)
/** A signal to terminate the process normally */
case object End
/** A signal to fail the process with an error */
case object AbEnd

The implementation of our actor is simple for our demonstration: it just feeds the queue based on the messages it receives. When it receives a string, it enqueues it, which makes it available to the Process backed by the queue. When it receives an End message it closes the queue, which causes the Process to halt normally. When it receives an AbEnd message it fails the queue, which causes the Process to halt with the Exception passed to fail.

class EnqueueActor(queue: Queue[String]) extends Actor {
  def receive: Receive = {
    case Str(s) =>
      // add the string to the queue; enqueueOne returns a lazy Task,
      // so we must run it for the enqueue to actually happen
      queue.enqueueOne(s).run

    case End =>
      // close the queue, which will halt the Process normally
      queue.close.run

    case AbEnd =>
      // fail the queue, which will halt the Process with an error
      queue.fail(new Exception("fail")).run
  }
}

Now we need something to send messages to the actor. For this demo, we are going to start a Thread which reads lines from stdin and sends a message to the actor for each line of input. We will create a Sink which looks for the special input lines "bye" or "die" as a signal to terminate the process; otherwise it passes the input line to the actor.

  // a Sink which will pass messages to our akka actor
  def toActor(recv: ActorRef): Sink[Task,String] = io.channel { str =>
    str match {
      case "bye" => Task.delay {
        recv ! End
        throw Cause.Terminated(Cause.End)
      case "die" => Task.delay {
        recv ! AbEnd
        throw Cause.Terminated(Cause.End)
      case x => Task.delay {
        recv ! Str(x)

Our Thread for driving input to the actor has a simple run method which creates the actor, then hooks up stdin to our sink:

class ConsoleInput(queue: Queue[String]) extends Runnable {
  val system = ActorSystem("queue-demo")

  override def run(): Unit = {
    val actor = system.actorOf(Props(classOf[EnqueueActor], queue))
    (io.stdInLines to toActor(actor)).run.run
  }
}

Here's our demo's Main class, all put together. It creates the queue, starts the input thread, then writes lines coming out of our output process to stdout:

object QueueDemo extends App {
  val queue: Queue[String] = async.unboundedQueue[String]
  val strings: Process[Task,String] = queue.dequeue

  val t = new Thread(new ConsoleInput(queue))
  t.start()

  val counted = strings map (str => s"${str.length} chars")
  (counted to io.printLines(System.out)).run.run
}


Here's a sample session with our application. I typed two strings, "Hello world" and "0123456789", before sending "bye", which is the signal to terminate the process.

❯ sbt run
[info] Set current project to queue-demo (in build file:/Users/stew/devel/queuedemo/)
[info] Running queue.QueueDemo 
Hello world
11 chars
10 chars
[success] Total time: 19 s, completed Dec 7, 2014 1:50:19 PM

It's worth noting here that we used an unboundedQueue, a queue which will always accept more inputs, storing them in memory; there will be no form of backpressure if things are being enqueued faster than they are being read from the Process. One can also create a boundedQueue which, when the queue is full, will not complete the enqueue Task until there is room in the queue. The queue implementation itself can be found here.
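The backpressure idea is the same one the JDK's blocking queues give you. As a stdlib-only sketch (an analogy, not the scalaz-stream implementation), a bounded queue refuses or delays new elements when full, where the boundedQueue's enqueue Task would simply not complete until there is room:

```scala
import java.util.concurrent.ArrayBlockingQueue

// A bounded queue of capacity 2, standing in for a boundedQueue(2).
val q = new ArrayBlockingQueue[String](2)

// offer returns false instead of blocking when the queue is full,
// which lets us observe the "no room" condition directly.
val ok1 = q.offer("a")  // true: room available
val ok2 = q.offer("b")  // true: room available
val ok3 = q.offer("c")  // false: queue full, the producer must wait

// once a consumer takes an element, there is room again
val taken = q.take()    // "a"
val ok4 = q.offer("c")  // true
```

With the unbounded queue in the demo above, that third offer would always succeed, and a slow consumer just means unbounded memory growth.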

I've posted the source code as a working project to github so that you can clone it and play with it yourself.


Screencast demonstrating Shrink instances in scalacheck

I recently recorded myself live-coding some examples of using scalacheck for some property based testing, with a concentration on how to use the Shrink functionality of scalacheck, which minimizes failing test cases.

In the video I also demonstrate how one can use the automatic typeclass derivation for both Arbitrary and Shrink which now exists in shapeless-contrib. I only added the typeclass derivation for Shrink to shapeless-contrib very recently, so as of this writing, I don't believe it has yet made it into a released version, which is why you'll see me using a -SNAPSHOT version in this video.

The typeclass derivation stuff is very very cool. The idea is that if you can describe how to construct a typeclass instance for an arbitrary Product (in the form of an HList) and for an arbitrary Coproduct, then shapeless will do the work of providing typeclass instances for any type it can convert into a combination of Products and Coproducts. So perhaps it makes sense for me here to take a short amount of space in order to butcher a rough definition of what a Product and a Coproduct are...


Products

I'll write a product using the fancy ∏ (N-ARY PRODUCT) symbol, because I'm fancy like that. When you see it, think of the word "and". So a product String ∏ Int you can think of as "a String and an Int". Think of String ∏ Int ∏ Int as "a String and an Int and another Int". Here are some scala types which are isomorphic to (the same shape as) String ∏ Int ∏ Int:

    (String,Int,Int) // aka Tuple3[String,Int,Int]
    case class Person(name: String, age: Int, height: Int)

In shapeless, Product types are represented as HLists; an example HList might have the type String :: Int :: Int :: HNil.
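As a concrete, hand-written illustration of that isomorphism (shapeless derives this mechanically via Generic; here I'm writing it out by hand), we can convert between the case class and the tuple without losing any information:

```scala
case class Person(name: String, age: Int, height: Int)

// Person and (String, Int, Int) carry exactly the same information,
// so we can write total conversions in both directions.
def toTuple(p: Person): (String, Int, Int) = (p.name, p.age, p.height)
def fromTuple(t: (String, Int, Int)): Person = Person(t._1, t._2, t._3)

val stew = Person("stew", 40, 180) // example values are made up
val roundTripped = fromTuple(toTuple(stew)) // == stew
```

Going through the tuple (or an HList) and back gives us the original value, which is exactly what "isomorphic" means here.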


Coproducts

I'll write a coproduct using the fancy ∐ symbol (N-ARY COPRODUCT), since I'M STILL THAT FANCY. When you see it, you can think of the word "or", so String ∐ Int can be read as "a String OR an Int". A scala type which is isomorphic to String ∐ Int would be:

    sealed trait StringOrInt
    case class S(str: String) extends StringOrInt
    case class I(int: Int) extends StringOrInt

The shapeless encoding of a coproduct is this:

    sealed trait Coproduct
    sealed trait :+:[+H, +T <: Coproduct] extends Coproduct
    final case class Inl[+H, +T <: Coproduct](head : H) extends :+:[H, T]
    final case class Inr[+H, +T <: Coproduct](tail : T) extends :+:[H, T]

So the type of a coproduct would be something like String :+: Int :+: CNil (where CNil terminates the chain), and a value of that type would be either an Inl(string) or an Inr(Inl(int)). These can be combined into larger coproducts: a String :+: Int :+: Float :+: CNil can be any of an Inl(string), an Inr(Inl(int)), or an Inr(Inr(Inl(float))).
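Using the encoding above (plus a CNil terminator to end the chain, which I've added here), we can build values of a three-way coproduct and see how the Inl/Inr nesting selects the branch:

```scala
// the shapeless-style encoding from above, with a CNil terminator added
sealed trait Coproduct
sealed trait :+:[+H, +T <: Coproduct] extends Coproduct
final case class Inl[+H, +T <: Coproduct](head: H) extends :+:[H, T]
final case class Inr[+H, +T <: Coproduct](tail: T) extends :+:[H, T]
sealed trait CNil extends Coproduct

// A value of String :+: Int :+: Float :+: CNil is exactly one of:
type SIF = String :+: Int :+: Float :+: CNil
val s: SIF = Inl("hello")        // the String branch
val i: SIF = Inr(Inl(42))        // the Int branch
val f: SIF = Inr(Inr(Inl(1.5f))) // the Float branch

// pattern matching peels off one Inr per branch we skip
def describe(v: SIF): String = v match {
  case Inl(str)            => s"string: $str"
  case Inr(Inl(n))         => s"int: $n"
  case Inr(Inr(Inl(fl)))   => s"float: $fl"
  case Inr(Inr(Inr(_)))    => "impossible" // CNil has no values
}
```

The depth of the Inr nesting is what records which alternative we actually have, which is why the derivation below can dispatch on Inl vs Inr.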

Algebraic Data Types

Products and Coproducts can be combined to create more general Algebraic Data Types. Here is a type in scala:

    sealed trait Thing
    case class Person(name: String, age: Int) extends Thing
    case class Place(name: String, lat: Double, long: Double) extends Thing
    case class ContainerOfBeer(firkins: Float) extends Thing

And this type Thing is isomorphic to the coproduct of products (String ∏ Int) ∐ (String ∏ Double ∏ Double) ∐ (Float), or in shapeless types, (String :: Int :: HNil) :+: (String :: Double :: Double :: HNil) :+: (Float :: HNil) :+: CNil.
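We can hand-roll that representation with scala's own Either standing in for the coproduct and tuples for the products. This illustrates the shape only; it is not what shapeless actually generates:

```scala
sealed trait Thing
case class Person(name: String, age: Int) extends Thing
case class Place(name: String, lat: Double, long: Double) extends Thing
case class ContainerOfBeer(firkins: Float) extends Thing

// (String ∏ Int) ∐ (String ∏ Double ∏ Double) ∐ (Float),
// written with tuples for ∏ and nested Either for ∐
type Repr = Either[(String, Int), Either[(String, Double, Double), Float]]

def toRepr(t: Thing): Repr = t match {
  case Person(n, a)         => Left((n, a))
  case Place(n, la, lo)     => Right(Left((n, la, lo)))
  case ContainerOfBeer(fk)  => Right(Right(fk))
}

def fromRepr(r: Repr): Thing = r match {
  case Left((n, a))             => Person(n, a)
  case Right(Left((n, la, lo))) => Place(n, la, lo)
  case Right(Right(fk))         => ContainerOfBeer(fk)
}

val aPlace: Thing = Place("Boulder", 40.0, -105.3) // example values are made up
val roundTrip = fromRepr(toRepr(aPlace)) // == aPlace
```

Because the two functions are inverses, anything we know how to do for the representation (like shrinking it) can be transported back to Thing, which is exactly what the project method below does.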

Typeclass derivation

So now that we know what a product and a coproduct are, let's have a look at how the automatic typeclass derivation in shapeless-contrib is actually put together. As a review from the video above: a Shrink instance is a way of taking some value and returning a stream of "smaller" values of that same type. So for a String value such as "asdf", "smaller" values might be "asd", "sdf", "adf", "as", "af", "", etc.
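As a rough sketch of that idea (my own toy, not scalacheck's actual Shrink implementation), a shrinker for strings might propose candidates by dropping one character at a time:

```scala
// A toy shrinker: produce candidate "smaller" strings by removing
// each single character in turn. scalacheck's real Shrink for
// strings is cleverer, but the shape of the idea is the same.
def shrinkString(s: String): Stream[String] =
  Stream.range(0, s.length).map(i => s.take(i) ++ s.drop(i + 1))

val candidates = shrinkString("asdf").toList
// List("sdf", "adf", "asf", "asd")
```

A property-testing framework applies this repeatedly: whenever a candidate still fails the property, it shrinks that candidate again, walking down toward a minimal counterexample.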

Now, in order to create arbitrary Shrink instances, we need to implement 5 methods:

  • emptyProduct -- which returns a Shrink instance for a 0-ary product
  • emptyCoproduct -- which returns a Shrink instance for a 0-ary coproduct
  • product -- which returns a Shrink instance for an HList (A :: B :: HNil)
  • coproduct -- which returns a Shrink instance for a Coproduct (A :+: B )
  • project -- given a Shrink[B], an A=>B and a B=>A, create a Shrink[A]. This is so that if we can convert a value of some type A to a Coproduct of Products, from which we can derive a typeclass instance using the above 4 methods, we can then create a typeclass instance for A

The first two we can implement trivially, by returning a Shrink which just emits an empty Stream:

    def emptyProduct: Shrink[HNil] = Shrink(_ => Stream.empty)
    def emptyCoproduct: Shrink[CNil] = Shrink(_ => Stream.empty)

The next easiest to implement is coproduct. Here we are given either a left or a right; we just have to see which we have, then shrink it:

    def coproduct[L, R <: Coproduct](sl: => Shrink[L], sr: => Shrink[R]) = Shrink { lr =>
      lr match {
        case Inl(l) => sl.shrink(l).map(Inl.apply)  // if it is a left, shrink the left, put the results back on the left
        case Inr(r) => sr.shrink(r).map(Inr.apply)  // if it is a right, shrink the right, put the results back on the right
      }
    }

For product, given a way to Shrink As and a way to Shrink Bs, we can implement a way to Shrink (A ∏ B) by getting a stream of shrunken As and appending to it a stream of shrunken Bs:

    def product[F, T <: HList](f: Shrink[F], t: Shrink[T]) = Shrink { case a :: b =>
      f.shrink(a).map(_ :: b).append(  // shrink the As, leave the Bs alone
        t.shrink(b).map(a :: _))       // shrink the Bs, leave the As alone
    }
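The same append-the-two-streams shape can be seen with a plain pair instead of an HList. This is a self-contained toy sketch; the Shrink and shrinkInt here are stand-ins I've made up, not ScalaCheck's:

```scala
// toy stand-in for ScalaCheck's Shrink typeclass
case class Shrink[A](shrink: A => Stream[A])

// a toy Int shrinker: halve toward zero
val shrinkInt: Shrink[Int] =
  Shrink(n => if (n == 0) Stream.empty else Stream(n / 2))

// "product" over a pair: shrink the first slot, then the second
def productPair[A, B](sa: Shrink[A], sb: Shrink[B]): Shrink[(A, B)] =
  Shrink[(A, B)] { case (a, b) =>
    sa.shrink(a).map((_, b)) append sb.shrink(b).map((a, _))
  }

val shrinkPair = productPair(shrinkInt, shrinkInt)
```

Shrinking (4, 6) yields (2, 6) followed by (4, 3): each candidate changes exactly one slot, just as the HList version does.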

For project, we convert our value into the shapeless equivalent, shrink that using the automatically derived Shrink instance, then convert the resulting values back to the 'goal' type:

    def project[A, B](bshrinker: => Shrink[B],  // automatically derived from the above 4 methods
                      ab: A => B,               // a way to convert from an A to its shapeless representation
                      ba: B => A) =             // a way to convert back to As
      Shrink { a =>                                           // given an A
        val bvalue = ab(a)                                    // convert the A that needs to be shrunk to a B which we know how to shrink
        val shrunkenBs: Stream[B] = bshrinker.shrink(bvalue)  // shrink the B
        shrunkenBs map ba                                     // convert the shrunken Bs back to As
      }

And that's it! Now, anywhere we import shapeless.scalacheck._, we will have an implicit Shrink[A] instance in scope for any A for which shapeless can convert the type into a coproduct of products, which I believe is tuples, case classes and sealed traits of case classes. You can see the full source for Shrink derivation here. I'm looking forward to seeing more of these automatically derived typeclasses become available in the future.
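To see project in isolation, here is a self-contained toy version (again using a made-up Shrink and a naive string shrinker as stand-ins for the real derived machinery):

```scala
// toy stand-in for ScalaCheck's Shrink typeclass
case class Shrink[A](shrink: A => Stream[A])

// a naive String shrinker: drop one character at a time
val shrinkString: Shrink[String] =
  Shrink(s => (0 until s.length).toStream.map(i => s.substring(0, i) + s.substring(i + 1)))

// project: shrink As by converting to Bs, shrinking those, converting back
def project[A, B](sb: => Shrink[B], ab: A => B, ba: B => A): Shrink[A] =
  Shrink(a => sb.shrink(ab(a)) map ba)

// a wrapper type we don't know how to shrink directly...
case class Name(value: String)

// ...gets a Shrink for free via the iso with its underlying String
val shrinkName: Shrink[Name] =
  project(shrinkString, (n: Name) => n.value, (s: String) => Name(s))
```

Shrinking Name("ab") produces Name("b") and Name("a"): the work happens in String-land and the results are mapped back, which is exactly the trick that lets the four structural methods cover arbitrary case classes.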


Converting my .emacs to org-babel

Recently a coworker pointed me to this blog post which shows how to use org-babel to organize your .emacs file into an org-mode document. I've made this change and I love it. I had for a long time been meaning to break my huge, mostly monolithic emacs initialization file into more manageable chunks. This lets me keep a single config file while still organizing it into well-labeled chunks.

Now my ~/.emacs.d/init.el consists of only these lines:

    (require 'package)
    (add-to-list 'package-archives '("melpa" . "http://melpa.milkbox.net/packages/") t)
    (add-to-list 'package-archives '("org" . "http://orgmode.org/elpa/") t)
    (require 'ob-tangle)
    (org-babel-load-file "~/.emacs.d/Stew.org")

and the rest of my configuration gets organized into the Stew.org file. That file can now be organized using org-mode, and contains marked source blocks which are tangled by org-babel into a Stew.el file, which is then loaded as part of initialization. Here's an example section of my Stew.org:

    * widgets
    ** ace jump
    fast cursor movement. see the demo:
    #+begin_src emacs-lisp
      (autoload 'ace-jump-mode "ace-jump-mode" "Emacs quick move minor mode" t)
      (define-key global-map (kbd "C-c SPC") 'ace-jump-mode)
      (eval-after-load "ace-jump-mode"
        '(ace-jump-mode-enable-mark-sync))
      (define-key global-map (kbd "C-x SPC") 'ace-jump-mode-pop-mark)
    #+end_src

And now that my .emacs file is organized in org-mode, I can export it as a decent looking .html file or PDF. You can view an htmlized version of my .emacs file here.

There are clearly some improvements I should be making to this file. I could be adding more comments. I could be making better use of org-mode (always, everyone always could be). One particularly interesting thing I see in Sacha's .emacs file is that she is using the :drill tag which should let her use Org-Drill to treat these entries as flashcards. This to me seems great, there are neat little tricks I find, drop in my .emacs file and forget about them. This would be a way to go back and remind myself of some of the lesser known things in my .emacs files that I wish I was making more use of. (Like ace-jump mode above!).