Joshua Suereth's Blog, page 2

December 7, 2011

Scala Fresh is Alive

It seems that many of the posts I've been meaning to write for a long time end up working best as responses to other blogs.



Today's post is brought to you by David Pollak's fragility post.



So let's talk about the problem of "Binary Compatibility(TM)" in Scala.



First off, David raises a big issue in Scala, one that most of us have seen. Scala's new language features necessarily break binary compatibility of the bytecode. Adding things like specialization changes Scala's standard library enough that you cannot use code compiled against older versions with the new version. This is a two-edged sword. The Scala experience should be as smooth as possible for all customers, but things like java.util.Date shouldn't survive endlessly in our standard library. This requires a careful balance between breaking binary compatibility and advancing the language. So far, I'm pretty happy with the way this has been done, but I would like to see things stabilize in the future. Let's take a look at what's happened:



Scala's new binary compatible releases

Scala 2.9.1 is binary compatible with Scala 2.9.0. If you compile code against Scala 2.9.0, you can use it with the standard library from Scala 2.9.1. This will hold for all 2.9.x releases. In Scala, binary compatibility is maintained at the bug-fix release level: all 2.10.x releases will be binary compatible with previous 2.10 releases. This means that as long as your dependencies are compiled for a given Scala minor version, you can continue to enjoy binary compatibility of libraries.



This was made possible by the freely available Migration Manager tool. A lot of us at Typesafe use the publicly available version when developing to ensure binary compatibility of libraries.



Note: This was not true for a lot of David Pollak's Scala experience, and is the result of many of us petitioning for better binary compatibility. It's my opinion that this guarantee solves 50-80% of the problem.



What happened to Scala Fresh?

I've begun work on what can only be called "Scala Fresh 2.0". That is, a place where community libraries will be built and deployed against the latest version of Scala.



This can serve two purposes:




Ensure that future versions of Scala do not break community libraries.

Ensure that the core libraries of Scala are available for every major version of Scala.


You can find the project publicly available here. All of the work is public and will be migrated to the "scala" user on github once complete and ready for more contributions. Feel free to contribute or offer suggestions.



People may not realize it, but Scala Fresh failed because I failed Scala Fresh. I had very little time between a more-than-full-time commitment at Google, writing commitments, and kids. This should not be an issue in the future, thanks to Typesafe taking binary compatibility seriously. I now have a prototype build that you can migrate your projects into.



My coworker likens this idea to Linux distribution repositories. I think it's a decent way to think of it. I, and others, are working hard to ensure the ecosystem for every major Scala release is easy to use and stable. Migrating between major Scala versions should be no more difficult than finding deprecation warnings and removing them.



SBT cross releasing

Mark Harrah recognized that Scala remains generally source compatible between major releases (2.8 to 2.9). A long time ago, Mark met me at a Panera in Anne Arundel, MD to discuss a mechanism of cross deploying libraries so that they were available for every release of Scala. This was well before the 2.9.x binary compatibility days and is still in heavy use within the community. It's been pretty widely adopted, but can only go so far. It's a good stop-gap solution that we, the community, can improve on.
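
For reference, here's roughly what cross building looks like in an sbt build (a minimal sketch in sbt 0.10/0.11-style syntax; the project name and the exact Scala version numbers are just illustrative):

// build.sbt -- cross building a library against several Scala releases
name := "my-library"

scalaVersion := "2.9.1"

// sbt appends the Scala version to the artifact name (e.g. my-library_2.9.1)
crossScalaVersions := Seq("2.8.1", "2.9.0-1", "2.9.1")

Running "+ publish" from the sbt prompt then builds and publishes the library once for each listed Scala version.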



Binary compatibility is a community effort. I know Typesafe is doing what it can with its resources, and I'm personally tackling as much as I can (probably trying to juggle too many balls). However, if you want to help, please email me!



The JVM is amazing, don't doubt its powers

Finally, I want to clarify a confusion that a lot of people have about binary compatibility: namely, that traits are HUGE ISSUES FOR BINARY COMPATIBILITY ZOMG!!! In actuality, traits are only slightly more dangerous than interface + implementation pairs.



In the code below, the method marked with a comment was added in a bug-fix release. It is binary compatible with the previous version.



trait Foo {
  def foo = "foo"
  def bar = "NEW BAR IS NEW BAR!" // new in this release
}

That's right: adding methods, even with implementations, does NOT break linkage. Think about it. How did Java support JDBC 4.0 interfaces running against JDBC 3.0 drivers? The trick is that trait-linkage errors happen at runtime, when calling a method that has no linked implementation. This is the kind of magic Java gets away with to ensure binary compatibility, and it's just as useful in Scala.



Now, there is a case where you can break binary compatibility. Again, comments mark the new code:



trait Bar {
  def foo = {
    bar + " foo" // the existing method now calls the new one
  }
  def bar = "NEW BAR IS NEW BAR!" // new in this release
}

Now the implementation of a pre-existing method calls a new method. While this new method has an implementation in the trait, code compiled against the previous version of the trait does not have it. This is akin to the following Java scenario:



interface Foo {
  public String foo();
  public String bar(); // new in this release
}

abstract public class AbstractFoo implements Foo {
  @Override public String foo() { return bar() + " foo"; } // now calls the new method
}

The abstract class implements one of the interface's methods, but not the new one. However, the implementation is modified to call the new method.



Scala's collections underwent a minor change in 2.10 to improve class file size and, unintentionally, binary compatibility. That is, the collections now follow this pattern:


trait Traversable[A] // ...
abstract class AbstractTraversable[A] extends Traversable[A] {}
class Vector[A] extends AbstractTraversable[A] with Traversable[A] // ...


In this scenario, if your collection extends an abstract collection class (rather than mixing in the trait directly), you remain binary compatible while new methods are added to the trait and called by existing implementations. Vector is now "binary resilient" to changes in the Traversable trait. There are no guarantees that a trait isn't used directly, so we can't take full advantage of this when fixing bugs in the Scala collections library, but it's still a nice trick to use.
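
Here's a minimal sketch of the same pattern applied to your own library (the names are hypothetical; the point is that downstream classes extend the abstract class rather than mixing in the trait directly):

// The public trait. New methods with default implementations may be added later.
trait Logger {
  def log(msg: String): Unit
  def warn(msg: String): Unit = log("WARN: " + msg)
}

// The abstract "linkage buffer". It ships with the library and is recompiled
// alongside the trait, so it always carries the latest trait forwarders.
abstract class AbstractLogger extends Logger

// Downstream code extends the abstract class and stays binary resilient
// to new methods appearing in Logger.
class ConsoleLogger extends AbstractLogger {
  def log(msg: String): Unit = println(msg)
}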



You do lose some flexibility doing this. Specifically, you can't have multiple class parents; class inheritance has to stay linear (like Java).



So, my point is that Binary Compatibility is a community issue. The JVM and Scala do what they can. I hope to see the compiler do more in the future. The Migration Manager tool from Typesafe can detect a lot of binary compatibility issues and has been crucial to ensuring 2.9.x Scala releases are binary compatible.



I'm trying to organize projects to help ensure all libraries for Scala can be released against all the major versions and remain binary compatible. The story is changing, and has done so rapidly over the past two years.



I'd love to see Lift and others start adopting binary compatibility standards for their releases as well. This is not just a scala-the-library or scala-the-language issue. Libraries matter, just as they do in Java.



So while David's post highlights an issue, my response is:



Help us (the Scala community) out! We're going there, we're doing that.


November 29, 2011

Macro vs. Micro Optimisation

So there's recently been a bit of hype about another Colebourne article: http://blog.joda.org/2011/11/real-life-scala-feedback-from-yammer.html



I'd like to respond to a few points he makes.



First

You should evaluate Scala and pay attention to its benefits and flaws before adopting it. Yes, there are flaws in Scala. Working at Typesafe makes you more aware of some of them. We're actively working to reduce, minimize, or get rid of these. In my opinion, the negatives of using Scala are peanuts compared to the positives of choosing Scala over Java. I think everyone should make up their own mind about this. Not everyone is going to choose Scala. I feel bad for those who don't, but I make no effort to convince you further than showing you the 40+ open source Scala projects I have on github. It's a language with a lot to like and a bit to dislike.



Now, to the meat of what I want to say. Don't get lost in micro optimization when discussing programming.



The blog article discusses writing high-performance code in the critical path that has crazy performance needs. This is not your every day development. Scala loses a lot of benefits in this world, because features like closures have overhead on the JVM. Hopefully when Java adopts closures, this overhead can be alleviated, but it is there right now. The set of rules from the email is known to a lot of us Scala devs when writing a performance intensive section of code.



I'll reiterate a few to agree with (a small sketch illustrating the first two follows the list):




Avoid closure overhead. If you can't get the JVM to optimize it away and you allocate a closure constantly in a tight loop, this can slow down a critical section. This is related to: don't use a for loop. For expressions (and loops) in Scala are implemented with closures. There are efforts underway to inline these closures as an optimization, so this performance hit isn't a permanent one. As the compiler matures, you'll see a lot of optimization work happen.

Use private[this] to avoid an additional method. Scala generates both a field and an accessor method for val/var members. Using private[this] informs the compiler that it can optimize away the accessor. Again, while HotSpot can often optimize this away, in a very critical section it may be a good idea to do it yourself. In fact, the whole promotion of members to fields and methods in Scala classes deserves attention from anyone writing performance critical code. Similar rules exist in Java; it's just that you probably have people around who already know them.

Avoid invokevirtual calls. (Coda lists this as "avoid Scala's collections".) The true issue here is that invokevirtual can make a difference in performance critical sections of code. Again, this is one I think we can improve with a few final declarations and maybe an annotation or two.
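
To make the first two concrete, here's a small sketch (a hypothetical hot loop, not from any real codebase) contrasting the idiomatic version with a hand-optimized one:

// Idiomatic version: concise, but may allocate a closure and box values.
def sumOfSquares(xs: Array[Int]): Long =
  xs.map(x => x.toLong * x).sum

// Hand-optimized version for a performance-critical section:
// private[this] avoids a generated accessor method, and the while loop
// avoids allocating a closure per element.
final class SquareAccumulator {
  private[this] var total: Long = 0L

  def addSquares(xs: Array[Int]): Long = {
    var i = 0
    while (i < xs.length) {
      val x = xs(i)
      total += x.toLong * x
      i += 1
    }
    total
  }
}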


Here's the big missing point in all that feedback: this is for performance critical sections of code, not for general purpose applications. I think when it comes to performance bottlenecks, you need to pull out all the stops and optimize the heck out of your apps. Look at the Akka framework from Typesafe. Akka uses a lot of "dirty" Scala code to achieve high non-blocking concurrent performance. It's one of the most amazing libraries, and the code is pretty low level. It also supports a very high level of abstraction when writing your application. It uses PartialFunctions (which are sort of a combination of pattern matching and closures) and traits, both of which have some overhead. However, the resulting application is fast. Why? The inner loops are fast and optimized, and the application architecture can be optimized. In a high level language, you can take advantage of designs that you would never execute in a lower-level language because the code would be unmaintainable.



You see, most of us like to be able to read and understand what our code does. Scala has some features where you can write very expressive code in a few lines. After getting over the initial hump, this code is pretty easy to maintain. If I were to write that same code in Java, it would look odd, confusing as hell, and no one would want to maintain it.



I learned this lesson at Google.

Google has a pretty stringent C++/Java style guide, one that is tuned for high performance servers. The C++ style guide is public. The style guide frowns on things like 'smart' pointers that 'garbage collect', because you can't control when that collection will happen, and it might happen at a critical moment in the code.



Google also has a high performance Map-Reduce library. This thing is pretty amazing, with all sorts of crazy cool features like joining data on the map part of the map-reduce. The library followed all the Google coding conventions and is generally held up as a piece of awesome software, which it is.



However, writing applications with the Map-Reduce library was less than ideal. I could write pretty clean code with the MR library. My Mappers were pretty lightweight and minimalistic. Same with my reducers. You'd then string together a series of map-reduces in this crazy, guitar-inspired "patch it together" configuration and the thing would be off to the races. The downside is that the throughput of this kind of process was suboptimal.



It wasn't anything inherent in the libraries. The libraries were fast and good. The APIs were optimized for high-speed performance. However, writing optimal architectures in the framework was tough. If I wanted to use any crazy performance features, like combining map functions and reduce functions in a single map-reduce run, the code got ugly fast. Not only that, it was very difficult to maintain because of all the odd bookkeeping. I have 10 outputs ordered by key, and this is the one writing to that file, right? KRAP.



I spent 6 months writing applications of this nature and feeling like there had to be something better. I started on a venture to write something, and as usual found that someone else already had. That's when I found out about Flume and met Craig Chambers.



FlumeJava was a map reduce library that was gaining a lot of traction at Google, but I had heard a lot of complaints about its API. Around the time I was looking at it, Flume C++ was coming into existence. My team was one of the alpha users of the C++ variant.



The C++ API was a thing of beauty. Like its Java cousin, it treats data as a set of Parallel Distributed Collections. You can see Daniel's and my talk on the Scala equivalent here. Converting Mappers and Reducers into this API was pretty simple. You could even do it directly by just annotating your Mappers and Reducers with types.



My team saw a 30-40% reduction in code size for some of our map-reduce pipelines. We had an 80% reduction in code for unit tests (which was by far the most amazing benefit). Not only that, the Flume library optimizes the pipeline by performing all sorts of dirty map-reduce tricks for you. In some cases we dropped a map-reduce call or two. In the worst case, we had the same number we had started with.



What was wrong with the library? It violated almost every style rule for C++ at Google. That's right: smart pointers, classes with inline member definitions, the works. I loved every second of it. Why? Because I was getting stuff done faster than before, with less code, and the pipelines ran faster. It was a crazy win.



The startup time might not have been as optimal as it could have been; Flume ran an on-the-fly optimization before running your pipeline. That was being improved all the time with neat tricks. These were things I didn't have to write in order to watch my app speed up, in both runtime and startup time.



The key here is that the designers of Flume weren't focused on micro optimization but on macro optimization. The inner guts of the library used the very fast and efficient Map-Reduce library, and the wrappers they had were as efficient as they could make them. My code did not have to follow these rules, because the core loops were fast. When it came down to it, my code used high level concepts, something akin to closures and for-expressions in Scala (note: the Scala equivalent did use for-expressions and closures with no noticeable performance hit).



There are times when writing Scala code requires care and optimization at the micro level. However, don't lose the forest for the trees. Think about the entire application architecture. Scala will open up possibilities for writing code that you'd never dream of trying to maintain in C++ or Java. Take Akka as a shining example.



And when you need the performance, listen to those techniques. Viktor Klang can probably give you a ton more. I know I learn more every time I talk with him.


September 16, 2011

SBT and Plugin design

SBT 0.10 brings a lot of power to the table. It switched from a class/inheritance based build system to a more functional approach. For those who aren't familiar, here's the quick spiel on SBT.



Basics of SBT

In SBT, a project is composed of Setting[_] values. A Setting is sort-of a name-value pair (more of a name computation pair). In the SBT command-line you can type the name of a setting and get its value (or computed value). For example, test is a task in SBT that you can type in the command line. The setting's computation is executed and the value returned. This setting may depend on other settings for its value.



SBT provides a simple way to construct a project. In the root directory, any *.sbt file is compiled to Setting[_] values. A Setting[_] is two things: A name (Key + Scope) and a Value (or computation, called Initialize in SBT). One can construct a Setting[_] via the SBT dsl:



sourceDirectory in Compile <<= baseDirectory apply { dir =>
  dir / "src" / "main" / "scala"
}

In this example, the sourceDirectory Key (name) is assigned an Initialization (value/computation). The <<= operator is used to construct a Setting[_] by joining a Key and an Initialization. In the above example, the Initialization is constructed to pull the current value of the baseDirectory key and modify it for the value of the sourceDirectory key.



Note: The apply method is used on baseDirectory because both baseDirectory and sourceDirectory are SettingKey[_]s. SBT distinguishes between three types of Setting[_] values: Setting, Task and InputTask, with corresponding SettingKey, TaskKey and InputKey "name" types. The three are distinguished as follows (a small sketch of declaring each follows the list):




SettingKey - Something that is computed once on project load (or reload), like a val.

TaskKey - Something that is computed each time it is called, like a def.

InputKey - Something that takes user input to perform its task.
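
As a rough sketch, declaring one of each might look like this (the keys here are hypothetical, using the sbt 0.10-era API):

import sbt._
import java.io.File

// Computed once per project load, like a val.
val reportDir = SettingKey[File]("report-dir", "directory where reports are written")

// Recomputed every time it is invoked, like a def.
val generateReport = TaskKey[File]("generate-report", "writes a report and returns the file")

// Parses user input before running, e.g. report-for <module>.
val reportFor = InputKey[Unit]("report-for", "prints the report for the given module")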


Configurations

SBT uses a configuration matrix to define the same task against different configurations. For example, SBT defines a task for compiling Scala code called compile. This task has a bunch of required settings. However, it wouldn't be DRY to repeat all these settings for compiling test code as well. So instead, SBT defines the same settings in two different configurations, one called Test and another called Compile. To compile just tests in SBT, you can prefix a task with its configuration, e.g. test:compile.
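
For example, scoping the same (hypothetical) key to both configurations might look like this, assuming the usual sbt._ and Keys._ imports:

// One key, two configurations. On the command line: compile:report-name and test:report-name.
val reportName = SettingKey[String]("report-name", "name of the generated report")

val reportSettings: Seq[Setting[_]] = Seq(
  reportName in Compile := "main-report",
  reportName in Test := "test-report"
)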



Plugin Design

So what does this have to do with plugin design? SBT plugins need to integrate Setting[_] values into a build without conflicting with SBT's default settings and other plugins. To complicate matters, SBT imports all the members of plugin classes into the scope of a project using a wildcard import. This means all the plugins you use could have conflicting names that step on each other. Combined with potentially conflicting key names, plugins need to be very careful with how they define things.



Having worked on several plugins recently, I'd like to outline a strategy that I think achieves a certain elegance in definition and usage, as well as the safety one wants from a plugin.



The basic pattern is as follows. Define an object with the name you want for your plugin inside the Plugin class. For example, if I want an xsbt-suereth-plugin, I would define the following:



import sbt._
import Keys._

object SbtSuerethPlugin extends Plugin {

  object suereth {
    // Your code here
  }
}

Inside of the suereth object I hide all my definitions and code. This isolates my plugin from other sbt plugins, as long as no one names their methods "suereth".



Next, let's define a new Config object that we can use to protect our keys from other plugins.



object suereth {
  val Config = config("suereth")
  // Your settings here
}

The Config also shares the name of the plugin, so on the command line, tasks and settings can be run using suereth:<your-task-here>.



The next step is to define whatever custom keys your plugin will use. Let's create a blog key.



object suereth {
  ...
  val blog = SettingKey[String]("blog", "location of the blag") in Config
}

The key is automatically placed into the suereth configuration using the in method. This has two benefits:



When defining the Initialization for a Setting[_], there's no need to continue writing blog in Config.

Users of the plugin can directly access suereth.blog without needing to specify suereth.blog in suereth.Config.

Note: You can also reference SBT keys in your configuration by writing: val sources = Keys.sources in Config.



Finally, we can provide default values/computations for the tasks and settings in our plugin. By convention, naming this value settings is a good idea.



object suereth {
  ...
  lazy val settings: Seq[Setting[_]] = Seq(
    blog := "http://suereth.blogspot.com"
  )
}

Notice how the keys are accessed directly but actually live in the appropriate configuration matrix. This helps when defining your plugin source code, but will also help users of your plugin. Let's look at what a build.sbt file looks like for this plugin.



seq(suereth.settings:_*)

suereth.blog := "http://blog.typesafe.com"

Notice how the settings for this plugin are completely namespaced by the suereth object. We've tied the concept of a "configuration" axis for keys with accessing values in an object.



I find this mechanism of defining plugins both helpful from a development perspective and a user perspective. Curious to hear what others think.


July 22, 2011

Leaving Google for Typesafe

So, as some of you may have found out already, I'll be leaving Google effective July 22nd and moving to a new role at Typesafe. Some of you may be wondering why, so I'm offering my reasons in a blog post for the curious:




Scala has been gaining lots of traction in the marketplace recently. I think Scala is reaching its tipping point. I'm hoping to contribute everything I can to ensure it remains a viable language for everyday development.

While Google does have 20% time to work on individual projects, this amounted to about a day a week to work on all things Scala within Google. The opportunity to do anything significant or to contribute back to the community was rare, due to time constraints. While I was able to accomplish one pretty good thing, aligning my work with my passions seemed like the best thing to do.

It's part of a new weight loss program called "pay for your own food". This was tempered by my wife's purchase of a new grill for my birthday, which has seen heavy use in the past few weeks.


So, while I see Scala's future as bright and rosy and I can't explain how excited I am to start at Typesafe, I'd also like to do a little reflection on Google and some of what makes it a great company.



Google cares about employees

This is not a lie. Google, as best it can, tries to mean what it says. At my previous companies you'd hear "I'd love to do this, but my hands are tied". At Google, you often hear "I don't know, but let's ask someone who does", which usually turns into "Yes, go ahead". It was rare that something I asked for was denied; my perception of what was acceptable is an entirely different thing. At Google, there are people who think what you think (with over 10k engineers, it's guaranteed someone will probably agree with you about something), and that engineer might have also tried to accomplish the same task as you. Go ask about it and find like-minded individuals. You'll be surprised how much Google tries to do what they say.

The other side of this coin is that the corporate entity, and by that I mean the higher-ups, likes to do things for all Google employees. Like bonuses. They really do happen, and they really do try to treat you like a human being.



Another side of this coin is the interview process. I've seen so many negative postings about the interview process. Well, I'm here to dispel a few of those. Google's interviews will challenge your technical abilities. If this annoys you and you don't take the job, then good: you're probably not the type to herd cats and deal with the politics/relationships involved in a large developer base. If you can't cut the interview, then it gives you a reason to go back and practice. More importantly, Google would rather turn away a good candidate than hire a bad one. This mindset is very impressive. It makes a huge difference in the ability of teams to accomplish things. I knew every employee I worked with at Google could be relied on to get work done, which is not something that's always the case elsewhere.



In the future, I know my interview style will change. No more will interviewees be allowed to just talk about experiences without showing some code. It's amazing what depths you can learn this way. It's even great when an interviewee fails to answer the question correctly, because you learn their thought process. For me, a few candidates who struggled were the ones I wanted sitting beside me coding, more so than the ones who blazed through a question but had a 'better than thou' aura. So, the point here is: care about your employees and care about who you hire. Your company will be in far better shape if you do this.



Culture is faster than process

Google tries to instill a culture of 'doing the right thing' in its employees, rather than outlining software process to a T. A few 'inconveniences' exist, but other than mandatory code reviews, a lot of the process is up to the team to do what's best. The other side of this coin is that the corporate culture tries to help define and change what's best. It's amazing how one executive making a statement in an all-hands meeting can suddenly alter the perceived "best way to code" and get the company to move. It's also surprisingly hard to change culture once it has been really embedded into the engineers, which is the danger. A sense of 'right' in the ways of writing software can be beneficial, but it can also turn into anti-patterns if not tempered. I'd love to go into more details here; just feel free to bug me.


June 16, 2011

A Generic Quicksort in Scala

So, I decided to create a quicksort algorithm in Scala that showcases how to write 'generic' collection methods. That is, how can we write an external method that works across many types of collections and preserves the final type?



Well, here's how you do it:


import scala.collection.SeqLike
import scala.collection.generic.CanBuildFrom
import scala.math.Ordering

object QuickSort {
  def sort[T, Coll](a: Coll)(implicit ev0: Coll <:< SeqLike[T, Coll],
                             cbf: CanBuildFrom[Coll, T, Coll],
                             n: Ordering[T]): Coll = {
    import n._
    if (a.length < 2)
      a
    else {
      // We pick the first value for the pivot.
      val pivot = a.head
      val (lower: Coll, tmp: Coll) = a.partition(_ < pivot)
      val (upper: Coll, same: Coll) = tmp.partition(_ > pivot)
      val b = cbf()
      b.sizeHint(a.length)
      b ++= sort[T, Coll](lower)
      b ++= same
      b ++= sort[T, Coll](upper)
      b.result
    }
  }
}


I've chosen a somewhat imperative approach to the problem. The quicksort algorithm is split into two parts: the first checks for small collections and returns them; the second picks a pivot and decomposes the collection into three pieces. These pieces are sorted (if necessary) and pushed into a builder, cleverly named "b". This "b" is given a hint to expect the entire collection to eventually wind up in the built collection (hopefully this helps performance). Finally, after passing the three partitions to the builder, the result is returned.



The magic here is in the rather confusing type signature:


def sort[T, Coll](a: Coll)(implicit ev0: Coll <:< SeqLike[T, Coll],
                           cbf: CanBuildFrom[Coll, T, Coll],
                           n: Ordering[T]): Coll


Let's decompose this a bit. T is the type parameter representing elements of the collection. T is required to have an Ordering in this method (the implicit n: Ordering[T] parameter in the second parameter list). The ordering members are imported on the first line of the method. This allows the < and > operations to be 'pimped' onto the type T for convenience.



The second type parameter is Coll. This is the concrete Collection type. Notice that no type bounds are defined. It's a common habit for folks new to Scala to define generic collection parameters as follows: Col[T] <: Seq[T]. Don't. This type does not quite mean what you want. Instead of allowing any subtype of sequence, it only allows subtypes of sequence that also have type parameters (which of course, is most collections). Where you can run into issues is if your collection has (a) no type parameters or (b) more than one type parameter. For example:


object Foo extends Seq[Int] {...}
trait DatabaseResultSetWalker[T, DbType] extends Seq[T] {...}


Both of these will fail type checking when you try to pass them to a method declared with Col[T] <: Seq[T].



To get the compiler to infer the type parameters, we have to defer the type inferencer long enough for it to figure this out. To do that, we don't enforce the type constraint until implicit lookup, using the <:< class.



The implicit parameter ev0 : Coll <:< SeqLike[T, Coll] ensures that the type Coll is a valid Seq[T]. You may be asking why this signature uses SeqLike rather than Seq.



GOOD QUESTION!



SeqLike differs from Seq in that it retains the most specific type of the sequence. This is one of the magic tricks behind Scala's collections always returning the most specific type known; that type is embedded in SeqLike. To ensure that we can return the most specific type, we capture Coll as a SeqLike with Coll as the specific type. This means that filter, map, flatMap, and partition should all try to preserve the type Coll.



The last implicit parameter is the cbf CanBuildFrom. Because we don't know how to construct instances of type Coll (because we don't know the type Coll at all), we need to implicitly receive evidence for how to construct a new Coll with sorted data.



Let's look at the result:



scala> QuickSort.sort(Vector(56,1,1,8,9,10,4,5,6,7,8))
res0: scala.collection.immutable.Vector[Int] = Vector(1, 1, 4, 5, 6, 7, 8, 8, 9, 10, 56)

scala> QuickSort.sort(collection.mutable.ArrayBuffer(56,1,1,8,9,10,4,5,6,7,8))
res1: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(1, 1, 4, 5, 6, 7, 8, 8, 9, 10, 56)

scala> QuickSort.sort(List(56,1,1,8,9,10,4,5,6,7,8))
res18: List[Int] = List(1, 1, 4, 5, 6, 7, 8, 8, 9, 10, 56)

scala> QuickSort.sort(Seq(56,1,1,8,9,10,4,5,6,7,8))
res: Seq[Int] = List(1, 1, 4, 5, 6, 7, 8, 8, 9, 10, 56)

You may be asking why I've chosen Seq instead of GenSeq or GenTraversable or even GenIterable. No particular reason, besides I wanted a reasonable assurance that the collection author expects the .length and indexed access methods to be called.



So, what are the lessons to be learned here?




Use *Like subclasses to preserve the specific collection type

Defer inference using <:< to give the type checker a hope of succeeding

Provide @usecase comments for Scaladoc so users won't get distracted by the type-heavy machinery (see the sketch after this list).
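
As a rough sketch of that last point, a @usecase comment on the sort method might look like this (the doc wording is my own):

import scala.collection.SeqLike
import scala.collection.generic.CanBuildFrom
import scala.math.Ordering

object QuickSort {
  /** Sorts the given collection with quicksort, preserving its concrete type.
   *
   *  The usecase below is the signature Scaladoc presents to readers,
   *  hiding the implicit evidence parameters.
   *
   *  @usecase def sort[T](a: Seq[T]): Seq[T]
   */
  def sort[T, Coll](a: Coll)(implicit ev0: Coll <:< SeqLike[T, Coll],
                             cbf: CanBuildFrom[Coll, T, Coll],
                             n: Ordering[T]): Coll =
    sys.error("body as shown above")
}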


At least, IMHO, this is the current way of creating generic collection code.


June 12, 2011

Scalatypes Podcast

Well, Daniel, Yuvi and I started podcasting interviews and discussions on Scala. Here's the first interview with Paul Phillips. We have a lot more content to publish, but mastering audio still takes a backseat to day jobs and writing Scala in Depth. Let us know what you think!


June 3, 2011

Parallel Distributed Collections API

Here's my ScalaDays 2011 talk that I gave with Daniel Mahler.

