Saturday, October 3, 2015

Minimal Scala pom.xml for Maven

You would think it would be easy to find an example pom.xml for Scala somewhere on the web. It's not. And the example one that does turn up doesn't work, because its <sourcedir>src/main/java</sourcedir> excludes all your Scala files from src/main/scala!

Without further ado, below is a minimal pom.xml for Scala.
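The sketch below uses the scala-maven-plugin; the groupId, artifactId, and the Scala and plugin version numbers are placeholders to swap for your own project.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder coordinates; substitute your own -->
  <groupId>com.example</groupId>
  <artifactId>scala-minimal</artifactId>
  <version>0.0.1-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>2.10.4</version>
    </dependency>
  </dependencies>

  <build>
    <!-- The line that matters: point Maven at src/main/scala, not src/main/java -->
    <sourceDirectory>src/main/scala</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.2.2</version>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

With that in place, mvn package compiles everything under src/main/scala.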

Tuesday, September 29, 2015

39 Machine Learning Libraries for Spark, Categorized

Apache Spark itself

1. MLlib


Spark originally came out of Berkeley's AMPLab, and even today AMPLab projects, even though they are not part of the Apache Software Foundation, enjoy a status a notch above your everyday GitHub project.

ML Base

Spark's own MLLib forms the bottom layer of the three-layer ML Base, with MLI being the middle layer and ML Optimizer being the most abstract layer.

2. MLI

3. ML Optimizer (aka Ghostface)
Ghostface was described in 2014 but never released. Of the 39 machine learning libraries here, this is the only one that is vaporware, and it is included only due to its AMPLab and ML Base status.

Other than ML Base

4. Splash
A recent project from June, 2015, this set of stochastic learning algorithms claims 25x - 75x faster performance than Spark MLlib on Stochastic Gradient Descent (SGD). Plus it's an AMPLab project that begins with the letters "sp", so it's worth watching.

5. Keystone ML
Brought machine learning pipelines to Spark, but pipelines have matured in recent versions of Spark. Also promises some computer vision capability, but there are limitations I previously blogged about.

6. Velox
A server to manage a large collection of machine learning models.

7. CoCoA
Faster machine learning on Spark by optimizing communication patterns and shuffles, as described in the paper Communication-Efficient Distributed Dual Coordinate Ascent.



8. DeepLearning4j
I previously blogged about this in DeepLearning4j Adds Spark GPU Support.

9. Elephas
Brand new, and frankly the reason I started compiling this list for this blog post. Provides an interface to Keras.


10. DistML
Parameter server for model-parallel training, rather than data-parallel training (which is what Spark's MLlib does).

11. Aerosolve
From Airbnb, used in their automated pricing.

12. Zen
Logistic regression, LDA, Factorization machines, Neural Network, Restricted Boltzmann Machines

13. Distributed Data Frame
Similar to Spark DataFrames, but agnostic to engine (i.e. will run on engines other than Spark in the future). Includes cross-validation and interfaces to external machine learning libraries.

Interfaces to other Machine Learning systems

14. spark-corenlp
Wraps Stanford CoreNLP.

15. Sparkit-learn
Interface to Python's Scikit-learn

16. Sparkling Water
Interface to H2O

17. hivemall-spark
Wraps Hivemall, machine learning in Hive

18. spark-pmml-exporter-validator
Export PMML, an industry standard XML format for transporting machine learning models.

Add-ons that enhance MLlib's existing algorithms

19. MLlib-dropout
Adds dropout capability to Spark MLLib, based on the paper Dropout: A simple way to prevent neural networks from overfitting.

20. generalized-kmeans-clustering
Adds arbitrary distance functions to K-Means

21. spark-ml-streaming
Visualize the Streaming Machine Learning algorithms built into Spark MLlib


Supervised learning

22. spark-libFM
Factorization Machines

23. ScalaNetwork
Recursive Neural Networks (RNNs)

24. dissolve-struct
SVM based on the performant Spark communication framework CoCoA listed above.

25. Sparkling Ferns
Based on Image Classification using Random Forests and Ferns

26. streaming-matrix-factorization
Matrix Factorization Recommendation System

Unsupervised learning

27. PatchWork
40x faster clustering than Spark MLlib K-Means

28. Bisecting K-Means Clustering
K-Means that produces more uniformly-sized clusters, based on A Comparison of Document Clustering Techniques

29. spark-knn-graphs
Build graphs using k-nearest-neighbors and locality sensitive hashing (LSH)

30. TopicModeling
Online Latent Dirichlet Allocation (LDA), Gibbs Sampling LDA, Online Hierarchical Dirichlet Process (HDP)

Algorithm building blocks

31. sparkboost
Adaboost and MP-Boost

32. spark-tfocs
Port to Spark of TFOCS: Templates for First-Order Conic Solvers. If your machine learning cost function happens to be convex, then TFOCS can solve it.

33. lazy-linalg
Linear algebra operators to work with Spark MLlib's linalg package

Feature extractors

34. spark-infotheoretic-feature-selection
Information-theoretic basis for feature selection, based on Conditional likelihood maximisation: a unifying framework for information theoretic feature selection

35. spark-MDLP-discretization
Given labeled data, "discretize" one of the continuous numeric dimensions such that each bin is relatively homogeneous in terms of data classes. This is a foundational idea behind the CART and ID3 algorithms for generating decision trees. Based on Multi-interval discretization of continuous-valued attributes for classification learning.

36. spark-tsne
Distributed t-Distributed Stochastic Neighbor Embedding (t-SNE) for dimensionality reduction.

37. modelmatrix
Sparse feature vectors


38. Spatial and time-series data
K-Means, Regression, and Statistics

39. Twitter data

UPDATE 2015-09-30: Although it was a post regarding the Spark deep learning framework Elephas that kicked off my compiling this list, most of the rest comes from AMPLab and elsewhere, plus a couple came from memory. Check AMPLab for future updates (since this blog post is a static list). And for tips on how to keep up in general with the fast-moving Spark ecosystem, see my 10-minute presentation from February, 2015 (scroll down to the second presentation of that mini-conference).

Tuesday, September 22, 2015

Four ways to retrieve Scala Option value

Suppose you have the Scala value assignment below, and wish to append " World", but only if the value is not None (from Option[]).

val s = Some("Hello")

There are four different Option[] idioms to accomplish this:
  1. The Java-like way

    if (s.isDefined) s.get + " World" else "No hello"
  2. The Pattern Matching way

    s match { case Some(s) => s + " World"; case _ => "No hello" }
  3. The Classic Scala way

    s.map(_ + " World").getOrElse("No hello")
  4. The Scala 2.10 way

    s.fold("No hello")(_ + " World")
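To see them side by side, here is a small self-contained sketch (the object name is arbitrary) that runs all four idioms against both a Some and a None:

object OptionIdioms extends App {
  def greet(s: Option[String]): Seq[String] = Seq(
    if (s.isDefined) s.get + " World" else "No hello",                // 1. Java-like
    s match { case Some(x) => x + " World"; case _ => "No hello" },   // 2. Pattern matching
    s.map(_ + " World").getOrElse("No hello"),                        // 3. Classic Scala
    s.fold("No hello")(_ + " World")                                  // 4. Scala 2.10 fold
  )

  println(greet(Some("Hello")))  // List(Hello World, Hello World, Hello World, Hello World)
  println(greet(None))           // List(No hello, No hello, No hello, No hello)
}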

Wednesday, September 9, 2015

Breakdown of Spark 1.5 Improvements

Spark 1.5 was released today. Of the 1,516 Jira tickets that comprise the 1.5 release, I have highlighted a few important ones below, broken down by major Spark component.

Spark Core

  • Project Tungsten
    The first major phase of Project Tungsten (aside from a small portion that went into 1.4)
  • Data locality for reduce
    Prior to the Spark 1.2 release, I jumped the gun and announced this made it into Spark 1.2. In reality, it didn't make it in until Spark 1.5.


MLlib

The developers behind Spark are focusing their efforts on the spark.ml package, which supports pipelines, and are in the long process of transferring all the spark.mllib functionality over to spark.ml. For the Spark 1.5 release, the big exception to this rule is the large set of improvements to LDA in spark.mllib.
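As a rough sketch of what the spark.ml pipeline style looks like (the training and test DataFrames, with their "text" and "label" columns, are placeholders here):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

// Each stage reads one column and writes another; the Pipeline chains them together.
val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression().setMaxIter(10)

val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
val model = pipeline.fit(training)        // training: DataFrame with "text" and "label" columns
val predictions = model.transform(test)   // test: DataFrame with a "text" column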


LDA received a ton of upgrades.


GraphX

Other than bug fixes and minor improvements, GraphX did not get upgraded in Spark 1.5. However, one of the spark.mllib functions can now accept a Graph as input.

Spark Streaming

  • New scheduling mechanism
    E.g. a job no longer fails if a receiver fails 4 times, and it is now possible to schedule all receivers to run on a particular node (e.g. for rack locality to a Kafka cluster)
    SPARK-8882 and SPARK-7988
  • Dynamic Rate Controller
    While it was previously possible to set a rate limit in Spark Streaming, Spark 1.5 introduces a dynamic and automatic rate limiter. There is no API; it's just automatic. An API to provide configuration and flexibility did not make it into 1.5.

Spark SQL

Half the Spark 1.5 Jira tickets concerned Spark SQL, but almost all of them were miscellaneous bug fixes and performance improvements. Two notable exceptions are:
  • Project Tungsten
    Already described above for Spark Core, Project Tungsten is aimed primarily at Spark SQL. The developers behind Spark are aiming their performance improvements primarily at DataFrames from Spark SQL and only secondarily at plain old RDDs. Ever since the Spark 1.3 release, they have been positioning DataFrame as an eventual kind of replacement for RDD, for several reasons (illustrated in the sketch after this list):
    1. Because a DataFrame can be populated by a query, Spark can create a database-like query plan which it can optimize.
    2. Spark SQL allows queries to be written in SQL, which may be easier for many (especially from the REPL Spark Shell) than Scala.
    3. SQL is more compact than Scala.

  • Data Source API improvement
    The Data Source API allows Spark SQL to connect to external data sources. 19 Jira tickets constitute the Spark 1.5 improvements tracked under SPARK-5180, with 5 more slated for Spark 1.6 under SPARK-9932.
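Here is a hedged sketch of the DataFrame-vs-RDD point above, run from the Spark 1.5 spark-shell (where sc and sqlContext are predefined); the people.json file and its columns are placeholders. The same aggregation is expressed first as SQL over a DataFrame, where the Catalyst optimizer can plan the query, and then as hand-written RDD transformations, which the optimizer cannot see into.

// DataFrame: schema is inferred, the query is plain SQL, and Catalyst optimizes the plan.
val people = sqlContext.read.json("people.json")
people.registerTempTable("people")
sqlContext.sql("SELECT country, AVG(age) AS avgAge FROM people GROUP BY country").show()

// The same aggregation hand-coded against the underlying RDD[Row] is more verbose and opaque.
val rows = people.rdd.map(r => (r.getAs[String]("country"), r.getAs[Long]("age")))
rows.mapValues(age => (age, 1L))
    .reduceByKey { case ((s1, n1), (s2, n2)) => (s1 + s2, n1 + n2) }
    .mapValues { case (sum, n) => sum.toDouble / n }
    .collect()
    .foreach(println)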

Wednesday, June 24, 2015

My Spark Summit Presentation On Word2Vec and Semi-Supervised Learning


MLLib Word2Vec is an unsupervised learning technique that can generate vectors of features that can then be clustered. But the weakness of unsupervised learning is that although it can say an apple is close to a banana, it can’t put the label of “fruit” on that group. We show how MLLib Word2Vec can be combined with the human-created data of YAGO2 (which is derived from the crowd-sourced Wikipedia metadata), along with the NLP metrics Levenshtein and Jaccard, to properly label categories. Even though YAGO2 is a graph, as an alternative to GraphX we make use of Ankur Dave’s powerful IndexedRDD, which is slated for inclusion in Spark 1.3 or 1.4. IndexedRDD is also used in a second way: to further parallelize MLLib Word2Vec. The use case is labeling columns of unlabeled data uploaded to the Oracle Data Enrichment Cloud Service (ODECS) cloud app, which processes big data in the cloud.
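As a minimal sketch of the unsupervised step the talk builds on (not the ODECS pipeline itself; the corpus file and parameters are placeholders, and sc is an existing SparkContext from spark-shell): train MLlib Word2Vec and ask for nearest neighbors, which is exactly the point where labels are still missing.

import org.apache.spark.mllib.feature.Word2Vec

// Tokenize a text corpus into an RDD[Seq[String]] for Word2Vec.
val corpus = sc.textFile("corpus.txt").map(_.toLowerCase.split("\\s+").toSeq)
val model = new Word2Vec().setVectorSize(100).fit(corpus)

// Word2Vec can say "apple" is near "banana", but it cannot attach the label "fruit" --
// that is the gap the YAGO2 + Levenshtein/Jaccard step fills.
model.findSynonyms("apple", 5).foreach { case (word, cosine) => println(s"$word  $cosine") }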



Wednesday, June 17, 2015

Spark 1.4 for Data Scientists; Spark 1.5 & 1.6 for core improvements

The theme at Spark Summit 2015 this week can be boiled down to "Spark 1.4 is for data scientists". The "first new supported language in over a year" is the highlight of Spark 1.4: SparkR, originally an AMPLab project, is now part of the Apache Spark distribution. Another data science improvement is that Spark ML (which has the ML pipelines, and which may eventually replace Spark MLlib) is now out of alpha. On the commercial side, the Databricks Cloud offering is now GA, with its ability to spin up an arbitrarily large Spark cluster at the touch of a button -- and give you a notebook-style interface to Spark. Amazon also announced turn-key Spark spin-up on AWS (but no notebook).
On the one hand, Spark is moving really fast. Half of the 8000+ Jira tickets have been entered in 2015 alone. On the other hand, there is so much that people want from it that in some respects it seems it's not moving fast enough. With all that went into Spark 1.4 for data science, improvements to Spark Core are to come in Spark 1.5 and 1.6. Although some of Project Tungsten made it into Spark 1.4, most of it is targeted for Spark 1.5, and the most interesting part -- on-the-fly compilation to Intel SIMD -- is slated for Spark 1.6. That will lay the groundwork for on-the-fly compilation to GPU, which will presumably come even later.
Another contentious issue for Spark developers (as opposed to data scientists) is the inability to launch, track, and control Spark YARN jobs via a Java API. The Bash shell is the only official way to submit Spark YARN jobs, making it impossible for Spark to, for example, serve as the back-end to a web app that expects to tightly control, monitor, and launch Spark jobs. This issue was raised again at the Bay Area Spark Meetup that was held on-site during Spark Summit, and the Databricks panel members reluctantly predicted that the upcoming "Launcher" mechanism would be in Spark 1.5 or Spark 1.6.

Monday, May 11, 2015

Yes, Neural Networks Have Grandmother Cells

(The neural network portions of the image accompanying this post come from Wikipedia.)

The age-old debate about neural networks (both artificial and biological) is whether they have a grandmother cell, a neuron cell/node somewhere in the net that is activated when one's grandmother is viewed (assuming a biological vision scenario or computer vision application).

For biological neural networks, the jury is out, and the answer is leaning toward the "no" side. For artificial neural networks, if you Google for the answer, you'll almost always come across the admonishment to avoid grandmother cells in your neural networks. But for beginners to neural networks, this advice can easily be misunderstood.

More precisely, grandmother cells are to be avoided in the internal nodes. The reason is that internal nodes are supposed to be for latent variables, i.e., intermediate properties like "big eyes" or "big teeth". If an internal node is already recognizing grandma, then that is an indication of overfitting and that perhaps the neural network was created too large for the amount of training data or search space.

But all artificial neural networks whose job it is to classify images where one of the classes is grandma will have a grandmother cell, namely in the output layer. That's simply how you get your output from a classifier artificial neural network.
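As a tiny illustration (the class list and activations below are made up): the output layer of a classifier has one unit per class, so the unit at the index of "grandma" is, by construction, a grandmother cell.

// Hypothetical label set; one output-layer unit per class.
val classes = Vector("cat", "dog", "grandma", "car")

// Hypothetical final-layer activations (logits) produced by the hidden layers for one image.
val logits = Vector(0.3, 0.1, 2.7, -0.5)

// Softmax over the output layer turns logits into class probabilities.
val exps = logits.map(math.exp)
val probs = exps.map(_ / exps.sum)

// This output unit *is* the grandmother cell; only internal (hidden) grandmother cells signal overfitting.
val grandmaCell = classes.indexOf("grandma")
println(f"P(grandma) = ${probs(grandmaCell)}%.3f")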