Thursday, December 17, 2015

GPU off Apache Spark roadmap: Deeplearning4j best bet for Spark GPU


Last night, Reynold Xin took SPARK-3785, "Support off-loading computations to a GPU", off the Apache Spark roadmap, marking it "Closed" with a resolution of "Later". This is a departure from Spark Summit in June 2015, where GPU support was mentioned as a possibility for Project Tungsten in 1.6 and beyond.
So for now, the best bet for using GPUs on Spark is Deeplearning4j, the source of the architecture diagram above. As I've blogged previously, the DL4J folks are waiting until they have solid benchmarks before advertising them. Nevertheless, you can do deep learning on GPU-powered Spark today.

Tuesday, December 1, 2015

Free book excerpt: Semi-Supervised Learning With GraphX

Manning Publications has made available for free an excerpt from my book Spark GraphX In Action. The excerpt is entitled Poor Man’s Training Data: Graph-Based Semi-Supervised Learning and shows how to:
  • Construct a graph from a collection of points using a K-Nearest Neighbors graph construction algorithm (not to be confused with KNN prediction, which is used in the last step below).
  • Do the above in a way optimized for distributed computing.
  • Propagate labels to unlabeled nodes to achieve semi-supervised learning (a minimal sketch of this step appears after this list).
  • Make predictions from the trained model (using conventional KNN prediction).
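To give a flavor of the label-propagation step, here is a minimal sketch using GraphX's Pregel API; it is an illustration, not the book's implementation. It assumes a graph (with Double edge weights) has already been built by the k-nearest-neighbors construction step, with vertex attributes of type Option[Int], where None means unlabeled.

import org.apache.spark.graphx._

// Vertices carry Option[Int]: Some(label) for seed points, None for unlabeled ones.
def propagateLabels(graph: Graph[Option[Int], Double],
                    maxIters: Int = 10): Graph[Option[Int], Double] =
  graph.pregel(Option.empty[Int], maxIters)(
    // Vertex program: keep an existing label, otherwise adopt the incoming one
    (id, label, msg) => label.orElse(msg),
    // Send messages only from labeled vertices to unlabeled neighbors
    triplet =>
      if (triplet.srcAttr.isDefined && triplet.dstAttr.isEmpty)
        Iterator((triplet.dstId, triplet.srcAttr))
      else
        Iterator.empty,
    // Merge competing labels arbitrarily (a real implementation would weight by edge distance)
    (a, b) => a.orElse(b))
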
And as part of Manning's site-wide MEAP sale for Cyber Monday week, the MEAP is 50% off today using the code dotd120115.
My co-author, Robin East, and I just finished the second draft this past weekend, so the print version should be available in 2016Q1.

Wednesday, November 11, 2015

Spark Streaming 1.6: Stop Using updateStateByKey()


Last night, Tathagata Das resolved SPARK-11290, "Implement trackStateByKey for improved state management", which will bring a 7x performance improvement to Spark Streaming when Spark 1.6 is released in December, 2015.
trackStateByKey() offers three benefits over updateStateByKey(), which has served as the workhorse of Spark Streaming since its inception in 2012:
  1. Internally, the performance improvement is achieved by looking only at the key/state pairs for which there is new data. The chart above, which comes from Tathagata's design document, illustrates a typical use case where 4 million keys are being tracked (for example, 4 million concurrent users of a website, app, or audio/video streaming service) but only 10,000 had some activity during the past micro-batch (of, say, two seconds' duration). With updateStateByKey(), all 4 million key/state pairs would have to be touched, due to updateStateByKey()'s internal use of cogroup.
  2. The capability to time out states is built in as a first-class option. You no longer have to cobble together your own timeout mechanism. The downside is that if you use this option, you lose the performance improvement mentioned above, because the timeout mechanism requires examining every key/state pair.
  3. Ability to return/emit values other than just the state as a result of having examined the state. Returning to the example of tracking a web or app user, trackStateByKey() could maintain a running logged-in time per key and emit that, together with some metadata, to populate a dashboard. Not only does this avoid overloading the per-key state for two different purposes, but the performance benefit of touching only the modified keys is also retained (a minimal sketch follows this list).
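As a rough illustration of point 3 (and of the first-class timeout from point 2), here is a minimal sketch of what a trackStateByKey() call might look like, based on the StateSpec API proposed in the design document. The API is still subject to change before 1.6 ships, and loginSeconds is an assumed DStream[(String, Int)] of per-batch logged-in seconds per user.

import org.apache.spark.streaming._

// State: running total of logged-in seconds per user.
// Emitted value: (userId, runningTotal), produced only for keys with new data.
val trackingFunc = (userId: String, newSeconds: Option[Int], state: State[Int]) => {
  val total = state.getOption.getOrElse(0) + newSeconds.getOrElse(0)
  state.update(total)        // the stored state
  (userId, total)            // the emitted value, separate from the state
}

val dashboardStream = loginSeconds.trackStateByKey(
  StateSpec.function(trackingFunc).timeout(Minutes(30)))   // built-in timeout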

Wednesday, October 28, 2015

Spark 1.6: "Datasets" best of RDDs and Dataframes


If RDDs weren't killed by Dataframes in Spark 1.3 (as covered by my January, 2015 blog post Spark 1.3: Stop Using RDDs), surely they will be by Spark 1.6, which introduces Datasets.
As covered in Michael Armbrust's presentation today at Spark Summit Europe 2015, "Spark Dataframes: Simple and Fast Analysis of Structured Data", specifically in his last slide, Datasets (as originally proposed in the umbrella Jira ticket SPARK-9999) combine the best of both RDDs and Dataframes.
Datasets provide an RDD-like API, but with all the performance advantages of Catalyst and Tungsten. groupBy(), for example, which was always a performance no-no on RDDs, can be done efficiently with Datasets, without worrying about some groups being too large for a single node, because Catalyst can spill large groups. And it's all strongly typed and type-safe.
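Here is a rough sketch of what the proposed Dataset API looks like, based on SPARK-9999 and the Spark Summit slides; names and signatures may still change before the 1.6 preview. It assumes the sqlContext provided by the Spark shell, and the people.json path and Person case class are made up for illustration.

import sqlContext.implicits._            // brings in the encoders for case classes

case class Person(name: String, age: Int)

// Typed: each element is a Person, not an untyped Row
val ds = sqlContext.read.json("people.json").as[Person]

// RDD-like, compile-time-checked operations that still run through Catalyst/Tungsten
val adults = ds.filter(_.age >= 18)
val countsByName = ds.groupBy(_.name).count()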
Datasets will be previewed in Spark 1.6, due out in December.

Saturday, October 3, 2015

Minimal Scala pom.xml for Maven

You would think it would be easy to find an example pom.xml for Scala somewhere on the web. It's not. And the example one at http://www.scala-lang.org/old/node/345 doesn't work because its <sourcedir>src/main/java</sourcedir> excludes all your Scala files from src/main/scala!

Without further ado, below is a minimal pom.xml for Scala.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.technicaltidbit</groupId>
    <artifactId>scala-hello-world</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.10.5</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.scala-tools</groupId>
                <artifactId>maven-scala-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
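To try it out, drop a file like the following (a made-up example) into src/main/scala and run mvn package; the build should produce a jar under target/.

// src/main/scala/HelloWorld.scala
object HelloWorld extends App {
  println("Hello, Maven-built Scala world")
}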

Tuesday, September 29, 2015

39 Machine Learning Libraries for Spark, Categorized

Apache Spark itself


1. MLlib


AMPLab


Spark originally came out of Berkeley's AMPLab, and even today AMPLab projects, even though they are not part of Apache Spark itself, enjoy a status a bit above your everyday GitHub project.

ML Base


Spark's own MLlib forms the bottom layer of the three-layer ML Base, with MLI being the middle layer and ML Optimizer being the most abstract layer.

2. MLI

3. ML Optimizer (aka Ghostface)
Ghostface was described in 2014 but never released. Of the 39 machine learning libraries, this is the only one that is vaporware, and it is included only due to its AMPLab and ML Base status.


Other than ML Base


4. Splash
A recent project from June, 2015, this set of stochastic learning algorithms claims 25x - 75x faster performance than Spark MLlib on Stochastic Gradient Descent (SGD). Plus it's an AMPLab project that begins with the letters "sp", so it's worth watching.

5. Keystone ML
Brought machine learning pipelines to Spark, but pipelines have matured in recent versions of Spark. Also promises some computer vision capability, but there are limitations I previously blogged about.

6. Velox
A server to manage a large collection of machine learning models.

7. CoCoA
Faster machine learning on Spark by optimizing communication patterns and shuffles, as described in the paper Communication-Efficient Distributed Dual Coordinate Ascent


Frameworks


GPU-based


8. DeepLearning4j
I previously blogged DeepLearning4j Adds Spark GPU Support

9. Elephas
Brand new, and frankly the reason I started compiling this list for this blog post. Provides an interface to Keras.


Non-GPU-based


10. DistML
Parameter server for model-parallel training, rather than the data-parallel approach taken by Spark's MLlib.

11. Aerosolve
From Airbnb, used in their automated pricing

12. Zen
Logistic regression, LDA, Factorization machines, Neural Network, Restricted Boltzmann Machines

13. Distributed Data Frame
Similar to Spark DataFrames, but agnostic to engine (i.e. will run on engines other than Spark in the future). Includes cross-validation and interfaces to external machine learning libraries.


Interfaces to other Machine Learning systems


14. spark-corenlp
Wraps Stanford CoreNLP.

15. Sparkit-learn
Interface to Python's Scikit-learn

16. Sparkling Water
Interface to H2O

17. hivemall-spark
Wraps Hivemall, machine learning in Hive

18. spark-pmml-exporter-validator
Export PMML, an industry standard XML format for transporting machine learning models.


Add-ons that enhance MLlib's existing algorithms


19. MLlib-dropout
Adds dropout capability to Spark MLLib, based on the paper Dropout: A simple way to prevent neural networks from overfitting.

20. generalized-kmeans-clustering
Adds arbitrary distance functions to K-Means

21. spark-ml-streaming
Visualize the Streaming Machine Learning algorithms built into Spark MLlib


Algorithms


Supervised learning


22. spark-libFM
Factorization Machines

23. ScalaNetwork
Recursive Neural Networks (RNNs)

24. dissolve-struct
SVM based on the performant Spark communication framework CoCoA listed above.

25. Sparkling Ferns
Based on Image Classification using Random Forests and Ferns

26. streaming-matrix-factorization
Matrix Factorization Recommendation System


Unsupervised learning


27. PatchWork
40x faster clustering than Spark MLlib K-Means

28. Bisecting K-Means Clustering
K-Means that produces more uniformly-sized clusters, based on A Comparison of Document Clustering Techniques

29. spark-knn-graphs
Build graphs using k-nearest-neighbors and locality sensitive hashing (LSH)

30. TopicModeling
Online Latent Dirichlet Allocation (LDA), Gibbs Sampling LDA, Online Hierarchical Dirichlet Process (HDP)


Algorithm building blocks


31. sparkboost
Adaboost and MP-Boost

32. spark-tfocs
Port to Spark of TFOCS: Templates for First-Order Conic Solvers. If your machine learning cost function happens to be convex, then TFOCS can solve it.

33. lazy-linalg
Linear algebra operators to work with Spark MLlib's linalg package


Feature extractors


34. spark-infotheoretic-feature-selection
Information-theoretic basis for feature selection, based on Conditional likelihood maximisation: a unifying framework for information theoretic feature selection

35. spark-MDLP-discretization
Given labeled data, "discretize" one of the continuous numeric dimensions such that each bin is relatively homogeneous in terms of data classes. This is a foundational idea in the CART and ID3 algorithms for generating decision trees. Based on Multi-interval discretization of continuous-valued attributes for classification learning.

36. spark-tsne
Distributed t-Distributed Stochastic Neighbor Embedding (t-SNE) for dimensionality reduction.

37. modelmatrix
Sparse feature vectors


Domain-specific


38. Spatial and time-series data
K-Means, Regression, and Statistics

39. Twitter data

UPDATE 2015-09-30: Although it was a reddit.com post regarding the Spark deep learning framework Elephas that got me started compiling this list, most of the rest comes from AMPLab and spark-packages.org, plus a couple came from memory. Check AMPLab and spark-packages.org for future updates (since this blog post is a static list). And for tips on how to keep up in general on the fast-moving Spark ecosystem, see my 10-minute presentation from February, 2015 (scroll down to the second presentation of that mini-conference).

Tuesday, September 22, 2015

Four ways to retrieve Scala Option value

Suppose you have the Scala value assignment below and wish to append " World", but only if the Option value is not None.

val s: Option[String] = Some("Hello")

There are four different Option[] idioms to accomplish this:
  1. The Java-like way

    if (s.isDefined) s.get + " World" else "No hello"
     
  2. The Pattern Matching way

    s match { case Some(str) => str + " World"; case _ => "No hello" }
     
  3. The Classic Scala way

    s.map(_ + " World").getOrElse("No hello")
     
  4. The Scala 2.10 way

    s.fold("No hello")(_ + " World")
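And to confirm the behavior when the value really is None, each idiom falls back to the default; for example, with idioms 3 and 4:

    val n: Option[String] = None
    n.map(_ + " World").getOrElse("No hello")   // "No hello"
    n.fold("No hello")(_ + " World")            // "No hello"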