Getting Started

Quick Start

RasterFrames is available in a Jupyter Notebook Docker container for quick experimentation.

  1. Install Docker for your OS flavor.
  2. Run
    docker run -it --rm -p 8888:8888 -p 4040-4044:4040-4044 s22s/rasterframes-notebooks

Additional instructions can be found in the rasterframes-notebooks project.

General Setup

RasterFrames is published via Maven Central; check there for the latest versions.

To use RasterFrames, add the following library dependencies for the core and datasource modules:

sbt
libraryDependencies += "org.locationtech.rasterframes" %% "rasterframes" % "0.7.0"
Maven
<dependency>
  <groupId>org.locationtech.rasterframes</groupId>
  <artifactId>rasterframes_2.11</artifactId>
  <version>0.7.0</version>
</dependency>
Gradle
dependencies {
  compile group: 'org.locationtech.rasterframes', name: 'rasterframes_2.11', version: '0.7.0'
}
sbt
libraryDependencies += "org.locationtech.rasterframes" %% "rasterframes-datasource" % "0.7.0"
Maven
<dependency>
  <groupId>org.locationtech.rasterframes</groupId>
  <artifactId>rasterframes-datasource_2.11</artifactId>
  <version>0.7.0</version>
</dependency>
Gradle
dependencies {
  compile group: 'org.locationtech.rasterframes', name: 'rasterframes-datasource_2.11', version: '0.7.0'
}
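
With these artifacts on the classpath, RasterFrames is enabled on a SparkSession via an import and an extension method. The sketch below assumes the package namespace matches the dependency coordinates above; check the import path against the version in use.

```scala
import org.apache.spark.sql.SparkSession
import org.locationtech.rasterframes._  // package path assumed from the groupId above

val spark = SparkSession.builder()
  .master("local[*]")           // placeholder: use your cluster's master URL
  .appName("RasterFrames")
  .getOrCreate()
  .withRasterFrames             // registers RasterFrames types and functions on this session
```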

Optionally, add the experimental features module:

sbt
libraryDependencies += "org.locationtech.rasterframes" %% "rasterframes-experimental" % "0.7.0"
Maven
<dependency>
  <groupId>org.locationtech.rasterframes</groupId>
  <artifactId>rasterframes-experimental_2.11</artifactId>
  <version>0.7.0</version>
</dependency>
Gradle
dependencies {
  compile group: 'org.locationtech.rasterframes', name: 'rasterframes-experimental_2.11', version: '0.7.0'
}

RasterFrames assumes that Spark SQL 2.2.x is available on the runtime classpath. Here’s how to add it explicitly:

sbt
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.1"
Maven
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql</artifactId>
  <version>2.2.1</version>
</dependency>
Gradle
dependencies {
  compile group: 'org.apache.spark', name: 'spark-sql', version: '2.2.1'
}
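
With Spark SQL on the classpath, a local session for experimentation can be created as follows (a sketch; the master URL and application name are placeholders):

```scala
import org.apache.spark.sql.SparkSession

// A local session suitable for trying out the examples below
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("RasterFrames")
  .getOrCreate()
```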
Note

Most of the following examples are shown using the Spark DataFrames API. However, many could also be rewritten to use the Spark SQL API instead. We hope to add more examples in that form in the future.
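
For instance, a DataFrame query can usually be restated in SQL by first registering the frame as a temporary view. This is a sketch: `rf` stands for any previously created RasterFrame, and `spatial_key` is assumed to be one of its columns.

```scala
// Expose the RasterFrame `rf` (hypothetical) to the SQL engine
rf.createOrReplaceTempView("scene")

// The same query, via the Spark SQL API instead of the DataFrame API
spark.sql("SELECT spatial_key FROM scene").show()
```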