Adding service to HDP using REST API

One way of adding a new service to HDP is by using the Ambari graphical interface. In this post, it is explained how the same is done using Ambari’s REST API. The service added in this post is Pig. Pig is not a “classical” service, rather a package, but from the REST API’s point of view, it is a service.

The documentation on how to do this is dated April 21 2014. I have followed it and eventually made it work. The documentation can be found here.

The cluster

I am using AWS EC2 instances; the operating system is CentOS 7.

I have an Ambari server, version 2.6.2 and an HDP cluster version 2.6.5. This should work on other versions as well.

My cluster has one NameNode, on which Ambari is installed as well, and one DataNode. The services installed are the bare minimum – HDFS, YARN, MapReduce2, Zookeeper and Hive.

The goal

Install Pig client on the NameNode and on the DataNode.

Adding Pig to the cluster

Variables

export AMBARI_SERVER=PUBLIC_IP
export MASTER_DNS=NAMENODE_PRIVATE_DNS
export SLAVE_DNS=DATANODE_PRIVATE_DNS
export CLUSTER_NAME=mincluster2
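
Before going further, a quick sanity check (my own addition, not part of the documented recipe) confirms that the variables resolve and that the Ambari REST API answers:

# Should return the cluster resource as JSON (admin/admin credentials, as used throughout this post)
curl -u admin:admin -H "X-Requested-By:ambari" 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME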

Create service on the Cluster

curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"ServiceInfo":{"service_name":"PIG"}}' 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/services'

Pig service is added to the list of services in Ambari.

Pig added - Configs
Pig added - Summary

Check for service on the cluster

curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X GET 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/services/PIG'

curl - service info
The service is registered on the cluster.

Add components to the service

curl -k -u admin:admin -H "X-Requested-By:ambari" -i -X POST -d '{"RequestInfo":{"context":"Install PIG"}, "Body":{"HostRoles":{"state":"INSTALLED"}}}' 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/services/PIG/components/PIG'

Running the curl command from the previous step to check whether the component was added returns the following:

curl - service info after component.png
The component has been added according to the “components” element in the JSON output. The state of the service is still “UNKNOWN”.

Creating configuration is on the next page.
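
For completeness, here is a sketch of what follows once the configuration exists – the PIG component is assigned to each host, and the service is then installed. These calls follow the standard Ambari host_components/services API pattern; treat them as an outline rather than the exact commands from the documentation:

# Assign the PIG component to both hosts
curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/hosts/'$MASTER_DNS'/host_components/PIG'
curl -u admin:admin -H "X-Requested-By:ambari" -i -X POST 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/hosts/'$SLAVE_DNS'/host_components/PIG'

# Trigger the installation of the whole service
curl -u admin:admin -H "X-Requested-By:ambari" -i -X PUT -d '{"RequestInfo":{"context":"Install Pig"}, "Body":{"ServiceInfo":{"state":"INSTALLED"}}}' 'http://'$AMBARI_SERVER':8080/api/v1/clusters/'$CLUSTER_NAME'/services/PIG'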

Passwordless ssh between two AWS instances

Hadoop clusters require passwordless ssh between nodes for proper communication.

This is all done on the instance you wish to connect FROM!

The recipe I followed to make passwordless ssh work between two instances is the following:

  • Create EC2 instances – they should be in the same subnet and have the same security group
  • Open ports between them – make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
    • Type: All Traffic
    • Source: Custom – id of the security group
  • Log in to the instance you want to connect from to the other instance
  • Run:
    ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
    

    to generate a new rsa key.

  • Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
  • Make sure you change the permission to 600
chmod 600 .ssh/my.key
  • Copy the public key to the instance you wish to connect to without a password
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"

If you test the passwordless ssh to the other machine, it should work.

ssh 10.0.0.X
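
Optionally, an entry in ~/.ssh/config on the source instance makes the connection even simpler. A minimal sketch – the alias name (datanode) is mine, and 10.0.0.X stays a placeholder:

cat >> ~/.ssh/config << 'EOF'
Host datanode
    HostName 10.0.0.X
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
EOF
chmod 600 ~/.ssh/config

# Afterwards a plain "ssh datanode" is enough
ssh datanode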

Bash script for creating new user in Hadoop and Ambari Views

Here is a bash script I used a couple of years ago for creating Hadoop users from CLI (or batch). It might be useful for someone.

The script does the following:

  • creates a Linux user
  • generates keys
  • creates home directory in HDFS
  • adds user to a group
  • allocates HDFS space quota
  • gives access in Ambari Views
#!/bin/bash

NEW_USER="$1"
DEPT_NAME="$2"
NAMENODE="t-namenode1"
AMBARI="t-ambari"

#
echo "Creating user "$NEW_USER

#Creating user with no password with user's folder
sudo adduser --disabled-password --gecos "" $NEW_USER

#Create Linux user on the namenode
ssh -i /home/ubuntu/.ssh/key $NAMENODE "sudo adduser --disabled-password --gecos '' $NEW_USER && sudo chown $NEW_USER:$NEW_USER /home/$NEW_USER"

#Prepare .ssh folder
cd /user/$NEW_USER
sudo mkdir .ssh
sudo chown $NEW_USER:$NEW_USER .ssh/
sudo chmod 700 .ssh

#Create private and public key
sudo -u $NEW_USER  ssh-keygen -t rsa -f $NEW_USER-key

#Copy public key to the authorized_keys
sudo -u $NEW_USER cp $NEW_USER-key.pub .ssh/authorized_keys
sudo -u $NEW_USER chmod 600 .ssh/authorized_keys

#######HDFS
echo "Create system folder for user"
sudo -u hdfs hadoop fs -mkdir /user/$NEW_USER
echo "Change owner of the system folder"
sudo -u hdfs hadoop fs -chown $NEW_USER:hdfs /user/$NEW_USER

#Defining HDFS space quota
echo "Allocate 100g of space on HDFS for the user"
sudo -u hdfs hdfs dfsadmin -setSpaceQuota 100g /department/$DEPT_NAME/users/$NEW_USER

#Access to Ambari Views
curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name": "'$NEW_USER'", "Users/password": "'$NEW_USER'", "Users/active": true, "Users/admin": false }' http://$AMBARI:8080/api/v1/users

#Add user to a group in Ambari Views
curl -iv -u admin:admin -H "X-Requested-By: ambari" -X POST -d '[{"MemberInfo/user_name":"'$NEW_USER'", "MemberInfo/group_name":"'$DEPT_NAME'"}]' http://$AMBARI:8080/api/v1/groups/$DEPT_NAME/members

echo "User's folder on the client:"
ls -l /user/$NEW_USER

echo "User's system folder on HDFS:"
sudo -u hdfs hadoop fs -ls /user/$NEW_USER
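
Assuming the script is saved as create_hadoop_user.sh (the file name and the arguments below are only examples), it is run with the new user name and the department/group name:

chmod +x create_hadoop_user.sh
./create_hadoop_user.sh jdoe analytics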

 

Adding service Druid to HDP 2.6 stack

Druid is a “fast column-oriented distributed data store”, according to the description in Ambari. It is a new service, added in HDP 2.6. The service is Technical Preview and the version offered is 0.9.2. Druid’s website is druid.io.

!!! Hortonworks Data Platform 2.6 is needed in order to install and use Druid !!!

Hortonworks has a very intriguing three-part series on ultra fast analytics with Hive and Druid. The first blog post can be found here.

This blog post describes how Druid is added to the HDP 2.6 stack with Ambari. The documentation I used is here. In my experience, it does not hold water – I had to make some adjustments in order to start all Druid services.

Requirements

  • Zookeeper: Druid requires installation of Zookeeper. This service is already installed on my cluster.
  • Deep storage: deep storage layer for Druid in HDP can either be HDFS or S3. Parameter “druid.storage.type” is used to define this. Installation default is HDFS.
  • Metadata storage: for holding information about Druid segments and tasks. MySql is my metadata storage of choice.
  • Batch execution engine: resource manager is YARN, execution engine is MapReduce2. Druid hadoop index tasks use MapReduce jobs for distributed ingestion of data.

All these requirements are taken care of in Ambari, most of them with a sufficient default value.

Services within Druid

  • Broker – the interface between users and Druid’s historical and realtime nodes.
  • Overlord – maintains a task queue that consists of user-submitted tasks.
  • Coordinator – assigns segments to historical nodes, handles data replication, and ensures that segments are distributed evenly across the historical nodes.
  • Druid Router – serves as a mechanism to route queries to multiple broker nodes.
  • Druid Superset – if you know Superset, you know Druid Superset – a data visualization tool.

Pre-work in metadata storage

As mentioned, my metadata storage is MySql. There are some objects that have to be created manually for the Druid installation to go through.

Log in to MySql as root.

Create druid database

CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
CREATE USER 'druid'@'%' IDENTIFIED BY 'druid';
GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%';
FLUSH PRIVILEGES;

Create superset database

The superset objects in the database have to be created even though the documentation does not mention this. The installation will not go through unless it can connect to the superset database to create tables in the superset schema.

CREATE DATABASE superset DEFAULT CHARACTER SET utf8;
CREATE USER 'superset'@'%' IDENTIFIED BY 'druid';
GRANT ALL PRIVILEGES ON superset.* TO 'superset'@'%';
FLUSH PRIVILEGES;
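
Before leaving MySql, it is worth confirming that both databases and both users are in place – the installation wizard will fail later if they are not. A quick check from the shell:

# Verify that the druid and superset databases exist and that the users have their grants
mysql -u root -p -e "SHOW DATABASES LIKE 'druid'; SHOW DATABASES LIKE 'superset'; SHOW GRANTS FOR 'druid'@'%'; SHOW GRANTS FOR 'superset'@'%';"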

Adding service

In Ambari, click on Add Service and check Druid service.

add service druid

In the next step, you are asked to define which Druid service is going to be installed on which node in the cluster. Remember, you can always move/add services.

assign masters to nodes

The Broker is on the Client node, since that service is the gateway to the external world.

In the next step – Assigning Slaves and Clients – you define where the following two components will be installed:

  • Druid Historical: Loads data segments.
  • Druid MiddleManager: Runs Druid indexing tasks.

Generally, you should select Druid Historical and Druid MiddleManager on multiple nodes. Both services are on the NameNode to begin with.

The next step is Settings. Some passwords and the MySql server need to be defined here, as does a secret key – a random string of characters will do the trick.

Be sure to create the objects in the MySql before you proceed with the installation.

installation settings

!!! Superset Database port should be 3306, just like the Metadata storage port !!!

The Advanced tab (picture above) is mostly for the Superset parameters – entering a name, email and password is required to proceed with the installation. These credentials are later used in the visualization tool Superset.

Once you click OK, you are asked to double-check and change some recommended values. The following ones are related to the Druid installation; check them and accept the recommended values.

dependency configuration.jpg

In the Review step, check if everything is as it should be and click Deploy.

After the installation completes all Druid services should be up and running. If there is the need to restart any services, do so.

Tweaking MapReduce2

There is one detail not mentioned in the Hortonworks documentation for installing Druid: two parameters in MapReduce2 have to be tweaked in order for Druid to successfully load data. The explanation is at the bottom.

The parameters are:

  • mapreduce.map.java.opts
  • mapreduce.reduce.java.opts

The following should be added at the end of the existing values:

-Duser.timezone=UTC -Dfile.encoding=UTF-8
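
For example, if the parameter previously held only the heap setting, the resulting value would look something like this (the -Xmx values are examples only – keep whatever your cluster already uses):

mapreduce.map.java.opts=-Xmx1228m -Duser.timezone=UTC -Dfile.encoding=UTF-8
mapreduce.reduce.java.opts=-Xmx2458m -Duser.timezone=UTC -Dfile.encoding=UTF-8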

How it looks in Ambari:

map java heap size parameter
reduce java heap size parameter

The service MapReduce2 should now be restarted.

Explanation

Various error messages appear in the Druid console log files when a Druid job starts to load data. The error messages vary depending on the data, but generally they do not provide any useful information.
In my experience, one error complained about the first line of a valid csv file, while in another example the error was that no data could be indexed (output below).

Caused by: java.lang.RuntimeException: No buckets?? seems there is no data to index.
	at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:176) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:94) ~[druid-indexing-hadoop-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:261) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_111]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_111]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_111]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.2.6.0.3-8.jar:0.9.2.2.6.0.3-8]
	... 7 more

 

Upgrading HDP 2.5 to 2.6

This blog post explains how an express upgrade from HDP 2.5 to HDP 2.6 has been done.

I have an HDP 2.5 cluster on AWS; Ubuntu 14.04 is running on all instances. My metadata database of choice is MySql 5.6.

Prior to upgrading HDP, Ambari has to be upgraded to 2.5. An upgrade from Ambari 2.4 to 2.5 is described here.

Backup databases

Do a backup of all databases that are storing metadata for services installed in HDP.

Example of backing up Hive metadata:
On the server hosting the Hive metastore database, create a backup folder

mkdir /home/ubuntu/hive-backup

Dump the database into a file (enter password when prompted):

mysqldump -u hive -p hive > /home/ubuntu/hive-backup/hive.mysql
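
The same mysqldump pattern applies to any other service that stores its metadata in MySQL – Oozie, for example. The database and user names below are the usual HDP defaults, so adjust them to your setup:

mkdir /home/ubuntu/oozie-backup
mysqldump -u oozie -p oozie > /home/ubuntu/oozie-backup/oozie.mysql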

Backup namenode files

Create backup directory

mkdir /home/ubuntu/hdp25-backup

Backup a complete block map of the file system

sudo -u hdfs hdfs fsck / -files -blocks -locations > /home/ubuntu/hdp25-backup/dfs-old-fsck-1.log

Create a list of all the DataNodes in the cluster

sudo -u hdfs hdfs dfsadmin -report > /home/ubuntu/hdp25-backup/dfs-old-report-1.log

Capture the complete namespace of the file system

sudo -u hdfs hdfs dfs -ls -R / > /home/ubuntu/hdp25-backup/dfs-old-lsr-1.log

Go into safemode

sudo -u hdfs hdfs dfsadmin -safemode enter

Output

Safe mode is ON

Save namespace

sudo -u hdfs hdfs dfsadmin -saveNamespace

Output

Save namespace successful

Copy the checkpoint files located in ${dfs.namenode.name.dir}/current into a backup directory

sudo cp /hadoop/hdfs/namenode/current/fsimage_0000000000000485884 hdp25-backup/
sudo cp /hadoop/hdfs/namenode/current/fsimage_0000000000000485884.md5 hdp25-backup/
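
The transaction ID in the fsimage file name is specific to my cluster. If you would rather not type it by hand, something along these lines copies the most recent checkpoint files (assuming the default ${dfs.namenode.name.dir} path used above):

# Copies the two newest fsimage files (the image and its .md5)
sudo sh -c 'ls -t /hadoop/hdfs/namenode/current/fsimage_* | head -2 | xargs -I{} cp {} /home/ubuntu/hdp25-backup/'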

Store the layoutVersion for the NameNode

sudo cp /hadoop/hdfs/namenode/current/VERSION hdp25-backup/

Take the NameNode out of Safe Mode

sudo -u hdfs hdfs dfsadmin -safemode leave

Finalize any prior HDFS upgrade

sudo -u hdfs hdfs dfsadmin -finalizeUpgrade

Output

Finalize upgrade successful

Upgrading Ambari 2.4 to 2.5

This post describes how an upgrade from Ambari 2.4.1.0 to 2.5 has been done. The reason for that is to be able to further upgrade HDP to 2.6. Upgrade of HDP from 2.5 to 2.6 is described here.

Ambari Server is installed on Ubuntu 14.04. The same OS is used across the whole HDP cluster.

The following services are upgraded using this blog post:

  • Ambari Server
  • Ambari Agent
  • Ambari Infra
  • Ambari Metrics
  • Ambari Collector
  • Grafana

Backup

It is important to do a backup of the Ambari database. Metadata for my Ambari is stored in a MySql database.

Create a directory for backup

mkdir /home/ubuntu/ambari24-backup

Backup the database (enter password when prompted)

mysqldump -u ambari -p ambari_db > /home/ubuntu/ambari24-backup/ambari.mysql

Make a safe copy of the Ambari Server configuration file

sudo cp /etc/ambari-server/conf/ambari.properties ambari24-backup/

Prepare for installation of Ambari Agent and Server

Stop Ambari Metrics from the Ambari Web UI

Stop Ambari Server on Ambari Server instance

sudo ambari-server stop

Stop all Ambari Agents on all instances in the cluster where they are running

sudo ambari-agent stop

On all instances running Ambari Server or Ambari Agent do the following

sudo mv /etc/apt/sources.list.d/ambari.list db-backups/
sudo wget -nv http://public-repo-1.hortonworks.com/ambari/ubuntu14/2.x/updates/2.5.0.3/ambari.list -O /etc/apt/sources.list.d/ambari.list

Upgrade Ambari Server

sudo apt-get clean all
sudo apt-get update -y
sudo apt-cache show ambari-server | grep Version

The last command should output something like this

Version: 2.5.0.3-7
Version: 2.4.1.0-22

This means version 2.5 is available and Ambari Server can be upgraded

Install Ambari Server

sudo apt-get install ambari-server

Some lines from the output

The following packages will be upgraded:
  ambari-server

Unpacking ambari-server (2.5.0.3-7) over (2.4.1.0-22) ...

Setting up ambari-server (2.5.0.3-7) ...

Confirm that there is only one ambari server jar file

ll /usr/lib/ambari-server/ambari-server*jar

Output

-rw-r--r-- 1 root root 5806966 Apr  2 23:33 /usr/lib/ambari-server/ambari-server-2.5.0.3.7.jar

Install Ambari Agent

On each host running Ambari agent

sudo apt-get update
sudo apt-get install ambari-agent

Check if the Ambari agent install was a success

dpkg -l ambari-agent

Output from one node

Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                         Version        Architecture    Description
+++-============================-==============-===============-=================
ii  ambari-agent                 2.5.0.3-7      amd64           Ambari Agent

Upgrade Ambari DB schema

On Ambari Server instance, run the following command

sudo ambari-server upgrade

The following question shows up. The backup has been done at the beginning. Type y and press Enter.

Ambari Server configured for MySQL. Confirm you have made a backup of the Ambari Server database [y/n] (y)?

Output

INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 0
INFO: Schema upgrade completed
Adjusting ambari-server permissions and ownership...
Ambari Server 'upgrade' completed successfully.

Start the services

Start Ambari Server

sudo ambari-server start

Start Ambari Agent on all instances where it is installed

sudo ambari-agent start

Post-installation tasks

Hive and Oozie (which I have installed in HDP) are using MySql, so I have to put the jar file in place

sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
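
The command above assumes the MySQL connector jar is already present on the Ambari host – on Ubuntu it usually comes from the libmysql-java package. A quick way to confirm it is in place:

# The jar must exist at the path passed to --jdbc-driver
ls -l /usr/share/java/mysql-connector-java.jar
# If it is missing:
# sudo apt-get install libmysql-java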

Installing Ambari Infra for enabling Ranger Audit Access

About the key services mentioned in this post:
Apache Solr – an open-source enterprise search platform. Ranger uses it to store audit logs.
Ambari Infra – a core shared service used by Ambari-managed components. Its database is Solr.

Using a database for Audit Access in Ranger is no longer an option as of HDP 2.5. What is offered now is Solr and HDFS, and it is recommended that Ranger audits are written to both.
Solr takes care of the search queries from the Ranger web interface, while HDFS is for more persistent storage of audits.

This was done on an HDP 2.5 cluster on AWS.

Installing Ambari Infra

Even though the HDP documentation says Solr should be installed before Ranger, I installed the Ranger service first because of my previous Ranger experience, when I used MySql for audit logs.

So installing Ambari Infra is really just a clicking job. The only thing to check is where the service is going to be installed. I installed it on the NameNode. Remember, it is easy to move services from one node to another.

Configuring Ranger with Solr

Click on Ranger, then on Configs -> Ranger Audit. From there, turn on Audit to Solr and SolrCloud.

You should now have enabled both Solr and HDFS for collecting audit logs.

If you now log in to Ranger, you should see audit logs.

If you plan to build an application on Solr, do not use the Solr instance that is intended for Ambari Infra; install a separate Solr for that.

Very useful documentation on this topic is available here.

Adding Solr via Ambari

Solr is an open-source search engine service. The service is part of the Hortonworks Data Platform, and prior to installing it via Ambari, it has to be added to the service list (Zeppelin Notebook went through the same in HDP 2.4 – it had to be added manually before installing). Here is the link to the post about how to add Solr to the list of services.

Once the Solr service is available in the Add Services list, it can be installed. It is a simple process – click Next a few times, deploy, and that is it. Well, not really.

The following error message will appear

error while installing.JPG

This documentation states the following at the bottom:

In the case of the Java preinstall check failing, the easiest remediation is to log in to each machine that Solr will be installed on, temporarily set the JAVA_HOME environment variable, then use yum/zypper/apt-get to install the package. For example on CentOS:

export JAVA_HOME=/usr/jdk64/jdk1.8.0_77
yum install lucidworks-hdpsearch

The DataNodes did not have any Java installed, so I installed it

sudo add-apt-repository ppa:openjdk-r/ppa && sudo apt-get update && sudo apt-get install openjdk-8-jdk -y

Added JAVA_HOME to /etc/environment and set it for the current session

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
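
A note in passing: /etc/environment does not understand the export keyword – it is a plain KEY=value file. One way to persist the variable there, using the OpenJDK path from the export above:

# Append a plain KEY=value line to /etc/environment
echo 'JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' | sudo tee -a /etc/environment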

And I installed the packages on every DataNode.

cd /etc/apt/sources.list.d
sudo wget http://public-repo-1.hortonworks.com/HDP-SOLR-2.5-100/repos/ubuntu14/hdp-solr.list
sudo apt-get update -y
sudo apt-get install lucidworks-hdpsearch

Do not forget to do the same on the client nodes and the NameNode(s).

When the install is through and if all went well, the following output is given

Result on all nodes:

====
Package lucidworks-hdpsearch was installed
====

The NameNode challenge

The previous steps worked on all nodes except the NameNode! I checked what JAVA_HOME was set to on each node:

echo $JAVA_HOME

The NameNode shows the following

/usr/jdk64/jdk1.8.0_77

The DataNodes and the Client give me this

/usr/lib/jvm/java-8-openjdk-amd64

So I installed Java on the NameNode as described above (I did not remove any existing versions) and ran the following commands (same as above)

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
cd /etc/apt/sources.list.d
sudo wget http://public-repo-1.hortonworks.com/HDP-SOLR-2.5-100/repos/ubuntu14/hdp-solr.list
sudo apt-get update -y
sudo apt-get install lucidworks-hdpsearch

Output was

====
Package lucidworks-hdpsearch was installed
====

I went to Ambari, deleted the Solr service with the Install Failed status, installed it again and it was a success!

I have no explanation why (or if) the issue was in the Java version.

Adding Solr service to list of services in Ambari

The goal of this post is to add the Solr service to the Ambari Add Services list.

Operating system is Ubuntu 14.04 on AWS. I am using Ambari 2.4.1.0, Hortonworks Data Platform version is 2.5.0.0.

SSH to the Ambari server and step into the /tmp directory

cd /tmp

Download the Solr package

wget http://public-repo-1.hortonworks.com/HDP-SOLR/hdp-solr-ambari-mp/solr-service-mpack-5.5.2.2.5.tar.gz

Install the package

sudo ambari-server install-mpack --mpack=/tmp/solr-service-mpack-5.5.2.2.5.tar.gz

Output

Using python  /usr/bin/python
Installing management pack
Ambari Server 'install-mpack' completed successfully.

Create definition for the HDP Search repository

sudo vi /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml

Find your OS and add the HDP-SOLR repo to it (below is an example for Ubuntu 14)

  <os family="ubuntu14">
    <repo>
      <baseurl>http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.5.0.0</baseurl>
      <repoid>HDP-2.5</repoid>
      <reponame>HDP</reponame>
    </repo>
    <repo>
      <baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ubuntu12</baseurl>
      <repoid>HDP-UTILS-1.1.0.21</repoid>
      <reponame>HDP-UTILS</reponame>
    </repo>
    <repo>
      <baseurl>http://public-repo-1.hortonworks.com/HDP-SOLR-2.5-100/repos/ubuntu14</baseurl>
      <repoid>HDP-SOLR-2.5-100</repoid>
      <reponame>HDP-SOLR</reponame>
    </repo>
  </os>

List of other systems.
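
A malformed repoinfo.xml can prevent Ambari from reading its stack definitions, so it may be worth validating the file before restarting. xmllint (from the libxml2-utils package on Ubuntu) is one way to do it:

# No output means the XML is well-formed
xmllint --noout /var/lib/ambari-server/resources/stacks/HDP/2.5/repos/repoinfo.xml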

sudo ambari-server restart

Log in to Ambari and click on Add Services. Solr should be available now (at the bottom)
solr-add-service-list

Solr can now be installed via Ambari. This is explained in this post.

Installing sbt on Ubuntu for building Scala projects

Using Apache Spark for big data processing also offers the possibility to use Scala. Despite Python being more popular than Scala, Scala is still THE language in the Apache Spark world. It is time to start writing code in it.
The interactive tool sbt helps you build Scala and Java projects. It is similar to Java’s Maven or Ant. It offers native support for compiling Scala and, among other things, supports mixed Scala/Java projects.

Run the following to install sbt.

echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
sudo apt-get update -y
sudo apt-get install sbt -y

Test sbt by running

sbt version

Should return something like this

[info] Set current project to ubuntu (in build file:/home/ubuntu/)
[info] 0.1-SNAPSHOT

The tool is now installed and ready to use.

You can run sbt by simply typing

sbt

Creating an example Scala project that works with Apache Spark is described in this post.
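
As a small preview of that post, here is a rough sketch of a hand-made project skeleton – the directory layout is the standard sbt one, while the project name and the Scala and Spark versions below are placeholders to match your cluster:

mkdir -p ~/spark-example/src/main/scala
cd ~/spark-example

# Minimal build definition - versions are examples only
cat > build.sbt << 'EOF'
name := "spark-example"
version := "0.1"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0" % "provided"
EOF

# Compile and package (produces target/scala-2.11/spark-example_2.11-0.1.jar)
sbt package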