Create a Hadoop cluster

You can use Managed Service for Apache Spark to create one or more Compute Engine instances that can connect to a Bigtable instance and run Hadoop jobs. This page explains how to use Managed Service for Apache Spark to automate the following tasks:

  • Installing Hadoop and the HBase client for Java
  • Configuring Hadoop and Bigtable
  • Setting the correct authorization scopes for Bigtable

After you create your Managed Service for Apache Spark cluster, you can use the cluster to run Hadoop jobs that read and write data to and from Bigtable.

This page assumes that you are already familiar with Hadoop. For additional information about Managed Service for Apache Spark, see the Managed Service for Apache Spark documentation.

Before you begin

Complete the following tasks:

  • Create a Bigtable instance. Be sure to note the project ID and Bigtable instance ID.
  • Enable the Cloud Bigtable API, Cloud Bigtable Admin API, Managed Service for Apache Spark, and Cloud Storage JSON APIs.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.


  • Verify that your user account has been granted a role that includes the storage.objects.get permission.

    Open the IAM page in the Google Cloud console.


  • Install the Google Cloud CLI. See the gcloud CLI setup instructions for details.
  • Install Apache Maven, which is used to run a sample Hadoop job.

    On Debian GNU/Linux or Ubuntu, run the following command:

    sudo apt-get install maven

    On Red Hat Enterprise Linux or CentOS, run the following command:

    sudo yum install maven

    On macOS, install Homebrew, then run the following command:

    brew install maven
  • Clone the GitHub repository GoogleCloudPlatform/cloud-bigtable-examples, which contains an example of a Hadoop job that uses Bigtable:
    git clone https://github.com/GoogleCloudPlatform/cloud-bigtable-examples.git
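The API-enablement step above can also be performed from the gcloud CLI with gcloud services enable. The service identifiers shown below are the standard names for these APIs, but verify them against your project before running the command:

```shell
# Enable the APIs used in this guide (requires the
# serviceusage.services.enable permission on the project).
gcloud services enable bigtable.googleapis.com \
    bigtableadmin.googleapis.com \
    dataproc.googleapis.com \
    storage-api.googleapis.com
```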

Create a Cloud Storage bucket

Managed Service for Apache Spark uses a Cloud Storage bucket to store temporary files. To prevent file-naming conflicts, create a new bucket for Managed Service for Apache Spark.

Cloud Storage bucket names must be globally unique. Choose a bucket name that is likely to be available, such as a name that incorporates your Google Cloud project's name.
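One way to pick a name that is likely to be unique is to combine your project ID with a fixed suffix. This is just a sketch; the project ID and suffix below are placeholders, so substitute your own values:

```shell
# Derive a likely-unique bucket name from the project ID.
# "my-project" is a placeholder -- use your own project ID.
PROJECT_ID="my-project"
BUCKET_NAME="${PROJECT_ID}-dataproc-temp"
echo "${BUCKET_NAME}"   # prints my-project-dataproc-temp
```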

After you choose a name, use the following command to create a new bucket, replacing values in brackets with the appropriate values:

gcloud storage buckets create gs://[BUCKET_NAME] --project=[PROJECT_ID]

Create the Managed Service for Apache Spark cluster

Run the following command to create a Managed Service for Apache Spark cluster with four worker nodes, replacing values in brackets with the appropriate values:

gcloud dataproc clusters create [DATAPROC_CLUSTER_NAME] --bucket [BUCKET_NAME] \
    --region [REGION] --num-workers 4 --master-machine-type n1-standard-4 \
    --worker-machine-type n1-standard-4

See the gcloud dataproc clusters create documentation for additional settings that you can configure. If you get an error message that includes the text Insufficient 'CPUS' quota, try setting the --num-workers flag to a lower value.
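For example, if your project's CPU quota cannot accommodate four n1-standard-4 workers, a smaller cluster such as the following may succeed. All bracketed values are placeholders to replace with your own:

```shell
# Retry with two workers to stay within a lower CPU quota.
gcloud dataproc clusters create [DATAPROC_CLUSTER_NAME] --bucket [BUCKET_NAME] \
    --region [REGION] --num-workers 2 --master-machine-type n1-standard-4 \
    --worker-machine-type n1-standard-4
```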

Test the Managed Service for Apache Spark cluster

After you set up your Managed Service for Apache Spark cluster, you can test the cluster by running a sample Hadoop job that counts the number of times a word appears in a text file. The sample job uses Bigtable to store the results of the operation. You can use this sample job as a reference when you set up your own Hadoop jobs.

Run the sample Hadoop job

  1. In the directory where you cloned the GitHub repository, change to the directory java/dataproc-wordcount.
  2. Run the following command to build the project, replacing values in brackets with the appropriate values:

    mvn clean package -Dbigtable.projectID=[PROJECT_ID] \
        -Dbigtable.instanceID=[BIGTABLE_INSTANCE_ID]
    
  3. Run the following command to start the Hadoop job, replacing values in brackets with the appropriate values:

    ./cluster.sh start [DATAPROC_CLUSTER_NAME]
    

When the job is complete, it displays the name of the output table, which is the word WordCount followed by a hyphen and a unique number:

Output table is: WordCount-1234567890

Verify the results of the Hadoop job

Optionally, after you run the Hadoop job, you can use the cbt CLI to verify that the job ran successfully:

  1. Open a terminal window in Cloud Shell.


  2. Install the cbt CLI:
        gcloud components update
        gcloud components install cbt
  3. Scan the output table to view the results of the Hadoop job, replacing [TABLE_NAME] with the name of your output table:
        cbt -instance [BIGTABLE_INSTANCE_ID] read [TABLE_NAME]
      

Now that you've verified that the cluster is set up correctly, you can use it to run your own Hadoop jobs.
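As a sketch of what submitting your own job can look like, you can use gcloud dataproc jobs submit hadoop. The JAR path and job arguments below are hypothetical placeholders for your own job artifacts:

```shell
# Submit a Hadoop job packaged as a JAR to the cluster.
# The JAR location and arguments are examples only.
gcloud dataproc jobs submit hadoop \
    --cluster [DATAPROC_CLUSTER_NAME] --region [REGION] \
    --jar gs://[BUCKET_NAME]/jars/my-hadoop-job.jar \
    -- [JOB_ARG_1] [JOB_ARG_2]
```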

Delete the Managed Service for Apache Spark cluster

When you are done using the Managed Service for Apache Spark cluster, run the following command to shut down and delete the cluster, replacing values in brackets with the appropriate values:

gcloud dataproc clusters delete [DATAPROC_CLUSTER_NAME] --region [REGION]
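To confirm that the cluster was removed, you can list the clusters that remain in the region; the deleted cluster should no longer appear in the output:

```shell
# List remaining clusters in the region to verify deletion.
gcloud dataproc clusters list --region [REGION]
```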

What's next