Apache Spark is an open-source framework for analyzing big data in cluster-computing environments. It is often deployed alongside Hadoop to speed up data processing, and it supports a wide range of programming languages, including Java, Scala, Python, and R. Apache Spark distributes work on large datasets across multiple computers, which makes it a popular tool among data scientists and engineers who need to perform actions on large amounts of data.
In this post, we will show you how to install Apache Spark on Rocky Linux 8.
Step 1 – Install Java
Apache Spark is a Java-based application, so you will need to install Java on your server. You can install it by running the following commands:
dnf update -y
dnf install java-11-openjdk-devel -y
Once Java is installed, verify the Java version using the following command:
java --version
You will get the following output:
openjdk 11.0.12 2021-07-20 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, sharing)
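Optionally, you can set the JAVA_HOME environment variable so that Spark's scripts can reliably locate the JDK. This step is not in the original procedure, and the one-liner below is only a sketch: it resolves the java binary's real path at login time rather than hardcoding a versioned directory, which should work with the OpenJDK package layout on Rocky Linux 8:

echo 'export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which java))))' > /etc/profile.d/java.sh
source /etc/profile.d/java.sh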
Step 2 – Install Spark
First, download the latest version of Apache Spark from Apache's website using the following command:
wget https://dlcdn.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
Once the download is completed, extract the downloaded file with the following command:
tar -xvf spark-3.1.2-bin-hadoop3.2.tgz
Next, move the extracted directory to /opt/spark with the following command:
mv spark-3.1.2-bin-hadoop3.2 /opt/spark
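Optionally, you can add the Spark binaries to your PATH so that commands such as spark-shell and spark-submit can be run without typing the full path. A minimal sketch using a profile script, with SPARK_HOME pointing at the directory we just created:

echo 'export SPARK_HOME=/opt/spark' > /etc/profile.d/spark.sh
echo 'export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin' >> /etc/profile.d/spark.sh
source /etc/profile.d/spark.sh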
Next, create a dedicated user for Apache Spark and set proper ownership on the /opt/spark directory:
useradd spark
chown -R spark:spark /opt/spark
Step 3 – Create a Systemd Service File for Apache Spark
Next, you will need to create a systemd service file for Apache Spark Master and Slave.
First, create a systemd service file for Master using the following command:
nano /etc/systemd/system/spark-master.service
Add the following lines:
[Unit]
Description=Apache Spark Master
After=network.target

[Service]
Type=forking
User=spark
Group=spark
ExecStart=/opt/spark/sbin/start-master.sh
ExecStop=/opt/spark/sbin/stop-master.sh

[Install]
WantedBy=multi-user.target
Save and close the file, then create a systemd service file for the Slave:
nano /etc/systemd/system/spark-slave.service
Add the following lines:
[Unit]
Description=Apache Spark Slave
After=network.target

[Service]
Type=forking
User=spark
Group=spark
ExecStart=/opt/spark/sbin/start-slave.sh spark://your-server-ip:7077
ExecStop=/opt/spark/sbin/stop-slave.sh

[Install]
WantedBy=multi-user.target
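Note that start-slave.sh is deprecated in recent Spark releases in favor of start-worker.sh; you will see a deprecation warning in the service logs later in this guide. The older script still works in Spark 3.1.2, but if you prefer the newer names, the ExecStart and ExecStop lines would look like this:

ExecStart=/opt/spark/sbin/start-worker.sh spark://your-server-ip:7077
ExecStop=/opt/spark/sbin/stop-worker.sh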
Save and close the file, then reload the systemd daemon to apply the changes.
systemctl daemon-reload
Next, start the Spark Master service and enable it to start at system reboot:
systemctl start spark-master
systemctl enable spark-master
To verify the status of the Master service, run the following command:
systemctl status spark-master
You will get the following output:
● spark-master.service - Apache Spark Master
   Loaded: loaded (/etc/systemd/system/spark-master.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-10-12 14:46:35 UTC; 8s ago
  Process: 11967 ExecStart=/opt/spark/sbin/start-master.sh (code=exited, status=0/SUCCESS)
 Main PID: 11978 (java)
    Tasks: 32 (limit: 23695)
   Memory: 169.0M
   CGroup: /system.slice/spark-master.service
           └─11978 /usr/lib/jvm/java-11-openjdk-11.0.12.0.7-0.el8_4.x86_64/bin/java -cp /opt/spark/conf/:/opt/spark/jars/* -Xmx1g org.apache.s>

Oct 12 14:46:33 RockyLinux8 systemd[1]: Starting Apache Spark Master...
Oct 12 14:46:33 RockyLinux8 start-master.sh[11967]: starting org.apache.spark.deploy.master.Master, logging to /opt/spark/logs/spark-spark-org>
Oct 12 14:46:35 RockyLinux8 systemd[1]: Started Apache Spark Master.
Step 4 – Access Apache Spark
At this point, the Apache Spark Master is started and its web dashboard is listening on port 8080. You can access it using the URL http://your-server-ip:8080. You should see the following page:
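If firewalld is enabled on your server (the Rocky Linux 8 default), you will need to open port 8080 for the dashboard, along with port 7077 so workers can reach the master. The commands below assume the default firewalld zone:

firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --permanent --add-port=7077/tcp
firewall-cmd --reload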
Now, start the Spark Slave service and enable it to start at system reboot:
systemctl start spark-slave
systemctl enable spark-slave
You can check the status of the Slave service using the following command:
systemctl status spark-slave
Sample output:
● spark-slave.service - Apache Spark Slave
   Loaded: loaded (/etc/systemd/system/spark-slave.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-10-12 14:48:47 UTC; 16s ago
  Process: 12064 ExecStart=/opt/spark/sbin/start-slave.sh spark://69.28.84.173:7077 (code=exited, status=0/SUCCESS)
 Main PID: 12077 (java)
    Tasks: 35 (limit: 23695)
   Memory: 190.6M
   CGroup: /system.slice/spark-slave.service
           └─12077 /usr/lib/jvm/java-11-openjdk-11.0.12.0.7-0.el8_4.x86_64/bin/java -cp /opt/spark/conf/:/opt/spark/jars/* -Xmx1g org.apache.s>

Oct 12 14:48:44 RockyLinux8 systemd[1]: Starting Apache Spark Slave...
Oct 12 14:48:44 RockyLinux8 start-slave.sh[12064]: This script is deprecated, use start-worker.sh
Oct 12 14:48:44 RockyLinux8 start-slave.sh[12064]: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-spark-org.>
Now, reload your Apache Spark dashboard. You should see your worker on the following page:
Now, click on the Worker. You should see the detailed information for the Worker on the following screen:
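To confirm that the cluster actually executes jobs, you can submit the SparkPi example that ships with Spark to your master. The examples jar name below matches the Spark 3.1.2 distribution; adjust it if you installed a different version:

/opt/spark/bin/spark-submit \
  --master spark://your-server-ip:7077 \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.12-3.1.2.jar 100

If everything is working, the job output should include a line similar to "Pi is roughly 3.14...", and the completed application will appear on the dashboard.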
Conclusion
Congratulations! You have successfully installed Apache Spark on Rocky Linux 8. You can now use Apache Spark alongside Hadoop to speed up data processing. Give it a try on your dedicated server from Atlantic.Net!