Gcclab Manual
Date :
AIM:
To develop a web service program for a calculator.
PROCEDURES:
Step 1: Open NetBeans and choose New Project.
Step 2: Choose Java Web, select Web Application, and click Next.
Step 3: Enter the project name, click Next, and select the server (either Tomcat or GlassFish).
Step 4: Click Next and then Finish.
Step 5: Right-click the web application (project name), select New, and choose Java Class.
/*
*To change this license header, choose License Headers in Project Properties.
*To change this template file, choose Tools | Templates
*and open the template in the editor.
*/
/**
*
*@author cloud_grid_cmp
*/
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

@WebService(serviceName = "MathService", targetNamespace = "http://my.org/ns/")
public class MathService {

    // Greeting operation, useful for checking that the service is reachable
    @WebMethod(operationName = "hello")
    public String hello(@WebParam(name = "name") String txt) {
        return "Hello " + txt + "!";
    }

    // Addition
    @WebMethod(operationName = "addSer")
    public String addSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 + v2) + "!";
    }

    // Subtraction
    @WebMethod(operationName = "subSer")
    public String subSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 - v2) + "!";
    }

    // Multiplication
    @WebMethod(operationName = "mulSer")
    public String mulSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 * v2) + "!";
    }

    // Division: floating-point division never throws ArithmeticException,
    // so the zero divisor is checked explicitly instead of relying on try/catch
    @WebMethod(operationName = "divSer")
    public String divSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        if (v2 == 0) {
            return "Answer: cannot divide by zero!";
        }
        float res = ((float) v1) / ((float) v2);
        return "Answer: " + res + "!";
    }
}
Give some values in the fields and check the output by pressing the Enter key.
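If the GlassFish/Tomcat tester page is not available, the same class can also be smoke-tested with the lightweight JAX-WS endpoint publisher built into the JDK. The following is a minimal sketch; the address and port are assumptions, not part of the original procedure:

import javax.xml.ws.Endpoint;

public class MathServicePublisher {
    public static void main(String[] args) {
        // Publish MathService on a local address; the WSDL is then served at <address>?wsdl
        Endpoint.publish("http://localhost:8081/MathService", new MathService());
        System.out.println("MathService published at http://localhost:8081/MathService?wsdl");
        // The JVM keeps running until stopped (Ctrl+C); open the WSDL URL in a browser to verify
    }
}

Opening the printed WSDL URL in a browser confirms that the hello, addSer, subSer, mulSer, and divSer operations are exposed.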
AIM:
To develop a new OGSA-compliant web service.
PROCEDURE:
Step 1: Choose New Project from the main menu
The IDE creates the bundle project and opens it in the Projects window. Check the build plugins in pom.xml under Project Files; the IDE also adds the org.osgi.core artifact by default, which can be viewed under Dependencies.
Step 7: Build the MavenHelloServiceApi project:
1. Right-click the MavenHelloServiceApi project node in the Projects window and choose Properties.
package com.mycompany.mavenhelloserviceapi;
public interface Hello {
String sayHello(String name);
}
8. Right-click the project node in the Projects window and choose Build.
9. After building the project, open the Files window and expand the project node; you can see that MavenHelloServiceApi-1.0-SNAPSHOT.jar has been created in the target folder.
Step 8: Creating the MavenHelloServiceImpl Implementation Bundle
Here you will create the MavenHelloServiceImpl bundle in the POM project.
1. Choose File -> New Project to open the New Project wizard.
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceImpl for the Project Name.
4. Click Browse and select the MavenOSGiCDIProject POM project as the Location. Click Finish (as in the earlier step).
6. Right-click the project node in the Projects window and choose Properties. Select the Sources category in the Project Properties dialog box.
7. Set the Source/Binary Format to 1.6 and confirm that the Encoding is UTF-8. Click OK.
8. Right-click the Source Packages node in the Projects window and choose New -> Java Class.
9. Type HelloImpl for the Class Name.
10. Select com.mycompany.mavenhelloserviceimpl as the Package. Click Finish.
11. Type the following and save your changes.
package com.mycompany.mavenhelloserviceimpl;

import com.mycompany.mavenhelloserviceapi.Hello;

/**
 *
 * @author linux
 */
public class HelloImpl implements Hello {

    public String sayHello(String name) {
        return "Hello " + name;
    }
}
When you implement Hello, the IDE will display an error that you need to resolve by
adding the MavenHelloServiceApi project as a dependency.
12. Right click the Dependencies folder of MavenHelloServiceImpl in the Projects
window and choose Add Dependency.
13. Click the Open Projects tab in the Add Library dialog.
14. Select MavenHelloServiceApi OSGi Bundle. Click Add.
15. Modify the start() and stop() methods in the bundle activator class by adding the following lines.
package com.mycompany.mavenhelloserviceimpl;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
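The body of the activator is not shown above; the following is a minimal sketch in the spirit of the NetBeans OSGi tutorial, continuing from the package declaration and imports just listed (the class name Activator and the log messages are assumptions). One extra import of the Hello interface is needed:

import com.mycompany.mavenhelloserviceapi.Hello;

public class Activator implements BundleActivator {

    @Override
    public void start(BundleContext context) throws Exception {
        System.out.println("MavenHelloServiceImpl bundle started");
        // Register HelloImpl so that other bundles can look up the Hello service
        context.registerService(Hello.class.getName(), new HelloImpl(), null);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        // Services registered by this bundle are unregistered automatically when it stops
        System.out.println("MavenHelloServiceImpl bundle stopped");
    }
}

Registering the HelloImpl instance in start() is what makes the Hello service visible to other bundles once the bundle is deployed.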
Step 9: Building and Deploying the OSGi Bundles
Here you will build the OSGi bundles and deploy them to GlassFish.
1. Right-click the MavenOSGiCDIProject folder in the Projects window and choose Clean and Build.
# When you build the project, the IDE creates the JAR files in the target folder and also installs the snapshot JARs in the local repository.
# In the Files window, expanding the target folder of each of the two bundle projects shows the two JAR archives (MavenHelloServiceApi-1.0-SNAPSHOT.jar and MavenHelloServiceImpl-1.0-SNAPSHOT.jar).
4. You can see output similar to the following in the GlassFish Server log in the
output window.
Info: Installed /home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-SNAPSHOT.jar
AIM:
To develop secured applications using the basic security mechanisms available in the Globus Toolkit.
PROCEDURES:
Step 1: Install and set up the Certificate Authority. Open a terminal, switch to the root user, and run the command below.
This script will set up a Certificate Authority for signing Globus user certificates. It will also generate a simple CA package that can be distributed to the users of the CA. The CA is created under /var/lib/globus/simple_ca.
The grid-ca-create program prompts you for information about the name of the CA you wish to create:
root@linux:~# sudo grid-ca-create
At the prompts, provide:
i. permission to proceed
ii. a unique subject name
iii. an email ID
iv. an expiration date
v. a password
Generating Debian Packages
Change into the default simple_ca path, /var/lib/globus/simple_ca.
Revoking a Certificate
SimpleCA does not yet provide a convenient interface to revoke a signed certificate,
but it can be done with the openssl command.
root@linux:/var/lib/globus/simple_ca# openssl ca -config grid-ca-ssl.conf -revoke newcerts/01.pem
Using configuration from /home/simpleca/.globus/simpleCA/grid-ca-ssl.conf
Enter pass phrase for /home/simpleca/.globus/simpleCA/private/cakey.pem:
Revoking Certificate 01.
Data Base Updated
Renewing a CA

Installing OpenNebula
# apt-get update
# apt-get install opennebula opennebula-sunstone nfs-kernel-server
There are two main processes that must be started: the main OpenNebula daemon, oned, and the graphical user interface, sunstone.
Sunstone listens only on the loopback interface by default, for security reasons. To change this, edit the configuration file:
# gedit /etc/one/sunstone-server.conf
and change :host: 127.0.0.1 to :host: 0.0.0.0. Then restart the service:
# /etc/init.d/opennebula-sunstone restart
OpenNebula needs passwordless SSH from any node (including the frontend) to any other node.
To set this up, run the following commands:
# su - oneadmin
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Add the following snippet to ~/.ssh/config so it doesn’t prompt to add the keys
to the known_hosts file:
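The snippet itself is not reproduced in these notes; a typical version, based on the standard OpenNebula setup (an assumption), is:

Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null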
The default password for the oneadmin user can be found in ~/.one/one_auth, which is randomly generated on every installation.
$ nano ~/.one/one_auth
Open Mozilla Firefox and go to:
localhost:9869
AIM:
To find the procedure to run virtual machines of different configurations and to check how many virtual machines can be created.
root@linux:$ /etc/init.d/opennebula-sunstone restart
PROCEDURE:
Step 1: Check that processor virtualization is enabled in the boot (BIOS) settings.
Step 2: Execute commands as the root user if they start with #, and as the oneadmin user if they start with $.
a. Switching to the root user in the terminal
linux@linux:~$ sudo bash
[sudo] password for linux:
Enter the password.
root@linux:~# ls -l /dev/kvm
crw-rw----+ 1 root kvm 10, 232 Jul 1 04:09 /dev/kvm
Now check the dependencies by giving the following command in the terminal; all packages should show as installed.
root@linux:~# dpkg -l opennebula-common ruby-opennebula
opennebula opennebula-node opennebula-sunstone opennebula-tools
opennebula-gate opennebula-flow libopennebula-java
root@linux:~# ifconfig
Step 8: Change the network interface and bridge configuration manually. The network configuration in Ubuntu is stored in /etc/network/interfaces.
root@linux:~# gedit /etc/network/interfaces
It has only a few lines.
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
#auto br0
iface br0 inet static
address 192.168.0.10
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports em1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off
root@linux:~# ifconfig
Step 13: Create the KVM host entry for localhost. This is needed for image creation, templates, and template instantiation to work.
oneadmin@linux:~$ onehost create localhost -i kvm -v kvm -n dummy
ID: 0
root@linux:/var/lib/one# su - oneadmin
oneadmin@linux:~$ cat mynetwork.one
NAME = "private"
BRIDGE = br0
AR=[
TYPE = IP4,
IP = 192.168.0.141,
SIZE = 5
]
oneadmin@linux:~$ onevm list
Step 17: Update the .ssh key for passwordless handshaking with the oneadmin web service.
oneadmin@linux:~$ cat ~/.ssh/id_rsa.pub
## You can see the key; copy it fully and paste it in by visiting localhost:9869/.
Click oneadmin, choose Configuration, deselect the public SSH option, and finally paste the key.
Step 18: Creating Oneimage, onetemplate and one vm.
Move to the datastores folder.
oneadmin@linux:~$ cd datastores
oneadmin@linux:~/datastores$
Creating the image:
oneadmin@linux:~/datastores$ oneimage create --name "ttylinux" --path "/home/linux/Downloads/source/ttylinux.img" --driver raw --datastore default
Click the computer-symbol icon in the corner; it will ask for the username and password. By default the username is root and the password is password.
Through the terminal you can access the VM with SSH, for example:
oneadmin@linux:~/datastores$ ssh root@<vm_ip>
and give the password when prompted.
Step 20: Similarly, you can create as many VMs as your machine supports, but only 5 VMs can be accessed at a time, since the IP range in mynetwork.one is limited to 5 addresses.
You can install Ubuntu, CentOS, and so on.
Change the image name, path, and driver values in the oneimage create command above and install images of various sizes.
Aim:
To find the procedure for attaching a virtual block device to a virtual machine and to check whether it holds the data even after the virtual machine is released.
Procedure:
Step 1: Create the image, template, and VM through the following commands.
Creating the ttylinux image:
oneadmin@linux:~/datastores$ oneimage create --name "ttylinux" --path "/home/linux/Downloads/source/ttylinux.img" --driver raw --datastore default
Creating the Ubuntu image:
oneadmin@linux:~/datastores$ oneimage create --name "Ubuntu" --path "/home/linux/Downloads/source/tubuntu1404-5.0.1.qcow2c" --driver qcow2 --datastore default
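The data block image and its attachment are not shown in these notes; a minimal sketch (the image name, size in MB, and VM ID are assumptions, and option names follow the standard OpenNebula CLI, so check oneimage --help and onevm --help for your version) is:

oneadmin@linux:~/datastores$ oneimage create --name "datablock1" --type DATABLOCK --size 1024 --datastore default
oneadmin@linux:~/datastores$ oneimage persistent datablock1
oneadmin@linux:~/datastores$ onevm disk-attach 0 --image datablock1

Because the image is marked persistent, data written to the attached disk is saved back to the datastore when the VM is released, which is what this exercise verifies.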
Aim :
To install a C compiler in the virtual machine and execute a sample program.
Procedure:
Step 1:
Install CentOS or Ubuntu in OpenNebula as per the previous commands.
Step 2:
Log in to the VM of the installed OS.
Step 3:
If it is Ubuntu, then install gcc as follows:
$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-6 gcc-6-base
Step 4:
Write a sample program, for example welcome.cpp:
#include <iostream>
using namespace std;
int main()
{
    cout << "Hello world";
    return 0;
}
Step 5:
First we need to compile and link our program. Assuming the source code is saved in a file welcome.cpp, we can do that using the GNU C++ compiler g++, for example:
g++ -Wall -o welcome welcome.cpp
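The compiled program can then be run from the same directory; the output shown is what the program above prints:
$ ./welcome
Hello world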
Result:
Thus the GCC compiler has been successfully installed and a sample program executed.
EXERCISE 5: Show the virtual machine migration based on a certain condition from one node to the other
Aim:
To show virtual machine migration based on a certain condition from one node to the other.
Procedure:
Step 1: Open Opennebula service from root user and view in localhost:9869
root@linux:$ /etc/init.d/opennebula-sunstone restart
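The migration itself can be triggered either from Sunstone or with the onevm migrate command; a minimal sketch (the VM and host identifiers are assumptions) is:
$ onevm migrate 0 host02
or, for live migration while the VM keeps running:
$ onevm migrate --live 0 host02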
This moves the VM from host01 to host02. onevm list then shows something like the following.
Result :
Thus the virtual machine migration based on a certain condition from one node to the other has been executed successfully.
HADOOP INSTALLATION
Aim:
To find the procedure for setting up a one-node Hadoop cluster on the Linux platform.
Procedures:
Step 1:
Download the latest sun java and apache hadoop from the official website.
Step 2:
To install Java and Hadoop, follow the steps below.
$gedit ~/.bashrc
d. At the bottom, paste the following lines, changing only the paths.
#--insert JAVA_HOME
JAVA_HOME=/opt/jdk1.8.0_05
#--in PATH variable just append at the end of the line
PATH=$PATH:$JAVA_HOME/bin
#--Append JAVA_HOME at end of the export statement
export PATH JAVA_HOME
$ ssh-copy-id -i localhost
#--insert HADOOP_PREFIX
HADOOP_PREFIX=/opt/hadoop-2.7.0
#--in PATH variable just append at the end of the line
PATH=$PATH:$HADOOP_PREFIX/bin
#--Append HADOOP_PREFIX at end of the export statement
export PATH JAVA_HOME HADOOP_PREFIX
(or)
>sbin/stop-all.sh
Once you start the services again after stopping them, jps shows only 4 services.
>jps
12563 Jps
4036 ResourceManager
4172 NodeManager
3685 SecondaryNameNode
Step 10: Only four services will run. To start the DataNode and NameNode, we have to add some lines to hdfs-site.xml.
In Terminal
$ cd $HADOOP_PREFIX/etc/hadoop
$ gedit hdfs-site.xml (paste the lines below inside the <configuration> element)
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/data</value>
</property>
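After saving hdfs-site.xml, the NameNode normally has to be formatted once so that the new directories are initialised, and the HDFS daemons restarted; a short sketch using the standard Hadoop scripts (run from $HADOOP_PREFIX):
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
jps should then also list NameNode and DataNode in addition to the four services above.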
Step 13: To view in the browser, open Firefox and enter the addresses below.
localhost:50070
localhost:8088
Result:
Thus a single-node Hadoop cluster has been successfully created.
Exercise 7: Write a program to use the APIs of Hadoop to interact with it.
Aim:
To write a program that uses the Hadoop APIs and interacts with HDFS.
Procedure:
Step 1:
Start the hadoop services by giving the following command in terminal
$ sbin/start-all.sh
$ jps
Step 2:
Open web browser and open
localhost:50070
localhost:8088
Step 3:
Creating a folder in HDFS (visible in the web interface) from the terminal.
$ bin/hadoop fs -mkdir /bala
Wait until the command executes.
Step 4:
Open localhost:50070.
Utilities --> Browse the file system.
A folder has been created with the name that we gave in the terminal.
Open the web browser and, under Utilities, browse the file system to check whether the content has been moved.
Step 6: Space taken by the DataNode.
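The same folder creation and file copy can also be done programmatically through the Hadoop Java API; the following is a minimal sketch (the fs.defaultFS address and the paths are assumptions, so adjust them to your core-site.xml and your data):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsApiDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumption: NameNode address; use the value configured for your cluster
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);

        // Create a directory in HDFS (equivalent to: bin/hadoop fs -mkdir /bala)
        fs.mkdirs(new Path("/bala"));

        // Copy a local file into HDFS (equivalent to: bin/hadoop fs -copyFromLocal ...)
        fs.copyFromLocalFile(new Path("/home/bala/Downloads/data"), new Path("/bala"));

        // List what is now under /bala
        for (FileStatus status : fs.listStatus(new Path("/bala"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}

The class can be packaged into a jar and run with bin/hadoop jar, which puts the Hadoop libraries on the classpath automatically.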
Result:
Thus an API program has been developed for creating a folder and copying files into it.
Exercise 8. Write a wordcount program to demonstrate the use of Map and Reduce tasks
Aim:
To write a wordcount program demonstrating the use of map and reduce tasks.
Procedure:
Step 1:
Write a MapReduce program in Java.
Step 2:
Create a folder in HDFS by using the command:
$ bin/hadoop fs -mkdir /bala2
Step 3:
Move a number of text files into HDFS:
$ bin/hadoop fs -copyFromLocal /home/bala/Downloads/data /bala
Step 4:
Use the hadoop-mapreduce-examples-2.7.0.jar, which is already available in Hadoop.
Step 5:
Run the Mapreduce program by the following command.
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar wordcount /bala/data /output
Input file
Loaded file
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every token of every input line
    public static class Map extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "wordcount");
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
Executing Map reduce program
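Once the job finishes, the word counts can be inspected directly from HDFS (part-r-00000 is the default name of the first reducer's output file):
$ bin/hadoop fs -cat /output/part-r-00000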
Result:
Thus the wordcount program has been executed successfully.
Exercise 9: Mount the one node Hadoop cluster using FUSE
AIM
To mount a one-node Hadoop cluster using FUSE and access files on HDFS in the same way as we do on Linux operating systems.
PROCEDURE
FUSE (Filesystem in Userspace) enables you to write a normal user application as a bridge
for a traditional filesystem interface.
The hadoop-hdfs-fuse package enables you to use your HDFS cluster as if it were a traditional
filesystem on Linux. It is assumed that you have a working HDFS cluster and know the
hostname and port that your NameNode exposes.
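The installation command itself is not reproduced in these notes; assuming the Cloudera CDH repository has already been added, the package is installed with:
$ sudo apt-get install hadoop-hdfs-fuse
which produces output similar to the following.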
libcurl3
1 upgraded, 15 newly installed, 0 to remove and 702 not upgraded.
Need to get 222 MB of archives.
After this operation, 267 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://in.archive.ubuntu.com/ubuntu/ trusty-updates/main libcurl3 amd64 7.35.0-
1ubuntu2.7 [173 kB]
Get:2 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
avro-libs all 1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0 [47.0 MB]
Get:3 http://in.archive.ubuntu.com/ubuntu/ trusty-updates/main curl amd64 7.35.0-
1ubuntu2.7 [123 kB]
Get:4 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
parquet-format all 2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0 [479 kB]
Get:5 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
parquet all 1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0 [27.1 MB]
Get:6 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
hadoop all 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [28.2 MB]
Get:7 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
libhdfs0 amd64 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [320 kB]
Get:8 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
hadoop-hdfs-fuse amd64 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [317 kB]
Fetched 222 MB in 3min 28s (1,064 kB/s)
(Reading database ... 170612 files and directories currently installed.)
Preparing to unpack .../libcurl3_7.35.0-1ubuntu2.7_amd64.deb ...
Unpacking libcurl3:amd64 (7.35.0-1ubuntu2.7) over (7.35.0-1ubuntu2) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.35.0-1ubuntu2.7_amd64.deb ...
Unpacking curl (7.35.0-1ubuntu2.7) ...
Selecting previously unselected package avro-libs.
Preparing to unpack .../avro-libs_1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-
cdh5.8.0_all.deb ...
Unpacking avro-libs (1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Selecting previously unselected package bigtop-utils.
Preparing to unpack .../bigtop-utils_0.7.0+cdh5.8.0+0-
1.cdh5.8.0.p0.72~trusty-cdh5.8.0_all.deb ...
Unpacking bigtop-utils (0.7.0+cdh5.8.0+0-1.cdh5.8.0.p0.72~trusty-cdh5.8.0) ...
Selecting previously unselected package bigtop-jsvc.
Preparing to unpack .../bigtop-jsvc_0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-
cdh5.8.0_amd64.deb ...
Unpacking bigtop-jsvc (0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Selecting previously unselected package zookeeper.
Preparing to unpack .../zookeeper_3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-
cdh5.8.0_all.deb ...
Unpacking zookeeper (3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-cdh5.8.0) ...
Selecting previously unselected package parquet-format.
Preparing to unpack .../parquet-format_2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-
cdh5.8.0_all.deb ...
Unpacking parquet-format (2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-yarn.
Preparing to unpack .../hadoop-yarn_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-yarn (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-mapreduce.
Preparing to unpack .../hadoop-mapreduce_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-hdfs.
Preparing to unpack .../hadoop-hdfs_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-hdfs (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-0.20-mapreduce.
Preparing to unpack .../hadoop-0.20-mapreduce_2.6.0+cdh5.8.0+1601-
1.cdh5.8.0.p0.93~trusty-cdh5.8.0_amd64.deb ...
Unpacking hadoop-0.20-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0)
...
Selecting previously unselected package hadoop-client.
Preparing to unpack .../hadoop-client_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-client (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package parquet.
Preparing to unpack .../parquet_1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0_all.deb
...
Unpacking parquet (1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop.
Preparing to unpack .../hadoop_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package libhdfs0.
Preparing to unpack .../libhdfs0_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_amd64.deb ...
Unpacking libhdfs0 (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-hdfs-fuse.
Preparing to unpack .../hadoop-hdfs-fuse_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_amd64.deb ...
Unpacking hadoop-hdfs-fuse (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up libcurl3:amd64 (7.35.0-1ubuntu2.7) ...
Setting up curl (7.35.0-1ubuntu2.7) ...
Setting up avro-libs (1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Setting up bigtop-utils (0.7.0+cdh5.8.0+0-1.cdh5.8.0.p0.72~trusty-cdh5.8.0) ...
Setting up bigtop-jsvc (0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Setting up zookeeper (3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-cdh5.8.0) ...
update-alternatives: using /etc/zookeeper/conf.dist to provide /etc/zookeeper/conf
(zookeeper-conf) in auto mode
Setting up parquet-format (2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0) ...
Setting up parquet (1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0) ...
Setting up hadoop (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
update-alternatives: using /etc/hadoop/conf.empty to provide /etc/hadoop/conf (hadoop-
conf) in auto mode
Setting up hadoop-yarn (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up libhdfs0 (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-hdfs (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-0.20-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0)
...
Setting up hadoop-client (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-hdfs-fuse (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...
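The mount itself is performed with the hadoop-fuse-dfs helper installed above; a minimal sketch (the mount point matches the log below, while the NameNode address and port are assumptions):
$ mkdir -p /home/hdpuser/hdfs
$ sudo hadoop-fuse-dfs dfs://localhost:8020 /home/hdpuser/hdfs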
INFO /data/jenkins/workspace/generic-package-ubuntu64-14-04/CDH5.8.0-Packaging-
Hadoop-2016-07-12_15-43-10/hadoop-2.6.0+cdh5.8.0+1601-
1.cdh5.8.0.p0.93~trusty/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-
dfs/fuse_options.c:164 Adding FUSE arg /home/hdpuser/hdfs/
hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
new
NOTE: You can now add a permanent HDFS mount which persists through reboots.
Open /etc/fstab and add lines to the bottom similar to these: (sudo vi /etc/fstab)
hadoop-fuse-dfs#dfs://<name_node_hostname>:<namenode_port> <mount_point> fuse allow_other,usetrash,rw 2 0
For example:
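hadoop-fuse-dfs#dfs://localhost:8020 /home/hdpuser/hdfs fuse allow_other,usetrash,rw 2 0
(The hostname, port, and mount point above are assumptions; substitute the values for your own NameNode and mount point.)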
$ mount <mount_point>
Result:
Thus FUSE has been installed successfully.
Installing Eclipse
Extract Eclipse to /opt/ for global use
Press Ctrl+Alt+T on keyboard to open the terminal. When it opens, run the command below
to extract Eclipse to /opt/:
cd /opt/ && sudo tar -zxvf ~/Downloads/eclipse-*.tar.gz
You may replace "eclipse-*.tar.gz" (without quotes) with the exact package name if the command does not work.
Don’t like Linux commands? You can do this by opening Nautilus file browser via root: Press
Alt+F2 -> run gksudo nautilus.
Once done, you should see the eclipse folder under /opt/ directory.
Create a launcher shortcut for Eclipse
Press Ctrl+Alt+T, paste below command into the terminal and hit enter (install gksu from
Software Center if below command does not work).
gksudo gedit /usr/share/applications/eclipse.desktop
The above command creates and opens the launcher file for Eclipse in the gedit text editor.
Paste the content below into the opened file and save it.
[Desktop Entry]
Name=Eclipse 4
Type=Application
Exec=/opt/eclipse/eclipse
Terminal=false
Icon=/opt/eclipse/icon.xpm
Comment=Integrated Development Environment
NoDisplay=false
Categories=Development;IDE;
Name[en]=Eclipse
Finally open Eclipse from Unity Dash search results and enjoy!
We’ll also need to copy the host certificate and key into place so that the myproxy service can use them as well.
root# install -o myproxy -m 644 \
/etc/grid-security/hostcert.pem \
/etc/grid-security/myproxy/hostcert.pem
root# install -o myproxy -m 600 \
/etc/grid-security/hostkey.pem \
/etc/grid-security/myproxy/hostkey.pem
We’ll next add the myproxy user to the simpleca group so that the myproxy server can create
certificates.
root# usermod -a -G simpleca myproxy
The important thing to see in the above is that the process is in the active (running) state.
[NOTE]
For other Linux distributions which are not using systemd, the output will be different. You
should still see some information indicating the service is running.
As a final sanity check, we’ll make sure the myproxy TCP port 7512 is in use via the netstat
command:
root# netstat -an | grep 7512
tcp 0 0 0.0.0.0:7512 0.0.0.0:* LISTEN
Check that the GridFTP server is running and listening on the gridftp port:
root# service globus-gridftp-server status
Now the GridFTP server is waiting for a request, so we’ll generate a proxy from the myproxy
service by using myproxy-logon and then copy a file from the GridFTP server with the globus-url-
copy command. We’ll use the passphrase used to create the myproxy credential for quser.
quser% myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user quser in /tmp/x509up_u1001
quser% globus-url-copy gsiftp://elephant.globus.org/etc/group \
file:///tmp/quser.test.copy
quser% diff /tmp/quser.test.copy /etc/group
At this point, we’ve configured the myproxy and GridFTP services and verified that we can
create a security credential and transfer a file. If you had trouble, check the security
troubleshooting section in the Security Admin Guide. Now we can move on to setting up
GRAM5 resource management.
We can now verify that the service is running and listening on the GRAM5 port:
root# service globus-gatekeeper status
globus-gatekeeper is running (pid=20199)
root# netstat -an | grep 2119
tcp6 0 0 :::2119 :::* LISTEN
The gatekeeper is set up to run, and is ready to authorize job submissions and pass them on to
the fork job manager. We can now run a couple of test jobs:
quser% myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user quser in /tmp/x509up_u1001.
quser% globus-job-run elephant /bin/hostname elephant.globus.org
quser% globus-job-run elephant /usr/bin/whoami
quser
4. Create a new web service, add the class name, select the Tomcat server, and execute.
6. Right-click the project and choose Run As -> Run on Server.
In the browser, check the output by going to ---> localhost:8080/projectname
If Tomcat does not run, copy the files from tomcat/conf and paste them into the workspace's Servers folder.