Gcclab Manual

Expt No: DEVELOP A NEW WEB SERVICE FOR CALCULATOR

Date :

AIM:
To develop a web service program for a calculator.

PROCEDURES:
Step 1: Open NetBeans and go to New Project.

Step 2: Choose Java Web, select Web Application and click Next.

Step 3: Enter the project name, click Next and select the server (either
Tomcat or GlassFish).
Step 4: Click Next and select Finish.
Step 5: Right-click the web application (project name), select New, and choose
Java Class.

Step 6: Type the following code


import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

/**
 * A simple calculator web service.
 *
 * @author cloud_grid_cmp
 */
@WebService(serviceName = "MathService", targetNamespace = "http://my.org/ns/")
public class MathService {

    @WebMethod(operationName = "hello")
    public String hello(@WebParam(name = "name") String txt) {
        return "Hello " + txt + "!";
    }

    @WebMethod(operationName = "addSer")
    public String addSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 + v2) + "!";
    }

    @WebMethod(operationName = "subSer")
    public String subSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 - v2) + "!";
    }

    @WebMethod(operationName = "mulSer")
    public String mulSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        return "Answer: " + (v1 * v2) + "!";
    }

    @WebMethod(operationName = "divSer")
    public String divSer(@WebParam(name = "value1") int v1, @WebParam(name = "value2") int v2) {
        // Floating-point division never throws ArithmeticException,
        // so guard against a zero divisor explicitly.
        if (v2 == 0) {
            return "Answer: can't divide by zero!!!";
        }
        float res = ((float) v1) / ((float) v2);
        return "Answer: " + res + "!";
    }
}

Step 7: Run the project by pressing the F6 key or the Run button.

Step 8: Check in a web browser that the following URL is available; if it does not open
automatically, enter it manually:
http://localhost:8080/WebApplication2/MathService?Tester

MathService?Tester ---> MathService is the Java class name, and ?Tester opens the test page for the service.


Output Screen:

Give some values in the fields and check the output by pressing the Enter key.

Finally select the WSDL link.
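
The WSDL can also be consumed from a standalone Java client. Below is a minimal sketch
(the MathClient class, its MathPort interface and the deployment URL are illustrative and
assume the project is deployed as WebApplication2); in practice the client stubs are usually
generated with the wsimport tool instead:

import java.net.URL;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class MathClient {

    // Hand-written Service Endpoint Interface mirroring the MathService class above.
    @WebService(name = "MathService", targetNamespace = "http://my.org/ns/")
    public interface MathPort {
        @WebMethod(operationName = "addSer")
        String addSer(@WebParam(name = "value1") int v1,
                      @WebParam(name = "value2") int v2);
    }

    public static void main(String[] args) throws Exception {
        // WSDL published by the container once the project is deployed
        URL wsdl = new URL("http://localhost:8080/WebApplication2/MathService?WSDL");
        QName serviceName = new QName("http://my.org/ns/", "MathService");
        Service service = Service.create(wsdl, serviceName);
        MathPort port = service.getPort(MathPort.class);
        System.out.println(port.addSer(2, 3));   // expected: Answer: 5!
    }
}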


RESULT:
Thus the calculator web service program has been executed successfully.
Expt No: TO DEVELOP A NEW OGSA-COMPLIANT WEB SERVICE
Date :

AIM:
To develop a new OGSA-compliant web service.
PROCEDURE:
Step 1: Choose New Project from the main menu.

Step 2: Select POM Project from the Maven category.

Step 3: Type MavenOSGiCDIProject as the project name and click Finish.

When you click Finish, the IDE creates the POM project and opens the project in
the Projects window.
Step 4: Expand the Project Files node in the Projects window, double-click
pom.xml to open the file in the editor, make the modifications in the file and save.

In pom.xml file

<?xml version="1.0" encoding="UTF-8"?>


<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycompany</groupId>
<artifactId>MavenOSGiCDIProject</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.osgi</groupId>
<artifactId>org.osgi.core</artifactId>
<version>4.2.0</version>
<scope>provided</scope>
</dependency>
</dependencies>
</dependencyManagement>
</project>

Step 5: Creating OSGi Bundle Projects

1. Choose File -> New Project to open the New Project Wizard.
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceApi for the Project Name. Click Finish.

The IDE creates the bundle project and opens it in the Projects window. You can
check the build plugins in pom.xml under Project Files. The org.osgi.core artifact
is created as a default dependency and can be viewed under Dependencies.
Step 6: Building the MavenHelloServiceApi project
1. Right-click the MavenHelloServiceApi project node in the Projects window and
choose Properties.

2. Select the Sources category in the Project Properties dialog box.

3. Set the Source/Binary Format to 1.6, confirm that the Encoding is UTF-8 and
click OK.
4. Right-click the Source Packages node and choose New -> Java Interface.
5. Type Hello for the Class Name.
6. Select com.mycompany.mavenhelloserviceapi as the Package. Click Finish.
7. Add the following sayHello method to the interface and save the changes:

package com.mycompany.mavenhelloserviceapi;
public interface Hello {
String sayHello(String name);
}

8. Right-click the project node in the Projects window and choose Build.
9. After building the project, open the Files window and expand the project node;
you can see MavenHelloServiceApi-1.0-SNAPSHOT.jar created in the target folder.
Step 8: Creating the MavenHelloServiceImpl Implementation Bundle
Here you will create the MavenHelloServiceImpl bundle in the POM project.
1. Choose File -> New Project to open the New Project Wizard.
2. Choose OSGi Bundle from the Maven category. Click Next.
3. Type MavenHelloServiceImpl for the Project Name.
4. Click Browse and select the MavenOSGiCDIProject POM project as the
Location. Click Finish (as in the earlier step).
5. Right-click the project node in the Projects window and choose Properties. Select the
Sources category in the Project Properties dialog box.
6. Set the Source/Binary Format to 1.6 and confirm that the Encoding is UTF-8. Click
OK.
7. Right-click the Source Packages node in the Projects window and choose New -> Java
Class.
8. Type HelloImpl for the Class Name.
9. Select com.mycompany.mavenhelloserviceimpl as the Package. Click Finish.
10. Type the following and save your changes:

package com.mycompany.mavenhelloserviceimpl;

import com.mycompany.mavenhelloserviceapi.Hello;

/**
 * @author linux
 */
public class HelloImpl implements Hello {
    public String sayHello(String name) {
        return "Hello " + name;
    }
}

When you implement Hello, the IDE will display an error that you resolve by
adding the MavenHelloServiceApi project as a dependency.
11. Right-click the Dependencies folder of MavenHelloServiceImpl in the Projects
window and choose Add Dependency.
12. Click the Open Projects tab in the Add Library dialog.
13. Select the MavenHelloServiceApi OSGi Bundle. Click Add.

14. Expand the com.mycompany.mavenhelloserviceimpl package, double-click
Activator.java and open the file in the editor.
The IDE automatically creates the Activator.java bundle activator, which manages the
lifecycle of the bundle. By default it includes start() and stop() methods.

15. Modify the start() and stop() methods in the bundle activator class by adding the
following lines:
package com.mycompany.mavenhelloserviceimpl;

import com.mycompany.mavenhelloserviceapi.Hello;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        // TODO add activation code here
        System.out.println("HelloActivator::start");
        context.registerService(Hello.class.getName(), new HelloImpl(), null);
        System.out.println("HelloActivator::registration of Hello Service Successful");
    }

    public void stop(BundleContext context) throws Exception {
        // TODO add deactivation code here
        context.ungetService(context.getServiceReference(Hello.class.getName()));
        System.out.println("HelloActivator stopped");
    }
}
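
The implementation bundle only registers the service; a third bundle would consume it by
looking it up through its own BundleContext. A minimal sketch (the client bundle and its
package name com.mycompany.mavenhelloclient are illustrative, not part of the steps above):

package com.mycompany.mavenhelloclient;

import com.mycompany.mavenhelloserviceapi.Hello;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class ClientActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        // Look up the Hello service registered by the MavenHelloServiceImpl bundle
        ServiceReference ref = context.getServiceReference(Hello.class.getName());
        if (ref != null) {
            Hello hello = (Hello) context.getService(ref);
            System.out.println(hello.sayHello("OSGi"));
            context.ungetService(ref);
        }
    }

    public void stop(BundleContext context) throws Exception {
        // nothing to clean up
    }
}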
Step 9: Building and Deploying the OSGi Bundles
Here you will build the OSGi bundles and deploy them to GlassFish.

1. Right-click the MavenOSGiCDIProject folder in the Projects window and choose
Clean and Build.
# When you build the project, the IDE creates the JAR files in the target folder and
also installs the snapshot JARs in the local repository.
# In the Files window, expanding the target folder of each of the two bundle projects
shows the two JAR archives (MavenHelloServiceApi-1.0-SNAPSHOT.jar and
MavenHelloServiceImpl-1.0-SNAPSHOT.jar).

2. Start the GlassFish server (if not already started).

3. Copy MavenHelloServiceApi-1.0-SNAPSHOT.jar to /home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles (GlassFish installation directory).

4. You can see output similar to the following in the GlassFish Server log in the
output window.
Info: Installed /home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-
SNAPSHOT.jar
Info: Started bundle: file:/home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceApi-1.0-
SNAPSHOT.jar

5. Repeat the copy step for MavenHelloServiceImpl-1.0-SNAPSHOT.jar to
/home/linux/glassfish-4.1.1/glassfish/domains/domain1/autodeploy/bundles
(GlassFish installation directory).

6. You can see the output in the GlassFish Server log:

Info: Installed /home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-
1.0-SNAPSHOT.jar
Info: HelloActivator::start
Info: HelloActivator::registration of Hello Service Successful
Info: Started bundle: file:/home/linux/glassfish-
4.1.1/glassfish/domains/domain1/autodeploy/bundles/MavenHelloServiceImpl-
1.0-SNAPSHOT.jar
RESULT:
Thus a new OGSA-compliant web service has been developed and executed successfully.
Expt No:
Date: DEVELOP SECURED APPLICATIONS USING BASIC SECURITY
MECHANISMS AVAILABLE IN THE GLOBUS TOOLKIT

AIM:
To develop secured applications using the basic security mechanisms available in the
Globus Toolkit.

PROCEDURES:
Step 1: Installing and setting up the Certificate Authority. Open a terminal, move to the
root user and give the commands:

root@linux:~# apt-get install

root@linux:~# sudo grid-ca-create -noint

Certificate Authority Setup

This script will setup a Certificate Authority for signing Globus users certificates. It
will also generate a simple CA package that can be distributed to the users of the CA.

The CA information about the certificates it distributes will be kept in:

/var/lib/globus/simple_ca

The unique subject name for this CA is:

cn=Globus Simple CA, ou=simpleCA-ubuntu, ou=GlobusTest, o=Grid


Insufficient permissions to install the CA into the trusted certificate directory
(tried ${sysconfdir}/grid-security/certificates and ${datadir}/certificates)
Creating RPM source tarball... done
globus_simple_ca_388f6778.tar.gz

Configure the subject name

The grid-ca-create program next prompts you for information about the name of the CA
you wish to create:
root@linux:~# sudo grid-ca-create

It will prompt for a few things at the command line; supply them:
i. Permission
ii. Unique subject name
iii. Mail id
iv. Expiration date
v. Password
Generating Debian Packages
Get into the default simple_ca path /var/lib/globus/simple_ca

Examining a Certificate Request

To examine a certificate request, use the command openssl req -text -in <request>.
Get into the path /etc/grid-security/:

root@linux:/etc/grid-security# openssl req -noout -text -in hostcert_request.pem
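
The hostcert_request.pem examined here is the kind of request produced by the Globus
grid-cert-request tool; a typical invocation to generate a host certificate request would
be (the hostname is illustrative):

root@linux:~# grid-cert-request -host linux.example.org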


Signing a Certificate Request:
root@linux:/var/lib/globus/simple_ca# grid-ca-sign -in certreq.pem -out
cert.pem

Revoking a Certificate
SimpleCA does not yet provide a convenient interface to revoke a signed certificate,
but it can be done with the openssl command.
root@linux:/var/lib/globus/simple_ca# openssl ca -config grid-ca-ssl.conf -revoke newcerts/01.pem
Using configuration from /home/simpleca/.globus/simpleCA/grid-ca-ssl.conf
Enter pass phrase for /home/simpleca/.globus/simpleCA/private/cakey.pem:
Revoking Certificate 01.
Data Base Updated

Renewing a CA

root@linux:/var/lib/globus/simple_ca# openssl req -key private/cakey.pem -new \
-x509 -days 1825 -out newca.pem -config grid-ca-ssl.conf
OUTPUT:
RESULT:
Thus secured applications using the basic security mechanisms available in the Globus
Toolkit have been developed successfully.
Expt No:
INSTALLATION OF OPENNEBULA
Date:
AIM:
To install OpenNebula on the Ubuntu operating system for creating virtualization in the
cloud.
PROCEDURE:

Step 1: Installation of OpenNebula in the Frontend

1.1. Install the repo

1. Open a terminal (Ctrl+Alt+T) or type "terminal" from the dashboard.

2. Here # indicates a command to be run as the root user and
$ indicates a command to be run as a normal user.

Add the OpenNebula repository (as root):

# wget -q -O- http://downloads.opennebula.org/repo/Ubuntu/repo.key | apt-key add -


# echo "deb http://downloads.opennebula.org/repo/4.12/Ubuntu/14.04/ stable opennebula" \
> /etc/apt/sources.list.d/opennebula.list

1.2. Install the required packages

# apt-get update
# apt-get install opennebula opennebula-sunstone nfs-kernel-server

1.3. Configure and Start the services

There are two main processes that must be started, the main OpenNebula daemon:
oned, and
the graphical user interface: sunstone.

Sunstone listens only on the loopback interface by default for security reasons. To
change this, edit

# gedit /etc/one/sunstone-server.conf
and change :host: 127.0.0.1 to :host: 0.0.0.0.

Now we must restart Sunstone:

# /etc/init.d/opennebula-sunstone restart

1.4. Configure SSH Public Key

OpenNebula needs passwordless SSH from any node (including the frontend)
to any other node.
To do so run the following commands:

# su - oneadmin
$ cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

Add the following snippet to ~/.ssh/config so it doesn’t prompt to add the keys
to the known_hosts file:

$ cat << EOT > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOT
$ chmod 600 ~/.ssh/config
Step 3. Basic Usage

The default password for the oneadmin user can be found in ~/.one/one_auth, which
is randomly generated on every installation.
$ nano ~/.one/one_auth
Open Mozilla Firefox and go to
localhost:9869

Enter Username: oneadmin
Password: from the ~/.one/one_auth file
RESULT :
Thus, OpenNebula has been installed successfully.
Expt No:
FIND THE PROCEDURE TO RUN VIRTUAL MACHINES OF DIFFERENT
Date: CONFIGURATIONS AND CHECK HOW MANY VIRTUAL MACHINES CAN BE
UTILIZED AT A PARTICULAR TIME

AIM:
To find the procedure to run virtual machines of different configurations and to
check how many virtual machines can be created.
root@linux:$ /etc/init.d/opennebula-sunstone restart

PROCEDURE:
Step 1: Check the processor virtualization – in boot settings.

Step 2: Execute all the commands as the root user if the command starts with #,
and as the oneadmin user if the command starts with $.
a. Moving to the root user in the terminal:
linux@linux:~$ sudo bash
[sudo] password for linux:
Enter the password.

Step 3: Checking the virtualization support in the terminal from the root user.

root@linux:~# grep -E 'svm|vmx' /proc/cpuinfo

Step 4: Checking the Kernel Virtual Machine location and availability

root@linux:~# ls -l /dev/kvm
crw-rw----+ 1 root kvm 10, 232 Jul 1 04:09 /dev/kvm

Step 5: Setting up the OpenNebula dependencies and packages. First check the
installed packages with the following command:

root@linux:~# dpkg -l opennebula-common ruby-opennebula opennebula
opennebula-node opennebula-sunstone opennebula-tools opennebula-gate
opennebula-flow libopennebula-java

Install the missing packages with the following commands in the terminal:
root@linux:~# sudo apt-get install opennebula-node
root@linux:~# sudo apt-get install opennebula-gate
root@linux:~# sudo apt-get install opennebula-flow
root@linux:~# sudo apt-get install libopennebula-java

Now check the dependencies again by giving the same command in the terminal;
all packages should show as installed.
root@linux:~# dpkg -l opennebula-common ruby-opennebula
opennebula opennebula-node opennebula-sunstone opennebula-tools
opennebula-gate opennebula-flow libopennebula-java

Step 6: Checking OpenNebula and the status of its services.

root@linux:~# service opennebula status    ## package name
* one is running
root@linux:~# service opennebula-sunstone status    ## web interface name: sunstone
* sunstone-server is running
root@linux:~# service nfs-kernel-server status
nfsd running
root@linux:~# service opennebula restart
* Restarting OpenNebula cloud one
oned and scheduler stopped
[OK]

root@linux:~# service opennebula-sunstone restart
* Restarting Sunstone Web interface sunstone-server
sunstone-server stopped
VNC proxy started
sunstone-server started [OK]
root@linux:~# service nfs-kernel-server restart
* Stopping NFS kernel daemon [OK]
* Unexporting directories for NFS kernel daemon... [OK]
* Exporting directories for NFS kernel daemon... [OK]
* Starting NFS kernel daemon

Step 7: Setting up the physical bridge interface (br0)

root@linux:~# ifconfig

Step 8: Changing the network interface and bridge configuration manually. The
network configuration in Ubuntu is stored under /etc/network/interfaces.
root@linux:~# gedit /etc/network/interfaces
It has only a few lines:
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

Now we have to edit the br0.cfg file manually, which is
located at /etc/network/interfaces.d/br0.cfg:
root@linux:~# gedit /etc/network/interfaces.d/br0.cfg
And paste the following lines there.

#auto br0
iface br0 inet static
address 192.168.0.10
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports em1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Now move the configuration file to the interfaces.

root@linux:~# cat /etc/network/interfaces.d/br0.cfg >> /etc/network/interfaces

Now, open and view the interfaces file.

root@linux:~# gedit /etc/network/interfaces
(or)
root@linux:~# cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
#auto br0
iface br0 inet static
address 192.168.0.10
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports em1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Step 9: Change the settings at /etc/network/interfaces as

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
auto br0
iface br0 inet static
address 192.168.0.28
network 192.168.0.0
netmask 255.255.255.0
broadcast 192.168.0.255
gateway 192.168.0.1
bridge_ports p5p1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

Step 10: Bring up the bridge changes.

## Bring up the interface configured for bridge br0 with ifup:

root@linux:~# ifup br0
Waiting for br0 to get ready (MAXWAIT is 20 seconds).

root@linux:~# ifconfig

Step 11: Moving to the oneadmin user and doing configuration changes.

# Open OpenNebula in the web interface
root@linux:~# su - oneadmin
oneadmin@linux:~$ pwd
/var/lib/one    ######## default home directory for OpenNebula
Step 12: Checking the default configurations of OpenNebula.

oneadmin@linux:~$ oneuser list

oneadmin@linux:~$ oneuser show 0
# 0 represents the id of the oneuser. Using the id we can do all activities such as
delete, show, status, etc.

oneadmin@linux:~$ onegroup list

oneadmin@linux:~$ onehost list

Step 13: Creating the KVM host entry for localhost. It helps to create an image and
a template and to instantiate the template.
oneadmin@linux:~$ onehost create localhost -i kvm -v kvm -n dummy
ID: 0

# -i: information driver, -v: virtualization driver, -n: network driver

oneadmin@linux:~$ onehost list

oneadmin@linux:~$ onehost show 0


oneadmin@linux:~$ ls -l
total 312
-rw-rw-r-- 1 oneadmin oneadmin   3339 Jul  1 05:04 config
drwxr-xr-x 5 oneadmin oneadmin   4096 Jul  1 04:12 datastores
-rw-r--r-- 1 dovenull nova         93 May 28 16:07 mynetwork.one
-rw-r--r-- 1 oneadmin oneadmin 287744 Jul  1 05:51 one.db
drwxr-xr-x 9 oneadmin oneadmin   4096 Jun 14 01:55 remotes
drwxrwxr-x 2 oneadmin oneadmin   4096 Jun 14 16:52 sunstone_vnc_tokens
drwxr-xr-x 2 oneadmin oneadmin   4096 Nov 26  2015 vms
Step 14: Creating and modifying mynetwork.one at /var/lib/one/mynetwork.one

oneadmin@linux:~$ sudo bash


root@linux:~# cd /var/lib/one
root@linux:/var/lib/one# gedit mynetwork.one # paste the below lines.
NAME = "private"
BRIDGE = br0

AR=[
TYPE = IP4,
IP = 192.168.0.141,
SIZE = 5
]
root@linux:/var/lib/one# su - oneadmin
oneadmin@linux:~$ cat mynetwork.one
NAME = "private"
BRIDGE = br0

AR=[
TYPE = IP4,
IP = 192.168.0.141,
SIZE = 5
]

Step 15: Creating the virtual network (onevnet) and viewing its properties.

oneadmin@linux:~$ onevnet create mynetwork.one
oneadmin@linux:~$ onevnet list
oneadmin@linux:~$ onevnet show 0
Step 16: Installing virtual machines; before that, check the lists of oneimage,
onetemplate and onevm.

oneadmin@linux:~$ oneimage list
ID USER GROUP NAME DATASTORE SIZE TYPE PER STAT RVMS

oneadmin@linux:~$ onetemplate list

oneadmin@linux:~$ onevm list

Step 17: Updating .ssh for passwordless handshaking with the oneadmin web
service.
oneadmin@linux:~$ cat ~/.ssh/id_rsa.pub
## You can see the key; copy it fully and paste it in by visiting localhost:9869/.
Click oneadmin, choose Configuration, deselect the public SSH key field and paste it there.
Step 18: Creating oneimage, onetemplate and onevm.
Move to the datastores folder.

oneadmin@linux:~$ cd datastores
oneadmin@linux:~/datastores$

Creating oneimage
oneadmin@linux:~/datastores$ oneimage create --name "ttylinux" --path
"/home/linux/Downloads/source/ttylinux.img" --driver raw --datastore default

Creating One Template:

oneadmin@linux:~/datastores$ onetemplate create --name "ttylinux" --cpu 1
--vcpu 1 --memory 512 --arch x86_64 --disk "ttylinux" --nic "private" --vnc --ssh

Instantiating OneVM (onetemplate):

oneadmin@linux:~/datastores$ onetemplate instantiate "ttylinux"
The image above shows the state before creating the VM. Refresh and check once the above
commands are executed.

Step 19: Opening the VM through OpenNebula.

Click the computer symbol icon in the corner; it will ask for the username and password. By
default the username is root and the password is password.
Through the terminal you can access the VM with
oneadmin@linux:~/datastores$ ssh root@192.168.0.141
and give the password.

Step 20: Similarly you can create as many VMs as your machine supports, but only 5 VMs
can be accessed at a time since we limited our IP range to 5 in mynetwork.one.
You can install Ubuntu, CentOS, etc.

Change the highlighted values in the below command to install VMs of various sizes.

Creating One Template:

oneadmin@linux:~/datastores$ onetemplate create --name "ttylinux" --cpu 1 --vcpu 1
--memory 512 --arch x86_64 --disk "ttylinux" --nic "private" --vnc --ssh

Instantiating OneVM (onetemplate):


oneadmin@linux:~/datastores$ onetemplate instantiate "ttylinux"
RESULT:
Thus the procedure for running virtual machines of different configurations
has been successfully implemented in OpenNebula.
Exercise 3: Find the procedure to attach a virtual block to the virtual machine and check
whether it holds the data even after the release of the virtual machine.

Aim:
To find the procedure for attaching a virtual block to the virtual machine and to check whether it
holds the data even after the release of the virtual machine.

Procedure:
Step 1: Create the Oneimage, onetemplate and onevm through the commands

Creating oneimage
oneadmin@linux:~/datastores$ oneimage create --name "ttylinux" --path
"/home/linux/Downloads/source/ttylinux.img" --driver raw --datastore default

Creating One Template:

oneadmin@linux:~/datastores$ onetemplate create --name "ttylinux" --cpu 1 --vcpu 1
--memory 512 --arch x86_64 --disk "ttylinux" --nic "private" --vnc --ssh

Instantiating OneVM (onetemplate):


oneadmin@linux:~/datastores$ onetemplate instantiate "ttylinux"

Creating oneimage
oneadmin@linux:~/datastores$ oneimage create --name "Ubuntu" --path
"/home/linux/Downloads/source/tubuntu1404-5.0.1.qcow2c" --driver qcow2
--datastore default

Creating One Template:

oneadmin@linux:~/datastores$ onetemplate create --name "ubuntu1" --cpu 1 --vcpu 1
--memory 1024 --arch x86_64 --disk "Ubuntu" --nic "private" --vnc --ssh

Instantiating OneVM (onetemplate):


oneadmin@linux:~/datastores$ onetemplate instantiate "ubuntu1"

Step 2: Power off the virtual OS


oneadmin@ubuntu:~/datastores$ onevm poweroff 1
oneadmin@ubuntu:~/datastores$ onevm poweroff 2
oneadmin@ubuntu:~/datastores$ onevm list
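
The aim of this exercise is the attach itself, which the commands above do not show. A
minimal sketch using the standard OpenNebula CLI would be as follows (the image name
"datablock" and the size are illustrative; the image is made persistent so its contents
survive the VM):

oneadmin@ubuntu:~/datastores$ oneimage create --name "datablock" --type DATABLOCK --size 1024 --persistent --datastore default
oneadmin@ubuntu:~/datastores$ onevm disk-attach 1 --image "datablock"

After writing data to the new disk inside the VM, terminate the VM and attach the same
image to another VM; because the image is persistent, the data is retained.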

Step 3: Starting service


oneadmin@ubuntu:~/datastores$ onevm resume 0

Step 4: Deleting created VM

After power off do the operations


oneadmin@ubuntu:~/datastores$ onevm delete 1
oneadmin@ubuntu:~/datastores$ onevm list
oneadmin@ubuntu:~/datastores$ onevm delete 2
oneadmin@ubuntu:~/datastores$ onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin CentOS-6.5 Virt runn 0.0 512M localhost 0d 01h43
3 oneadmin oneadmin ttylinux Virtua runn 9.5 256M localhost 0d 00h05

Step 5: Deleting image, template and vm


Rollback the operations
For deleting the template
oneadmin@ubuntu:~/datastores$ onetemplate delete 1

For deleting the image


oneadmin@ubuntu:~/datastores$ oneimage delete 1

Step 6: Deploying, undeploying, disabling and enabling the services of onehost.

oneadmin@ubuntu:~/datastores$ onevm undeploy 0,3,4,5

oneadmin@ubuntu:~/datastores$ onevm list


ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin CentOS-6.5 Virt unde 0.0 0K 0d 02h11
3 oneadmin oneadmin ttylinux Virtua shut 11.0 256M localhost 0d 00h34
4 oneadmin oneadmin Debian 7 Virtua unde 0.0 0K 0d 00h23
5 oneadmin oneadmin Ubuntu 14.04 Vi unde 0.0 0K 0d 00h21

oneadmin@ubuntu:~/datastores$ onehost list


ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 localhost - 0 0 / 400 (0%) 0K / 3.7G (0%) on

oneadmin@ubuntu:~/datastores$ onehost disable 0


oneadmin@ubuntu:~/datastores$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 localhost - 0 - - off

oneadmin@ubuntu:~/datastores$ onehost enable 0


oneadmin@ubuntu:~/datastores$ onehost list
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT
0 localhost - 0 0 / 400 (0%) 0K / 3.7G (0%) on

Step 7: Password generation for the root user (Ubuntu)

For password generation through the root user:
[root@localhost ~]# ls -al
[root@localhost ~]# cd .ssh/

[root@localhost .ssh]# cat authorized_keys


ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQCpZ7VExltM+8w36OsQZdzBsINiIRTBqU6934vS2wI
RZvhjzT4RO6QS314gG3K0ghFk4cVAlS8ykMMjqW11G0LtIIqMYaKUYOG4oWfiB2hkeQoGGJCP
hjMzsz3RKXkOsn/bzgo2iYXldiCTVLaj5d+c8ZxXHIErCK0K3AM2JYoeN/iR88nP6h8vCdJwaahp
cysggpKyHTAsJ+TBaXFl3TGhVH9W0AAw6qM/OA2+FNKqCnR+b57KI7fXeBBVc/MckJfjI5PQX
m+ZDrKa2LtFV9L5f71VvOmc8YWIBmDfZ2Bx/FcHuCEphq7Sh8WLNrLuqNW+Kf9lRcr33DBY
IROm9w2B root@gcc-server
Result:
Thus the procedure to attach a virtual block to the virtual machine and check whether it holds
the data even after the release of the virtual machine has been successfully implemented.
Exercise 4: Install a C compiler in the virtual machine and execute a sample program.

Aim :
To install a C compiler in the virtual machine and execute a sample program.

Procedure:
Step 1:
Install CentOS or Ubuntu in OpenNebula as per the previous commands.

Step 2:
Login into the VM of installed OS.

Step 3:
If it is Ubuntu then, for GCC installation:
$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-6 gcc-6-base

Step 4:
Write a sample program like
welcome.cpp
#include <iostream>
using namespace std;
int main()
{
    cout << "Hello world";
    return 0;
}

Step 5:
First we need to compile and link our program. Assuming the source code is saved in a file
welcome.cpp, we can do that using the GNU C++ compiler g++, for example:
g++ -Wall -o welcome welcome.cpp

And output can be executed by ./welcome
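
Note that the sample above is C++ compiled with g++. Since the exercise installs GCC (a C
compiler), an equivalent plain C program can be used as well (a minimal sketch; the file
name hello.c is illustrative):

/* hello.c - verify the gcc installation */
#include <stdio.h>

int main(void)
{
    printf("Hello world\n");
    return 0;
}

Compile and run it with gcc-6 -Wall -o hello hello.c followed by ./hello (or plain gcc if
that is the default compiler on the VM).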

Result:
Thus the GCC compiler has been successfully installed and a sample program executed.
EXERCISE 5: Show the virtual machine migration based on a certain condition from
one node to the other

Aim:
To show the virtual machine migration based on a certain condition from one node
to the other.

Procedure:
Step 1: Open Opennebula service from root user and view in localhost:9869
root@linux:$ /etc/init.d/opennebula-sunstone restart

Step 2: Create oneimage, onetemplate and onevm as like earlier


Creating oneimage
oneadmin@linux:~/datastores$ oneimage create --name "Ubuntu" --path
"/home/linux/Downloads/source/tubuntu1404-5.0.1.qcow2c" --driver qcow2 --datastore
default

Creating One Template:

oneadmin@linux:~/datastores$ onetemplate create --name "ubuntu1" --cpu 1 --vcpu 1
--memory 1024 --arch x86_64 --disk "Ubuntu" --nic "private" --vnc --ssh

Instantiating OneVM (onetemplate):


oneadmin@linux:~/datastores$ onetemplate instantiate "ubuntu1"
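
Live migration requires a second host to be registered with the frontend. Assuming a second
node named host02 that is reachable over passwordless SSH (the hostname is illustrative),
it is added the same way localhost was registered earlier:

oneadmin@linux:~/datastores$ onehost create host02 -i kvm -v kvm -n dummy
ID: 1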

Step 3: To perform a migration we use the onevm command, moving the VM with
id VID=0 to host02 (HID=1):

oneadmin@linux:~/datastores$ onevm migrate --live 0 1

This will move the VM from host01 to host02. The onevm list shows something like the
following

oneadmin@linux:~/datastores$ onevm list


ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin oneadmin one-0 runn 0 0k host02 00:00:48

Result :
Thus the virtual machine migration based on a certain condition
from one node to the other has been executed successfully.
HADOOP INSTALLATION

Exercise 6. Find the procedure to set up a one-node Hadoop cluster.

Aim:
To find the procedure for setting up a one-node Hadoop cluster on the Linux platform.

Procedures:
Step 1:
Download the latest Sun Java and Apache Hadoop from the official websites.
Step 2:
To install Java and Hadoop follow the below lines.

######## 1. Install Java #############################################

a. Extract the downloaded Java tar.gz file in the Downloads / Documents folder.
b. Open a terminal by pressing Ctrl+Alt+T.
c. In the terminal, type

$ gedit ~/.bashrc

d. At the bottom paste the following lines by changing the path alone

#--insert JAVA_HOME
JAVA_HOME=/opt/jdk1.8.0_05
#--in PATH variable just append at the end of the line
PATH=$PATH:$JAVA_HOME/bin
#--Append JAVA_HOME at end of the export statement
export PATH JAVA_HOME

e. Save the configuration by giving the command

$ source ~/.bashrc
f. Check that Java has been successfully installed by typing
$ java -version
Step 3. Install SSH for passwordless authentication
For passwordless authentication we need to make certain changes by following the
procedure below; an internet connection is needed.
In the terminal, copy and paste the lines below:
$ sudo apt-get update
### It will ask for your root password. Give it.

$ sudo apt-get install openssh-server

$ ssh localhost
### It will also ask for the root password.
$ ssh-keygen    (don't mention any path during key generation)

$ ssh-copy-id -i localhost

Step 4. Installation procedure for Hadoop

As with Java, extract the Hadoop tar.gz file and make the changes in the bashrc file by
copying and pasting the following lines.
a. Extract Hadoop in the same folder as Java (Downloads or Documents).
b. $ gedit ~/.bashrc
Paste the following lines below the Java path (change the path):

#--insert HADOOP_PREFIX
HADOOP_PREFIX=/opt/hadoop-2.7.0
#--in PATH variable just append at the end of the line
PATH=$PATH:$HADOOP_PREFIX/bin
#--Append HADOOP_PREFIX at end of the export statement
export PATH JAVA_HOME HADOOP_PREFIX

c. Save it by typing the below command in the terminal:

$ source ~/.bashrc
d. To check the installed path of Hadoop, type the command
$ echo $HADOOP_PREFIX
e. The command to get into the hadoop directory is
$ cd $HADOOP_PREFIX
f. To check the installed Hadoop version:
$ bin/hadoop version

Step 5. Modifying the Hadoop configuration files

Do the following in the terminal as before:
(i) $ cd $HADOOP_PREFIX/etc/hadoop
$ gedit hadoop-env.sh
(paste the Java and Hadoop paths as the first two lines)
export JAVA_HOME=/opt/jdk1.8.0_05
export HADOOP_PREFIX=/opt/hadoop-2.7.0
(ii) Modify the core-site.xml
$ gedit core-site.xml
Paste the line within <configuration></configuration>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

(iii) Modify the hdfs-site.xml


$ gedit hdfs-site.xml
Paste the configuration file
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

(iv) Modify the mapred-site.xml

$ cp mapred-site.xml.template mapred-site.xml
$ gedit mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

(v) Modify yarn-site.xml

$ gedit yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

Step 7. Formatting the HDFS file-system via the NameNode


$ cd $HADOOP_PREFIX
$ bin/hadoop namenode -format
After formatting, start the services.

Step 8. Starting the services.

$ sbin/start-dfs.sh    ####### it will start the services, taking some time, and it will
ask for permission; give yes.
$ sbin/start-yarn.sh
(or)
$ sbin/start-all.sh

To check the running services:

$ jps
3200 DataNode
12563 Jps
4036 ResourceManager
4172 NodeManager
5158 NameNode
3685 SecondaryNameNode

Step 9. Stopping services

$ sbin/stop-dfs.sh
$ sbin/stop-yarn.sh

(or)
$ sbin/stop-all.sh

If you start the services again after stopping them, only 4 services may show:
$ jps
12563 Jps
4036 ResourceManager
4172 NodeManager
3685 SecondaryNameNode

Step 10. If only four services run, then to start the DataNode and NameNode we have to add
some lines in hdfs-site.xml.
In the terminal:
$ cd $HADOOP_PREFIX/etc/hadoop
$ gedit hdfs-site.xml    (paste the below lines)

<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/name</value>
</property>

<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/data</value>
</property>

Next do these procedures for creating permanent log storage for the
NameNode and DataNode:
$ sudo mkdir /opt/name
$ sudo mkdir /opt/data
$ ls /opt/
$ ls -l /opt/
## To change the directory owner from the root user to the admin user:
$ sudo chown vrscet:vrscet -R /opt    (vrscet should be replaced by your system username)
$ ls -l /opt/
Step 11. Format the NameNode
$ cd $HADOOP_PREFIX
$ bin/hadoop namenode -format

Step 12. Start services

$ sbin/start-dfs.sh
$ sbin/start-yarn.sh
$ jps
3200 DataNode
12563 Jps
4036 ResourceManager
4172 NodeManager
5158 NameNode
3685 SecondaryNameNode

Step 13. To view in Browser (Open Firefox and enter the below address)
localhost:50070
localhost:8088
Result:
Thus a single-node Hadoop cluster has been successfully created.
Exercise 7. Write a program to use the APIs of Hadoop to interact with it.
Aim:
To write a program that uses the Hadoop APIs to interact with it.

Procedure:
Step 1:
Start the hadoop services by giving the following command in terminal
$ sbin/start-all.sh
$ jps
Step 2:
Open web browser and open
localhost:50070
localhost:8088
Step 3:
Creating a folder in the web interface (HDFS) from the terminal.
$ bin/hadoop fs -mkdir /bala
Wait until the command executes.

Step 4:
Open localhost:50070
Utilities --> Browse the file system.
A folder has been created with the name we gave in the terminal.

bin/hadoop ----> location of the hadoop binary
fs ----> file system shell
-mkdir ----> create a folder
/ ----> root in HDFS
bala ----> folder name

Step 5: Loading the data into the folder we created in HDFS

$ bin/hadoop fs -copyFromLocal /home/bala/Pictures /bala2

Open the web browser and, under Utilities, browse the file system and check whether
the content has been moved.
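
The steps above drive HDFS through the shell; the same operations can be performed
programmatically with the org.apache.hadoop.fs.FileSystem API, which is the intent of this
exercise. A minimal sketch (the class name HdfsClient is illustrative; the paths follow the
examples above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode address from core-site.xml (fs.defaultFS)
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        // Equivalent of: bin/hadoop fs -mkdir /bala
        fs.mkdirs(new Path("/bala"));

        // Equivalent of: bin/hadoop fs -copyFromLocal /home/bala/Pictures /bala2
        fs.copyFromLocalFile(new Path("/home/bala/Pictures"), new Path("/bala2"));

        // List the root of HDFS to confirm
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}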
Step 6: Space taken by the DataNode
Result:
Thus an API program has been developed for creating a folder and copying files into it.
Exercise 8. Write a wordcount program to demonstrate the use of Map and Reduce tasks
Aim:
To write a wordcount program demonstrating the use of map and reduce tasks.

Procedure:
Step 1:
Write a map reduce program on java
Step 2:
Create a folder in HDFS by using the command $
bin/hadoop fs -mkdir /bala2
Step 3:
Move a number of text files into HDFS:
$ bin/hadoop fs -copyFromLocal /home/bala/Downloads/data /bala
Step 4:
Use the hadoop-mapreduce-examples-2.7.0.jar which is already available in Hadoop.

Step 5:
Run the Mapreduce program by the following command.
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-
examples-2.7.0.jar wordcount /bala/data /output

Input file
Loaded file

Word Count Program:


package org.myorg;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

        // Note: the new (mapreduce) API passes an Iterable, not an Iterator;
        // with an Iterator parameter this method would never be called.
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        Job job = new Job(conf, "wordcount");
        job.setJarByClass(WordCount.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
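
To run this custom class instead of the bundled examples jar, it must first be compiled and
packaged. A typical sequence on the cluster node (a sketch; it assumes JAVA_HOME is set, and
the build directory and jar name are illustrative):

$ export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
$ mkdir build
$ bin/hadoop com.sun.tools.javac.Main -d build WordCount.java
$ jar cf wc.jar -C build .
$ bin/hadoop jar wc.jar org.myorg.WordCount /bala/data /output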
Executing the MapReduce program

Open localhost ---> output folder ---> open


Output :

Result:
Thus the wordcount program has been executed successfully.
Exercise 9: Mount the one node Hadoop cluster using FUSE
AIM
To mount the one node Hadoop cluster using FUSE and access files on HDFS in the
same way as we do on Linux operating systems.

PROCEDURE

FUSE (Filesystem in Userspace) enables you to write a normal user application as a bridge
for a traditional filesystem interface.

The hadoop-hdfs-fuse package enables you to use your HDFS cluster as if it were a traditional
filesystem on Linux. It is assumed that you have a working HDFS cluster and know the
hostname and port that your NameNode exposes.

To install fuse-dfs on Ubuntu systems:

hdpuser@jiju-PC:~$ wget http://archive.cloudera.com/cdh5/one-click-


install/trusty/amd64/cdh5-repository_1.0_all.deb
--2016-07-24 09:10:33-- http://archive.cloudera.com/cdh5/one-click-
install/trusty/amd64/cdh5-repository_1.0_all.deb
Resolving archive.cloudera.com (archive.cloudera.com)... 151.101.8.167
Connecting to archive.cloudera.com (archive.cloudera.com)|151.101.8.167|:80... connected.
HTTP request sent, awaiting response... 200 OK Length: 3508 (3.4K) [application/x-debian-
package]
Saving to: ‘cdh5-repository_1.0_all.deb’

100%[======================================>] 3,508 --.-K/s in 0.09s


2016-07-24 09:10:34 (37.4 KB/s) - ‘cdh5-repository_1.0_all.deb’ saved [3508/3508]
hdpuser@jiju-PC:~$ sudo dpkg -i cdh5-repository_1.0_all.deb
Selecting previously unselected package cdh5-repository.
(Reading database ... 170607 files and directories currently installed.)
Preparing to unpack cdh5-repository_1.0_all.deb ...
Unpacking cdh5-repository (1.0) ...
Setting up cdh5-repository (1.0) ...
gpg: keyring `/etc/apt/secring.gpg' created
gpg: keyring `/etc/apt/trusted.gpg.d/cloudera-cdh5.gpg' created
gpg: key 02A818DD: public key "Cloudera Apt Repository" imported
gpg: Total number processed: 1
gpg: imported: 1

hdpuser@jiju-PC:~$ sudo apt-get update

hdpuser@jiju-PC:~$ sudo apt-get install hadoop-hdfs-fuse


Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed: avro-libs bigtop-jsvc bigtop-
utils curl hadoop hadoop-0.20-mapreduce hadoop-client hadoop-hdfs
hadoop-mapreduce hadoop-yarn libcurl3 libhdfs0 parquet parquet-format
zookeeper
The following NEW packages will be installed: avro-libs bigtop-jsvc bigtop-
utils curl hadoop hadoop-0.20-mapreduce hadoop-client hadoop-hdfs hadoop-
hdfs-fuse hadoop-mapreduce hadoop-yarn libhdfs0 parquet parquet-format
zookeeper The following packages will be upgraded:

libcurl3
1 upgraded, 15 newly installed, 0 to remove and 702 not upgraded.
Need to get 222 MB of archives.
After this operation, 267 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://in.archive.ubuntu.com/ubuntu/ trusty-updates/main libcurl3 amd64 7.35.0-
1ubuntu2.7 [173 kB]
Get:2 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
avro-libs all 1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0 [47.0 MB]
Get:3 http://in.archive.ubuntu.com/ubuntu/ trusty-updates/main curl amd64 7.35.0-
1ubuntu2.7 [123 kB]
Get:4 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
parquet-format all 2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0 [479 kB]
Get:5 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
parquet all 1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0 [27.1 MB]
Get:6 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
hadoop all 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [28.2 MB]
Get:7 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
libhdfs0 amd64 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [320 kB]
Get:8 https://archive.cloudera.com/cdh5/ubuntu/trusty/amd64/cdh/ trusty-cdh5/contrib
hadoop-hdfs-fuse amd64 2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0 [317 kB]
Fetched 222 MB in 3min 28s (1,064 kB/s)
(Reading database ... 170612 files and directories currently installed.)
Preparing to unpack .../libcurl3_7.35.0-1ubuntu2.7_amd64.deb ...
Unpacking libcurl3:amd64 (7.35.0-1ubuntu2.7) over (7.35.0-1ubuntu2) ...
Selecting previously unselected package curl.
Preparing to unpack .../curl_7.35.0-1ubuntu2.7_amd64.deb ...
Unpacking curl (7.35.0-1ubuntu2.7) ...
Selecting previously unselected package avro-libs.
Preparing to unpack .../avro-libs_1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-
cdh5.8.0_all.deb ...
Unpacking avro-libs (1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Selecting previously unselected package bigtop-utils.
Preparing to unpack .../bigtop-utils_0.7.0+cdh5.8.0+0-
1.cdh5.8.0.p0.72~trusty-cdh5.8.0_all.deb ...
Unpacking bigtop-utils (0.7.0+cdh5.8.0+0-1.cdh5.8.0.p0.72~trusty-cdh5.8.0) ...
Selecting previously unselected package bigtop-jsvc.
Preparing to unpack .../bigtop-jsvc_0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-
cdh5.8.0_amd64.deb ...
Unpacking bigtop-jsvc (0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Selecting previously unselected package zookeeper.
Preparing to unpack .../zookeeper_3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-
cdh5.8.0_all.deb ...
Unpacking zookeeper (3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-cdh5.8.0) ...
Selecting previously unselected package parquet-format.
Preparing to unpack .../parquet-format_2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-
cdh5.8.0_all.deb ...
Unpacking parquet-format (2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-yarn.
Preparing to unpack .../hadoop-yarn_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-yarn (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-mapreduce.
Preparing to unpack .../hadoop-mapreduce_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-hdfs.
Preparing to unpack .../hadoop-hdfs_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-hdfs (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-0.20-mapreduce.
Preparing to unpack .../hadoop-0.20-mapreduce_2.6.0+cdh5.8.0+1601-
1.cdh5.8.0.p0.93~trusty-cdh5.8.0_amd64.deb ...
Unpacking hadoop-0.20-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0)
...
Selecting previously unselected package hadoop-client.
Preparing to unpack .../hadoop-client_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop-client (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package parquet.
Preparing to unpack .../parquet_1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0_all.deb
...
Unpacking parquet (1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop.
Preparing to unpack .../hadoop_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_all.deb ...
Unpacking hadoop (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package libhdfs0.
Preparing to unpack .../libhdfs0_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_amd64.deb ...
Unpacking libhdfs0 (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Selecting previously unselected package hadoop-hdfs-fuse.
Preparing to unpack .../hadoop-hdfs-fuse_2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-
cdh5.8.0_amd64.deb ...
Unpacking hadoop-hdfs-fuse (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Processing triggers for man-db (2.6.7.1-1) ...
Setting up libcurl3:amd64 (7.35.0-1ubuntu2.7) ...
Setting up curl (7.35.0-1ubuntu2.7) ...
Setting up avro-libs (1.7.6+cdh5.8.0+112-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Setting up bigtop-utils (0.7.0+cdh5.8.0+0-1.cdh5.8.0.p0.72~trusty-cdh5.8.0) ...
Setting up bigtop-jsvc (0.6.0+cdh5.8.0+847-1.cdh5.8.0.p0.74~trusty-cdh5.8.0) ...
Setting up zookeeper (3.4.5+cdh5.8.0+94-1.cdh5.8.0.p0.76~trusty-cdh5.8.0) ...
update-alternatives: using /etc/zookeeper/conf.dist to provide /etc/zookeeper/conf
(zookeeper-conf) in auto mode
Setting up parquet-format (2.1.0+cdh5.8.0+12-1.cdh5.8.0.p0.70~trusty-cdh5.8.0) ...
Setting up parquet (1.5.0+cdh5.8.0+174-1.cdh5.8.0.p0.71~trusty-cdh5.8.0) ...
Setting up hadoop (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
update-alternatives: using /etc/hadoop/conf.empty to provide /etc/hadoop/conf (hadoop-
conf) in auto mode
Setting up hadoop-yarn (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up libhdfs0 (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-hdfs (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-0.20-mapreduce (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0)
...
Setting up hadoop-client (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Setting up hadoop-hdfs-fuse (2.6.0+cdh5.8.0+1601-1.cdh5.8.0.p0.93~trusty-cdh5.8.0) ...
Processing triggers for libc-bin (2.19-0ubuntu6) ...

hdpuser@jiju-PC:~$ sudo mkdir -p /home/hdpuser/hdfs


[sudo] password for hdpuser:

hdpuser@jiju-PC:~$ sudo hadoop-fuse-dfs dfs://localhost:54310 /home/hdpuser/hdfs/

INFO /data/jenkins/workspace/generic-package-ubuntu64-14-04/CDH5.8.0-Packaging-
Hadoop-2016-07-12_15-43-10/hadoop-2.6.0+cdh5.8.0+1601-
1.cdh5.8.0.p0.93~trusty/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-
dfs/fuse_options.c:164 Adding FUSE arg /home/hdpuser/hdfs/

hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/

hdpuser@jiju-PC:~$ mkdir /home/hdpuser/hdfs/new

hdpuser@jiju-PC:~$ ls /home/hdpuser/hdfs/
new

hdpuser@jiju-PC:~$ mkdir /home/hdpuser/hdfs/example


hdpuser@jiju-PC:~$ ls -l /home/hdpuser/hdfs/
total 8
drwxr-xr-x 2 hdpuser 99 4096 Jul 24 15:28 example
drwxr-xr-x 2 hdpuser 99 4096 Jul 24 15:19 new

To unmount the file system

Using the umount command the filesystem can be unmounted.

hdpuser@jiju-PC:~$ sudo umount /home/hdpuser/hdfs

NOTE: You can now add a permanent HDFS mount which persists through reboots.

To add a system mount:

Open /etc/fstab and add lines to the bottom similar to these: (sudo vi /etc/fstab)

hadoop-fuse-dfs#dfs://<name_node_hostname>:<namenode_port> <mount_point>
fuse allow_other,usetrash,rw 2 0

For example:

hadoop-fuse-dfs#dfs://localhost:54310 /home/hdpuser/hdfs
fuse allow_other,usetrash,rw 2 0

Test to make sure everything is working properly:

$ mount <mount_point>

hdpuser@jiju-PC:~$ sudo mount /home/hdpuser/hdfs

Result:
Thus fuse has been installed successfully.
Installing Eclipse
Extract Eclipse to /opt/ for global use
Press Ctrl+Alt+T on the keyboard to open the terminal. When it opens, run the command below
to extract Eclipse to /opt/:
cd /opt/ && sudo tar -zxvf ~/Downloads/eclipse-*.tar.gz
You may replace "eclipse-*.tar.gz" (without quotes) with the exact package name if the command
does not work.
Don't like Linux commands? You can do this by opening the Nautilus file browser as root: press
Alt+F2 -> run gksudo nautilus.
Once done, you should see the eclipse folder under the /opt/ directory.
Create a launcher shortcut for Eclipse
Press Ctrl+Alt+T, paste below command into the terminal and hit enter (install gksu from
Software Center if below command does not work).
gksudo gedit /usr/share/applications/eclipse.desktop
Above command will create and open the launcher file for eclipse with gedit text editor.
Paste below content into the opened file and save it.
[Desktop Entry]
Name=Eclipse 4
Type=Application
Exec=/opt/eclipse/eclipse
Terminal=false
Icon=/opt/eclipse/icon.xpm
Comment=Integrated Development Environment
NoDisplay=false
Categories=Development;IDE;
Name[en]=Eclipse
Finally open Eclipse from Unity Dash search results and enjoy!

3.2. Setting up security on your first machine


The Globus Toolkit uses X.509 certificates and proxy certificates to authenticate and authorize
grid users. For this quickstart, we use the Globus SimpleCA tools to manage our own
Certificate Authority, so that we don't need to rely on any external entity to authorize our
grid users.
When the globus-simple-ca package is installed, it will automatically create a new Certificate
Authority and deploy its public certificate into the globus trusted certificate directory. It will
also create a host certificate and key, so that the Globus services will be able to run.

We’ll also need to copy the host certificate and key into place so that the myproxy service can
use it as well.
root# install -o myproxy -m 644 \
/etc/grid-security/hostcert.pem \
/etc/grid-security/myproxy/hostcert.pem
root# install -o myproxy -m 600 \
/etc/grid-security/hostkey.pem \
/etc/grid-security/myproxy/hostkey.pem

3.3. Creating a MyProxy Server


We are going to create a MyProxy server on elephant, following the instructions at
http://grid.ncsa.illinois.edu/myproxy/fromscratch.html#server. This will be used to store our
user's certificates. In order to enable myproxy to use the SimpleCA, modify the /etc/myproxy-
server.config file by uncommenting every line in the section Complete Sample Policy #1,
so that the section looks like this myproxy configuration:
#
# Complete Sample Policy #1 - Credential Repository
#
# The following lines define a sample policy that enables all
# myproxy-server credential repository features.
# See below for more examples.
accepted_credentials "*"
authorized_retrievers "*"
default_retrievers "*"
authorized_renewers "*"
default_renewers "none"
authorized_key_retrievers "*"
default_key_retrievers "none"
trusted_retrievers "*"
default_trusted_retrievers "none"
cert_dir /etc/grid-security/certificates

We’ll next add the myproxy user to the simpleca group so that the myproxy server can create
certificates.
root# usermod -a -G simpleca myproxy

Start the myproxy server:


root# service myproxy-server start
Starting myproxy-server (via systemctl):    [ OK ]

Check that it is running:


root# service myproxy-server status
myproxy-server.service - LSB: Startup the MyProxy server daemon
Loaded: loaded (/etc/rc.d/init.d/myproxy-server)
Active: active (running) since Fri, 02 Nov 2012 09:07:51 -0400; 1min 20s ago
Process: 1205 ExecStart=/etc/rc.d/init.d/myproxy-server start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/myproxy-server.service
└ 1214 /usr/sbin/myproxy-server -s /var/lib/myproxy

Nov 02 09:07:51 elephant.globus.org runuser[1210]: pam_unix(runuser:session):...


Nov 02 09:07:51 elephant.globus.org myproxy-server[1212]: myproxy-server v5.9...
Nov 02 09:07:51 elephant.globus.org myproxy-server[1212]: reading configurati...
Nov 02 09:07:51 elephant.globus.org myproxy-server[1212]: usage_stats: initia...
Nov 02 09:07:51 elephant.globus.org myproxy-server[1212]: Socket bound to 0.0...
Nov 02 09:07:51 elephant.globus.org myproxy-server[1212]: Starting myproxy-se...
Nov 02 09:07:51 elephant.globus.org runuser[1210]: pam_unix(runuser:session):...
Nov 02 09:07:51 elephant.globus.org myproxy-server[1205]: Starting myproxy-se...

The important thing to see in the above is that the process is in the active (running) state.
[NOTE]
For other Linux distributions which are not using systemd, the output will be different. You
should still see some information indicating the service is running.
As a final sanity check, we’ll make sure the myproxy TCP port 7512 is in use via the netstat
command:
root# netstat -an | grep 7512
tcp 0 0 0.0.0.0:7512 0.0.0.0:* LISTEN

3.3.1. User Credentials


We’ll need to specify a full name and a login name for the user we’ll create credentials for.
We’ll be using the QuickStart User as the user’s name and quser as user’s account name. You
can use this as well if you first create a quser unix account. Otherwise, you can use another
local user account. Run the myproxy-admin-adduser command as the myproxy user to create
the credentials. You’ll be prompted for a passphrase, which must be at least 6 characters long,
to encrypt the private key for the user. You must communicate this passphrase to the user
who will be accessing this credential. He can use the myproxy-change-passphrase command
to change the passphrase.

The command to create the myproxy credential for the user is


root# su - -s /bin/sh myproxy
myproxy% PATH=$PATH:/usr/sbin
myproxy% myproxy-admin-adduser -c "QuickStart User" -l quser
Legacy library getopts.pl will be removed from the Perl core distribution in the next major release. Please install
it from the CPAN distribution Perl4::CoreLibs. It is being used at /usr/sbin/myproxy-admin-adduser, line 42.
Enter PEM pass phrase: ******
Verifying - Enter PEM pass phrase:******

The new signed certificate is at: /var/lib/globus/simple_ca/newcerts/02.pem

using storage directory /var/lib/myproxy


Credential stored successfully
Certificate subject is:
/O=Grid/OU=GlobusTest/OU=simpleCA-elephant.globus.org/OU=local/CN=QuickStart User

3.3.2. User Authorization


Finally, we’ll create a grid map file entry for this credential, so that the holder of that
credential can use it to access globus services. We’ll use the grid-mapfile-add-entry program
for this. We need to use the exact string from the output above as the parameter to the -dn
command-line option, and the local account name of user to authorize as the parameter to
the -ln command-line option.
root# grid-mapfile-add-entry -dn \
"/O=Grid/OU=GlobusTest/OU=simpleCA-elephant.globus.org/OU=local/CN=QuickStart User" \
-ln quser
Modifying /etc/grid-security/grid-mapfile ...
/etc/grid-security/grid-mapfile does not exist... Attempting to create /etc/grid-security/grid-mapfile
New entry:
"/O=Grid/OU=GlobusTest/OU=simpleCA-elephant.globus.org/OU=local/CN=QuickStart User" quser
(1) entry added

3.4. Setting up GridFTP


Now that we have our host and user credentials in place, we can start a globus service. This
set up comes from the GridFTP Admin Guide.
Start the GridFTP server:
root# service globus-gridftp-server start
Started GridFTP Server [ OK ]

Check that the GridFTP server is running and listening on the gridftp port:
root# service globus-gridftp-server status

GridFTP Server Running (pid=20087)


root# netstat -an | grep 2811
tcp 0 0 0.0.0.0:2811 0.0.0.0:* LISTEN

Now the GridFTP server is waiting for a request, so we’ll generate a proxy from the myproxy
service by using myproxy-logon and then copy a file from the GridFTP server with the globus-url-
copy command. We’ll use the passphrase used to create the myproxy credential for quser.
quser% myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user quser in /tmp/x509up_u1001
quser% globus-url-copy gsiftp://elephant.globus.org/etc/group \
file:///tmp/quser.test.copy
quser% diff /tmp/quser.test.copy /etc/group

At this point, we’ve configured the myproxy and GridFTP services and verified that we can
create a security credential and transfer a file. If you had trouble, check the security
troubleshooting section in the Security Admin Guide. Now we can move on to setting up
GRAM5 resource management.

3.5. Setting up GRAM5


Now that we have security and GridFTP set up, we can set up GRAM for resource
management. There are several different Local Resource Managers (LRMs) that one could
configure GRAM to use, but this guide will explain the simple case of setting up a "fork"
jobmanager, without auditing. For details on all other configuration options, and for
reference, you can see the GRAM5 Admin Guide. The GRAM service will use the same host
credential as the GridFTP service, and is configured by default to use the fork manager, so all
we need to do now is start the service.
Start the GRAM gatekeeper:
root# service globus-gatekeeper start
Started globus-gatekeeper [ OK ]

We can now verify that the service is running and listening on the GRAM5 port:
root# service globus-gatekeeper status
globus-gatekeeper is running (pid=20199)
root# netstat -an | grep 2119
tcp6 0 0 :::2119 :::* LISTEN

The gatekeeper is set up to run, and is ready to authorize job submissions and pass them on to
the fork job manager. We can now run a couple of test jobs:
quser% myproxy-logon -s elephant
Enter MyProxy pass phrase: ******
A credential has been received for user quser in /tmp/x509up_u1001.
quser% globus-job-run elephant /bin/hostname
elephant.globus.org
quser% globus-job-run elephant /usr/bin/whoami
quser

1. Eclipse -> Preferences: add the Axis2 runtime.

2. Start a new project: New -> Dynamic Web Project -> give a name -> module version 2.5.
3. Create a new class (the class name AddService here is illustrative):
public class AddService {
    public int add(int x, int y) {
        return x + y;
    }
}

4. Create a new web service and add the class name. Select the Tomcat server and execute.
5. Right-click the project and choose Run As -> Run on Server.
In the browser check the output by giving ---> localhost:8080/projectname

If Tomcat does not run, copy the files from tomcat/conf and paste them in workspace -> Servers.
