Wednesday, 27 November 2013

HBASE TABLE SNAPSHOT


STEP 1 :-

Configuration

a)  Add a property.

[ nitin@nitin-ubuntu:~ # ] sudo vim /etc/hbase/conf/hbase-site.xml


  <property>
    <name>hbase.snapshot.enabled</name>
    <value>true</value>
  </property>

b) Restart hbase
 

[ nitin@nitin-ubuntu:~ # ] sudo /usr/lib/hbase/bin/stop-hbase.sh
[ nitin@nitin-ubuntu:~ # ] sudo /usr/lib/hbase/bin/start-hbase.sh


Step 2 :-

Take a Snapshot

[ nitin@nitin-ubuntu:~ # ] hbase shell
hbase> snapshot 'MY_TABLE', 'SNAP_MYTABLE'

Step 3 :-

Listing Snapshots
[ nitin@nitin-ubuntu:~ # ] hbase shell
hbase> list_snapshots

Step 4 :-

Deleting Snapshots
[ nitin@nitin-ubuntu:~ # ] hbase shell
hbase> delete_snapshot 'SNAP_MYTABLE'

Step 5 :-

Clone a table from snapshot
[ nitin@nitin-ubuntu:~ # ] hbase shell
hbase> clone_snapshot 'SNAP_MYTABLE', 'NEW_TABLE'
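
If you want to roll the original table itself back to the snapshot rather than cloning a new one, the shell also has a restore_snapshot command; a minimal sketch (the table must be disabled first):

hbase> disable 'MY_TABLE'
hbase> restore_snapshot 'SNAP_MYTABLE'
hbase> enable 'MY_TABLE'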

Step 6 :-
Export to another cluster:-
[ nitin@nitin-ubuntu:~ # ] hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SNAP_MYTABLE -copy-to hdfs://CLUSTER2:8020/hbase
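
ExportSnapshot runs as a MapReduce job; in the versions I have used it also accepts a -mappers option to control how many parallel copy tasks are launched, e.g.:

[ nitin@nitin-ubuntu:~ # ] hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SNAP_MYTABLE -copy-to hdfs://CLUSTER2:8020/hbase -mappers 16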








HBASE TABLE ROW COUNT



STEP 1:-

RowCounter is a MapReduce job that counts all the rows of a table. It is a good sanity check to ensure that HBase can read all the blocks of a table if there are any concerns about metadata inconsistency. It runs the MapReduce job in a single local process by default, but it will run faster if you have a MapReduce cluster in place for it to exploit.



hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
[ nitin@nitin-ubuntu:~] # hbase org.apache.hadoop.hbase.mapreduce.RowCounter TARGET_TBL_NAME
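
For a quick sanity check on a smaller table you can also count rows from the HBase shell; note this scans through the client and is far slower than the MapReduce job (INTERVAL just controls how often progress is printed):

[ nitin@nitin-ubuntu:~] # hbase shell
hbase> count 'TARGET_TBL_NAME', INTERVAL => 100000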

Monday, 25 November 2013

QUERYING JSON RECORDS VIA HIVE



Step 1 :- 

Json file 



{
  "Foo": "ABC",
  "Bar": "20090101100000",
  "Quux": {
    "QuuxId": 1234,
    "QuuxName": "Sam"
  }
}


Step 2 :- 

Create Table 

CREATE TABLE json_table ( json string );


Step 3 :-

Upload data into hive table 

LOAD DATA LOCAL INPATH '/tmp/example.json'  INTO TABLE `json_table`;

Step 4 :- 

Retrieve data 

select get_json_object(json_table.json, '$') from json_table;

Step 5 :-

Retrieve Nested data

select get_json_object(json_table.json, '$.Foo') as foo,
       get_json_object(json_table.json, '$.Bar') as bar,
       get_json_object(json_table.json, '$.Quux.QuuxId') as qid,
       get_json_object(json_table.json, '$.Quux.QuuxName') as qname
from json_table;
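
Each call to get_json_object re-parses the JSON string. An alternative sketch using json_tuple with LATERAL VIEW parses each record once per level (a second json_tuple is needed for the nested Quux object):

select v1.foo, v1.bar, v2.qid, v2.qname
from json_table jt
lateral view json_tuple(jt.json, 'Foo', 'Bar', 'Quux') v1 as foo, bar, quux
lateral view json_tuple(v1.quux, 'QuuxId', 'QuuxName') v2 as qid, qname;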






Friday, 15 November 2013

HBASE BACKUP AND RESTORE TABLE


STEP 1

EXPORT  :-

Export is a utility that will dump the contents of a table to HDFS as a sequence file. Invoke it via:



[ nitin@nitin-ubuntu:~ ]# hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir>

[ nitin@nitin-ubuntu:~ ]#  hbase org.apache.hadoop.hbase.mapreduce.Export HBASEEXPORTTABLE DUMP
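
Export can also limit what it dumps: in the versions I have used, an optional number of versions and a start/end timestamp (milliseconds since the epoch) may follow the output directory, which is handy for incremental dumps. The timestamps below are placeholder values:

[ nitin@nitin-ubuntu:~ ]#  hbase org.apache.hadoop.hbase.mapreduce.Export HBASEEXPORTTABLE DUMP 1 1380000000000 1383000000000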

STEP 2

IMPORT :- 

Import is a utility that will load data that has been exported back into HBase. Invoke via:



[ nitin@nitin-ubuntu:~ ]# hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

[ nitin@nitin-ubuntu:~ ]#  hbase org.apache.hadoop.hbase.mapreduce.Import HBASEIMPORTABLE DUMP
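
Note that Import does not create the destination table; it must already exist with the same column families as the exported table. A minimal sketch, assuming a single column family named 'cf':

[ nitin@nitin-ubuntu:~ ]#  hbase shell
hbase> create 'HBASEIMPORTABLE', 'cf'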






Friday, 25 October 2013

RACKSPACE CLOUD FILES FROM THE COMMAND LINE


It is also possible to manage your Rackspace Cloud Files using the OpenStack Swift client. The first thing we need to do is install the package:



[ nitin@nitin-ubuntu:~ ]# sudo apt-get install python-swiftclient keystone



Create the file rack.sh. This file will be used to set up our environment any time we want to use swift; it should look something like this:



[ nitin@nitin-ubuntu:~ ]# sudo vim /etc/profile.d/rack.sh



export OS_USERNAME=MossoCloudFS_22321312740217423749237492:myusername
export OS_PASSWORD=mypassword
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/



[ nitin@nitin-ubuntu:~ ]# keystone tenant-list





+------------------------------------------+-------------------------------------------+---------+
|                    id                    |                    name                   | enabled |
+------------------------------------------+-------------------------------------------+---------+
| ...                                      | MossoCloudFS_22321312740217423749237492  |  True   |
+------------------------------------------+-------------------------------------------+---------+

If you worked through my last post you'll notice some differences in this file. Two differences we should note:

OS_TENANT_NAME : for the version of the swift client I am using this is absent, but it may come back in future versions, in which case OS_TENANT_NAME should be set to the tenant name and the value for OS_USERNAME should then not contain the tenant name.
OS_USERNAME : this contains the tenant name in addition to the user name, but it is not the same tenant name we used for nova. On Rackspace a different tenant is used for Cloud Servers and Cloud Files, so we need the tenant name for our Cloud Files account. You can get it by running the keystone tenant-list command shown above once your username and password are configured.



[ nitin@nitin-ubuntu:~ ]#  source /etc/profile

Now you're ready, let's go.
List all containers:
[ nitin@nitin-ubuntu:~ ]# swift list
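
A few other swift commands I use regularly (container and file names below are just placeholders):

[ nitin@nitin-ubuntu:~ ]# swift post my-container                 # create a container
[ nitin@nitin-ubuntu:~ ]# swift upload my-container myfile.txt    # upload a file
[ nitin@nitin-ubuntu:~ ]# swift download my-container myfile.txt  # download it again
[ nitin@nitin-ubuntu:~ ]# swift stat                              # account summary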







Thursday, 24 October 2013

RACKSPACE CLOUD SERVERS FROM THE COMMAND LINE



With Rackspace providing its compute service based on OpenStack, it's possible to control your servers using the OpenStack nova client, and it's fairly simple to do.
You start by installing python-novaclient (these examples are done on Fedora 17). For now python-setuptools is needed here as a dependency.
step 1 :- 
[ nitin@nitin-ubuntu:~ ]#   sudo yum install python-setuptools python-novaclient

Create the file rack.sh. This file will be used to set up our environment any time we want to use nova; it should look something like this:


[ nitin@nitin-ubuntu:~ ]# sudo vim /etc/profile.d/rack.sh

export OS_USERNAME=username
export OS_PASSWORD=passwd
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export OS_REGION_NAME=DFW
export OS_TENANT_NAME=1234567
export NOVA_SERVICE_NAME=cloudServersOpenStack


OS_USERNAME : your login name for your Rackspace Open Cloud account
OS_PASSWORD : your password
OS_AUTH_URL : the URL of the Rackspace identity management server; if you have a rackspace.co.uk account this should be set to https://lon.identity.api.rackspacecloud.com/v2.0/ , leave it as above if your Rackspace account is .com
OS_REGION_NAME : the code for the data center in which you want to control servers (DFW is Dallas, ORD Chicago, LON London); use DFW or ORD for rackspace.com, or LON for rackspace.co.uk
OS_TENANT_NAME : the id for your account; you can get this in the top right-hand corner of the control panel when you log in
[ nitin@nitin-ubuntu:~ ]#  source /etc/profile

Now you're ready, let's go.
List all of your running servers:
[ nitin@nitin-ubuntu:~ ]#  nova list
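
A few more nova commands that come in handy; the image and flavor IDs below are placeholders taken from the corresponding list output:

[ nitin@nitin-ubuntu:~ ]#  nova image-list
[ nitin@nitin-ubuntu:~ ]#  nova flavor-list
[ nitin@nitin-ubuntu:~ ]#  nova boot --image <IMAGE-ID> --flavor <FLAVOR-ID> my-server
[ nitin@nitin-ubuntu:~ ]#  nova delete my-server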




Wednesday, 16 October 2013

HIVE-HBASE INTEGRATION





Step 1 :- 


hive --auxpath /usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u6.jar,/usr/lib/hive/lib/hbase-0.90.6-cdh3u6.jar,/usr/lib/hive/lib/zookeeper-3.3.5-cdh3u6.jar,/usr/lib/hive/lib/guava-r06.jar -hiveconf hbase.master=<HMASTER IP ADDRESS>:60000
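
Once the shell comes up with those jars on the aux path, a Hive table backed by HBase can be declared with the HBase storage handler. A minimal sketch, assuming an HBase table named 'xyz' with a column family 'cf1':

CREATE TABLE hbase_table_1 (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");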



Friday, 30 August 2013

CREATE NAMENODE BACKUP




#!/bin/bash

nameNode=namenode.Cloudera.com
timeStamp=$(date +%Y-%m-%d-%H)
workDir="/Backup/Hadoop_Namenode_backup/Backup"
targetDir="/Backup/Hadoop_Namenode_backup/zipfiles"
logfile="/var/log/namenode/${nameNode}.log.${timeStamp}"

# Pull the current fsimage and edit log from the NameNode's HTTP getimage servlet
curl -s http://${nameNode}:50070/getimage?getimage=1 > $workDir/fsimage
curl -s http://${nameNode}:50070/getimage?getedit=1 > $workDir/edits

zip -j $targetDir/namenode.${timeStamp}.zip $workDir/* 1>> ${logfile} 2>> ${logfile}

rm -f $workDir/edits 1>> ${logfile} 2>> ${logfile}
rm -f $workDir/fsimage 1>> ${logfile} 2>> ${logfile}

### Retention policy: remove zip backups older than 5 days
find ${targetDir} -name "*.zip" -mtime +5 -exec rm -rf {} \; 1>> ${logfile} 2>> ${logfile}

exit 0
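
To run the backup every hour, a crontab entry along these lines works (the script path here is just an example):

0 * * * * /Backup/Hadoop_Namenode_backup/namenode_backup.sh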

Thursday, 29 August 2013

MONGODB USER CREATION


                                        

STEP 1 :-

Connect to MongoDB

[ nitin@nitin-ubuntu:~] # mongo
MongoDB shell version: 2.4.4
connecting to: test
>


STEP 2:-

Go to admin database

> use admin ;

STEP 3 :-

> db.addUser('USERNAME','PASSWORD');

STEP 4 :-

Enable authentication so MongoDB asks for a password

[ nitin@nitin-ubuntu:~] # sudo vim /etc/mongodb.conf

Uncomment the parameter below:
auth = true

STEP 5:-

Restart Mongodb

[ nitin@nitin-ubuntu:~] # sudo /etc/init.d/mongodb restart

STEP 6 :-

[ nitin@nitin-ubuntu:~] # mongo -u USERNAME localhost/admin -p
Enter password:
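
You can also authenticate from inside an already-open shell; db.auth() returns 1 on success:

[ nitin@nitin-ubuntu:~] # mongo
> use admin
> db.auth('USERNAME','PASSWORD');
1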


Tuesday, 27 August 2013

CREATING STATIC WEB-HOSTING ON AMAZON S3




                                 

Step 1 :-

             Log in to the Amazon console

Step 2 :-

            Go to the S3 tab, click on Create Bucket, and give your bucket a name

Step 3 :-

            Click on the Actions tab and go to Properties. On the left side you will see the bucket's property configuration

Step 4 :-

Click on Static Web hosting

Select the "Enable website hosting" radio button. In the Index Document field add index.html, then click the Save button

Step 5 :-

                Click on Permissions on the same Properties page, then click on Edit bucket policy

                Add the details below:

              {
                "Version": "2008-10-17",
                "Statement": [
                  {
                    "Sid": "PublicReadGetObject",
                    "Effect": "Allow",
                    "Principal": {
                      "AWS": "*"
                    },
                    "Action": "s3:GetObject",
                    "Resource": "arn:aws:s3:::<BUCKET-NAME>/*"
                  }
                ]
              }

Click on Save

Step 6 :-

Test your configuration

Click on Static Web Hosting and check the Endpoint field; it shows the URL used to access your website:

bucket-name.s3-website-<region>.amazonaws.com

For example:

Endpoint: Test-bucket.s3-website-ap-southeast-1.amazonaws.com

Step 7 :- 

Go to your browser and access the URL:

http://Test-bucket.s3-website-ap-southeast-1.amazonaws.com
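
To push content into the bucket from the command line, s3cmd (covered further down this page) can upload the index page and make it publicly readable; the bucket and file names here are just the ones from this example:

[ nitin@nitin-ubuntu:~ ]# s3cmd put --acl-public index.html s3://Test-bucket/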



Friday, 16 August 2013

CREATE A SNAPSHOT AND DELETE OLD SNAPSHOT FROM AWS






Wednesday, 14 August 2013

INSTALL MONGODB ON RHEL AND CENTOS



STEP 1:- 
          Configure Package Management System (YUM)

Create a /etc/yum.repos.d/10gen.repo file 
[ nitin@nitin-ubuntu: ~ ] $ sudo vim /etc/yum.repos.d/10gen.repo
[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1


[ nitin@nitin-ubuntu: ~ ] $ yum clean all
[ nitin@nitin-ubuntu: ~ ] $ sudo yum install mongo-10gen mongo-10gen-server

STEP 2:-

Configure MongoDB
These packages configure MongoDB using the /etc/mongod.conf file in conjunction with the control script. You can find the init script at /etc/rc.d/init.d/mongod.
This MongoDB instance will store its data files in /var/lib/mongo and its log files in /var/log/mongo, and it runs using the mongod user account.

STEP 3:- 
Start MongoDB

[ nitin@nitin-ubuntu: ~ ] $ sudo service mongod start


STEP 4:- 
Stop MongoDB

[ nitin@nitin-ubuntu: ~ ] $ sudo service mongod stop
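
To have MongoDB start automatically at boot on RHEL/CentOS:

[ nitin@nitin-ubuntu: ~ ] $ sudo chkconfig mongod on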




USE AWS S3 WITH S3CMD

One of the most popular Amazon S3 command line clients is s3cmd, which is written in python. As a simple AWS S3 command line tool, s3cmd is ideal to use when you want to run scripted cron jobs such as daily backups.

STEP 1:- 


  • To install s3cmd on Ubuntu or Debian:
     [  nitin@nitin-ubuntu: ~ ] $   sudo apt-get install s3cmd

STEP 2:-
  •  Configure:
      [  nitin@nitin-ubuntu: ~ ] $ s3cmd --configure 
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3

Access Key: (Access Key for the admin S3 User we have created before)
Secret Key: (Secret Key for the admin S3 User we have created before)

Encryption password is used to protect your files from reading

by unauthorized persons while in transfer to S3
Encryption password: (your-password)
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3

servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.

Try setting it here if you can't conect to S3 directly
HTTP Proxy server name:

New settings:

  Access Key: ***********************
  Secret Key: ***************************
  Encryption password: ***********
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y

Please wait...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...

Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y

Configuration saved to '/home/nitin/.s3cfg'

STEP 3:- 



  • Listing Buckets:
   [  nitin@nitin-ubuntu: ~ ] $ s3cmd ls 


  • Listing Bucket contents (folders):
   [  nitin@nitin-ubuntu: ~ ] $ s3cmd ls s3://Bucket-Name 


  • Listing Bucket contents (files):
   [  nitin@nitin-ubuntu: ~ ] $  s3cmd ls s3://Bucket-Name/Folder-Name 


  • Download all folder content:
   [  nitin@nitin-ubuntu: ~ ] $  s3cmd get --recursive s3://Bucket-Name/Folder-Name/ 


  • Delete all folder content:
   [  nitin@nitin-ubuntu: ~ ] $ s3cmd del --recursive s3://Bucket-Name/Folder-Name/ 
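
Uploading works the same way; put copies a single file while sync only transfers what has changed, which suits the scripted cron jobs mentioned above. The local paths below are placeholders:

  • Upload a file:
   [  nitin@nitin-ubuntu: ~ ] $  s3cmd put local-file.txt s3://Bucket-Name/Folder-Name/ 

  • Sync a local directory:
   [  nitin@nitin-ubuntu: ~ ] $  s3cmd sync /local/dir/ s3://Bucket-Name/Folder-Name/ 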












BACKUP MONGODB AND UPLOAD TO AWS S3




#!/bin/bash



########


        # purpose :- To take a backup of MongoDB Collections.

        # Requirement :- Make Sure database.config file is present in /data/Backup/mongodb
        # cat /data/Backup/mongodb/database.config
        #  db1
        #  db2
########

PROGNAME=$(basename $0)

BACKUP_DIR="/data/Backup/mongodb/Dump"
#DATE=$(date +"%F")
DATE=$(date +%Y_%m_%d)
S3_DEL=$(date --date "30 days ago" +%Y_%m_%d)
BOX=$(uname -n)
DATBASE_FILE=$(dirname ${BACKUP_DIR})/database.config
LOGDIR=$(dirname ${BACKUP_DIR})/logs/Dump/
LOGFILE="backup_${DATE}.log"
LOCKFILE="/tmp/${PROGNAME}.lock"
REMOVE_FILE_DAYS=7
MONGODUMP_PATH="/usr/bin/mongodump"
MONGO_HOST="localhost"
MONGO_PORT="27017"
MONGO_USER=""
MONGO_PASSWORD=""
BUCKET="s3://working/mongo-backup/"
LOG_BUCKET="s3://working/mongo-backup/logs/"
#MAILTO="<MAIL-ADDRESS>"

LOCK_FILE ()

{
        if [ "$1" = "create" ]; then
                if [ -f $LOCKFILE ]; then
                        SEND_MAIL "ERROR::" "Unable to create lock; a previous lock file may not have been removed"
                        exit 0
                fi
                touch $LOCKFILE
        fi
        if [ "$1" = "remove" ]; then
                rm -fr $LOCKFILE
        fi
}

SEND_MAIL ()

{
        mail -s "${BOX} :: ${PROGNAME} : $1 $2"   -t $MAILTO < $LOGDIR/$LOGFILE
        LOCK_FILE "remove"
        exit 1
}

LOCK_FILE "create"

echo "Script started at :- $(date)" 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE

####### Backup and log Directory checking and creations


for dir in "${BACKUP_DIR}" "${LOGDIR}"


do

        if [ ! -d "${dir}" ]; then
                mkdir -p ${dir} 1>/dev/null 2>/dev/null
                        if [ $? -ne 0 ]; then
                                SEND_MAIL "ERROR::" "Unable to Create ${dir} Directory :"
                                LOCK_FILE "remove"
                                exit 1
                        fi
        fi
done

####### Collection file  checking


if [ ! -f "${DATBASE_FILE}" ]; then

        SEND_MAIL "ERROR::" " DATABASE Config file is not Present :"
else
        if [ ! -s "${DATBASE_FILE}" ]; then
        SEND_MAIL "ERROR ::" "DATABASE Config file is ZERO byte"
        fi

####### Dump logic started

        for MONGO_DATABASE in $(cat ${DATBASE_FILE})
        do

                ${MONGODUMP_PATH} --db ${MONGO_DATABASE}  --host ${MONGO_HOST} --port ${MONGO_PORT}  --out ${BACKUP_DIR}/${DATE}  1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE

                        if [ $? -ne 0 ]; then
                                SEND_MAIL "ERROR ::" " Unable to take dump for database ${MONGO_DATABASE}"
                        fi
        done

###### Dump Logic ended


###### Compression Logic started

        tar -zvcf ${BACKUP_DIR}/${DATE}.tgz ${BACKUP_DIR}/${DATE} 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
        if [ $? -ne 0 ]; then
                SEND_MAIL "ERROR ::" " Unable to compress ${BACKUP_DIR}/${DATE} directory"

        else

                rm -fr ${BACKUP_DIR}/${DATE}
                find ${BACKUP_DIR} -name "*.tgz" -mtime +${REMOVE_FILE_DAYS} -exec rm {} \; 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                if [ $? -eq 0 ]; then
                        echo "Removed old backup files." 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                fi
                find ${LOGDIR} -name "*.log" -mtime +${REMOVE_FILE_DAYS} -exec rm {} \; 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                if [ $? -eq 0 ]; then
                        echo "Removed old log files." 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                fi
        fi

######################### Pushing data to S3 bucket


       s3cmd put ${BACKUP_DIR}/${DATE}.tgz ${BUCKET} 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE

       if [ $? -ne 0 ];then
       SEND_MAIL "ERROR ::" " Unable to send data to S3 Bucket"
       fi

fi

       s3cmd put $LOGDIR/$LOGFILE ${LOG_BUCKET} 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
       if [ $? -ne 0 ];then
       SEND_MAIL "ERROR ::" " Unable to send logs to S3 Bucket"
       fi

       s3cmd del ${BUCKET}${S3_DEL}.tgz 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE

       if [ $? -ne 0 ];then
       SEND_MAIL "ERROR ::" " Unable to  delete data from S3 Bucket"
       fi

 echo "Script Ended at $(date)" 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
 LOCK_FILE "remove"
        if [ $? -eq 0 ]; then
                echo "Removed lock file." 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                rm -vf ${BACKUP_DIR}/${DATE}.tgz 1>> $LOGDIR/$LOGFILE 2>> $LOGDIR/$LOGFILE
                SEND_MAIL "Success::" "Backup Script executed successfully."
        fi
exit 0
#### END OF LOGIC
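
I schedule this from cron; a sketch of a daily 2 a.m. entry (the script path is just an example):

0 2 * * * /data/Backup/mongodb/mongodb_backup.sh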


#


Install Ansible  # yum install ansible Host file configuration  File  [ansible@kuber2 ~]$ cat /etc/ansible/hosts     [loca...