Friday, 21 October 2016

HiveServer2 with OpenLDAP on MapR




Step 1) Edit hive-site.xml

# vim /opt/mapr/hive/hive-1.2/conf/hive-site.xml

<!-- LDAP AUTHENTICATION -->

<property>
     <name>hive.server2.authentication</name>
     <value>LDAP</value>
</property>

<property>
     <name>hive.server2.authentication.ldap.url</name>
     <value>ldap://adp034</value>
</property>

<property>
     <name>hive.server2.authentication.ldap.baseDN</name>
     <value>ou=Users,dc=tuxhub,dc=com</value>
</property>


<!-- HIVE IMPERSONATION -->

<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.execute.setugi</name>
  <value>true</value>
</property>


2) Connect your OS to LDAP

[root@satz-n01 ~]# authconfig-tui



3) Select "Use LDAP"

You may need to install nss-pam-ldapd (yum install nss-pam-ldapd) if you get an error while selecting LDAP.


4) Execute the id command to check that the user information is populated.

[root@satz-n01 ~]# id <LDAP USER>
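
A successful lookup returns the user's LDAP uid/gid entries; for example (hypothetical IDs, using the uhg2 user from the Beeline step below):

[root@satz-n01 ~]# id uhg2
uid=10001(uhg2) gid=10001(uhg2) groups=10001(uhg2)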

5) Restart HS2 and the Hive metastore

# maprcli node services -name hivemeta -action restart -nodes `hostname`
# maprcli node services -name hs2 -action restart -nodes `hostname`

6) Connect via beeline

[mapr@satz-n01 ~]$ /opt/mapr/hive/hive-1.2/bin/beeline

0: jdbc:hive2://localhost:10000/default (closed)> !connect jdbc:hive2://localhost:10000/default
Connecting to jdbc:hive2://localhost:10000/default
Enter username for jdbc:hive2://localhost:10000/default: uhg2
Enter password for jdbc:hive2://localhost:10000/default: ****
Connected to: Apache Hive (version 1.2.0-mapr-1609)
Driver: Hive JDBC (version 1.2.0-mapr-1609)
Transaction isolation: TRANSACTION_REPEATABLE_READ
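
With hive.server2.enable.doAs set to true, queries in this session now run as the authenticated LDAP user; a quick sanity check from the same session:

1: jdbc:hive2://localhost:10000/default> show databases;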

Tuesday, 20 September 2016

Hiveserver2 With Kerberos



Step 1) Add the following to hive-site.xml:
 
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@YOUR-REALM.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>

Step 2) Add the principal:

# kadmin.local

kadmin.local: add_principal -randkey hive/cdh084.tuxhub.com@TUXHUB.COM
kadmin.local: change_password hive/cdh084.tuxhub.com@TUXHUB.COM
kadmin.local: xst -k /etc/hive/conf/hive.keytab hive/cdh084.tuxhub.com@TUXHUB.COM

Step 3) Check permissions:

[root@cdh084 ~]# ll /etc/hive/conf/hive.keytab
-rw------- 1 hive hive 442 Sep 20 18:49 /etc/hive/conf/hive.keytab
[root@cdh084 ~]#
Step 4) Restart HiveServer2

[root@cdh084 ~]# /etc/init.d/hive-server2 restart
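
The connecting user must hold a valid Kerberos ticket before opening the JDBC session; obtain one with kinit first (any principal in your realm works, e.g. an existing user principal):

[root@cdh084 ~]# kinit root/admin@TUXHUB.COM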

Step 5) Connect to Beeline

beeline> !connect jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM
Enter username for jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM: <ENTER ANYTHING>
Enter password for jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM: <ENTER ANYTHING>
Connected to: Apache Hive (version 1.1.0-cdh5.8.0)
Driver: Hive JDBC (version 1.1.0-cdh5.8.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/default>

Thursday, 25 August 2016

Ganglia Installation


You must have the EPEL repo installed.

1) Installation:

 On all nodes

# yum install ganglia-gmond


On one node

# sudo yum install ganglia-gmetad
# yum install ganglia-web.x86_64

2) Config

# /etc/ganglia/gmond.conf


cluster {
  name = "tuxhub"

}

udp_send_channel {
  host = "gemtad.tuxhub.com"
  port = 8649
  ttl = 1
}

/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  port = 8649
}

/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  port = 8649
}


# /etc/ganglia/gmetad.conf

data_source "tuxhub" gemtad.tuxhub.com

3) Services

Start services

On all nodes
service gmond start

On the gmetad node:

service gmetad start
service httpd  start
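
To verify gmond is publishing metrics, you can read the cluster XML from the tcp_accept_channel port (a quick check, assuming nc is installed):

# nc localhost 8649 | head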


Sunday, 14 August 2016

Kerberos Installation and Configuration to Access Hadoop (CDH 5.x)



Install Kerberos :

1) On client nodes

# yum install krb5-libs krb5-auth-dialog krb5-workstation

On server node

# yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation

2) On all nodes edit krb5.conf

[root@cdh084 krb5kdc]# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = TUXHUB.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 TUXHUB.COM = {
  kdc = cdh084.tuxhub.com
  admin_server = cdh084.tuxhub.com
 }

[domain_realm]
 .tuxhub.com = TUXHUB.COM
 tuxhub.com = TUXHUB.COM
[root@cdh084 krb5kdc]#

3) Edit kerberos acl

[root@cdh084 krb5kdc]# cat  /var/kerberos/krb5kdc/kadm5.acl
*/admin@TUXHUB.COM      *
[root@cdh084 krb5kdc]#

4) kerberos database config

[root@cdh084 krb5kdc]# cat  /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 TUXHUB.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
[root@cdh084 krb5kdc]#

5) Create database

[root@cdh084 ~]# kdb5_util create -r TUXHUB.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'TUXHUB.COM',
master key name 'K/M@TUXHUB.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key: master
Re-enter KDC database master key to verify: master

[root@cdh084 ~]#

6) Create the admin principal, i.e. the first user principal. This must be done on the KDC server itself, while logged in as root:

[root@cdh084 ~]# kadmin.local
Authenticating as principal root/admin@TUXHUB.COM with password.
kadmin.local:  addprinc root/admin
WARNING: no policy specified for root/admin@TUXHUB.COM; defaulting to no policy
Enter password for principal "root/admin@TUXHUB.COM": admin
Re-enter password for principal "root/admin@TUXHUB.COM": admin
Principal "root/admin@TUXHUB.COM" created.

7) Start services

[root@cdh084 krb5kdc]# service kadmin start
Starting Kerberos 5 Admin Server:                          [  OK  ]
[root@cdh084 krb5kdc]# service krb5kdc start
Starting Kerberos 5 KDC:                                   [  OK  ]
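
To confirm the KDC and admin server are working, you can list the principals (a quick sanity check; output will vary):

[root@cdh084 ~]# kadmin.local
kadmin.local:  listprincs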

8) Create service principals for every host (change the host name for each node)

[root@cdh084  ]# kadmin
kadmin:  add_principal -randkey hdfs/cdh084.tuxhub.com@TUXHUB.COM
kadmin:  add_principal -randkey mapred/cdh084.tuxhub.com@TUXHUB.COM
kadmin:  add_principal -randkey HTTP/cdh084.tuxhub.com@TUXHUB.COM
kadmin:  add_principal -randkey yarn/cdh084.tuxhub.com@TUXHUB.COM

9) Create keytab files, adding every host's principals
[root@cdh084  ]# kadmin
kadmin:  xst -k /tmp/hdfs.keytab hdfs/cdh085.tuxhub.com@TUXHUB.COM HTTP/cdh085.tuxhub.com@TUXHUB.COM ( ADD ALL HOST )
kadmin:  xst -k /tmp/mapred.keytab mapred/cdh085.tuxhub.com@TUXHUB.COM HTTP/cdh085.tuxhub.com@TUXHUB.COM ( ADD ALL HOST )
kadmin:  xst -k /tmp/yarn.keytab yarn/cdh085.tuxhub.com@TUXHUB.COM HTTP/cdh085.tuxhub.com@TUXHUB.COM  ( ADD ALL HOST )
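
Before distributing the keytabs, it is worth verifying each one contains the expected principals (KVNOs and timestamps will vary):

[root@cdh084 ~]# klist -kt /tmp/hdfs.keytab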


10) Permission
[root@cdh084 keytab]# cp -pav /tmp/*.keytab /etc/hadoop/conf
[root@cdh084 keytab]# chown hdfs:hadoop /etc/hadoop/conf/hdfs.keytab
[root@cdh084 keytab]# chown mapred:hadoop /etc/hadoop/conf/mapred.keytab
[root@cdh084 keytab]# chown yarn:hadoop /etc/hadoop/conf/yarn.keytab
[root@cdh084 keytab]# chmod 400 /etc/hadoop/conf/hdfs.keytab /etc/hadoop/conf/mapred.keytab /etc/hadoop/conf/yarn.keytab
[root@cdh084 keytab]#

11) Edit core-site.xml

<configuration>

<property>
 <name>fs.defaultFS</name>
 <value>hdfs://cdh081.tuxhub.com:8020</value>
</property>


<property>
 <name>hadoop.proxyuser.mapred.groups</name>
 <value>*</value>
</property>

<property>
 <name>hadoop.proxyuser.mapred.hosts</name>
 <value>*</value>
</property>

<property>
<name>hadoop.proxyuser.httpfs.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.httpfs.groups</name>
<value>*</value>
</property>

<property>
  <name>hadoop.security.authentication</name>
    <value>kerberos</value>
</property>

<property>
  <name>hadoop.security.authorization</name>
    <value>true</value>
</property>


</configuration>
[root@cdh081 ~]#

12) Edit hdfs-site.xml

<configuration>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:///var/lib/hadoop-hdfs/cache/hdfs/dfs/name</value>
  </property>

  <property>
     <name>dfs.permissions.superusergroup</name>
     <value>hadoop</value>
  </property>


  <property>
     <name>dfs.datanode.data.dir</name>
     <value>file:///home/data/1/dfs/dn</value>
  </property>

 <property>
   <name>dfs.webhdfs.enabled</name>
    <value>true</value>
 </property>


<!-- SECURITY -->

<!-- General HDFS security config -->
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>

<!-- NameNode security config -->
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@TUXHUB.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@TUXHUB.COM</value>
</property>

<!-- DataNode security config -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>cdh081.tuxhub.com:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>cdh081.tuxhub.com:1006</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value> <!-- path to the HDFS keytab -->
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/_HOST@TUXHUB.COM</value>
</property>

<!-- Web Authentication config -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@TUXHUB.COM</value>
 </property>

<property>
<name>dfs.http.policy</name>
<value>HTTPS_ONLY</value>
</property>

</configuration>

13) Secure DataNodes

vim /etc/default/hadoop-hdfs-datanode

export HADOOP_SECURE_DN_USER=hdfs
export HADOOP_SECURE_DN_PID_DIR=/var/lib/hadoop-hdfs
export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop-hdfs
export JSVC_HOME=/usr/lib/bigtop-utils/

14) SSL

 keytool -genkey -alias replserver -keyalg RSA -keystore keystore.jks -dname "cn=localhost, ou=IT, o=Continuent, c=US"  -storepass password -keypass password
 keytool -export -alias replserver -file client.cer -keystore keystore.jks
 keytool -import -v -trustcacerts -alias replserver -file client.cer -keystore truststore.ts

 Copy SSL files

 cp -pav keystore.jks truststore.ts /etc/hadoop/conf/
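
You can list the keystore to confirm the key and certificate are present. Note that the store passwords configured in ssl-server.xml and ssl-client.xml below must match the -storepass/-keypass values used when the keystore was generated ("password" in this example):

# keytool -list -keystore /etc/hadoop/conf/keystore.jks -storepass password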

15) Hadoop SSL config

vim /etc/hadoop/conf/ssl-server.xml

<configuration>

<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/hadoop/conf/truststore.ts</value>
</property>

<property>
  <name>ssl.server.truststore.password</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
</property>

<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Default value is 10000 (10 seconds).</description>
</property>

<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/hadoop/conf/keystore.jks</value>
</property>

<property>
  <name>ssl.server.keystore.password</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
</property>

<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
</property>

</configuration>

# /etc/hadoop/conf/ssl-client.xml

<configuration>

<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/hadoop/conf/truststore.ts</value>
</property>

<property>
  <name>ssl.client.truststore.password</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
</property>

<property>
  <name>ssl.client.truststore.reload.interval</name>
  <value>10000</value>
  <description>Default value is 10000 (10 seconds).</description>
</property>

<property>
  <name>ssl.client.keystore.location</name>
  <value>/etc/hadoop/conf/keystore.jks</value>
</property>

<property>
  <name>ssl.client.keystore.password</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.client.keystore.keypassword</name>
  <value>keystore</value>
</property>

<property>
  <name>ssl.client.keystore.type</name>
  <value>jks</value>
</property>

</configuration>

16) Restart NameNode

# service hadoop-hdfs-namenode restart

17) Restart DataNode

# service hadoop-hdfs-datanode restart

18) On any one node, obtain a ticket:

kinit -kt /etc/hadoop/conf/hdfs.keytab hdfs/cdh084.tuxhub.com@TUXHUB.COM

19) Klist

-bash-4.1$ klist
Ticket cache: FILE:/tmp/krb5cc_496
Default principal: hdfs/cdh084.tuxhub.com@TUXHUB.COM

Valid starting     Expires            Service principal
08/14/16 21:05:43  08/15/16 21:05:43  krbtgt/TUXHUB.COM@TUXHUB.COM
        renew until 08/14/16 21:05:43
-bash-4.1$

20) Hadoop command

-bash-4.1$ hadoop fs -ls /

21) Also edit yarn-site.xml to use Kerberos.

Note: All files are available at
https://drive.google.com/file/d/0BxAxRcNkM4a0aWZxdGdOMWRtNDQ/view?usp=sharing

Thursday, 21 July 2016

How to Install and Configure Elasticsearch, Logstash, and Kibana (ELK Stack)


1) Add the ELK repos



[root@cdh084 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

[root@cdh084 ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-2.2]
name=logstash repository for 2.2 packages
baseurl=http://packages.elasticsearch.org/logstash/2.2/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

[root@cdh084 ~]# cat /etc/yum.repos.d/kibana.repo
[kibana-4.4]
name=Kibana repository for 4.4.x packages
baseurl=http://packages.elastic.co/kibana/4.4/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
[root@cdh084 ~]



A) Install Logstash

1) Install logstash

[root@cdh081 ~]# yum install -y logstash

2) On the Logstash host, add the file below:

[root@cdh081 ~]# cat  /etc/logstash/conf.d/logstash.conf

input {
      file {
          path => "/var/log/hadoop-hdfs/hadoop-hdfs-*-cdh08*.tuxhub.com.log"
          start_position => "beginning"
      }
}

filter {

}

output {
    elasticsearch {
        action => "index"
        hosts => ["cdh084:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        workers => 1
    }
}

[root@cdh081 ~]#
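
Before restarting the service, you can validate the pipeline syntax; Logstash 2.x ships a --configtest flag (path assumes the default package install location), which should report "Configuration OK":

[root@cdh081 ~]# /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf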

Elasticsearch :

1) Install Elasticsearch :

[root@cdh081 ~]# sudo yum -y install elasticsearch

2) Add the lines below to the Elasticsearch config:

[root@cdh084 ~]# cat /etc/elasticsearch/elasticsearch.yml | egrep -v "^#|^$"
network.host: cdh084
http.port: 9200
cluster.name: "logsearch"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas : 1
path.data: /home/es/
[root@cdh084 ~]#
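
Once Elasticsearch is restarted, a quick health check against the host and port configured above:

[root@cdh084 ~]# curl 'http://cdh084:9200/_cluster/health?pretty'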

3) Install Kibana

[root@cdh084 ~]#  sudo yum -y install kibana


Restart Services

1) Logstash :

[root@cdh084 ~]#  /etc/init.d/logstash restart

2) elasticsearch

[root@cdh084 ~]#  /etc/init.d/elasticsearch restart

3) Kibana

[root@cdh084 ~]#  /etc/init.d/kibana restart

Web UI: http://cdh084:5601/


 

Monday, 18 April 2016

How to Configure YARN Fair Scheduler on a MapR Cluster



1) Add the lines below to yarn-site.xml:

<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/fair-scheduler.xml</value>
</property>

<property>
  <name>yarn.acl.enable</name>
  <value>true</value>
</property>

<property>
  <name>yarn.admin.acl</name>
  <value>mapr mapr</value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

2) Configure fair scheduler:

[root@mfs071 ~]#  vim /opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/fair-scheduler.xml

<allocations>


  <queuePlacementPolicy>
    <rule name="specified" create="false"/>
    <rule name="primaryGroup" create="false"/>
    <rule name="secondaryGroupExistingQueue" create="false"/>
    <!-- Create interactive queue for generic shape group? -->
    <rule name="user" create="false" />
    <rule name="reject" />
  </queuePlacementPolicy>
  <queue name="root">
    <minResources>2000 mb, 1 vcores,1 disks</minResources>
    <maxResources>5000 mb, 1 vcores,2 disks</maxResources>
    <maxRunningApps>10</maxRunningApps>
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <aclSubmitApps> </aclSubmitApps>
    <aclAdministerApps>root</aclAdministerApps>
  <!--  <aclAdministerApps>mapr mapr</aclAdministerApps> -->
    <queue name="sample_sub_queue1">
        <minResources>1024 mb, 1 vcores,1 disks</minResources>
        <aclSubmitApps>nitin</aclSubmitApps>
        <aclAdministerApps>root</aclAdministerApps>
    </queue>
    <queue name="sample_sub_queue2">
        <minResources>1024 mb, 1 vcores,1 disks</minResources>
        <aclSubmitApps>mapr</aclSubmitApps>
        <aclAdministerApps>root</aclAdministerApps>
    </queue>
    <queue name="sample_sub_queue3">
        <minResources>1024 mb, 1 vcores,1 disks</minResources>
        <aclSubmitApps>kunal</aclSubmitApps>
        <aclAdministerApps>root</aclAdministerApps>
    </queue>
</queue>

</allocations>


3) Restart the ResourceManager.

4) Log in as nitin and submit a job to sample_sub_queue3.

[nitin@mfs071 ~]$ yarn jar "/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0-mapr-1602.jar" teragen  -Dmapred.map.tasks=20 -Dmapred.reduce.tasks=0 -Dmapred.job.queue.name=sample_sub_queue3 1000 /tmp/teragen1

The job will fail with the following exception:

java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1460957957156_0002 to YARN : User nitin cannot submit applications to queue root.sample_sub_queue3

5) Try to submit the job to sample_sub_queue1. It will succeed, since nitin is listed in that queue's aclSubmitApps.

6) Log in as kunal and try to kill the job.

[kunal@mfs072 ~]$ yarn application -kill application_1460957957156_0001

The kill will fail with the following exception:

Exception in thread "main" org.apache.hadoop.yarn.exceptions.YarnException: java.security.AccessControlException: User kunal cannot perform operation MODIFY_APP on application_1460957957156_0001
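
Per the ACLs above, only the submitting user or the queue admin (root, via aclAdministerApps) can modify the application, so the kill succeeds when run as an authorized identity; a sketch:

[root@mfs072 ~]# yarn application -kill application_1460957957156_0001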








Tuesday, 5 April 2016

HiveServer2 high availability from Beeline



Step 1:

 Edit hive-site.xml on every hs2 node

<property>
        <name>hive.server2.support.dynamic.service.discovery</name>
        <value>true</value>
</property>


<property>
        <name>hive.zookeeper.quorum</name>
        <value>mfs071:5181,mfs072:5181,mfs073:5181</value>
</property>

<property>
        <name>hive.server2.zookeeper.namespace</name>
        <value>hiveserver2</value>
</property>


Step 2

Restart hs2


Step 3

 Connect via beeline

!connect jdbc:hive2://mfs071:5181,mfs072:5181,mfs073:5181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2


[mapr@mfs072 ~]$ /opt/mapr/hive/hive-1.2/bin/beeline
Beeline version 1.2.0-mapr-1601 by Apache Hive
beeline> !connect jdbc:hive2://mfs071:5181,mfs072:5181,mfs073:5181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Connecting to jdbc:hive2://mfs071:5181,mfs072:5181,mfs073:5181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
Enter username for jdbc:hive2://mfs071:5181,mfs072:5181,mfs073:5181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: mapr
Enter password for jdbc:hive2://mfs071:5181,mfs072:5181,mfs073:5181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2: ****
Connected to: Apache Hive (version 1.2.0-mapr-1601)
Driver: Hive JDBC (version 1.2.0-mapr-1601)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://mfs071:5181,mfs072:5181,mfs07> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (0.178 seconds)
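
To see which HS2 instances have registered for discovery, you can list the namespace in ZooKeeper (the zkCli.sh path varies with your ZooKeeper version; a sketch):

# /opt/mapr/zookeeper/zookeeper-3.4.5/bin/zkCli.sh -server mfs071:5181 ls /hiveserver2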
