Friday, 29 March 2019

Ansible Cheat sheet





Install Ansible 


# yum install ansible


Host file configuration 


  • File 

[ansible@kuber2 ~]$ cat /etc/ansible/hosts
    [local]
    localhost
 
    [allhost]
    mfs091.tuxhub.com
    mfs092.tuxhub.com
    mfs093.tuxhub.com

Note:-

  • The inventory file location is configured in ansible.cfg:


#inventory      = /etc/ansible/hosts


  • Command to list the hosts

[ansible@kuber2 ~]$ ansible all --list-hosts
  hosts (3):
    mfs091.tuxhub.com
    mfs092.tuxhub.com
    mfs093.tuxhub.com

[ansible@kuber2 ~]$ ansible all -i /tmp/hosts --list-hosts
  hosts (1):
    mfs023.tuxhub.com

Help in ansible


[ansible@kuber2 ~]$ ansible-doc -l



[ansible@kuber2 ~]$ ansible-doc  atomic_host


Create a custom host file 

[ansible@kuber2 ~]$ cat /tmp/hosts
[customhosts]
mfs023.tuxhub.com

[ansible@kuber2 ~]$ 




[ansible@kuber2 ~]$ ansible all  -i /tmp/hosts -m ping 


mfs023.tuxhub.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}





Override Ansible config :


  • Sequence in which ansible.cfg is read
  1. Variable in the environment:

[ansible@kuber2 ~]$ export ANSIBLE_CONFIG=/home/ansible/config/ansible.cfg


   2. Current directory from where the ansible command is executed
   3. Home directory of the user ( /home/user/.ansible.cfg )
   4. /etc/ansible/ansible.cfg



Ansible commandline :

Using Module 

1) ping 



[ansible@kuber2 ~]$ ansible all -m ping
mfs092.tuxhub.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
mfs091.tuxhub.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
mfs093.tuxhub.com | SUCCESS => {
"changed": false,
"ping": "pong"
}
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}


2) Shell    ( ansible <HOST> -m <module> -a '<COMMAND>' )


[ansible@kuber2 ~]$ ansible mfs091.tuxhub.com  -m shell -a  'yum list all | grep python'


Ansible System  Facts :


[ansible@kuber2 ~]$ ansible mfs091.tuxhub.com -m setup 



[ansible@kuber2 facts]$ ansible mfs091.tuxhub.com -m setup --tree /tmp/facts



[ansible@kuber2 facts]$ ansible mfs091.tuxhub.com -m  setup -a 'filter=*ipv*'




[ansible@kuber2 ~]$ ansible mfs091.tuxhub.com -m setup -a 'filter=ansible*'
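
Gathered facts can also be used directly as playbook variables. A minimal sketch, assuming fact gathering is enabled (ansible_default_ipv4 is a standard fact):

```yaml
---
- hosts : all
  gather_facts : yes
  tasks :
   - name : show the default IPv4 address of each host
     debug :
       msg : "Host IP is {{ ansible_default_ipv4.address }}"
```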




Playbooks :


[ansible@kuber2 plyabook2]$ cat lynx.yml 
---
- hosts : all
  tasks :
        - name : Package Installation
          yum : pkg=lynx state=installed update_cache=true


[ansible@kuber2 plyabook2]$


Playbook Variables :


  1. Direct Variables 

[ansible@kuber2 plyabook2]$ cat lynx.yml 
---
- hosts : all
  vars:
        cldbnodes : mfs091.tuxhub.com
  tasks :
        - name : Package Installation on {{cldbnodes}}
          yum : pkg=lynx state=installed update_cache=true


[ansible@kuber2 plyabook2]$


Output


TASK [Package Installation on mfs091.tuxhub.com] 



 2. Variables via files 


[ansible@kuber2 plyabook2]$ cat lynx.yml 
---
- hosts : all
  vars_files:
   - vars.yml
  tasks :
        - name : Package Installation on {{cldbnodes}}
          yum : pkg=lynx state=installed update_cache=true

[ansible@kuber2 plyabook2]$ cat vars.yml 
---
cldbnodes: mfs091.tuxhub.com 


[ansible@kuber2 plyabook2]$ 


3. Run-time variables 

[ansible@kuber2 plyabook2]$ cat runtime.yml 
---

- hosts : all
  user : ansible      # run with ansible user  
  become : yes        # using sudo 
  connection :  ssh   # over ssh 
  gather_facts : no   # do not gather facts
  vars_files:
   - vars.yml
  vars_prompt : 
   - name : pkgtoinstall
     prompt : Package to install 
     private : no

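
A tasks section consuming the prompted variable might look like this (a sketch continuing the play above; the task name is illustrative):

```yaml
  tasks :
   - name : install the prompted package
     yum : name={{pkgtoinstall}} state=installed
```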

Ansible Target Section :



[ansible@kuber2 plyabook2]$ cat target.yml 
---

- hosts : all
  user : ansible      # run with ansible user  
  become : yes        # using sudo 
  connection :  ssh   # over ssh 
  gather_facts : no   # do not gather facts

[ansible@kuber2 plyabook2]$ 


Ansible Task Section :

[ansible@kuber2 plyabook2]$ cat action.yml 
---

- hosts : all
  user : ansible      # run with ansible user  
  become : yes        # using sudo 
  connection :  ssh   # over ssh 
  gather_facts : no   # do not gather facts
  vars_files:
   - vars.yml
  tasks :
   - name  : Check the lynx install
     action : yum name=lynx state=installed 
        

[ansible@kuber2 plyabook2]$  


Ansible Notify Handler Section :

[ansible@kuber2 plyabook2]$ cat handler.yml
---

- hosts : all
  user : ansible      # run with ansible user  
  become : yes        # using sudo 
  connection :  ssh   # over ssh 
  gather_facts : no   # do not gather facts
  vars_files:
   - vars.yml
  tasks :
   - name  : Install nginx and notify handler 
     action : yum name=nginx state=installed
     notify : restart nginx
  handlers:
   - name : restart nginx
     action : service name=nginx state=restarted
        

[ansible@kuber2 plyabook2]$


Ansible Register Section :

[ansible@kuber2 outline]$ cat register.yml 
--- # Outline to playbook translation 
- hosts : all 
  gather_facts : no
  become : true
  become_user: root
  tasks : 
   - name : date/time when playbook starts 
     command : /usr/bin/date
     register : timestamp_start

   - debug : var=timestamp_start
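
The registered result is a dictionary, so individual fields such as stdout can drive later tasks. A sketch, assuming the timestamp_start variable registered above:

```yaml
   - name : print only the captured stdout
     debug :
       msg : "Play started at {{ timestamp_start.stdout }}"
```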


Ansible Dry Run Section :



[ansible@kuber2 plyabook2]$ ansible-playbook playbook.yml --check


Ansible Async polling Section :


[ansible@kuber2 plyabook2]$ grep fork /etc/ansible/ansible.cfg
#forks          = 5

[ansible@kuber2 plyabook2]$ 


By default Ansible runs a task on 5 hosts at a time (forks = 5). For long-running tasks, use the async method so the task keeps running while Ansible polls for its result.

[ansible@kuber2 plyabook2]$ cat lynx.yml 
---
- hosts : all
  become : true
  vars_files:
   - vars.yml
  tasks :
        - name : Package Installation on {{cldbnodes}}
          yum : pkg=lynx state=installed update_cache=true
          async : 300  # wait up to 300 seconds for success
          poll : 3     # poll for status every 3 seconds


[ansible@kuber2 plyabook2]$ 
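
For very long tasks, poll : 0 fires the task in the background and async_status checks on it later. A minimal sketch (the sleep command stands in for a long job):

```yaml
  tasks :
   - name : long-running job in the background
     command : /usr/bin/sleep 120
     async : 300
     poll : 0
     register : job

   - name : wait for the background job to finish
     async_status :
       jid : "{{ job.ansible_job_id }}"
     register : job_result
     until : job_result.finished
     retries : 30
     delay : 10
```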


Ansible Variable Substitution Section :

[ansible@kuber2 plyabook2]$ cat lynx.yml 
---
- hosts : all
  become : true
  vars_files:
   - vars.yml
  tasks :
        - name : Package Installation on {{cldbnodes}}
          yum : pkg=lynx state=installed update_cache=true


[ansible@kuber2 plyabook2]$ cat vars.yml 
---
cldbnodes: mfs091.tuxhub.com 


[ansible@kuber2 plyabook2]$


[ansible@kuber2 plyabook2]$ cat pkg_installation.yml 
---
- hosts : all
  become : true
  vars_prompt :
   - name : installpackage
     prompt : Package to be installed 
     private : no
  tasks :
   - name : install {{installpackage}}
     action : yum name={{installpackage}} state=installed
     

[ansible@kuber2 plyabook2]$


Ansible Lookup Section :

lookup is a set of built-in functions for pulling data from outside sources (environment variables, files, etc.)


[ansible@kuber2 plyabook2]$ cat lookup.yml 
---
- hosts : all
  become : true
  gather_facts : no
  tasks : 
   - debug : 
      msg: "{{ lookup('env','HOME') }} is the value" 

[ansible@kuber2 plyabook2]$ 
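
Other lookup plugins follow the same pattern; for example, reading a local file (the path is illustrative):

```yaml
   - debug : 
      msg : "{{ lookup('file', '/etc/hostname') }} is the control node hostname"
```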


Ansible run_once Section :

The task runs on only one host, even when hosts is all.


[ansible@kuber2 plyabook2]$ cat runonce.yml 
---
- hosts : all
  user : ansible 
  gather_facts : no
  become : true
  tasks :
   - name : Run at one time 
     command : /usr/bin/date
     register : result
   - debug : var=result
     run_once : true 

[ansible@kuber2 plyabook2]$ 


Ansible Local Action Section : ( 127.0.0.1 ) 


[ansible@kuber2 plyabook2]$ cat localaction.yml 
--- 
- hosts : 127.0.0.1 
  connection : local 
  tasks :
   - name : install telnet
     action : yum name=telnet state=installed

[ansible@kuber2 plyabook2]$


Ansible Loop Section :


[ansible@kuber2 plyabook2]$ cat loop.yml 
---
- hosts : all
  become : true
  gather_facts : no
  tasks : 
   - name : install via loop 
     action : yum name={{item}} state=installed
     with_items:
      - lynx
      - nginx

[ansible@kuber2 plyabook2]$
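
Newer Ansible versions express the same iteration with the loop keyword (an equivalent sketch):

```yaml
   - name : install via loop 
     yum : name={{item}} state=installed
     loop :
      - lynx
      - nginx
```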




Ansible Conditional  Section :




[ansible@kuber2 plyabook2]$ cat conditional.yml
---
- hosts : all
  become: true
  tasks:
   - name : install nginx conditionally 
     action : yum name=nginx state=installed
     when : ansible_os_family == "RedHat"   # CentOS reports os_family "RedHat"

[ansible@kuber2 plyabook2]$
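
Conditionals can also test registered results; a sketch that reloads nginx only when its config file exists (paths are illustrative):

```yaml
   - name : check whether the config exists
     stat : path=/etc/nginx/nginx.conf
     register : cfg

   - name : reload nginx only if the config is present
     service : name=nginx state=reloaded
     when : cfg.stat.exists
```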



Ansible Until Section :



[ansible@kuber2 plyabook2]$ cat until.yml 
--- 
- hosts : all
  become : true
  gather_facts : no 
  tasks :
   - name : test until 
     action : yum name=httpd state=installed 
   - name : start httpd 
     service : name=httpd state=started 
   - name : verify status 
     shell : systemctl status httpd
     register : result
     until : result.stdout.find("active (running)") != -1 
     retries : 5 
     delay : 5 
   - debug : var=result
     

[ansible@kuber2 plyabook2]$ 


Ansible Vault ( password ) 


[ansible@kuber2 plyabook2]$ ansible-vault create secure.yml
New Vault password: 

Confirm New Vault password: 


[ansible@kuber2 plyabook2]$ cat secure.yml 
$ANSIBLE_VAULT;1.1;AES256
35383232613735396438633236613266623432346462333063393061626135396164343830336430
3733663435393065656139613839313832326634666636620a626130353037396631393539373539
66613631393362346538663530633637326439643333623362643766333665373763366531356230
3330333864616530620a613831346637366639663365326562343962646562663532313065366231
6131

[ansible@kuber2 plyabook2]$


[ansible@kuber2 plyabook2]$ ansible-vault view secure.yml
Vault password: 
test1=password

[ansible@kuber2 plyabook2]$ 
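
An encrypted file is loaded like any other vars file; supply the password at run time with ansible-playbook --ask-vault-pass. A play sketch, assuming secure.yml from above:

```yaml
---
- hosts : all
  vars_files :
   - secure.yml
  tasks :
   - debug :
      msg : "vault variables are available here"
```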



Ansible  Include



--- # full include task 
- hosts : webbox
  become : true
  connection : ssh
  gather_facts: no
  tasks :

    - include : plays/pkg.yml




[ansible@kuber2 playbooks]$ cat plays/pkg.yml 
--- # install telnet 

- name : install telnet 
  yum : pkg=telnet state=installed
- name : install lynx
  yum :  pkg=lynx state=installed 

[ansible@kuber2 playbooks]$ 
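
Since Ansible 2.4 the include keyword is split into include_tasks (dynamic) and import_tasks (static); the play above could instead be written as (a sketch):

```yaml
  tasks :

    - import_tasks : plays/pkg.yml
```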



Ansible  Tags 

Tags let you run only part of a play, e.g. just the verification tasks.

[ansible@kuber2 playbooks]$ cat tag.yml 
--- # Tag functionality yml 
- hosts : webbox 
  become : true 
  gather_facts : no 
  connection: ssh 
  tasks : 
   - name : install telnet and lynx 
     yum : pkg={{item}} state=latest
     with_items : 
        - telnet
        - lynx
     tags : 
        - packages 
   - name : verify telnet install
     shell : yum list installed | grep telnet 
     tags : 
        - verification
[ansible@kuber2 playbooks]$ 



Execute only selected tags


[ansible@kuber2 playbooks]$ ansible-playbook tag.yml --tags "verification"

Skip the tags


[ansible@kuber2 playbooks]$ ansible-playbook tag.yml --skip-tags "packages"


Tasks tagged always run on every invocation; they are skipped only with --skip-tags always.

[ansible@kuber2 plyabook2]$ cat tag.yml 
--- # Tag functionality yml 
- hosts : all
  become : true 
  gather_facts : no 
  connection: ssh 
  tasks : 
   - name : install telnet and lynx 
     yum : pkg={{item}} state=latest
     with_items : 
        - telnet
        - lynx
     tags : 
        - packages 
   - name : verify telnet install
     shell : yum list installed | grep telnet 
     tags : 
        - always




Ansible ERROR Handling : 

With ignore_errors : yes, the play continues to the next task even if this task fails.



[ansible@kuber2 plyabook2]$ cat errorhandle.yml 
---
- hosts : all
  become : true 
  gather_facts : no 
  tasks : 
   - name : fail command 
     command : /bin/false
     ignore_errors : yes
   - name : Install telnet 
     action : yum name=telnet state=installed
   

[ansible@kuber2 plyabook2]$
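
block/rescue gives finer control than ignore_errors: when any task in the block fails, the rescue tasks run. A minimal sketch (task names are illustrative):

```yaml
  tasks :
   - block :
      - name : task that may fail
        command : /bin/false
     rescue :
      - name : runs only when the block fails
        debug :
         msg : "recovering from failure"
```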


Ansible start-at / step  :

--start-at-task "<task name>" starts the play at the named task; --step asks before each task whether to perform it.


[root@kuber2 plyabook2]# cat startat.yml 
--- # startat example playbook 
- hosts : all
  become : true 
  gather_facts : no 
  connection : ssh
  tasks : 
    - name : install telnet 
      yum : pkg=telnet state=latest 
    - name : install lynx 
      yum : pkg=lynx state=latest 
    - name : list dir 
      shell : ls -l /var

[root@kuber2 plyabook2]# 

Ansible command line variables  :



[root@kuber2 plyabook2]# cat fromcmdline.yml 
---
- hosts : '{{hosts}}'
  user : '{{user}}'
  become : true
  gather_facts : no
  tasks : 
   - name : Install telnet client 
     action : yum  name={{pkg}} state=latest
      

[root@kuber2 plyabook2]# 



[root@kuber2 plyabook2]# ansible-playbook fromcmdline.yml  --extra-vars "hosts=all user=ansible pkg=telnet"




Monday, 15 October 2018

Docker Monitoring Cadvisor/Prometheus/Grafana


Run the cAdvisor Docker image to collect container metrics

[root@kuber1 ~]#  docker run  --volume=/:/rootfs:ro  --volume=/var/run:/var/run:rw  --volume=/sys:/sys:ro  --volume=/var/lib/docker/:/var/lib/docker:ro  --volume=/dev/disk/:/dev/disk:ro  --publish=8080:8080  --detach=true  --name=cadvisor  google/cadvisor:latest

Cadvisor  WebUI 

http://kuber1.tuxhub.com:8080/containers/


Prometheus Configurations : 

[root@kuber1 ~]# vim /usr/local/prometheus/prometheus-2.4.3/prometheus.yml 

  - job_name: 'docker'
    static_configs:
      - targets:
          - 10.10.72.108:8080    # 10.10.72.108 is kuber1.tuxhub.com 

Start Prometheus : 

[root@kuber1 ~]#  /usr/local/prometheus/prometheus-2.4.3/prometheus --web.listen-address=10.10.72.108:9080


Build Grafana Dashboard 

Download dashboard from 

https://gist.githubusercontent.com/njadhav1/37728ddc759ca188a2758c62721f43a0/raw/79d0e80a7e7b7185edf85533a




Friday, 12 October 2018

Kafka Installation & Monitoring

Kafka Installation

Host Details : -

10.10.72.108 kuber1.tuxhub.com kuber1
10.10.72.109 kuber2.tuxhub.com kuber2
10.10.72.114 kuber4.tuxhub.com kuber4


 Host :- kuber1.tuxhub.com

[root@kuber1 ]#  mkdir /usr/local/kafka;cd /usr/local/kafka;wget http://apache.mirror.digitalpacific.com.au/kafka/0.10.2.1/kafka_2.12-0.10.2.1.tgz;tar -zxf kafka_2.12-0.10.2.1.tgz ;mv kafka_2.12-0.10.2.1 kafka-2.12

[root@kuber1 config]#
[root@kuber1 config]# vim server.properties
broker.id=1                               # Need to change this on every node
delete.topic.enable=true
advertised.listeners=PLAINTEXT://kuber1.tuxhub.com:9092    # Need to change this on every node
num.network.threads=3
num.io.threads=8
default.replication.factor=3
min.insync.replicas=2
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka
num.partitions=8
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.10.72.109:5181/kafka     # Zookeeper host
zookeeper.connection.timeout.ms=6000
auto.create.topics.enable=true


On All nodes

Make sure to change "java.rmi.server.hostname" on every node

[root@kuber1 ]# vim /usr/local/kafka/kafka-2.12/bin/kafka-server-start.sh
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kuber1.tuxhub.com  -Djava.net.preferIPv4Stack=true"
export JMX_PORT=9999
export KAFKA_OPTS='-javaagent:/usr/local/prometheus/jmx-export/lib/jmx_prometheus_javaagent-0.3.1.jar=7071:/usr/local/prometheus/jmx-export/conf/kafka.yml'

For prometheus monitoring:

[mapr@kuber1 ~]$  wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar
[mapr@kuber1 ~]$ mkdir -p /usr/local/prometheus/jmx-export/{conf,lib}
[mapr@kuber1 ~]$  wget https://github.com/prometheus/prometheus/releases/download/v2.4.3/prometheus-2.4.3.linux-amd64.tar.gz
[mapr@kuber1 ~]$  cd /usr/local/prometheus/
[mapr@kuber1 ~]$ gunzip prometheus-2.4.3.linux-amd64.tar.gz
[mapr@kuber1 ~]$ tar -xvf prometheus-2.4.3.linux-amd64.tar
[mapr@kuber1 ~]$ mv prometheus-2.4.3.linux-amd64/ prometheus-2.4.3

[mapr@kuber1 ~]$ cp -pav jmx_prometheus_javaagent-0.3.1.jar /usr/local/prometheus/jmx-export/lib/

[mapr@kuber1 ~]$ cd  /usr/local/prometheus/jmx-export/conf
[mapr@kuber1 ~]$ curl -O https://raw.githubusercontent.com/prometheus/jmx_exporter/master/example_configs/kafka-0-8-2.yml
[mapr@kuber1 ~]$ vim /usr/local/prometheus/prometheus-2.4.3/prometheus.yml

 Add config to prometheus.yml

scrape_configs:
  - job_name: 'kafka'
    static_configs:
      - targets:
          - 10.10.72.108:7071
          - 10.10.72.109:7071
          - 10.10.72.114:7071

Start Kafka broker on all nodes

[mapr@kuber1 ~]$ /usr/local/kafka/kafka-2.12/bin/kafka-server-start.sh -daemon /usr/local/kafka/kafka-2.12/config/server.properties

Start Prometheus :

[mapr@kuber1 ~]$  /usr/local/prometheus/prometheus-2.4.3/prometheus --web.listen-address=10.10.72.108:9080

Start topic/producer/consumer :

A) Topic create
[mapr@kuber1 ~]$  /usr/local/kafka/kafka-2.12/bin/kafka-topics.sh --zookeeper kuber2.tuxhub.com:5181/kafka --partitions 3 --replication-factor 3  --create --topic t2

B) Producer launch

[mapr@kuber2 ~]$ /usr/local/kafka/kafka-2.12/bin/kafka-console-producer.sh --broker-list kuber1.tuxhub.com:9092,kuber2.tuxhub.com:9092,kuber4.tuxhub.com:9092 --topic t2

=> Send Msg1

C) Consumer Launch

[mapr@kuber4 ~]$ /usr/local/kafka/kafka-2.12/bin/kafka-console-consumer.sh --bootstrap-server  kuber1.tuxhub.com:9092,kuber2.tuxhub.com:9092,kuber4.tuxhub.com:9092 --topic t2  --from-beginning
=> Receive Msg1


Monitoring

A) prometheus

http://10.10.72.108:9080/

B) Grafana

Add a data source as prometheus

Import Kafka Dashboard

https://gist.githubusercontent.com/njadhav1/3f7fa2c7199f6952773d9a150eeebaf1/raw/473f45e916d8fc273fa6df70a38a6946f317ab4a/Kafka%2520Grafana%2520Dashboard

http://kuber1.tuxhub.com:3000/


C) Kafka Manager

Kafka Manager WebUI

[root@kuber1 docker]# cat kafka-manager.yml
version: '2'
services:
    kafka-manager:
        image: intropro/kafka-manager
        container_name: kafka-manager
        ports:
            - "9009:9000"
        environment:
            ZK_HOSTS: 10.10.72.109:5181


[root@kuber1 docker]# docker-compose -f kafka-manager.yml up -d

WebUI :

http://10.10.72.108:9009/clusters/

Dashboard :

http://kuber1.tuxhub.com:3000/


Wednesday, 3 October 2018

Kafka Manager WebUI

[root@kuber1 docker]# cat kafka-manager.yml 
version: '2'
services:
    kafka-manager:
        image: intropro/kafka-manager
        container_name: kafka-manager
        ports:
            - "9009:9000"
        environment:
            ZK_HOSTS: 10.10.72.109:5181

[root@kuber1 docker]#

[root@kuber1 docker]# docker-compose -f kafka-manager.yml up -d

WebUI : 

http://10.10.72.108:9009/clusters/Kafka/assignment


Tuesday, 25 September 2018

Zookeeper WebUI with Docker




[root@kuber1 docker]# cat zoonavi.yml 
version: '2.1'

services:
  web:
    image: elkozmon/zoonavigator-web:latest
    container_name: zoonavigator-web
    ports:
     - "8000:8000"
    environment:
      API_HOST: "api"
      API_PORT: 9000
    depends_on:
     - api
    restart: always
  api:
    image: elkozmon/zoonavigator-api:latest
    container_name: zoonavigator-api
    environment:
      SERVER_HTTP_PORT: 9000
    restart: always

[root@kuber1 docker]#

[root@kuber1 docker]# docker-compose -f zoonavi.yml  up -d

[root@kuber1 docker]# docker ps
CONTAINER ID        IMAGE                              COMMAND             CREATED             STATUS                   PORTS                            NAMES
3e89fd7f04df        elkozmon/zoonavigator-web:latest   "./run.sh"          6 minutes ago       Up 6 minutes (healthy)   80/tcp, 0.0.0.0:8000->8000/tcp   zoonavigator-web
466ce28f46dc        elkozmon/zoonavigator-api:latest   "./run.sh"          6 minutes ago       Up 6 minutes (healthy)   9000/tcp                         zoonavigator-api


From browser :

http://10.10.72.108:8000 


Friday, 21 October 2016

Hiveserver2 with Openldap on MapR




Step 1 ) Edit hive-site.xml

# vim /opt/mapr/hive/hive-1.2/conf/hive-site.xml

<!-- LDAP AUTHENTICATION -->

<property>
     <name>hive.server2.authentication</name>
     <value>LDAP</value>
</property>

<property>
     <name>hive.server2.authentication.ldap.url</name>
     <value>ldap://adp034</value>
</property>

<property>
     <name>hive.server2.authentication.ldap.baseDN</name>
     <value>ou=Users,dc=tuxhub,dc=com</value>
</property>


<!-- HIVE IMPERSONATION -->

<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.execute.setugi</name>
  <value>true</value>
</property>


2) Connect your OS to ldap

[root@satz-n01 ~]# authconfig-tui



3) Select "Use Ldap"

You may need to install nss-pam-ldapd ( yum install nss-pam-ldapd ) if you get an error while selecting LDAP


4) Execute the id command to check that user information is populated.

[root@satz-n01 ~]# id <LDAP USER>

5) Restart HS2 and Hive metastore

# maprcli node services -name hivemeta -action restart -nodes `hostname`
# maprcli node services -name hs2 -action restart -nodes `hostname`

6) Connect via beeline

[mapr@satz-n01 ~]$ /opt/mapr/hive/hive-1.2/bin/beeline

0: jdbc:hive2://localhost:10000/default (closed)> !connect jdbc:hive2://localhost:10000/default
Connecting to jdbc:hive2://localhost:10000/default
Enter username for jdbc:hive2://localhost:10000/default: uhg2
Enter password for jdbc:hive2://localhost:10000/default: ****
Connected to: Apache Hive (version 1.2.0-mapr-1609)
Driver: Hive JDBC (version 1.2.0-mapr-1609)
Transaction isolation: TRANSACTION_REPEATABLE_READ

Tuesday, 20 September 2016

Hiveserver2 With Kerberos



 Step 1 ) Add hive-site.xml 
 
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@YOUR-REALM.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>

Step 2 ) Add principal:

# kadmin.local

kadmin.local: add_principal -randkey hive/cdh084.tuxhub.com@TUXHUB.COM
kadmin.local: change_password hive/cdh084.tuxhub.com@TUXHUB.COM
kadmin.local: xst -k /etc/hive/conf/hive.keytab hive/cdh084.tuxhub.com@TUXHUB.COM

Step 3 ) Check permission :-

[root@cdh084 ~]# ll /etc/hive/conf/hive.keytab
-rw------- 1 hive hive 442 Sep 20 18:49 /etc/hive/conf/hive.keytab
[root@cdh084 ~]#
Step 4 )  Restart hiveserver2

[root@cdh084 ~]# /etc/init.d/hive-server2 restart

Step 5 ) Connect to beeline

beeline> !connect jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM
Enter username for jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM: <ENTER ANYTHING>
Enter password for jdbc:hive2://localhost:10000/default;principal=hive/cdh084.tuxhub.com@TUXHUB.COM: <ENTER ANYTHING>
Connected to: Apache Hive (version 1.1.0-cdh5.8.0)
Driver: Hive JDBC (version 1.1.0-cdh5.8.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/default>
