Wednesday, August 30, 2017

Building a virtual machine (CentOS 7.3) with Apache Solr installed, using Vagrant

You can build a virtual machine (CentOS 7.3) with Apache Solr installed by using the following Vagrantfile.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "solr"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "solr"
     vbox.cpus = 4
     vbox.memory = 4096 
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.75", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.75", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:20022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
yum -y install unzip

# install haveged
yum -y install epel-release
yum -y install haveged
systemctl enable haveged.service
systemctl start haveged.service

# install JDK
yum -y install java-1.8.0-openjdk

wget http://ftp.riken.jp/net/apache/lucene/solr/6.6.0/solr-6.6.0.tgz
tar xvfz solr-6.6.0.tgz
./solr-6.6.0/bin/install_solr_service.sh solr-6.6.0.tgz

sudo -u solr /opt/solr/bin/solr create -c mycore
sudo -u solr /opt/solr/bin/post -c mycore /opt/solr/example/exampledocs/*.xml

SHELL
end
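
Once provisioning finishes, you can verify the core and the indexed example documents from inside the VM (vagrant ssh). A minimal check, assuming Solr is listening on its default port 8983:

curl 'http://localhost:8983/solr/mycore/select?q=*:*&rows=1'
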
○ Related information
・For other articles about Apache Solr, see here.

Tuesday, August 29, 2017

Building a virtual machine (CentOS 7.3) with MongoDB installed, using Vagrant

To build a virtual machine with MongoDB installed, use the following Vagrantfile. Besides installing MongoDB, the provisioning script creates an administrator user admin (password: admin) and a regular user test (password: test), and loads test data into the products collection of the test database.


Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "mongodb"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "mongodb"
     vbox.cpus = 4
     vbox.memory = 4096
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.74", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.74", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:19022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
#yum -y install unzip

cp /vagrant/disable-transparent-hugepages /etc/init.d/
chmod 755 /etc/init.d/disable-transparent-hugepages
chkconfig --add disable-transparent-hugepages
/etc/init.d/disable-transparent-hugepages start

cp /vagrant/mongodb-org-3.4.repo /etc/yum.repos.d/mongodb-org-3.4.repo
sudo yum install -y mongodb-org

mkdir -p /srv/mongodb/
openssl rand -base64 741 > /srv/mongodb/mongodb-keyfile
chmod 600 /srv/mongodb/mongodb-keyfile
chown mongod:mongod /srv/mongodb/mongodb-keyfile
echo 'security:' >> /etc/mongod.conf
echo '  keyFile: /srv/mongodb/mongodb-keyfile' >> /etc/mongod.conf

systemctl enable mongod
systemctl start mongod

# wait until mongod starts listening.
while netstat -lnt | awk '$4 ~ /:27017$/ {exit 1}'; do sleep 10; done

mongo /vagrant/addusers.js
echo '  authorization: enabled' >> /etc/mongod.conf

systemctl restart mongod
# wait until mongod is listening again after the restart.
while netstat -lnt | awk '$4 ~ /:27017$/ {exit 1}'; do sleep 10; done

# create a test user.
mongo -u "admin" -p "admin" --authenticationDatabase "admin" /vagrant/createst.js

# create sample data
mongo -u "test" -p "test" --authenticationDatabase "test" /vagrant/sample.js

SHELL
end
mongodb-org-3.4.repo

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
disable-transparent-hugepages

#!/bin/bash
### BEGIN INIT INFO
# Provides:          disable-transparent-hugepages
# Required-Start:    $local_fs
# Required-Stop:
# X-Start-Before:    mongod mongodb-mms-automation-agent
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Disable Linux transparent huge pages
# Description:       Disable Linux transparent huge pages, to improve
#                    database performance.
### END INIT INFO

case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      exit 0
    fi

    echo 'never' > ${thp_path}/enabled
    echo 'never' > ${thp_path}/defrag

    re='^[0-1]+$'
    if [[ $(cat ${thp_path}/khugepaged/defrag) =~ $re ]]
    then
      # RHEL 7
      echo 0  > ${thp_path}/khugepaged/defrag
    else
      # RHEL 6
      echo 'no' > ${thp_path}/khugepaged/defrag
    fi

    unset re
    unset thp_path
    ;;
esac
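
After this script runs, you can confirm that transparent huge pages are disabled; the active value is shown in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
# expected output: always madvise [never]
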
addusers.js

var db = db.getSiblingDB('admin');
db.createUser({user:"admin",pwd:"admin",roles:[{role:"userAdminAnyDatabase",db:"admin"}]});
createtest.js

var db = db.getSiblingDB('test');
db.createUser({user:"test",pwd:"test",roles:[{role:"readWrite",db:"test"}]});
sample.js

var db = db.getSiblingDB('test');
db.products.insert( { item: "chair", qty: 15 } );
db.products.insert( { item: "table", qty: 3 } );
db.products.find();
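
After provisioning, the sample data can be checked from inside the VM (vagrant ssh) by querying as the test user, for example:

mongo -u test -p test --authenticationDatabase test test --eval 'db.products.find().forEach(printjson)'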

○ Related information
・Building a virtual machine (CentOS 7.4) with mongo-express installed, using Vagrant
http://serverarekore.blogspot.jp/2017/11/vagrantmongo-expresscentos74.html

・Building a virtual machine (Ubuntu 16.04) with MongoDB installed, using Vagrant
http://serverarekore.blogspot.jp/2017/11/vagrantmongodbubuntu1604.html

Monday, August 28, 2017

Creating a virtual machine (CentOS 7.3) with Apache JSPWiki installed

To create a virtual machine with Apache JSPWiki installed using Vagrant, use the following Vagrantfile. With this configuration, the wiki requires logging in as user admin with password admin in order to edit pages.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "centos7jspwiki"
  config.vm.network :public_network, ip:"192.168.1.73"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "centos7jspwiki"
  end
  config.vm.provision "shell", inline: <<-SHELL
vgshare=/vagrant

# download and install jdk8
jdkfile=jdk-8u144-linux-x64.rpm
if [ ! -e /vagrant/$jdkfile ]; then
  wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" -P $vgshare http://download.oracle.com/otn-pub/java/jdk/8u144-b01/090f390dda5b47b9b721c7dfaa008135/jdk-8u144-linux-x64.rpm
fi

yum -y remove java-1.6.0-openjdk
yum -y remove java-1.7.0-openjdk
yum -y remove java-1.8.0-openjdk
rpm -ivh $vgshare/$jdkfile

# download and install tomcat8
tomcat=apache-tomcat-8.5.20
tomcatfile=$tomcat.tar.gz
if [ ! -e $vgshare/$tomcatfile ]; then
  wget -P $vgshare http://ftp.meisei-u.ac.jp/mirror/apache/dist/tomcat/tomcat-8/v8.5.20/bin/$tomcatfile
fi
tar xvfz $vgshare/$tomcatfile -C /opt

# download jspwiki and install it.
mkdir /home/vagrant/JSPWiki
cd /home/vagrant/JSPWiki
wget http://ftp.riken.jp/net/apache/jspwiki/2.10.2/binaries/webapp/JSPWiki.war
jar xvf JSPWiki.war
echo 'jspwiki.baseURL=http://192.168.1.73:8080/JSPWiki/' > /home/vagrant/JSPWiki/WEB-INF/classes/jspwiki-custom.properties
cp /vagrant/jspwiki.policy /home/vagrant/JSPWiki/WEB-INF/
cp /vagrant/userdatabase.xml /home/vagrant/JSPWiki/WEB-INF/
cd ..
mv JSPWiki /opt/$tomcat/webapps

# setup tomcat as a service...
cp $vgshare/tomcat.service /etc/systemd/system/
cp $vgshare/tomcat /etc/sysconfig
systemctl enable tomcat.service
systemctl start tomcat.service

SHELL

end
tomcat.service

[Unit]
Description=Apache Tomcat Servlet Container
After=syslog.target network.target

[Service]
Type=forking
EnvironmentFile=/etc/sysconfig/tomcat
ExecStart=/opt/apache-tomcat-8.5.20/bin/startup.sh
ExecStop=/opt/apache-tomcat-8.5.20/bin/shutdown.sh
KillMode=none

[Install]
WantedBy=multi-user.target
tomcat

JAVA_HOME="/usr/java/default"
jspwiki.policy

//  Licensed to the Apache Software Foundation (ASF) under one
//  or more contributor license agreements.  See the NOTICE file
//  distributed with this work for additional information
//  regarding copyright ownership.  The ASF licenses this file
//  to you under the Apache License, Version 2.0 (the
//  "License"); you may not use this file except in compliance
//  with the License.  You may obtain a copy of the License at
//
//    http://www.apache.org/licenses/LICENSE-2.0
//
//  Unless required by applicable law or agreed to in writing,
//  software distributed under the License is distributed on an
//  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
//  KIND, either express or implied.  See the License for the
//  specific language governing permissions and limitations
//  under the License.

// $Id: jspwiki.policy,v 1.23 2007-07-06 10:36:36 jalkanen Exp $
//
// This file contains the local security policy for JSPWiki.
// It provides the permissions rules for the JSPWiki
// environment, and should be suitable for most purposes.
// JSPWiki will load this policy when the wiki webapp starts.
//
// As noted, this is the 'local' policy for this instance of JSPWiki.
// You can also use the standard Java 2 security policy mechanisms
// to create a consolidated 'global policy' (JVM-wide) that will be checked first,
// before this local policy. This is ideal for situations in which you are
// running multiple instances of JSPWiki in your web container.
// To set a global security policy for all running instances of JSPWiki,
// you will need to specify the location of the global policy by setting the
// JVM system property 'java.security.policy' in the command line script
// you use to start your web container. See the documentation
// pages at http://doc.jspwiki.org/2.4/wiki/InstallingJSPWiki. If you
// don't know what this means, don't worry about it.
//
// Also, if you are running JSPWiki with a security policy, you will probably
// want to copy the contents of the file jspwiki-container.policy into your
// container's policy. See that file for more details.
//
// ------ EVERYTHING THAT FOLLOWS IS THE 'LOCAL' POLICY FOR YOUR WIKI ------

// The first policy block grants privileges that all users need, regardless of
// the roles or groups they belong to. Everyone can register with the wiki and
// log in. Everyone can edit their profile after they authenticate.
// Everyone can also view all wiki pages unless otherwise protected by an ACL.
// If that seems too loose for your needs, you can restrict page-viewing
// privileges by moving the PagePermission 'view' grant to one of the other blocks.

grant principal org.apache.wiki.auth.authorize.Role "All" {
    permission org.apache.wiki.auth.permissions.PagePermission "*:*", "view";
#    permission org.apache.wiki.auth.permissions.WikiPermission "*", "editPreferences";
#    permission org.apache.wiki.auth.permissions.WikiPermission "*", "editProfile";
    permission org.apache.wiki.auth.permissions.WikiPermission "*", "login";
};


// The second policy block is extremely loose, and unsuited for public-facing wikis.
// Anonymous users are allowed to create, edit and comment on all pages.
//
// Note: For Internet-facing wikis, you are strongly advised to remove the
// lines containing the "modify" and "createPages" permissions; this will make
// the wiki read-only for anonymous users.

// Note that "modify" implies *both* "edit" and "upload", so if you wish to
// allow editing only, then replace "modify" with "edit".

//grant principal org.apache.wiki.auth.authorize.Role "Anonymous" {
//    permission org.apache.wiki.auth.permissions.PagePermission "*:*", "modify";
//    permission org.apache.wiki.auth.permissions.WikiPermission "*", "createPages";
//};


// This next policy block is also pretty loose. It allows users who claim to
// be someone (via their cookie) to create, edit and comment on all pages,
// as well as upload files.
// They can also view the membership list of groups.

//grant principal org.apache.wiki.auth.authorize.Role "Asserted" {
//    permission org.apache.wiki.auth.permissions.PagePermission "*:*", "modify";
//    permission org.apache.wiki.auth.permissions.WikiPermission "*", "createPages";
//    permission org.apache.wiki.auth.permissions.GroupPermission "*:*", "view";
//};


// Authenticated users can do most things: view, create, edit and
// comment on all pages; upload files to existing ones; create and edit
// wiki groups; and rename existing pages. Authenticated users can also
// edit groups they are members of.

grant principal org.apache.wiki.auth.authorize.Role "Authenticated" {
    permission org.apache.wiki.auth.permissions.PagePermission "*:*", "modify,rename";
    permission org.apache.wiki.auth.permissions.GroupPermission "*:*", "view";
    permission org.apache.wiki.auth.permissions.GroupPermission "*:<groupmember>", "edit";
    permission org.apache.wiki.auth.permissions.WikiPermission "*", "createPages,createGroups";
};


// Administrators (principals or roles possessing AllPermission)
// are allowed to delete any page, and can edit, rename and delete
// groups. You should match the permission target (here, 'JSPWiki')
// with the value of the 'jspwiki.applicationName' property in
// jspwiki.properties. Two administrative groups are set up below:
// the wiki group "Admin" (stored by default in wiki page GroupAdmin)
// and the container role "Admin" (managed by the web container).

grant principal org.apache.wiki.auth.GroupPrincipal "Admin" {
    permission org.apache.wiki.auth.permissions.AllPermission "*";
};
grant principal org.apache.wiki.auth.authorize.Role "Admin" {
    permission org.apache.wiki.auth.permissions.AllPermission "*";
};
userdatabase.xml

<?xml version="1.0" encoding="UTF-8"?>
<users>
<!-- use following command to generate sha1 hash : echo -n 'password' | sha1sum -->
    <user uid="b70c1100-7093-4290-aee9-eb3bac4954cc" loginName="admin" wikiName="Administrator" fullName="Administrator" email="test@localdomain" password="{SSHA}PQ2cmVbYoBW1wyzFxikvlJiHVoutbQdGqQmYig==" created="01-jan-2017 01:01:01" lastModified="2017.08.27 at 13:52:04:663 UTC" lockExpiry="" >
    </user>
</users>
To add a user, add another user element to userdatabase.xml. For the password, generate a hash value by running echo -n 'password' | sha1sum and use that.
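
For example, to generate the hash for the password 'secret':

echo -n 'secret' | sha1sum
# e5e9fa1ba31ecd1ae84f75caaa474f3a663f05f4  -

Note that the sample entry above stores a salted {SSHA} hash; an unsalted SHA-1 digest like the one generated here would typically be stored with the {SHA} prefix instead.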

Saturday, August 26, 2017

Creating a virtual machine (Ubuntu 16.04) with re:dash installed, using Vagrant

Use the following Vagrantfile to create a virtual machine with re:dash installed.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/ubuntu-16.04"
  config.vm.hostname = "redash"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "redash"
     vbox.cpus = 4
     vbox.memory = 4096
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.71", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.71", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:11022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL

wget https://raw.githubusercontent.com/getredash/redash/master/setup/ubuntu/bootstrap.sh
chmod +x bootstrap.sh
./bootstrap.sh

SHELL
end
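
bootstrap.sh installs the whole stack, including the web server, so after provisioning the re:dash UI should respond on the VM's addresses. A quick check from the host, assuming the bootstrap's default HTTP port 80:

curl -I http://192.168.55.71/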

Building a Kerberized single-node Hive cluster with Vagrant

A single-node Hive environment that uses Kerberos authentication can be built with the following Vagrantfile. Along with installing Kerberos and Hive, it also creates a test user (test) and a sample table.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "krbhive.vm.internal"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "krbhive"
     vbox.cpus = 4
     vbox.memory = 13312 
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.70", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.70", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:10022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL

#echo "192.168.55.70  krbhive.vm.internal krbhive" >> /etc/hosts
sed -i -e 's/127.0.0.1\\t/192.168.55.70\\t/' /etc/hosts


# install haveged
yum -y install epel-release
yum -y install haveged
systemctl enable haveged.service
systemctl start haveged.service

# install Kerberos
yum -y install krb5-server krb5-workstation pam_krb5

# configure chrony
echo 'allow 192.168.1/24' >> /etc/chrony.conf
echo 'allow 192.168.55/24' >> /etc/chrony.conf

systemctl enable chronyd.service
systemctl start chronyd.service

# configure kdc.conf / krb5.conf
sed -i -e 's/EXAMPLE.COM/VM.INTERNAL/g' /var/kerberos/krb5kdc/kdc.conf

kdb5_util create -r VM.INTERNAL -s -P admin

sed -i -e 's/# default_realm = EXAMPLE.COM/default_realm = VM.INTERNAL/' /etc/krb5.conf
sed -i -e 's/ default_ccache_name/#default_ccache_name/' /etc/krb5.conf
sed -i -e 's/\\[realms\\]/#[realms]/' /etc/krb5.conf
sed -i -e 's/\\[domain_realm\\]/#[domain_realm]/' /etc/krb5.conf

echo '' >> /etc/krb5.conf
echo '[realms]' >> /etc/krb5.conf
echo 'VM.INTERNAL = {' >> /etc/krb5.conf
echo '  kdc = krbhive.vm.internal' >> /etc/krb5.conf
echo '  admin_server = krbhive.vm.internal' >> /etc/krb5.conf
echo '}' >> /etc/krb5.conf
echo '' >> /etc/krb5.conf
echo '[domain_realm]' >> /etc/krb5.conf
echo '.vm.internal = VM.INTERNAL' >> /etc/krb5.conf
echo 'vm.internal = VM.INTERNAL' >> /etc/krb5.conf

sed -i -e 's/^/#/' /var/kerberos/krb5kdc/kadm5.acl
echo '*/admin@VM.INTERNAL *' >> /var/kerberos/krb5kdc/kadm5.acl

kadmin.local addprinc -pw "admin" root/admin

systemctl enable krb5kdc
systemctl start krb5kdc
systemctl enable kadmin
systemctl start kadmin

# add the host principal
kadmin.local addprinc -randkey host/krbhive.vm.internal
kadmin.local ktadd host/krbhive.vm.internal

# install mysql
sudo yum -y remove mariadb-libs
yum -y localinstall http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm
yum -y install mysql mysql-devel mysql-server mysql-utilities
sudo systemctl enable mysqld.service
sudo systemctl start mysqld.service

# change password and create users and databases.
chkconfig mysqld on
service mysqld start
export MYSQL_ROOTPWD='Root123#'
export MYSQL_PWD=`cat /var/log/mysqld.log | awk '/temporary password/ {print $NF}'`
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
export MYSQL_ROOTPWD='root'
mysql -uroot --connect-expired-password -e "UNINSTALL PLUGIN validate_password;"
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
mysql -uroot --connect-expired-password -e "CREATE DATABASE ambari DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER ambari@localhost IDENTIFIED BY 'bigdata';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'%' IDENTIFIED BY 'bigdata';"

mysql -uroot --connect-expired-password -e "CREATE DATABASE hive DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER hive@localhost IDENTIFIED BY 'hive';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';"

sudo systemctl stop mysqld.service
sudo cp /vagrant/my.cnf /etc
ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock
sudo systemctl start mysqld.service

# install JDBC driver
yum -y install mysql-connector-java

# install Ambari
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.1.0/ambari.repo
yum -y install ambari-server ambari-agent


# workaround of AMBARI-20532
echo '' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database=mysql' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database_name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.password=/etc/ambari-server/conf/password.dat' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.driver=/usr/share/java/mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'custom.jdbc.name=mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.hostname=localhost' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.port=3306' >> /etc/ambari-server/conf/ambari.properties
ambari-server setup -s --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar -v
ambari-server setup --silent

mysql -u ambari -pbigdata ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

ambari-server start
ambari-agent start

# submit the cluster configuration
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/blueprints/krbhive -d @/vagrant/cluster_configuration.json

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/clusters/krbhive -d @/vagrant/hostmapping.json
sleep 60

# wait until the cluster is built
Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/krbhive/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
while [[ `echo $Progress | grep -v 100` ]]; do
  Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/krbhive/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
  echo " Progress: $Progress%"
  sleep 30
done

# create a directory for the admin user
sudo -u hdfs /usr/bin/hdfs dfs -mkdir /user/admin
sudo -u hdfs /usr/bin/hdfs dfs -chown admin /user/admin

# create a test user and a sample table
useradd test
cd ~test
kadmin -p root/admin -w admin addprinc -pw test test
kadmin.local ktadd  -norandkey test
kadmin.local xst -norandkey -k test.keytab test@VM.INTERNAL
chown test:test test.keytab
sudo -u hdfs /usr/bin/hdfs dfs -mkdir /user/test
sudo -u hdfs /usr/bin/hdfs dfs -chown test /user/test

cp /vagrant/sample.sql /home/test
chown test:test /home/test/sample.sql
cp /vagrant/sample.csv /tmp
chmod 777 /tmp/sample.csv

sudo -u test kinit -k -t /home/test/test.keytab test
sudo -u test beeline -u 'jdbc:hive2://krbhive.vm.internal:10000/default;principal=hive/krbhive.vm.internal@VM.INTERNAL' -f /home/test/sample.sql

SHELL
end
cluster_configuration.json

{
  "configurations" : [
    {
      "kerberos-env": {
        "properties_attributes" : { },
        "properties" : {
          "realm" : "VM.INTERNAL",
          "kdc_type" : "mit-kdc",
          "kdc_host" : "krbhive.vm.internal",
          "admin_server_host" : "krbhive.vm.internal"
        }
      }
    },
    {
      "krb5-conf": {
        "properties_attributes" : { },
        "properties" : {
          "domains" : "vm.internal",
          "manage_krb5_conf" : "false"
        }
      }
    },
    {
      "hive-site": {
        "hive.support.concurrency": "true",
        "hive.txn.manager": "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager",
        "hive.compactor.initiator.on": "true",
        "hive.compactor.worker.threads": "5",
        "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
        "javax.jdo.option.ConnectionPassword": "hive",
        "javax.jdo.option.ConnectionURL": "jdbc:mysql://localhost/hive",
        "javax.jdo.option.ConnectionUserName": "hive"
      }
    },
    {
      "hive-env": {
        "hive_ambari_database": "MySQL",
        "hive_database": "Existing MySQL Database",
        "hive_database_type": "mysql",
        "hive_database_name": "hive"
      }
    },
    {
      "core-site": {
        "properties" : {
          "hadoop.proxyuser.root.groups" : "*",
          "hadoop.proxyuser.root.hosts" : "*",
          "hadoop.proxyuser.hive.groups" : "*",
          "hadoop.proxyuser.hive.hosts" : "*"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "HIVE_SERVER"
        },
        {
          "name" : "HIVE_METASTORE"
        },
        {
          "name" : "METRICS_COLLECTOR"
        },
        {
          "name" : "WEBHCAT_SERVER"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "settings" : [{
     "recovery_settings" : [{
       "recovery_enabled" : "true"
    }]
  }],
  "Blueprints" : {
    "blueprint_name" : "krbhive",
    "stack_name" : "HDP",
    "stack_version" : "2.6",
    "security" : {
      "type" : "KERBEROS"
    }
  }
}
hostmapping.json

{
  "blueprint" : "krbhive",
  "default_password" : "admin",
  "credentials" : [
    {
      "alias" : "kdc.admin.credential",
      "principal" : "root/admin@VM.INTERNAL",
      "key" : "admin",
      "type" : "TEMPORARY"
    }
  ],
  "security" : {
    "type" : "KERBEROS"
  },
  "provision_action" : "INSTALL_AND_START",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "krbhive.vm.internal"
        }
      ]
    }
  ]
}
my.cnf

[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock
default-character-set=utf8

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bind-address = 0.0.0.0
port            = 3306
key_buffer_size = 256M
max_allowed_packet = 16M
table_open_cache = 16
innodb_buffer_pool_size = 512M
innodb_log_file_size = 32M
sort_buffer_size = 8M
read_buffer_size = 8M
read_rnd_buffer_size = 8M
join_buffer_size = 8M
thread_stack = 4M
character-set-server=utf8
lower_case_table_names = 1
innodb_lock_wait_timeout=120
skip-innodb-doublewrite

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
sample.sql

CREATE EXTERNAL TABLE sample (
  store_id INT,
  sales INT
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
   "separatorChar" = ",",
   "quoteChar"     = "\"",
   "escapeChar"    = "\\"
) 
stored as textfile
LOCATION '/user/test'
tblproperties ("skip.header.line.count"="1");

LOAD DATA LOCAL INPATH '/tmp/sample.csv' OVERWRITE INTO TABLE sample;

select * from sample;
sample.csv

store_id,sales
100,15000000
200,20000000
300,18000000
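
After the build completes, the sample table can be queried again as the test user; the following repeats the Kerberos login from the provisioning script and runs an ad-hoc query from inside the VM:

sudo -u test kinit -k -t /home/test/test.keytab test
sudo -u test beeline -u 'jdbc:hive2://krbhive.vm.internal:10000/default;principal=hive/krbhive.vm.internal@VM.INTERNAL' -e 'select * from sample;'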

○ Related information
Building a Kerberos server with Vagrant
Creating a single-node Hive cluster with Vagrant and an Ambari blueprint
・For other articles about Ambari, see here.

Saturday, August 19, 2017

Building a virtual machine (CentOS 7.3) with H2 Database installed, using Vagrant

You can build a virtual machine (CentOS 7.3) with H2 Database installed by using the following Vagrantfile. Because the web console is registered as a service, it becomes available automatically when the virtual machine starts.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "h2db"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "h2db"
     vbox.cpus = 4
     vbox.memory = 4096
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.86", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.86", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:19022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
yum -y install unzip

# install JDK
yum -y install java-1.8.0-openjdk

# download h2 database
export h2db=h2-2017-06-10.zip
wget http://www.h2database.com/$h2db

# install h2 database
unzip $h2db
echo 'webAllowOthers = true' > ~root/.h2.server.properties
mv h2 /opt

# setup console as service
cp /vagrant/h2 /etc/sysconfig
cp /vagrant/h2.service /etc/systemd/system
systemctl enable h2.service
systemctl start h2.service

echo 'access URL: http://192.168.55.86:8082/'
echo 'username: sa    default password: sa'
echo 'Driver class: org.h2.Driver'
echo 'JDBC URL: jdbc:h2:tcp://192.168.55.86/~/test'

SHELL
end
h2.service

[Unit]
Description=H2 database
After=syslog.target network.target

[Service]
Type=simple
EnvironmentFile=/etc/sysconfig/h2
WorkingDirectory=/opt/h2
ExecStart=/bin/java -cp "/opt/h2/bin/h2-1.4.196.jar:$H2CUSTOMJARS" org.h2.tools.Console
ExecStop=/bin/kill ${MAINPID}

[Install]
WantedBy=multi-user.target
h2

H2CUSTOMJARS=
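
Besides the web console, you can connect from the command line inside the VM with the Shell tool bundled in the H2 jar, using the JDBC URL and credentials printed above:

java -cp /opt/h2/bin/h2-1.4.196.jar org.h2.tools.Shell -url 'jdbc:h2:tcp://192.168.55.86/~/test' -user sa -password sa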

○ Related information
・For other articles about H2 Database, see here.

Building a Kerberos server with Vagrant

You can build a Kerberos server by using the following Vagrantfile.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "krb5server.vm.internal"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "krb5server"
     vbox.cpus = 4
     vbox.memory = 8192
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.87", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.87", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:18022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL

# install haveged
yum -y install epel-release
yum -y install haveged
systemctl enable haveged.service
systemctl start haveged.service

# install Kerberos
yum -y install krb5-server krb5-workstation pam_krb5

# configure chrony
echo 'allow 192.168.1/24' >> /etc/chrony.conf
echo 'allow 192.168.55/24' >> /etc/chrony.conf

systemctl enable chronyd.service
systemctl start chronyd.service

# configure kdc.conf / krb5.conf
sed -i -e 's/EXAMPLE.COM/VM.INTERNAL/g' /var/kerberos/krb5kdc/kdc.conf

kdb5_util create -r VM.INTERNAL -s -P admin

sed -i -e 's/# default_realm = EXAMPLE.COM/default_realm = VM.INTERNAL/' /etc/krb5.conf
sed -i -e 's/ default_ccache_name/#default_ccache_name/' /etc/krb5.conf
sed -i -e 's/\\[realms\\]/#[realms]/' /etc/krb5.conf
sed -i -e 's/\\[domain_realm\\]/#[domain_realm]/' /etc/krb5.conf

echo '' >> /etc/krb5.conf
echo '[realms]' >> /etc/krb5.conf
echo 'VM.INTERNAL = {' >> /etc/krb5.conf
echo '  kdc = krb5server.vm.internal' >> /etc/krb5.conf
echo '  admin_server = krb5server.vm.internal' >> /etc/krb5.conf
echo '}' >> /etc/krb5.conf
echo '' >> /etc/krb5.conf
echo '[domain_realm]' >> /etc/krb5.conf
echo '.vm.internal = VM.INTERNAL' >> /etc/krb5.conf
echo 'vm.internal = VM.INTERNAL' >> /etc/krb5.conf

sed -i -e 's/^/#/' /var/kerberos/krb5kdc/kadm5.acl
echo '*/admin@VM.INTERNAL *' >> /var/kerberos/krb5kdc/kadm5.acl

kadmin.local addprinc -pw "admin" root/admin

systemctl enable krb5kdc
systemctl start krb5kdc
systemctl enable kadmin
systemctl start kadmin

# add the host principal
kadmin.local addprinc -randkey host/krb5server.vm.internal
kadmin.local ktadd host/krb5server.vm.internal

# add a user
useradd test
kadmin -p root/admin -w admin addprinc -pw test test
#kadmin.local ktadd  -norandkey -k /etc/krb5.keytab test
kadmin.local ktadd  -norandkey test
kadmin.local xst -norandkey -k test.keytab test@VM.INTERNAL


# configure sshd/ssh
echo 'KerberosAuthentication yes' >> /etc/ssh/sshd_config
sed -i -e 's/GSSAPIAuthentication no/GSSAPIAuthentication yes/' /etc/ssh/sshd_config
sed -i -e 's/GSSAPICleanupCredentials no/GSSAPICleanupCredentials yes/' /etc/ssh/sshd_config

echo 'Host *.vm.internal' >> /etc/ssh/ssh_config
echo '  GSSAPIAuthentication yes' >> /etc/ssh/ssh_config
echo '  GSSAPIDelegateCredentials yes' >> /etc/ssh/ssh_config
authconfig --enablekrb5 --update
systemctl restart sshd


SHELL
end
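
To confirm the KDC is working, obtain and inspect a ticket for the test principal from inside the VM:

echo test | kinit test
klist
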
○ Related information
Building a Kerberized single-node Hive cluster with Vagrant

Sunday, August 13, 2017

Creating a virtual machine (CentOS 7.3) with Apache NiFi installed, using Vagrant

To create a virtual machine (CentOS 7.3) with Apache NiFi installed using Vagrant, place the following Vagrantfile and nifi.service in the same folder and run the vagrant up command. Once the virtual machine build completes, the Apache NiFi UI is available at http://192.168.1.88:8080/nifi/.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "nifi"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "nifi"
     vbox.cpus = 4
     vbox.memory = 8192 
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.88", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.88", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:18022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
# open port 8080 (reload so the permanent rule takes effect without a reboot)
firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload

# maximum file handles & maximum forked processes
echo '*  hard  nofile  50000' >> /etc/security/limits.conf
echo '*  soft  nofile  50000' >> /etc/security/limits.conf
echo '*  hard  nproc  10000' >> /etc/security/limits.conf
echo '*  soft  nproc  10000' >> /etc/security/limits.conf

echo '*  soft  nproc  10000' >> /etc/security/limits.d/90-nproc.conf

# install Java
yum -y install java-1.8.0-openjdk

# download Apache NiFi
wget http://ftp.riken.jp/net/apache/nifi/1.3.0/nifi-1.3.0-bin.tar.gz
tar xvfz nifi-1.3.0-bin.tar.gz
mv nifi-1.3.0 /opt

# register as a service
cp /vagrant/nifi.service /etc/systemd/system
systemctl enable nifi.service
systemctl start nifi.service

echo 'access url -> http://192.168.55.88:8080/nifi/' 

SHELL
end
nifi.service

[Unit]
Description=Apache Nifi
After=syslog.target network.target

[Service]
Type=forking
ExecStart=/opt/nifi-1.3.0/bin/nifi.sh start
ExecStop=/opt/nifi-1.3.0/bin/nifi.sh stop
KillMode=none

[Install]
WantedBy=multi-user.target
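
Once the service has started, the REST API can be used to confirm NiFi is up (the UI itself may take a minute or two to finish initializing):

curl http://192.168.55.88:8080/nifi-api/system-diagnostics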

○ Related information
・For other articles about Apache NiFi, see here.

Sunday, August 6, 2017

Building a single-node Spark2 cluster with Vagrant and an Ambari blueprint

The following Vagrantfile builds a single-node cluster with MySQL, Ambari Server, Spark2, and more installed.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "min-spark"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "min-spark"
     vbox.cpus = 4
     vbox.memory = 12288 
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.20", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.20", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:10022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
# disable firewalld
systemctl stop firewalld
systemctl disable firewalld

# install MySQL
sudo yum -y remove mariadb-libs
yum -y localinstall http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm
yum -y install mysql mysql-devel mysql-server mysql-utilities
sudo systemctl enable mysqld.service
sudo systemctl start mysqld.service

# change the root password, create users, and create databases
chkconfig mysqld on
service mysqld start
export MYSQL_ROOTPWD='Root123#'
export MYSQL_PWD=`cat /var/log/mysqld.log | awk '/temporary password/ {print $NF}'`
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
export MYSQL_ROOTPWD='root'
mysql -uroot --connect-expired-password -e "UNINSTALL PLUGIN validate_password;"
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
mysql -uroot --connect-expired-password -e "CREATE DATABASE ambari DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER ambari@localhost IDENTIFIED BY 'bigdata';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'%' IDENTIFIED BY 'bigdata';"

mysql -uroot --connect-expired-password -e "CREATE DATABASE hive DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER hive@localhost IDENTIFIED BY 'hive';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';"

sudo systemctl stop mysqld.service
sudo cp /vagrant/my.cnf /etc
ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock
sudo systemctl start mysqld.service

# install the JDBC driver
yum -y install mysql-connector-java

# install Ambari
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.1.0/ambari.repo
yum -y install ambari-server ambari-agent


# AMBARI-20532
echo '' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database=mysql' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database_name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.password=/etc/ambari-server/conf/password.dat' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.driver=/usr/share/java/mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'custom.jdbc.name=mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.hostname=localhost' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.port=3306' >> /etc/ambari-server/conf/ambari.properties
ambari-server setup -s --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar -v
ambari-server setup --silent

mysql -u ambari -pbigdata ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

ambari-server start
ambari-agent start

# build a single-node cluster with the blueprint
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/blueprints/min-spark -d @/vagrant/cluster_configuration.json

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/clusters/min-spark -d @/vagrant/hostmapping.json
sleep 30

# wait until the build completes
Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/min-spark/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
while [[ `echo $Progress | grep -v 100` ]]; do
  Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/min-spark/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
  echo " Progress: $Progress%"
  sleep 30
done

# create a directory for the admin user
sudo -u hdfs /bin/hdfs dfs -mkdir /user/admin
sudo -u hdfs /bin/hdfs dfs -chown admin /user/admin

# add a user
useradd test
echo test | passwd test --stdin

sudo -u hdfs /bin/hdfs dfs -mkdir /user/test
sudo -u hdfs /bin/hdfs dfs -chown test /user/test

# You can connect to the Spark thrift server as follows:
# beeline
# !connect jdbc:hive2://localhost:10016 test


SHELL
end
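
Once the cluster is up, the beeline connection shown in the comments above can also be run non-interactively, for example:

beeline -u 'jdbc:hive2://localhost:10016/default' -n test -e 'show databases;'
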
cluster_configuration.json

{
  "configurations" : [
    {
      "hive-site": {
        "hive.support.concurrency": "true",
        "hive.txn.manager": "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager",
        "hive.compactor.initiator.on": "true",
        "hive.compactor.worker.threads": "5",
        "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
        "javax.jdo.option.ConnectionPassword": "hive",
        "javax.jdo.option.ConnectionURL": "jdbc:mysql://localhost/hive",
        "javax.jdo.option.ConnectionUserName": "hive"
      }
    },
    {
      "hive-env": {
        "hive_ambari_database": "MySQL",
        "hive_database": "Existing MySQL Database",
        "hive_database_type": "mysql",
        "hive_database_name": "hive"
      }
    },
    {
      "core-site": {
        "properties" : {
          "hadoop.proxyuser.root.groups" : "*",
          "hadoop.proxyuser.root.hosts" : "*"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "HIVE_SERVER"
        },
        {
          "name" : "HIVE_METASTORE"
        },
        {
          "name" : "METRICS_COLLECTOR"
        },
        {
          "name" : "WEBHCAT_SERVER"
        },
        {
          "name" : "PIG"
        },
        {
          "name" : "SLIDER"
        },
        {
          "name" : "SPARK2_JOBHISTORYSERVER"
        },
        {
          "name" : "SPARK2_CLIENT"
        },
        {
          "name": "SPARK2_THRIFTSERVER"
        },
        {
          "name": "LIVY2_SERVER"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "settings" : [{
     "recovery_settings" : [{
       "recovery_enabled" : "true"
    }]
  }],
  "Blueprints" : {
    "blueprint_name" : "min-spark",
    "stack_name" : "HDP",
    "stack_version" : "2.6"
  }
}
hostmapping.json

{
  "blueprint" : "min-spark",
  "default_password" : "admin",
  "provision_action" : "INSTALL_AND_START",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "min-spark"
        }
      ]
    }
  ]
}
my.cnf

[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock
default-character-set=utf8

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bind-address = 0.0.0.0
port            = 3306
key_buffer_size = 256M
max_allowed_packet = 16M
table_open_cache = 16
innodb_buffer_pool_size = 512M
innodb_log_file_size = 32M
sort_buffer_size = 8M
read_buffer_size = 8M
read_rnd_buffer_size = 8M
join_buffer_size = 8M
thread_stack = 4M
character-set-server=utf8
lower_case_table_names = 1
innodb_lock_wait_timeout=120
skip-innodb-doublewrite

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

○ Related information
・For other articles about Ambari, see here.

Thursday, August 3, 2017

Creating a single-node Hive cluster with Vagrant and an Ambari blueprint

The following Vagrantfile builds a single-node cluster with MySQL, Ambari Server, Hive, and more installed.

Vagrantfile

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "bento/centos-7.3"
  config.vm.hostname = "min-hive"
  config.vm.provider :virtualbox do |vbox|
     vbox.name = "min-hive"
     vbox.cpus = 4
     vbox.memory = 12288 
     vbox.customize ["modifyvm", :id, "--nicpromisc2","allow-all"]
  end
  # private network
  config.vm.network "private_network", ip: "192.168.55.20", :netmask => "255.255.255.0"
  # bridge network
  config.vm.network "public_network", ip: "192.168.1.20", :netmask => "255.255.255.0"
  config.vm.network "forwarded_port", guest:22, host:10022, id:"ssh"
  config.vm.provision "shell", inline: <<-SHELL
# disable firewalld
systemctl stop firewalld
systemctl disable firewalld

# install MySQL
sudo yum -y remove mariadb-libs
yum -y localinstall http://dev.mysql.com/get/mysql57-community-release-el7-7.noarch.rpm
yum -y install mysql mysql-devel mysql-server mysql-utilities
sudo systemctl enable mysqld.service
sudo systemctl start mysqld.service

# change the root password, create users, and create databases
chkconfig mysqld on
service mysqld start
export MYSQL_ROOTPWD='Root123#'
export MYSQL_PWD=`cat /var/log/mysqld.log | awk '/temporary password/ {print $NF}'`
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
export MYSQL_ROOTPWD='root'
mysql -uroot --connect-expired-password -e "UNINSTALL PLUGIN validate_password;"
mysql -uroot --connect-expired-password -e "SET PASSWORD = PASSWORD('$MYSQL_ROOTPWD');"
export MYSQL_PWD=$MYSQL_ROOTPWD
mysql -uroot --connect-expired-password -e "CREATE DATABASE ambari DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER ambari@localhost IDENTIFIED BY 'bigdata';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'%' IDENTIFIED BY 'bigdata';"

mysql -uroot --connect-expired-password -e "CREATE DATABASE hive DEFAULT CHARACTER SET utf8;"
mysql -uroot --connect-expired-password -e "CREATE USER hive@localhost IDENTIFIED BY 'hive';"
mysql -uroot --connect-expired-password -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';"

sudo systemctl stop mysqld.service
sudo cp /vagrant/my.cnf /etc
ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock
sudo systemctl start mysqld.service

# install the JDBC driver
yum -y install mysql-connector-java

# install Ambari
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.1.0/ambari.repo
yum -y install ambari-server ambari-agent


# AMBARI-20532
echo '' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database=mysql' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.database_name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.name=ambari' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.user.password=/etc/ambari-server/conf/password.dat' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.driver=/usr/share/java/mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'custom.jdbc.name=mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.hostname=localhost' >> /etc/ambari-server/conf/ambari.properties
echo 'server.jdbc.port=3306' >> /etc/ambari-server/conf/ambari.properties
ambari-server setup -s --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar -v
ambari-server setup --silent

mysql -u ambari -pbigdata ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

ambari-server start
ambari-agent start

# build a single-node cluster with the blueprint
curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/blueprints/min-hive -d @/vagrant/cluster_configuration.json

curl -H "X-Requested-By: ambari" -X POST -u admin:admin http://localhost:8080/api/v1/clusters/min-hive -d @/vagrant/hostmapping.json
sleep 30

# wait until the build completes
Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/min-hive/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
while [[ `echo $Progress | grep -v 100` ]]; do
  Progress=`curl -s --user admin:admin -X GET http://localhost:8080/api/v1/clusters/min-hive/requests/1 | grep progress_percent | awk '{print $3}' | cut -d . -f 1`
  echo " Progress: $Progress%"
  sleep 30
done

# create a directory for the admin user
sudo -u hdfs /usr/bin/hdfs dfs -mkdir /user/admin
sudo -u hdfs /usr/bin/hdfs dfs -chown admin /user/admin

# add a user
useradd test
echo test | passwd test --stdin

sudo -u hdfs /usr/bin/hdfs dfs -mkdir /user/test
sudo -u hdfs /usr/bin/hdfs dfs -chown test /user/test

SHELL
end
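
After provisioning completes, HiveServer2 can be verified from inside the VM by connecting as the test user created above:

beeline -u 'jdbc:hive2://localhost:10000/default' -n test -e 'show databases;'
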
cluster_configuration.json

{
  "configurations" : [
    {
      "hive-site": {
        "hive.support.concurrency": "true",
        "hive.txn.manager": "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager",
        "hive.compactor.initiator.on": "true",
        "hive.compactor.worker.threads": "5",
        "javax.jdo.option.ConnectionDriverName": "com.mysql.jdbc.Driver",
        "javax.jdo.option.ConnectionPassword": "hive",
        "javax.jdo.option.ConnectionURL": "jdbc:mysql://localhost/hive",
        "javax.jdo.option.ConnectionUserName": "hive"
      }
    },
    {
      "hive-env": {
        "hive_ambari_database": "MySQL",
        "hive_database": "Existing MySQL Database",
        "hive_database_type": "mysql",
        "hive_database_name": "hive"
      }
    },
    {
      "core-site": {
        "properties" : {
          "hadoop.proxyuser.root.groups" : "*",
          "hadoop.proxyuser.root.hosts" : "*"
        }
      }
    }
  ],
  "host_groups" : [
    {
      "name" : "host_group_1",
      "components" : [
        {
          "name" : "NAMENODE"
        },
        {
          "name" : "SECONDARY_NAMENODE"
        },
        {
          "name" : "DATANODE"
        },
        {
          "name" : "HDFS_CLIENT"
        },
        {
          "name" : "RESOURCEMANAGER"
        },
        {
          "name" : "NODEMANAGER"
        },
        {
          "name" : "YARN_CLIENT"
        },
        {
          "name" : "HISTORYSERVER"
        },
        {
          "name" : "APP_TIMELINE_SERVER"
        },
        {
          "name" : "ZOOKEEPER_SERVER"
        },
        {
          "name" : "ZOOKEEPER_CLIENT"
        },
        {
          "name" : "METRICS_MONITOR"
        },
        {
          "name" : "TEZ_CLIENT"
        },
        {
          "name" : "HIVE_SERVER"
        },
        {
          "name" : "HIVE_METASTORE"
        },
        {
          "name" : "METRICS_COLLECTOR"
        },
        {
          "name" : "WEBHCAT_SERVER"
        }
      ],
      "cardinality" : "1"
    }
  ],
  "settings" : [{
     "recovery_settings" : [{
       "recovery_enabled" : "true"
    }]
  }],
  "Blueprints" : {
    "blueprint_name" : "min-hive",
    "stack_name" : "HDP",
    "stack_version" : "2.6"
  }
}
hostmapping.json

{
  "blueprint" : "min-hive",
  "default_password" : "admin",
  "provision_action" : "INSTALL_AND_START",
  "host_groups" :[
    {
      "name" : "host_group_1",
      "hosts" : [
        {
          "fqdn" : "min-hive"
        }
      ]
    }
  ]
}
my.cnf

[client]
port            = 3306
socket          = /var/lib/mysql/mysql.sock
default-character-set=utf8

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
bind-address = 0.0.0.0
port            = 3306
key_buffer_size = 256M
max_allowed_packet = 16M
table_open_cache = 16
innodb_buffer_pool_size = 512M
innodb_log_file_size = 32M
sort_buffer_size = 8M
read_buffer_size = 8M
read_rnd_buffer_size = 8M
join_buffer_size = 8M
thread_stack = 4M
character-set-server=utf8
lower_case_table_names = 1
innodb_lock_wait_timeout=120
skip-innodb-doublewrite

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

○ Related information
Building a Kerberized single-node Hive cluster with Vagrant

・For other articles about Ambari, see here.