Thursday, December 8, 2016

Hue 3.11 with HDP 2.5

Hue 3.11 works fine with CentOS / RHEL; I used 6.8 in this case. The EPEL repository has to be available - if it isn't, install it first.
And I ask myself why Hortonworks didn't integrate Hue v3 into their HDP release - I mean, Hue v2 is older than old and dramatically lacking in functionality.
Anyhow, let's get to work.

sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo

sudo yum install ant gcc krb5-devel mysql mysql-devel openssl-devel cyrus-sasl-devel cyrus-sasl-gssapi sqlite-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel python-simplejson python-setuptools rsync gcc-c++ saslwrapper-devel libffi-devel gmp-devel apache-maven

sudo mkdir /software; sudo chown hue: /software && cd /software
wget https://github.com/cloudera/hue/archive/master.zip -O hue.zip && unzip hue.zip; cd hue-master; sudo mkdir -p /usr/local/hue && sudo chown -R hue: /usr/local/hue && make install



HDP config changes:


Oozie => Custom oozie-site
oozie.service.ProxyUserService.proxyuser.hue.groups *
oozie.service.ProxyUserService.proxyuser.hue.hosts *


Hive => Custom webhcat-site
webhcat.proxyuser.hue.hosts *
webhcat.proxyuser.hue.groups *


HDFS => Custom core-site
hadoop.proxyuser.hue.hosts *
hadoop.proxyuser.hue.groups *


Finally, hue.ini needs to be configured to fit the installation. Here's an example - I use 8899 as the Hue UI port:

http_port=8899
app_blacklist=impala,security
hive_server_host=hue3.azure.intern
hbase_clusters=(Cluster|hue3.azure.intern:9090)
solr_url=http://hue3.azure.intern:8983/solr/


Lastly, a new security rule for port 8899 has to be created, and the HBase Thrift service has to be started per:
nohup hbase thrift start &

Configure Hue:
/usr/local/hue/build/env/bin/hue syncdb
/usr/local/hue/build/env/bin/hue migrate

Start Hue:
/usr/local/hue/build/env/bin/supervisor -d

Log in per http://your_hue3_host:8899

I strongly recommend using MySQL as the backend DB, but for a first test the integrated SQLite instance is fine, too.
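A matching [[database]] block in hue.ini could look like the following sketch - host, user and password are placeholder values for your own MySQL setup:

```ini
[desktop]
  [[database]]
    # placeholder values - adjust to your MySQL instance
    engine=mysql
    host=mysql.example.com
    port=3306
    user=hue
    password=secret
    name=hue
```

After switching the backend, run the syncdb / migrate commands from above again so the schema lands in MySQL.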

Issues:
HUE-4701 - recreate the saved queries from sample notebook

Tuesday, November 29, 2016

Erase HDP 2.x and Ambari

Since I now hack often on Hortonworks HDP, I also often need to completely clean out my lab environments to get fresh boxes. I figured that writing an ugly shell script is more comfortable than bothering my infra guys to reset the VMs in Azure - which would also reset all my modifications. Bad!
Anyhow, here's the script in case anyone else has a use for it, too.

https://github.com/alo-alt/shell/blob/master/rmhdp.bash

As usual, first stop all Ambari-managed services. I remove Postgres too, since setting up a new DB through the Ambari installer is much faster than dealing with inconsistencies later.
Side note: the script is made for RHEL-based distributions ;)

Monday, October 24, 2016

FreeIPA and Hadoop Distributions (HDP / CDH)

FreeIPA is the tool of choice today when it comes to implementing a security architecture from scratch. I don't need to praise the advantages of FreeIPA; it speaks for itself. It's the Swiss Army knife of user authentication, authorization and compliance.

To integrate FreeIPA with Hadoop distributions like Hortonworks' HDP and Cloudera's CDH, some tweaks are necessary, but the outcome is worth it. I assume that the FreeIPA server setup is done and the client tools are distributed. If not, the guide from Hortonworks includes those steps, too.

For Hortonworks, nothing more than the link to the documentation is necessary:
https://community.hortonworks.com/articles/59645/ambari-24-kerberos-with-freeipa.html

Ambari 2.4.x includes FreeIPA support (AMBARI-6432; experimental, but it works as promised). The setup and rollout are pretty simple and run smoothly per wizard.

For Cloudera it takes a bit more manual work, but in the end it also works perfectly and is well integrated, though not at the same UI level as Ambari. These steps are necessary to get Cloudera Manager working with FreeIPA:

1. create the CM principal in FreeIPA (example: cdh@ALO.ALT)
2. retrieve the keytab:
 ipa-getkeytab -r -s freeipa.alo.alt -p cdh -k cdh.keytab
3. install ipa-admintools on the Cloudera Manager server 
 yum install ipa-admintools -y
4. place the retrieval script (from my GitHub) in /opt/cloudera/security/getkeytabs.sh (or another path accessible by Cloudera Manager), make it executable and owned by cloudera-scm
 chmod 775 /opt/cloudera/security/getkeytabs.sh && chown cloudera-scm: /opt/cloudera/security/getkeytabs.sh
5. Start the Kerberos wizard, but stop after verifying the cdh user
6. Set the configuration [1] for "Custom Kerberos Keytab Retrieval Script" to "/opt/cloudera/security/getkeytabs.sh"
7. resume the Kerberos wizard and follow the steps until it's finished, then restart the cluster.

Important:
The FreeIPA client on RHEL 7 / CentOS 7 now uses a kernel-keyring-based credential cache, but Java doesn't support it (yet). To switch back to the file-based ticket cache, /etc/krb5.conf needs to be altered by commenting out default_ccache_name, which lets the client use the default file-based ticket cache:

cat /etc/krb5.conf
..
# default_ccache_name = KEYRING:persistent:%{uid}
..


Wednesday, October 12, 2016

Shifting paradigms in the world of BigData

In building the next generation of applications, companies and stakeholders need to adopt new paradigms. The need for this shift is predicated on the fundamental belief that building a new application at scale requires tailored solutions to that application’s unique challenges, business model and ROI. Some things change, and I’d like to point out some of those changes.

Event Driven vs. CRUD
Software development has traditionally been driven by entity-relationship modeling and CRUD operations on that data. But the modern world isn’t about data at rest; it’s about being responsive to events in flight. This doesn’t mean that you don’t have data at rest, but that this data shouldn’t be organized in silos.
The traditional CRUD model is neither expressive nor responsive enough, given the countless available data sources. Although all data is structured somehow, an RDBMS isn't able to store and work with data whose schema isn't known upfront (schema on write). That makes the use of additional freely available data more of an adventure than a valid business model, given that the schema isn't known and can change rapidly. Event-driven approaches are much more dynamic and open, and they make the data valuable for other processes and applications. The view of the data is defined by the use of the data (schema on read). These views can be created manually (data scientists), automatically (Hive and Avro, for example) or exploratively (R, AI, NNW).

Centralized vs Siloed Data Stores
BigData projects often fail by not using a centralized data store, often referred to as a Data Lake or Data Hub. It’s essential to understand the idea of a Data Lake and the need for it. Siloed solutions (aka data warehouse solutions) only hold data which matches the schema, and nothing else. Every schema is different, and often it’s impossible to use them in new analytic applications. In a Data Lake the data is stored as it is - original, untouched, uncleaned, disaggregated. That makes the entry (or low-hanging fruit) mostly easy - just start to catch all the data you can get. Offload RDBMSs and DWHs to your Hadoop cluster and start the journey by playing with that data, even by using 3rd-party tools instead of developing your own tailored apps. Even when this data comes from different DWHs, mining and correlating it often brings treasures to light.

Scaled vs. Monolith Development
Custom processing at scale involves tailored algorithms, be they custom Hadoop jobs, in-memory approaches for matching and augmentation, or 3rd-party applications. Hadoop is nothing more (or less) than a framework which allows the user to work within a distributed system, splitting workloads into smaller tasks and letting those tasks run on different nodes. The interface to that system consists of reusable APIs and libraries. That's what makes the use of Hadoop so convenient - the user doesn't need to take care of the distribution of tasks, nor to know exactly how the framework works. Additionally, every piece of written code can be reused by others without accumulating large code debt.
On the other hand, Hadoop gives the user an interface to configure the framework to match the application's needs dynamically at runtime, instead of having static configurations like traditional processing systems.

Keeping these principles in mind when planning and architecting new applications based on Hadoop or similar technologies doesn’t guarantee success, but it lowers the risk of getting lost. It's worth noting that every success has had many failures before it. Not trying to create something new is the biggest mistake we can make, and it will sooner or later result in a total loss.

Thursday, September 15, 2016

Cloudera Manager and Slack

Most of us are getting bored of receiving hundreds of monitoring emails every day. To master the flood, rules come into play - and with those rules, interest in email communication drops.
To master the internal information flood, business messaging networks like Slack are gaining more and more ground.

To make CM work with Slack, a custom alert script from my GitHub will do the trick:

https://github.com/alo-alt/Slack/blob/master/cm2slack.py

The use is pretty straightforward - create a channel in Slack, enable webhooks, place the token into the script, store the script on your Cloudera Manager host, make it executable for cloudera-scm, and enable outgoing firewall / proxy rules so the script can talk to Slack's API. The script can handle proxy connections, too.

In Cloudera Manager, the script path needs to be added into Cloudera-Management-Service => Configuration => Alert Publisher => Custom Script.
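For reference, the core of what such an alert script does can be sketched in a few lines of shell - the webhook URL and channel below are placeholders, and the actual script in the repo above handles the token and proxy handling properly:

```shell
#!/bin/sh
# Sketch only: build a JSON payload and POST it to a Slack incoming webhook.
# WEBHOOK_URL and CHANNEL are placeholders - use your own webhook settings.
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"   # placeholder
CHANNEL="#cm-alerts"                                            # placeholder
ALERT_TEXT="Cloudera Manager alert at $(date '+%F %T')"

# Construct the minimal payload Slack's incoming webhooks accept
PAYLOAD=$(printf '{"channel": "%s", "text": "%s"}' "$CHANNEL" "$ALERT_TEXT")
echo "$PAYLOAD"

# Send it (needs outgoing access to Slack; add -x http://proxy:port for a proxy):
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" "$WEBHOOK_URL"
```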



Tuesday, August 16, 2016

Manage rights in OpenStack

Most users figure that OpenStack lacks sophisticated rights management. But that's not the case; role management is available in OpenStack.
First, users and groups need to be added to projects, which can be done per CLI or GUI [1]. Let's say a group called devops shall have full control over OpenStack, while others not in that group get dedicated operational access like creating snapshots, stopping / starting / restarting an instance, or looking at the floating IP pool.

Users, Groups and Policies
OpenStack handles rights in a policy file (/etc/nova/policy.json), using role definitions per group assigned to the tasks OpenStack provides. It looks like:

{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
...
}

It describes the default - a member of a project is the admin of that project. To add additional rules, they have to be defined here.
In my case, I created a group devops, added the users and defined the rights like:

"devops": "is_admin:True or (project_id:%(project_id)s and not role:user and not role:guest)",

and assigned the rule to all tasks a DevOps team member should be able to perform. Project owners / admins can operate instances in their project, but in a controlled way. Basically, I revoked all delete / move / reassign tasks, like network and subnet management or deleting an instance.
The full policy file is available in my GitHub repo [2].
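Applied to concrete API tasks, such an assignment in policy.json could look like the following sketch - the task names are examples from the stock nova policy and can differ between OpenStack releases, so check your own policy.json:

```json
"compute:start": "rule:devops or rule:admin_or_owner",
"compute:stop": "rule:devops or rule:admin_or_owner",
"compute:snapshot": "rule:devops or rule:admin_or_owner",
```

Every task not granted this way falls back to the "default" rule shown above.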

[1] http://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html
[2] https://github.com/alo-alt/OpenStack/blob/master/policy.json

Monday, July 4, 2016

Deal with corrupted messages in Apache Kafka

Under some strange circumstances it can happen that a message in a Kafka topic gets corrupted. This often happens when 3rd-party frameworks are used together with Kafka. Additionally, Kafka < 0.9 has no lock on Log.read() at the consumer level, but does have a lock on Log.write(). This can cause a rare race condition, as described in KAFKA-2477 [1]. A typical log entry looks like:

ERROR Error processing message, stopping consumer: (kafka.tools.ConsoleConsumer$) kafka.message.InvalidMessageException: Message is corrupt (stored crc = xxxxxxxxxx, computed crc = yyyyyyyyyy)

Kafka-Tools

Kafka stores the offset of every consumer in Zookeeper. To read out the offsets, Kafka provides handy tools [2]. But zkCli.sh can also be used, at least to display the consumers and the stored offsets. First we need to find the consumer for a topic (Kafka > 0.9):

bin/kafka-consumer-groups.sh --zookeeper management01:2181 --describe --group test

Prior to Kafka 0.9, the only way to get this information was to use zkCli.sh (or similar tools) to find the consumer group. Since debugging with zkCli is a bit frustrating, I personally use kafka-manager from Yahoo [3].
Let's assume the consumers are stored in Zookeeper under /consumer; the commands to find the offset look like:

ls /consumer/test/offsets
[1]
get /consumer/test/offsets/1
[15]

With Kafka that command would look like:

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group console-1 --zookeeper zknode1:2181

Group     Topic   Pid   Offset   logSize   Lag   Owner
console-1 test    1     15       337       326   none


Once the offset is found, it can be incremented to force the consumer to read the next available message. Before doing this, Kafka has to be shut down.

bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK latest 16 test

After the restart, Kafka should be able to read the next message, provided that message isn't corrupted, too. And yes, the corrupted message is lost and can't be restored, so it's always a good idea to implement a CRC check before any message gets into Kafka.
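Such a check doesn't need to be fancy. A purely illustrative shell sketch - the topic, the broker and the "CRC|message" framing are made up here, not a Kafka convention - could prepend a CRC32 to each message and verify it again after consumption:

```shell
# Illustrative only: frame each message as "<crc32>|<payload>" before
# producing, and verify the checksum on the consumer side.
MSG="hello kafka"
CRC=$(printf '%s' "$MSG" | cksum | awk '{print $1}')   # CRC32 via cksum
RECORD="$CRC|$MSG"
echo "$RECORD"

# produce (placeholder broker/topic):
# printf '%s\n' "$RECORD" | bin/kafka-console-producer.sh --broker-list broker1:9092 --topic test

# verify after consuming: recompute the CRC of the payload and compare
CHECK=$(printf '%s' "${RECORD#*|}" | cksum | awk '{print $1}')
test "${RECORD%%|*}" = "$CHECK" && echo "crc ok"
```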

A code-based approach is also available [4]. For that, a subclass of ConsumerIterator has to be created which catches the message exception, replaces it with a dummy message and proceeds with the next message. Of course, the corrupted message is lost in that case, too.

Monday, June 27, 2016

Encryption in HDFS

Encryption of data was and is one of the hottest topics in terms of data protection and prevention against theft. Hadoop HDFS supports fully transparent encryption in transit and at rest [1], based on Kerberos implementations [2], and is often used across multiple trusted Kerberos domains.

Technology

Hadoop KMS provides a REST API, which has built-in SPNEGO and HTTPS support and mostly comes bundled with a pre-configured Apache Tomcat in your preferred Hadoop distribution.
To keep encryption transparent for the user and the system, each encryption zone is associated with an SEZK (single encryption zone key), created when the zone is defined as an encryption zone by interaction between NN and KMS. Each file within that zone gets its own DEK (Data Encryption Key). This behavior is fully transparent: when a new file is created, the NN directly asks the KMS for a new EDEK (encrypted data encryption key), encrypted with the zone key, and adds it to the file’s metadata.

When a client wants to read a file in an encryption zone, the NN provides the EDEK together with the zone key version, and the client asks the KMS to decrypt the EDEK. If the client has permission to read that zone (POSIX), the client uses the resulting DEK to read the file. Seen from a DFS node’s perspective, the data stream is encrypted and the nodes only see an encrypted data stream.

Setup and Use

I use Cloudera’s CDH as the example here, but the same would work with other distributions, and for sure with the official Apache Hadoop distribution. Enabling KMS in CDH (5.3.x and up) is pretty easy and doesn’t need to be explained here, since Cloudera has great articles online about the process [3]. The only important thing to know is that KMS doesn’t work without a working Kerberos implementation. Additionally, there are other configuration parameters which need to be known, especially in a multi-domain Kerberos environment.
First, KMS uses the same rule-based mechanism as HDFS when a trusted Kerberos environment is in place. That means the same filtering rules that exist in core-site.xml need to be added to kms-site.xml to get encryption working for all trusted domains. This has to be done per:

<property>
 <name>hadoop.kms.authentication.kerberos.name.rules</name>
  <value>RULE:[1:$1@$0](.*@\QTRUSTED.DOMAIN\E$)s/@\QTRUSTED.DOMAIN\E$//
RULE:[2:$1@$0](.*@\QTRUSTED.DOMAIN\E$)s/@\QTRUSTED.DOMAIN\E$//
RULE:[1:$1@$0](.*@\QMAIN.DOMAIN\E$)s/@\QMAIN.DOMAIN\E$//
RULE:[2:$1@$0](.*@\QMAIN.DOMAIN\E$)s/@\QMAIN.DOMAIN\E$//
DEFAULT</value>
</property>


in kms-site.xml. The terms TRUSTED.DOMAIN / MAIN.DOMAIN are placeholders, describing the trusted and the main Kerberos domain. From an administrative standpoint, the use is straightforward:
hadoop key create KEYNAME #(one time key creation)
hadoop fs -mkdir /enc_zones/data
hdfs crypto -createZone -keyName KEYNAME -path /enc_zones/data
hdfs crypto -listZones


First I create a key, then I create the directory I want to encrypt in HDFS and turn it into an encryption zone with the key I created first.
This directory is now only accessible by me or by users I grant access per HDFS POSIX permissions. Others aren’t able to change or read files. To give superusers the possibility to create backups without decrypting and re-encrypting, a virtual path prefix for distCp (/.reserved/raw) [4] is available. This prefix allows the block-wise copy of encrypted files, for backup and DR reasons.

Using distCp with encrypted zones can cause some mishaps. It's highly recommended to have identical encryption zones on both sides to avoid problems later. A distCp command for encrypted zones could look like:

hadoop distcp -px hdfs://source-cluster-namenode:8020/.reserved/raw/enc_zones/data hdfs://target-cluster-namenode:8020/.reserved/raw/enc_zones/data

Saturday, June 4, 2016

Open Source based Hyper-Converged Infrastructures and Hadoop

According to a report from SimpliVity [1], Hyper-Converged Infrastructures are used by more than 50% of the interviewed businesses, with an increasing tendency. But what does this mean for BigData solutions, and Hadoop especially? What tools and technologies can be used, and what are the limitations of and gains from such a solution?

To build a production-ready and reliable private cloud that supports both on-demand and static Hadoop clusters, I have had great experience with OpenStack, Saltstack and the Sahara plugin for OpenStack.
OpenStack supports Hadoop-on-demand per Sahara; it's also convenient to use VMs and install a Hadoop distribution within them, especially for static clusters with special setups. The OpenStack project provides ready-to-go images per [2], for example for vanilla Hadoop 2.7.1 installations. As an additional benefit, OpenStack supports Docker [3], which adds an additional layer of flexibility for additional services, like Kafka [4] or SolR [5].

Costs and Investment
The costs of such an infrastructure can vary, depending on the hardware and the future strategy. Separate compute and storage nodes have proven themselves in the past and should be used in the future, too. The benefits outweigh the limitations, which mostly come down to having more bare-metal servers than in a highly packed (compute and storage in one server) environment. Additionally, a more stretched-out environment helps to balance peaks and high usage better than packed servers. A typical setup would have 2 controller nodes (for HA reasons), a decent number of compute nodes (high memory and CPU count) and several storage nodes (1 CPU, 8 or 16 GB RAM and plenty of JBOD (just a bunch of disks)). Those storage nodes should have 2 LVMs (or RAIDs, if that feels better) to avoid later conflicts between production and development / staging / QA buildouts.

Technology
Hadoop itself has some limitations, especially in Hyper-Converged Infrastructures, given the demand for data locality in batch processes (MapReduce). In a typical cloud environment, like the one Sahara provides in OpenStack, the storage area is virtualized and all data is transferred over the network stack. This can be avoided by using VM images for a persistent Hadoop cluster, as a production one mostly is. The data storage (HDFS) is then provided within the VM and can be extended by mounting additional volumes into the VM (partitions for the data nodes, for example). In both implementations, cloud-based per Sahara and VM-based, the use of HDFS caching [6] is recommended. This will dramatically speed up the platform for analytical workloads when using columnar storage formats like Parquet or Kudu [7], together with Hive on Spark [8]. To identify bottlenecks, analyzers like Dr. Elephant [9] are very useful and recommended.

Hadoop on demand provides much more flexibility than a static cluster, especially in terms of load peaks, dynamic resource allocation and cost efficiency. But there are some points to consider. The first and most important one is the separation of block storage and computing. Hadoop itself works with various other distributed filesystems, like Ceph [10], but those often rely on Hadoop 1 (MRv1), and Yarn and MRv2 aren’t supported (yet).
The best solution here is to use the standard HDFS layer over Cinder [11], which provides good performance with reliability and decent IOPS. The second, and also important, point is the network layer. Every compute and storage node should have at least bonded 1G uplinks; 10G is better (but more expensive). The network needs to be separated into front- and backend. The front-end link provides accessibility to the services the cluster offers its users, and the back-end provides inter-cluster communication only. As a third point, the use of in-memory filesystems like Alluxio [12] (formerly Tachyon) may be considered, especially for research clusters, like genome calculations, or NRT applications with high ingestion rates of small data points, as IoT devices typically produce.
With these points in mind, streaming-based applications get the most out of this approach, given the high flexibility and the ability to deal with large load peaks by adding computing resources dynamically.

Conclusion
Using Hyper-Converged Infrastructures in the world of BigData tools is trending now and proves the success of the private cloud idea. Large companies like LinkedIn, Google and Facebook have been on this road for years, and the success outweighs the implementation and maintenance considerations.

List of tools used in this article
Openstack:
Sahara:

Saltstack - Openstack:

Links and References:

Tuesday, May 10, 2016

SolR, NiFi, Twitter and CDH 5.7

Since the most interesting Apache NiFi pieces come from the ASF [1] or Hortonworks [2], I thought I'd use CDH 5.7 and do the same, just to be curious. Here's my 30-minute playground, currently running in Google Compute.

On one of my playground nodes I installed Apache NiFi per
mkdir /software && cd /software && wget http://mirror.23media.de/apache/nifi/0.6.1/nifi-0.6.1-bin.tar.gz && tar xvfz nifi-0.6.1-bin.tar.gz

Then I set only the nifi.sensitive.props.key property in conf/nifi.properties to an easy-to-remember secret. Next, bash /software/nifi-0.6.1/bin/nifi.sh install installs Apache NiFi as a service. After logging in to Apache NiFi's WebUI, download and add the template [3] to Apache NiFi, move the template icon to the drawer, open it and edit the Twitter credentials to fit your developer account.

To use a schema-less SolR index (or Cloudera Search in CDH), I copied some example files into a local directory:
cp -r /opt/cloudera/parcels/CDH/share/doc/solr-doc-4.10.3+cdh5.7.0+389/example/example-schemaless/solr/collection1/conf/* $HOME/solr_configs/conf/

And added the Twitter date format to the <updateRequestProcessorChain name="add-unknown-fields-to-the-schema"> declaration in solrconfig.xml:
<str>EEE MMM d HH:mm:ss Z yyyy</str>

So it looks like (inside the date-parsing processor of the stock schemaless example):
<processor class="solr.ParseDateFieldUpdateProcessorFactory">
  <arr name="format">
    <str>EEE MMM d HH:mm:ss Z yyyy</str>
    ...
  </arr>
</processor>


Since the new Twitter API formats the client source as HTML, I added an HTML-strip processor to the same chain:

<processor class="solr.HTMLStripFieldUpdateProcessorFactory">
  <str name="fieldName">source_s</str>
</processor>

All configs are available per Gist [4,5].

To get the configs running, initialize SolR:

solrctl --zk ZK_HOST:2181/solr instancedir --create twitter $HOME/solr_configs
solrctl --zk ZK_HOST:2181/solr collection --create twitter -s 2 -c twitter -r 2

Setting up Banana for SolR is pretty easy. On one of the SolR hosts, run:
cd /software && wget https://github.com/lucidworks/banana/archive/release.zip && unzip release.zip && mv banana-release banana && cp -r banana /opt/cloudera/parcels/CDH/lib/solr/webapps/
Then check whether it's running per http://solr-node:8983/banana/src/index.html. To move fast forward, I have a dashboard available on Gist [5], too.

Screenshot Dashboard:


Apache NiFi flow:


Conclusion
This demo shows that it's pretty easy today to set up more or less complex data flows within a few hours using available tools. Apache NiFi is pretty stable, has a lot of sinks available and has now run for 2 weeks in Google Compute, having captured over 200 million tweets and stored them in SolR as well as in HDFS. It's interesting to play around with the data in real time, interactively driven by Banana.




Tuesday, January 5, 2016

Apache Tez on CDH 5.4.x

Since Cloudera doesn't support Tez in their distribution right now (but it'll come, I'm pretty confident), we experimented with Apache Tez and CDH 5.4 a bit.
Using Tez with CDH isn't so hard - and it works quite well. Our ETL and Hive jobs finished around 30-50% faster.

Anyway, here's the blueprint. We use CentOS 6.7 with the EPEL repo.

1. Install maven 3.2.5 
wget http://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
tar xvfz apache-maven-3.2.5-bin.tar.gz -C /usr/local/
cd /usr/local/
ln -s apache-maven-3.2.5 maven

=> Compiling Tez with protobuf worked only with 3.2.5 in my case

1.1 Install 8_u40 JDK
mkdir development && cd development (that's my dev root)

wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u40-b26/jdk-8u40-linux-x64.tar.gz"
tar xvfz jdk-8u40-linux-x64.tar.gz
export JAVA_HOME=/home/alo.alt/development/jdk1.8.0_40
export JRE_HOME=/home/alo.alt/development/jdk1.8.0_40/jre
export PATH=$PATH:/home/alo.alt/development/jdk1.8.0_40:/home/alo.alt/development/jdk1.8.0_40/jre

2. Create a maven profile.d file
vi /etc/profile.d/maven.sh
export M2_HOME=/usr/local/maven
export PATH=${M2_HOME}/bin:${PATH}

3. Get Tez
git clone https://github.com/apache/tez.git
git checkout tags/release-0.7.0
git checkout -b tristan

Modify pom.xml to use hadoop-2.6.0-cdh5.4.2:

<profile>
   <id>cdh5.4.2</id>
   <activation>
   <activeByDefault>false</activeByDefault>
   </activation>
   <properties>
     <hadoop.version>2.6.0-cdh5.4.2</hadoop.version>
   </properties>
   <pluginRepositories>
     <pluginRepository>
     <id>cloudera</id>
     <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
     </pluginRepository>
   </pluginRepositories>
   <repositories>
     <repository>
       <id>cloudera</id>
       <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
     </repository>
   </repositories>
</profile>

And apply the patch from https://gist.github.com/killerwhile/23225004a78949d4c849#file-gistfile1-diff

4. Install protobuf
sudo yum -y install gcc-c++ openssl-devel glibc
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.bz2
tar xfvj protobuf-2.5.0.tar.bz2
cd protobuf-2.5.0/
./configure && make && make check
make install && ldconfig && protoc --version

or use the precompiled RPMS:
ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/protobuf-2.5.0-16.1.x86_64.rpm 
ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/kalyaka/CentOS_CentOS-6/x86_64/protobuf-compiler-2.5.0-16.1.x86_64.rpm

5. Build Tez against CDH 5.4.2
mvn -Pcdh5.4.2 clean package -Dtar -DskipTests=true -Dmaven.javadoc.skip=true

6. Install Tez
hadoop dfs -mkdir /apps/tez && hadoop dfs -copyFromLocal tez/tez-dist/target/tez-0.7.0.tar.gz /apps/tez/tez-0.7.0.tar.gz

sudo mkdir -p /apps/tez && tar xvfz tez/tez-dist/target/tez-0.7.0.tar.gz -C /apps/tez/

6.1 create a tez-site.xml in /apps/tez/conf/
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>${fs.default.name}/apps/tez/tez-0.7.0.tar.gz</value>
  </property>
</configuration>

7. Run Tez with Yarn
export TEZ_HOME=/apps/tez
export TEZ_CONF_DIR=${TEZ_HOME}/conf
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${TEZ_CONF_DIR}:$(find ${TEZ_HOME} -name "*.jar" | paste -sd ":")"

hive> set hive.execution.engine=tez;
hive> SELECT s07.description, s07.salary, s08.salary, s08.salary - s07.salary FROM sample_07 s07 JOIN sample_08 s08 ON ( s07.code = s08.code) WHERE s07.salary < s08.salary ORDER BY s08.salary-s07.salary DESC LIMIT 1000;

beeline --hiveconf tez.task.launch.env="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YOUR_HADOOP_COMMON_HOME/lib/native" \
--hiveconf tez.am.launch.env="LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$YOUR_HADOOP_COMMON_HOME/lib/native"
Check that you have the lib*.so files available in the native folder (or point to the folder which contains the .so files).

Sources:
https://gist.github.com/killerwhile/23225004a78949d4c849#file-gistfile1-diff
http://tez.apache.org/install.html