====== Deploying Hadoop Cluster on Cubieboard Guide ======

Differences

This shows you the differences between two versions of the page.

Link to this comparison view

tutorials:cb1:customization:deploying_hadoop_cluster_on_cubieboard_guide [2013/10/17 11:16]
admin
tutorials:cb1:customization:deploying_hadoop_cluster_on_cubieboard_guide [2013/12/23 14:50] (current)
Line 12: Line 12:
{{:tutorials:cb1:customization:img_0564.jpg?820|}}
====Just start====

You will need:
  * A router to provide the cluster with a LAN
  * hadoop_0.20.203, which you can get from http://hadoop.apache.org/common/releases.html
  * Lubuntu 12.04 v1.04 (JDK 1.8)
  * Power and network cables

{{:tutorials:cb1:customization:img_0579.jpg?780|}}
  
One master and three slaves all work in the same LAN, and the nodes can ping each other.
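The rest of this guide assumes the following hostnames and addresses (the master address appears in the format log below; the slave numbering is an assumption, so match it to your own boards):
<code>
master   192.168.1.40
slave1   192.168.1.41
slave2   192.168.1.42
slave3   192.168.1.43
</code>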
==For master==
Create a user:
<code>
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
</code>
==For each slave==
You should do the same things. E.g. for the slave1 node:

Add a user:
<code>
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
</code>
==Static IP settings==
For each node:
<code>
sudo vim /etc/network/interfaces
# add the following:
#auto lo
# iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.1.40   # master; use 192.168.1.41/42/43 on slave1/2/3
netmask 255.255.255.0
gateway 192.168.1.1

# and in /etc/resolv.conf:
nameserver 192.168.1.1
</code>
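The SSH tests later in this guide reach the nodes by hostname, so each board may also need matching name entries. A minimal /etc/hosts sketch, assuming the addresses above:
<code>
# append to /etc/hosts on every node
192.168.1.40   master
192.168.1.41   slave1
192.168.1.42   slave2
192.168.1.43   slave3
</code>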

Before you move on, make sure that every Cubieboard has the hadoop user and a static IP, and that the nodes can ping each other.

====SSH server====

The master and the slaves need passwordless SSH logins between each other:

master ---no passwd--> slave1

master ---no passwd--> slave2

master ---no passwd--> slave3

In general, to let node A log in to node B without a password (A ---no passwd--> B):

On A:
<code>
ssh-keygen -t rsa -P ''
scp ~/.ssh/id_rsa.pub hadoop@192.168.1.40:~/
</code>
On B:
<code>
mkdir ~/.ssh
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
rm -r ~/id_rsa.pub
</code>
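If the login still asks for a password, the usual cause is permissions on B's ~/.ssh directory; OpenSSH refuses keys that are too widely readable. A quick fix on B:
<code>
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
</code>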
Test master to slave1:
<code>
hadoop@master:~$ ssh slave1
Welcome to Linaro 13.04 (GNU/Linux 3.4.43+ armv7l)

 * Documentation:  https://wiki.linaro.org/
Last login: Thu Oct 17 03:38:36 2013 from master
</code>
Test slave1 to master:
<code>
hadoop@slave1:~$ ssh master
Welcome to Linaro 13.04 (GNU/Linux 3.4.43+ armv7l)

 * Documentation:  https://wiki.linaro.org/
Last login: Thu Oct 17 03:38:58 2013 from slave1
</code>

====JDK====
==JDK path modification==
''hadoop@master:~$ vim /etc/profile''
''export JAVA_HOME=/lib/jdk''

You should also add this on the other nodes.
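A typical block to append to /etc/profile, assuming the JDK is unpacked at /lib/jdk as above (run ''source /etc/profile'' afterwards, or log in again):
<code>
export JAVA_HOME=/lib/jdk
export PATH=$JAVA_HOME/bin:$PATH
</code>
Hadoop 0.20 also reads JAVA_HOME from conf/hadoop-env.sh, so uncomment and set the export line there as well.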
  
====Hadoop configuration====
You should edit core-site.xml, hdfs-site.xml and mapred-site.xml in /hadoop/hadoop_0.20.203_master/conf for the master. You can do the Hadoop configuration on your host computer:
<code>
aaron@cubietech:/work$ sudo mv hadoop_0.20.203 hadoop
aaron@cubietech:/work$ cd hadoop/conf/
aaron@cubietech:/work/hadoop/conf$ sudo vim core-site.xml
</code>
  
core-site.xml should at least point the default filesystem at the master. A minimal example, assuming the conventional hdfs://master:9000 address (adjust to your setup):
<code>
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
</code>
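hdfs-site.xml and mapred-site.xml follow the same pattern. A sketch: the name directories match the paths shown in the format log below, while the replication factor and JobTracker port are assumptions (one replica per slave, conventional port 9001):

hdfs-site.xml
<code>
<?xml version="1.0"?>
<configuration>
  <!-- name-table directories; these match the format log below -->
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <!-- assumed: one replica per slave -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
</code>
mapred-site.xml
<code>
<?xml version="1.0"?>
<configuration>
  <!-- assumed: JobTracker on the master, conventional port 9001 -->
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
</code>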
After that, you should copy hadoop to every node:
<code>
scp -r hadoop root@192.168.1.40:/usr/local
scp -r hadoop root@192.168.1.41:/usr/local
scp -r hadoop root@192.168.1.42:/usr/local
scp -r hadoop root@192.168.1.43:/usr/local
</code>
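Hadoop also reads the cluster membership from conf/masters and conf/slaves. Assuming the hostnames used in this guide, they would contain:
<code>
# conf/masters
master

# conf/slaves
slave1
slave2
slave3
</code>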
  
====How to run====

''hadoop@master:~$ cd /usr/local/hadoop/''

Format the filesystem:

''bin/hadoop namenode -format''
<code>
hadoop@master:/usr/local/hadoop$ bin/hadoop namenode -format
13/10/17 05:49:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.1.40
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /usr/local/hadoop/datalog1 ? (Y or N) Y
Re-format filesystem in /usr/local/hadoop/datalog2 ? (Y or N) Y
13/10/17 05:49:22 INFO util.GSet: VM type       = 32-bit
13/10/17 05:49:22 INFO util.GSet: 2% max memory = 19.335 MB
13/10/17 05:49:22 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/10/17 05:49:22 INFO util.GSet: recommended=4194304, actual=4194304
13/10/17 05:49:24 INFO namenode.FSNamesystem: fsOwner=hadoop
13/10/17 05:49:24 INFO namenode.FSNamesystem: supergroup=supergroup
13/10/17 05:49:24 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/10/17 05:49:24 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/10/17 05:49:24 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/10/17 05:49:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/10/17 05:49:26 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/10/17 05:49:26 INFO common.Storage: Storage directory /usr/local/hadoop/datalog1 has been successfully formatted.
13/10/17 05:49:26 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/10/17 05:49:27 INFO common.Storage: Storage directory /usr/local/hadoop/datalog2 has been successfully formatted.
13/10/17 05:49:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.1.40
************************************************************/
</code>

Start the cluster, check the filesystem report, and stop it when you are done:
<code>
bin/start-all.sh // start

bin/hadoop dfsadmin -report
bin/stop-all.sh // stop
  
</code>
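After ''bin/start-all.sh'' you can check which daemons came up on each node with ''jps'' (part of the JDK). For a layout like this one, you would typically expect:
<code>
hadoop@master:~$ jps    # expect NameNode, SecondaryNameNode, JobTracker
hadoop@slave1:~$ jps    # expect DataNode, TaskTracker
</code>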

Run an example job that calculates Pi:

''./bin/hadoop jar hadoop-examples-0.20.203.0.jar pi 100 100''

{{:tutorials:cb1:installation:screenshot_from_2013-07-31_18_18_35.png?800|}}

You can also view the filesystem on the web:

http://192.168.1.2:50030 (the JobTracker page)

http://192.168.1.2:50070 (the NameNode page)

{{:tutorials:cb1:installation:screenshot_from_2013-07-31_18_18_14.png?800|}}

You can also read this tutorial:
http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html
<WRAP noprint>
{{tag>Cubieboard Cubietruck}}
</WRAP>