  
====JDK====
===JDK path modification===
''hadoop@master:~$ vim /etc/profile''

'' #export JAVA_HOME=/lib/jdk''

You should also make the same change on the other nodes.
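The profile entry above can be sketched as the following additions to ''/etc/profile'' (the JAVA_HOME path is the one used in this tutorial; the PATH line is an assumed convenience, not shown on this page):

<code bash>
# Append to /etc/profile; JAVA_HOME path as used in this tutorial.
export JAVA_HOME=/lib/jdk
# Assumed convenience addition so the JDK tools are found on PATH:
export PATH=$JAVA_HOME/bin:$PATH
</code>

Run ''source /etc/profile'' (or log in again) for the change to take effect.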
  
====Hadoop configuration====
===Hadoop configuration parameters===
You should edit core-site.xml, hdfs-site.xml and mapred-site.xml in /hadoop/hadoop_0.20.203_master/conf for the master. You can also prepare the Hadoop configuration on your host computer:
<code>
aaron@cubietech:/work$ sudo mv hadoop_0.20.203 hadoop
aaron@cubietech:/work$ cd hadoop/conf/
aaron@cubietech:/work/hadoop/conf$ sudo vim core-site.xml
</code>
  
core-site.xml
<code>
...
</configuration>
</code>
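The core-site.xml body is elided in this revision; for Hadoop 0.20 it typically sets at least ''fs.default.name''. A minimal sketch (the ''master'' hostname, port 9000 and the tmp path are assumed values, not taken from this page):

<code xml>
<?xml version="1.0"?>
<!-- Minimal sketch only; hostname "master", port 9000 and the
     tmp path are assumptions, not from this tutorial. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
</code>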
After that, you should copy hadoop to every node:
<code>
scp -r hadoop root@192.168.1.40:/usr/local
scp -r hadoop root@192.168.1.41:/usr/local
scp -r hadoop root@192.168.1.42:/usr/local
scp -r hadoop root@192.168.1.43:/usr/local
</code>
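The four copies above can also be written as a loop over the node addresses; a sketch (''echo'' makes it a dry run that only prints the commands, remove it to actually copy):

<code bash>
# Dry run: prints one scp command per node address used in this
# tutorial; drop "echo" to perform the real copies.
for ip in 192.168.1.40 192.168.1.41 192.168.1.42 192.168.1.43; do
  echo scp -r hadoop "root@$ip:/usr/local"
done
</code>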
  
====How to run====
''hadoop@master:~$ cd /usr/local/hadoop/''

Format the HDFS filesystem:

''bin/hadoop namenode -format''
<code>
hadoop@master:/usr/local/hadoop$ bin/hadoop namenode -format
13/10/17 05:49:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.1.40
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
Re-format filesystem in /usr/local/hadoop/datalog1 ? (Y or N) Y
Re-format filesystem in /usr/local/hadoop/datalog2 ? (Y or N) Y
13/10/17 05:49:22 INFO util.GSet: VM type       = 32-bit
13/10/17 05:49:22 INFO util.GSet: 2% max memory = 19.335 MB
13/10/17 05:49:22 INFO util.GSet: capacity      = 2^22 = 4194304 entries
13/10/17 05:49:22 INFO util.GSet: recommended=4194304, actual=4194304
13/10/17 05:49:24 INFO namenode.FSNamesystem: fsOwner=hadoop
13/10/17 05:49:24 INFO namenode.FSNamesystem: supergroup=supergroup
13/10/17 05:49:24 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/10/17 05:49:24 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/10/17 05:49:24 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/10/17 05:49:24 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/10/17 05:49:26 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/10/17 05:49:26 INFO common.Storage: Storage directory /usr/local/hadoop/datalog1 has been successfully formatted.
13/10/17 05:49:26 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/10/17 05:49:27 INFO common.Storage: Storage directory /usr/local/hadoop/datalog2 has been successfully formatted.
13/10/17 05:49:27 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.1.40
************************************************************/
</code>
 +
<code>
bin/hadoop dfsadmin -report
...
bin/stop-all.sh // stop
</code>
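After starting the daemons, ''bin/hadoop dfsadmin -report'' prints one ''Name:'' block per live datanode. A sketch of counting them from a saved report (the ''Name:'' line format is the 0.20 report style, and the sample addresses are illustrative, not captured from this cluster):

<code bash>
# Count datanodes in a saved dfsadmin -report dump; the sample
# text below is illustrative only.
report="Name: 192.168.1.41:50010
Name: 192.168.1.42:50010
Name: 192.168.1.43:50010"
live=$(printf '%s\n' "$report" | grep -c '^Name:')
echo "live datanodes: $live"
</code>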
''./bin/hadoop jar hadoop-examples-0.20.203.0.jar pi 100 100'' // estimates pi (100 maps, 100 samples per map)

{{:tutorials:cb1:installation:screenshot_from_2013-07-31_18_18_35.png|800}}

You can also browse the job status and the filesystem through the Hadoop web interfaces (port 50030 is the JobTracker UI, 50070 the NameNode/HDFS UI):

http://192.168.1.2:50030

http://192.168.1.2:50070

{{:tutorials:cb1:installation:screenshot_from_2013-07-31_18_18_14.png|800}}

You can also read this tutorial:
http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html
<WRAP noprint>
{{tag>Cubieboard Cubietruck}}
</WRAP>
tutorials/cb1/customization/deploying_hadoop_cluster_on_cubieboard_guide.1381987007.txt.gz · Last modified: 2013/12/23 14:50 (external edit)