This walkthrough uses word count as the example.
1 Create mapper.py
mkdir /usr/local/hadoop-python
cd /usr/local/hadoop-python
vim mapper.py
The contents of mapper.py:
#!/usr/bin/env python
import sys

# Input comes from STDIN (standard input).
for line in sys.stdin:
    # Strip whitespace and split the line into words.
    words = line.strip().split()
    for word in words:
        # Emit tab-separated (word, 1) pairs to STDOUT; this output
        # becomes the input of the reduce step, i.e. reducer.py.
        print('%s\t%s' % (word, 1))
After saving the file, make it executable:
chmod a+x /usr/local/hadoop-python/mapper.py
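Before wiring the script into Hadoop, the mapper's logic can be exercised on an in-memory stream. This is a sketch, not part of the tutorial: the `run_mapper` helper name is ours, and `io.StringIO` stands in for real stdin.

```python
import io

def run_mapper(stream):
    # Same loop as mapper.py: strip, split, emit tab-separated (word, 1) pairs.
    out = []
    for line in stream:
        for word in line.strip().split():
            out.append('%s\t%s' % (word, 1))
    return out

print(run_mapper(io.StringIO("foo foo quux\n")))
# ['foo\t1', 'foo\t1', 'quux\t1']
```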
2 Create reducer.py
vim reducer.py
#!/usr/bin/env python
import sys

current_word = None
current_count = 0

# Input comes from STDIN: tab-separated "word<TAB>count" lines produced
# by mapper.py, sorted by key (word) before they reach the reducer.
for line in sys.stdin:
    # Parse the input we got from mapper.py.
    word, count = line.strip().split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore/discard this line.
        continue
    # This grouping only works because Hadoop sorts map output by key
    # before it is passed to the reducer.
    if current_word == word:
        current_count += count
    else:
        if current_word is not None:
            # Write the result for the previous key to STDOUT.
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# Do not forget to output the last word if needed!
if current_word is not None:
    print('%s\t%s' % (current_word, current_count))
After saving the file, make it executable:
chmod a+x /usr/local/hadoop-python/reducer.py
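The reducer's grouping trick only works when identical keys arrive adjacently, i.e. on sorted input. A minimal in-memory sketch of the same loop makes this visible (`reduce_pairs` is a hypothetical helper name, and it takes (word, count) tuples instead of stdin lines):

```python
def reduce_pairs(sorted_pairs):
    # Same grouping loop as reducer.py, over (word, count) tuples
    # that are assumed to already be sorted by word.
    result = []
    current_word, current_count = None, 0
    for word, count in sorted_pairs:
        if word == current_word:
            current_count += count
        else:
            if current_word is not None:
                result.append((current_word, current_count))
            current_word, current_count = word, count
    if current_word is not None:
        result.append((current_word, current_count))
    return result

print(reduce_pairs([('bar', 1), ('foo', 1), ('foo', 2)]))
# [('bar', 1), ('foo', 3)]
```

If the same input were fed in unsorted, e.g. with the two 'foo' entries separated by 'bar', the counts for 'foo' would be split into two output records, which is why the sort step between map and reduce is mandatory.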
You can first test the code on the local machine, so that any problems are caught early:
echo "foo foo quux labs foo bar quux" | /usr/local/hadoop-python/mapper.py
Output:
foo 1
foo 1
quux 1
labs 1
foo 1
bar 1
quux 1
Then run the full pipeline including reducer.py, with sort -k1,1 standing in for Hadoop's shuffle-and-sort phase:
echo "foo foo quux labs foo bar quux" | /usr/local/hadoop-python/mapper.py | sort -k1,1 | /usr/local/hadoop-python/reducer.py
Output:
bar 1
foo 3
labs 1
quux 2
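As a quick sanity check, the expected counts for the test sentence can also be computed directly with Python's collections.Counter:

```python
from collections import Counter

text = "foo foo quux labs foo bar quux"
print(sorted(Counter(text.split()).items()))
# [('bar', 1), ('foo', 3), ('labs', 1), ('quux', 2)]
```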
3 Run the Python code on Hadoop
Preparation:
Download the text files:
yum install wget -y
mkdir /usr/local/hadoop-python/input
cd /usr/local/hadoop-python/input
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
Then upload the books to the HDFS file system:
# Create an input folder under this user's directory on HDFS
hdfs dfs -mkdir /input
# Upload the document into the input folder on HDFS
hdfs dfs -put /usr/local/hadoop-python/input/pg20417.txt /input
Locate your streaming jar file; note that from version 2.6 on it is placed under the share directory. You can search the Hadoop installation directory for it:
cd $HADOOP_HOME
find ./ -name "*streaming*.jar"
This finds the hadoop-streaming*.jar files inside the share folder:
./share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar
./share/hadoop/tools/sources/hadoop-streaming-2.8.4-test-sources.jar
./share/hadoop/tools/sources/hadoop-streaming-2.8.4-sources.jar
The jar's absolute directory is /usr/local/hadoop-2.8.4/share/hadoop/tools/lib.
Since this path is fairly long, we can store it in an environment variable:
vim /etc/profile
export STREAM=/usr/local/hadoop-2.8.4/share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar
Afterwards, run source /etc/profile so that the variable takes effect.
Because the command line for the streaming interface is long, create a shell script named run.sh to run it:
vim run.sh
hadoop jar /usr/local/hadoop-2.8.4/share/hadoop/tools/lib/hadoop-streaming-2.8.4.jar \
-files /usr/local/hadoop-python/mapper.py,/usr/local/hadoop-python/reducer.py \
-mapper /usr/local/hadoop-python/mapper.py \
-reducer /usr/local/hadoop-python/reducer.py \
-input /input/pg20417.txt \
-output /output1
Or equivalently, using the STREAM variable defined above:
hadoop jar $STREAM \
-files /usr/local/hadoop-python/mapper.py,/usr/local/hadoop-python/reducer.py \
-mapper /usr/local/hadoop-python/mapper.py \
-reducer /usr/local/hadoop-python/reducer.py \
-input /input/pg20417.txt \
-output /output1