hadoop/README.md (6 additions, 6 deletions)
@@ -152,7 +152,7 @@ Under Linux, you can also simply run `make_linux.sh` in this project's folder to
In order to test our example, we now need to set up a single-node Hadoop cluster. We therefore follow the guide given at [http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html](http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html). Here we provide the installation guide for Hadoop 2.7.2 under Linux / Ubuntu.
-### 2.4.1. Download, Unpacking, and Setup
+<h4>2.4.1. Download, Unpacking, and Setup</h4>
Here we discuss how to download and unpack Hadoop.
<ol>
@@ -163,7 +163,7 @@ Here we discuss how to download and unpack Hadoop.
<li>A new folder named <code>X/hadoop-2.7.2</code> should have appeared. If you chose a different Hadoop version, replace <code>2.7.2</code> accordingly in the following steps.</li>
<li>In order to run Hadoop, you must have <code>JAVA_HOME</code> set correctly. Open the file <code>X/etc/hadoop/hadoop-env.sh</code>. Find the line <code>export JAVA_HOME=${JAVA_HOME}</code> and replace it with <code>export JAVA_HOME=$(dirname $(dirname $(readlink -f $(which javac))))</code>.</li></ol>
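The nested command in the last step above can be unpacked and sanity-checked in a terminal before you edit <code>hadoop-env.sh</code>. A minimal sketch, assuming <code>javac</code> is on your <code>PATH</code>:

```shell
# Resolve JAVA_HOME from the javac binary on the PATH:
#  1. `which javac`  finds e.g. /usr/bin/javac
#  2. `readlink -f`  follows the symlink chain to the real JDK binary,
#                    e.g. /usr/lib/jvm/java-8-openjdk-amd64/bin/javac
#  3. two `dirname`s strip the trailing /bin/javac, leaving the JDK root
JAVA_HOME=$(dirname "$(dirname "$(readlink -f "$(which javac)")")")
echo "$JAVA_HOME"
```

If the printed path looks like a JDK installation folder, the line you put into <code>hadoop-env.sh</code> will resolve the same value.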
-#### 2.4.2. Testing basic Functionality
+<h4>2.4.2. Testing basic Functionality</h4>
We can now test whether everything above worked and whether Hadoop has been downloaded, unpacked, and set up correctly.
<ol>
@@ -178,7 +178,7 @@ cat output/*
This third command should produce a lot of logging output and the last one should say something like <code>1 dfsadmin</code>. If that is the case, you are doing well.
</li></ol>
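For reference, the standalone test from the linked tutorial boils down to the following commands, run from inside the unpacked Hadoop folder. This is a sketch; the jar name assumes the Hadoop 2.7.2 download used above:

```shell
# Run from the unpacked Hadoop folder (X/hadoop-2.7.2).
mkdir input
cp etc/hadoop/*.xml input     # use the bundled config files as test data
# Run the bundled "grep" example job: count matches of the regex in `input`,
# writing results to a fresh `output` folder.
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar \
    grep input output 'dfs[a-z.]+'
cat output/*
```

The final <code>cat</code> is what should print the <code>1 dfsadmin</code> line mentioned above.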
-#### 2.4.3. Setup for Single-Computer Pseudo-Distributed Execution
+<h4>2.4.3. Setup for Single-Computer Pseudo-Distributed Execution</h4>
To really use Hadoop in a pseudo-distributed fashion on our local computer, we have to do <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation">more</a>:
<ol>
@@ -206,7 +206,7 @@ For really using Hadoop in a pseudo-distributed fashion on our local computer, w
</configuration>
</pre></li></ol>
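For reference, the configuration files edited in this step end up with the standard entries from the linked tutorial. This is a sketch; the port 9000 and replication factor 1 are the tutorial's defaults for a single-node setup:

```xml
<!-- etc/hadoop/core-site.xml: point the default filesystem at a local HDFS -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- etc/hadoop/hdfs-site.xml: a single node can only hold one replica -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```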
-#### 2.4.4. Setup for SSH for Passwordless Connection to Local Host
+<h4>2.4.4. Setup for SSH for Passwordless Connection to Local Host</h4>
In order to run Hadoop in a pseudo-distributed fashion, we need to enable passwordless SSH connections to the local host.
<li>Some text will be displayed, such as <code>Generating public/private dsa key pair.</code>, followed by a couple of other things. After completing the above commands, you should test the result by again executing <code>ssh localhost</code>. You will no longer be asked for a password and will directly receive a welcome message, something like <code>Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-35-generic x86_64)</code>, depending on the Linux distribution you use. Via an ssh connection, you can, basically, open a terminal to and run commands on a remote computer (which, in this case, is your own, current computer). You can return to the normal (non-ssh) terminal by entering <code>exit</code> and pressing return, after which you will be notified that <code>Connection to localhost closed.</code></li>
</ol>
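The key-generation steps referenced above correspond to the commands from the linked tutorial. A sketch; Hadoop-2.7-era documentation used a dsa key, while newer OpenSSH versions may require <code>-t rsa</code> instead:

```shell
# Generate a passwordless key pair (-P '' means empty passphrase) ...
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# ... and authorize it for logins to this host.
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
# Should now log in without asking for a password.
ssh localhost
```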
-#### 2.4.6. Running the Hadoop-Provided Map-Reduce Job Locally
+<h4>2.4.6. Running the Hadoop-Provided Map-Reduce Job Locally</h4>
We now want to test whether our installation and setup work correctly by further following the steps given in the <a href="http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html#Execution">tutorial</a>.
<ol>
@@ -324,7 +324,7 @@ bin/hdfs dfs -rm -R output
### 2.6 Troubleshooting
-#### 2.6.1. "No such file or directory"
+<h4>2.6.1. "No such file or directory"</h4>
Sometimes, you may try to copy some file or folder to HDFS and get an error that no such file or directory exists. Then do the following:
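One common cause, in case the error concerns a relative HDFS path: relative paths such as <code>input</code> resolve against the current user's HDFS home directory, which does not exist on a freshly formatted HDFS. A hedged sketch of that particular remedy (the <code>/user/&lt;username&gt;</code> layout is the HDFS convention):

```shell
# Run from the Hadoop folder. Relative HDFS paths resolve against
# /user/<username>, which must be created once on a fresh HDFS.
bin/hdfs dfs -mkdir -p /user/$(whoami)
```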