
Improving the Hadoop DFS Web UI

September 13th, 2016 | HDFS File Browser

Here at Altiscale, we have a diverse set of customers, from media companies to financial services firms to manufacturers. That diversity leads to an equally diverse set of requirements on the Altiscale Data Cloud, and we develop new solutions to address them. Because of our dedication to the Hadoop community, we contribute these innovations back to open source whenever we can. Our other contributions include DockerContainerExecutor (at one time the most-watched JIRA), the shell-script rewrite, GraphiteSink, and KafkaSink.

Thanks to great work on WebHDFS and the subsequent upgrade of the Namenode UI to use HTML5, the Hadoop Namenode finally got a modern UI.

[Screenshots: the old UI (left) and the newer WebHDFS UI (right)]

To understand why this UI was a significant advance, we should look at how the old UI worked.

[Diagram: request flow in the old UI]

In the old UI, a client would contact the Namenode HTTP port and request to view a directory. The HTTP server would then create a brand-new DFSClient object, contact itself (no kidding), and render a JSP page, which was finally sent back to the client. The UI itself was also pretty dated, and had at least one known XSS vulnerability (hat tip to Derek Dagit for discovering it).

[Diagram: request flow in the new UI]

In the new UI, the client requests to view a directory, and a WebHDFS server implementation on the Namenode simply returns JSON data. It is then the client's responsibility to render that data in the user's web browser. Here we see the advantages of using a well-defined REST service instead of a custom protocol. We also found the UI significantly more responsive and intuitive.
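To make that concrete, here is a minimal sketch of the new UI's data path: the browser asks WebHDFS for a listing and renders the returned JSON itself. The op=LISTSTATUS call and response shape follow the public WebHDFS REST API; the host name and port are placeholders, not any particular cluster's setup.

```typescript
// Fetch a directory listing from WebHDFS and hand back the entries;
// the browser-side UI renders these itself instead of receiving HTML.
interface FileStatus {
  pathSuffix: string;            // entry name relative to the listed path
  type: "FILE" | "DIRECTORY";
  owner: string;
  group: string;
  permission: string;            // octal string, e.g. "755"
  replication: number;
  length: number;                // file size in bytes
}

async function listDirectory(path: string): Promise<FileStatus[]> {
  const res = await fetch(
    `http://namenode.example.com:50070/webhdfs/v1${path}?op=LISTSTATUS`,
  );
  if (!res.ok) throw new Error(`LISTSTATUS failed: HTTP ${res.status}`);
  // WebHDFS wraps the entries as {"FileStatuses": {"FileStatus": [...]}}.
  const body = await res.json();
  return body.FileStatuses.FileStatus as FileStatus[];
}
```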

The newer UI was well received, and we kept getting requests to improve it even further. Although it was good for observing the state of HDFS, users were still not able to modify files or otherwise interact with HDFS. Some of the most commonly requested features included:

  1. Creating directories (mkdir)
  2. Changing permissions (chmod)
  3. Changing ownership (chown)
  4. Setting replication (setrep)
  5. Deleting files / directories (rm)
  6. Moving files / directories (mv)
  7. Pagination and sorting capabilities

We filed an umbrella JIRA for these improvements at HDFS-7588. We were able to work with the community and implement all these features in open source Hadoop.
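Each of the features above maps onto an existing WebHDFS REST operation. A hedged sketch of the underlying requests (operation and parameter names are from the public WebHDFS API; the base URL is a placeholder):

```typescript
// One-line wrappers over the WebHDFS operations behind the new UI actions.
const BASE = "http://namenode.example.com:50070/webhdfs/v1";
const call = (path: string, query: string, method: string) =>
  fetch(`${BASE}${path}?${query}`, { method });

const mkdir = (p: string) => call(p, "op=MKDIRS", "PUT");
const chmod = (p: string, perm: string) =>
  call(p, `op=SETPERMISSION&permission=${perm}`, "PUT");
const chown = (p: string, owner: string, group: string) =>
  call(p, `op=SETOWNER&owner=${owner}&group=${group}`, "PUT");
const setrep = (p: string, n: number) =>
  call(p, `op=SETREPLICATION&replication=${n}`, "PUT");
const rm = (p: string) => call(p, "op=DELETE&recursive=true", "DELETE");
const mv = (src: string, dst: string) =>
  call(src, `op=RENAME&destination=${dst}`, "PUT");
```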

Here are some screenshots:

[Screenshots: changing permissions and changing ownership]

[Screenshots: changing replication and creating directories]

The obvious next step was to support file uploading. This turned out to be slightly more complicated than expected.

That tiny little thing called same-origin policy

When using a client such as curl, the WebHDFS protocol accesses a file in the following steps (sketched in code after the list):

[Diagram: the WebHDFS read/write protocol]

  1. The client first sends a request to the Namenode to read/write the file.
  2. The Namenode knows which datanodes have the first block of the file. It redirects the client through an HTTP 307 response to one of the datanodes.
  3. The client then sends exactly the same request to the datanode.
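A minimal sketch of these steps from a standalone client (Node.js here, standing in for curl; host names are placeholders):

```typescript
import http from "node:http";

// Issue a PUT and resolve with the raw response; redirects are NOT followed.
function put(url: string, body?: Buffer): Promise<http.IncomingMessage> {
  return new Promise((resolve, reject) => {
    const req = http.request(url, { method: "PUT" }, resolve);
    req.on("error", reject);
    req.end(body);
  });
}

async function webhdfsCreate(path: string, data: Buffer): Promise<void> {
  // Steps 1 and 2: the Namenode answers with a 307 whose Location header
  // names a datanode that will hold the first block of the file.
  const nn = await put(
    `http://namenode.example.com:50070/webhdfs/v1${path}?op=CREATE&overwrite=true`,
  );
  nn.resume(); // drain the (empty) response body
  if (nn.statusCode !== 307 || !nn.headers.location) {
    throw new Error(`expected a 307 redirect, got HTTP ${nn.statusCode}`);
  }
  // Step 3: replay exactly the same request, now carrying the file bytes,
  // against the datanode named in the Location header.
  const dn = await put(nn.headers.location, data);
  dn.resume();
  if (dn.statusCode !== 201) {
    throw new Error(`create failed: HTTP ${dn.statusCode}`);
  }
}
```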

The astute reader will realize that this will not work with web browsers, because they implement the same-origin policy. A web browser will not simply follow the redirect. Instead, it first sends a preflight HTTP OPTIONS request (which carries the page's origin). Only if the server implements CORS and replies with a 200 along with the allowed methods and headers does the browser send the original request. This is done to protect the browser's user from leaking sensitive information to servers that have not opted in.
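As an aside, here is a generic sketch of the server's side of that handshake (a toy illustration, not Hadoop source; as described below, the actual fix took a different route):

```typescript
import http from "node:http";

// A toy server answering CORS preflight requests. Only after the browser
// sees these Access-Control-* headers on the OPTIONS response will it go on
// to send the actual cross-origin PUT.
http.createServer((req, res) => {
  if (req.method === "OPTIONS") {
    res.writeHead(200, {
      "Access-Control-Allow-Origin": req.headers.origin ?? "*",
      "Access-Control-Allow-Methods": "GET, PUT, POST, DELETE",
      "Access-Control-Allow-Headers": "Content-Type",
    });
    res.end();
    return;
  }
  res.writeHead(200, {
    "Access-Control-Allow-Origin": req.headers.origin ?? "*",
  });
  res.end("handled the actual request\n");
}).listen(8080);
```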

However, in our case the problem is further complicated because our original request was an HTTP PUT via XMLHttpRequest. As it turns out, XMLHttpRequest is a living standard, and web browsers still differ in their behavior when they encounter this. In our testing, Mozilla Firefox (v35.0) did not send the preflight request for an AJAX PUT request; Google Chrome (v37.0) did. Here's an illuminating discussion about the issue.

Our only recourse was to change the WebHDFS protocol itself: when an additional parameter (noredirect) is set on the request, the Namenode returns a 200 OK response (rather than a 307 redirect) and puts the datanode location in the response body. Scripts in the browser then create a new XMLHttpRequest to that datanode. This works well.
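On the browser side, the upload then becomes two explicit requests. A hedged sketch (the host name is a placeholder, and fetch stands in for the UI's XMLHttpRequest usage):

```typescript
// Upload a file via WebHDFS without relying on the browser to follow a 307.
async function uploadFile(path: string, file: File): Promise<void> {
  // Step 1: ask the Namenode where to write; with noredirect set, it answers
  // 200 OK and returns the target datanode's URL in the JSON body.
  const res = await fetch(
    `http://namenode.example.com:50070/webhdfs/v1${path}` +
      `?op=CREATE&noredirect=true&overwrite=true`,
    { method: "PUT" },
  );
  if (!res.ok) throw new Error(`CREATE failed: HTTP ${res.status}`);
  const { Location } = await res.json();
  // Step 2: a brand-new request to the datanode, carrying the file bytes.
  const dn = await fetch(Location, { method: "PUT", body: file });
  if (dn.status !== 201) throw new Error(`upload failed: HTTP ${dn.status}`);
}
```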

Conclusion

We were able to contribute the features needed to make the HDFS file browser fully usable. HDFS users who would rather not work at the command line can now get their work done easily and intuitively through this interface. For now, these features are slated to ship with Hadoop 3 (although at Altiscale we chose to backport them to our Hadoop 2.7 clusters). We'd love to hear from you if you find these features useful.

Acknowledgements

The idea was initially implemented by Travis Thompson and Howard Weingram as part of an internal hack day. From the Apache community, Haohui Mai and Allen Wittenauer were instrumental in getting this integrated. Nina Stawski and Dragana Mijalkovic helped with the front-end changes and UI design.