
Commit fe0641d

Author: Thomas Weise
Improved Documentation for JSON and MPI Examples
1 parent 03fafe9

File tree: 2 files changed, +20 -1 lines changed

jsonRPC/README.md

Lines changed: 10 additions & 1 deletion
@@ -2,7 +2,16 @@
[JSON RPC](https://en.wikipedia.org/wiki/JSON-RPC) is a remote procedure call ([RPC](https://en.wikipedia.org/wiki/Remote_procedure_call)) approach (specified [here](http://json-rpc.org/)) where the exchanged data structures are encoded in the JavaScript Object Notation ([JSON](https://en.wikipedia.org/wiki/JSON)). The data is exchanged via either [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) or [TCP](https://en.wikipedia.org/wiki/Transmission_Control_Protocol).

- JSON RCPs are somehow similar to [web services](https://en.wikipedia.org/wiki/Web_service) ([examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/webServices/)), but are more light-weight (as the data structures smaller and simpler than [SOAP](https://en.wikipedia.org/wiki/SOAP)/[XML](https://en.wikipedia.org/wiki/XML), [examples for XML processing](http://github.com/thomasWeise/distributedComputingExamples/tree/master/xml/java)).
+ JSON RPCs are an alternative to [web services](https://en.wikipedia.org/wiki/Web_service) ([examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/webServices/)), but are more light-weight: the data structures are smaller and simpler than [SOAP](https://en.wikipedia.org/wiki/SOAP)/[XML](https://en.wikipedia.org/wiki/XML) ([examples for XML processing](http://github.com/thomasWeise/distributedComputingExamples/tree/master/xml/java)).
+
+ If we compare JSON RPC implemented with [briandilley](https://github.com/briandilley)'s [jsonrpc4j](https://github.com/briandilley/jsonrpc4j) framework to web services implemented with [Axis2](http://axis.apache.org/axis2/java/core/) and to [Java RMI](https://en.wikipedia.org/wiki/Java_remote_method_invocation) ([examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/javaRMI/)), then we find that:
+
+ 1. It has many of the positive features of web services (which Java RMI lacks): it is human-readable and can be transported over HTTP.
+ 2. The implementation is more light-weight: the protocol data units are smaller, and the codebase seems to be smaller.
+ 3. No code needs to be generated, and we can specify services as interfaces that we use on both the client and the server side, exactly as we do in Java RMI.
+ 4. JSON RPC services can either be deployed into a servlet container or compiled into a fat jar, which is quite convenient.
+
+ So JSON RPCs are quite a nice technology.
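To make the light-weight, human-readable protocol data units concrete, here is a minimal sketch of a JSON RPC 2.0 exchange in plain Python with only the standard `json` module (not the jsonrpc4j framework discussed above; the `add` method and its handler table are hypothetical):

```python
import json

# Hypothetical service: a single "add" method, known to client and server.
HANDLERS = {"add": lambda a, b: a + b}

def handle(request_text: str) -> str:
    """Dispatch a JSON RPC 2.0 request and build the matching response."""
    req = json.loads(request_text)
    result = HANDLERS[req["method"]](*req["params"])
    resp = {"jsonrpc": "2.0", "result": result, "id": req["id"]}
    return json.dumps(resp)

# The client side: the whole request is one short, readable line.
request = json.dumps(
    {"jsonrpc": "2.0", "method": "add", "params": [2, 3], "id": 1}
)
print(handle(request))  # {"jsonrpc": "2.0", "result": 5, "id": 1}
```

The entire request and response each fit into one short line of text, which is the light-weight quality the comparison above refers to.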
## 1. Examples

mpi/README.md

Lines changed: 10 additions & 0 deletions
@@ -2,6 +2,16 @@
The Message Passing Interface ([MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface)) is a standard framework for highly-efficient communication in distributed systems.

+ ## 1. Introduction
+
+ In our past lessons, we have focused on the outside and the inside view of an organization's distributed application environment, i.e., how applications can be presented to users as interactive websites and how they can interact with each other under the hood. This is mainly interesting for business and enterprise software development, where distribution is used mainly for interoperability, not for performance gains.
+
+ But what if we want to use distribution for a performance gain? [Web services](https://en.wikipedia.org/wiki/Web_service) ([examples](http://github.com/thomasWeise/distributedComputingExamples/tree/master/webServices/)) would probably not be our technology of choice. What would we want? If we want performance, then we want to solve a problem faster, by dividing it into smaller pieces, into sub-problems. Each of these sub-problems would be solved by a different process on a different computer, in parallel.
+
+ We are hence looking for a technology that first lets us send the different sub-problems to different processes on different computers in our network (usually in our cluster). The processes should then solve these sub-problems (which means they may also need to exchange some information while doing so, if the sub-problems are related and depend on each other). Finally, the results of all sub-problems should be sent to a central process, which combines them into the final result of the overall problem. This is exactly what MPI, the Message Passing Interface, provides.
+
+ MPI is implemented for a variety of programming languages, but lower-level, higher-performance languages like `C` and `Fortran` are usually the languages of choice for high-performance computing. Here we discuss examples for the former, i.e., `C`.
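The scatter, solve, and gather steps described above can be sketched as a rough analogy in plain Python with the standard `multiprocessing` module (an illustration only, not MPI itself; a real MPI program in `C` would use collective operations such as `MPI_Scatter` and `MPI_Gather`):

```python
from multiprocessing import Pool

def solve(sub_problem):
    # Solve one sub-problem; here, hypothetically, summing a slice of numbers.
    return sum(sub_problem)

if __name__ == "__main__":
    problem = list(range(100))  # the overall problem
    # "Scatter": divide the problem into smaller sub-problems.
    chunks = [problem[i:i + 25] for i in range(0, 100, 25)]
    # Each sub-problem is solved by a different worker process, in parallel.
    with Pool(4) as pool:
        partials = pool.map(solve, chunks)
    # "Gather": a central process combines the partial results.
    print(sum(partials))  # 4950
```

Here the worker processes run on one machine; MPI generalizes the same pattern to processes spread over many computers in a cluster, communicating by message passing.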
## 2. Examples

The following examples are included in this folder.
