Peer-to-Peer (P2P) computing is a new name for a relatively old model of computing that is coming back into favor. Looking back at its history, the Internet started as a network of computers (peer nodes) communicating directly with each other. As computing models, software, and network architectures grew and evolved, different models of computer networking were used, resulting in different architectures, most of them with the distinct notions of a client and a server, with the two roles performed by different machines. The pendulum seems to have swung back to the notion of a node on the network being both a client and a server at different times and in different contexts, spurred by the availability of cheap computers, cheap bandwidth, cheap data storage, and idle processor cycles. We can then define P2P computing as computing through direct collaboration between nodes (without resorting to an intermediary such as a server), including the sharing of data, processing cycles, and resources such as storage and printers. Within an organization, this has the advantage of relieving servers of some of their load. Outside organizational boundaries, the P2P model enables computing and resource sharing in unstructured environments where it makes no sense to have servers.
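The idea of a single node playing both roles can be sketched in a few lines. The following is a minimal illustration using Python sockets; the `Peer` class and its uppercase-echo protocol are invented for this example, not part of any P2P framework:

```python
import socket
import threading

class Peer:
    """A minimal peer: a server (answers incoming requests) and a
    client (sends requests directly to other peers) in one node."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((host, port))       # port 0: pick any free port
        self.sock.listen()
        self.address = self.sock.getsockname()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Server role: answer each incoming request with an uppercased echo.
        while True:
            conn, _ = self.sock.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(data.upper())

    def request(self, peer_address, message):
        # Client role: connect directly to another peer, no server in between.
        with socket.create_connection(peer_address) as conn:
            conn.sendall(message.encode())
            return conn.recv(1024).decode()

alice = Peer()
bob = Peer()
# Each peer is a client toward the other and a server for the other.
print(alice.request(bob.address, "hello from alice"))  # HELLO FROM ALICE
print(bob.request(alice.address, "hello from bob"))    # HELLO FROM BOB
```

Note that neither node is designated the server: either peer can initiate a request, which is precisely the symmetry the P2P model restores.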
P2P computing has been made famous by the Napster court battles and is mainly known as a model for file sharing. In fact, P2P is a full-fledged model of distributed computing that enables collaboration, intensive computation, resource sharing, and so on. Currently, however, most P2P applications provide only one type of service, and are usually limited to file sharing or instant messaging. Aside from the Napster example (which is not necessarily a true P2P model, because it requires the Napster servers as intermediaries, at least in the discovery phase), some common P2P applications include Gnutella and FreeNet, two information sharing applications; and Groove, a P2P collaboration platform that allows users to share a variety of tools in addition to files and documents.
Current P2P applications are also restricted in their deployment platforms and differ significantly in their interfaces. There is, therefore, a definite need for an open platform for P2P application development, one that enables multiple kinds of device-independent applications and services and provides a common interface for interoperability. The JXTA project is the latest attempt to solve that problem, and it remains to be seen whether it succeeds. The core of the JXTA architecture is a set of three layers: the JXTA Core, which handles communications, peer establishment, and other low-level services; the JXTA Services layer, which handles common services such as indexing, searching, and file sharing; and the JXTA Applications layer, which is the deployment framework for applications such as e-mail, instant messaging, and file sharing. In addition, the JXTA architecture provides a set of platform-independent protocols for routing, discovery, binding, and other tasks.
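The three-layer split can be made concrete with a toy sketch. The classes below are hypothetical and do not reflect the real JXTA API; they only illustrate the layering: a core that registers peers and moves messages, a file-sharing service built on the core, and application code on top:

```python
class Core:
    """Core layer (toy): peer registration and low-level message passing."""
    def __init__(self):
        self.inboxes = {}                  # peer_id -> queued messages

    def register(self, peer_id):
        self.inboxes[peer_id] = []

    def send(self, to_peer, message):
        self.inboxes[to_peer].append(message)

    def receive(self, peer_id):
        return self.inboxes[peer_id].pop(0)

class FileService:
    """Services layer (toy): file sharing implemented over the core."""
    def __init__(self, core, peer_id):
        self.core, self.peer_id = core, peer_id
        core.register(peer_id)
        self.files = {}                    # files this peer shares

    def share(self, name, data):
        self.files[name] = data

    def request_file(self, provider, name):
        # Ask the providing peer for the file, via core messaging only.
        self.core.send(provider.peer_id, ("GET", name, self.peer_id))
        provider.handle_next()
        return self.core.receive(self.peer_id)

    def handle_next(self):
        # Serve one pending request from this peer's inbox.
        verb, name, requester = self.core.receive(self.peer_id)
        if verb == "GET":
            self.core.send(requester, self.files.get(name))

# Applications layer: a trivial "file sharing app" using the service.
core = Core()
alice = FileService(core, "alice")
bob = FileService(core, "bob")
bob.share("notes.txt", "p2p notes")
print(alice.request_file(bob, "notes.txt"))  # p2p notes
```

The point of the layering is that the service never touches sockets or routing, and the application never touches message queues; each layer builds only on the one beneath it, which is the structure JXTA prescribes.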
Relating Peer Computing to Web Services
At its core, P2P computing is a kind of service-oriented architecture in that it enables distributed computing through the loose coupling of systems, with an emphasis on resource (rather than service) description and dynamic discovery. The major differences are not technical but a matter of maturity. Although Web service standards are maturing and their interoperability is being tested in open environments, P2P standards have been slow to emerge, hindering their acceptance as a valid option for e-business. Eventually, however, Web services and P2P standards seem bound to converge, especially in service description (will P2P adopt WSDL?), dynamic discovery (will Web services adopt whatever mechanism the P2P community converges on?), and security.