Monday, 12 December 2011

RESTful Web services: The basics

REST (Representational State Transfer) defines a set of architectural principles by which you can design Web services that focus on a system's resources, including how resource states are addressed and transferred over HTTP by a wide range of clients written in different languages. Measured by the number of Web services that use it, REST has emerged in the last few years as a predominant Web service design model. In fact, REST has had such a large impact on the Web that it has largely displaced SOAP- and WSDL-based interface design because it's a considerably simpler style to use.
REST didn't attract this much attention when it was first introduced in 2000 by Roy Fielding at the University of California, Irvine, in his academic dissertation, "Architectural Styles and the Design of Network-based Software Architectures," which analyzes a set of software architecture principles that use the Web as a platform for distributed computing (see http://www.ibm.com/developerworks/library/ws-restful/#resources for a link to this dissertation). Now, years after its introduction, major frameworks for REST have started to appear and are still being developed because, for example, it's slated to become an integral part of Java™ EE 6 through JSR-311 (JAX-RS).
This article suggests that, in its purest form today, a concrete implementation of a REST Web service follows four basic design principles:
  • Use HTTP methods explicitly.
  • Be stateless.
  • Expose directory structure-like URIs.
  • Transfer XML, JavaScript Object Notation (JSON), or both.
The following sections expand on these four principles and propose a technical rationale for why they might be important for REST Web service designers.

One of the key characteristics of a RESTful Web service is the explicit use of HTTP methods in a way that follows the protocol as defined by RFC 2616. HTTP GET, for instance, is defined as a data-producing method that's intended to be used by a client application to retrieve a resource, to fetch data from a Web server, or to execute a query with the expectation that the Web server will look for and respond with a set of matching resources.
REST asks developers to use HTTP methods explicitly and in a way that's consistent with the protocol definition. This basic REST design principle establishes a one-to-one mapping between create, read, update, and delete (CRUD) operations and HTTP methods. According to this mapping:
  • To create a resource on the server, use POST.
  • To retrieve a resource, use GET.
  • To change the state of a resource or to update it, use PUT.
  • To remove or delete a resource, use DELETE.
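This one-to-one mapping can be sketched as a simple dispatch table. The in-memory store and handler below are invented for illustration, not part of any particular framework:

```python
# Minimal sketch: dispatch HTTP methods to CRUD operations on an
# in-memory resource store. Illustrative only; a real service would
# also handle representations, headers, and error bodies.
store = {}

def handle(method, uri, body=None):
    if method == "POST":          # create a subordinate resource
        new_uri = uri.rstrip("/") + "/" + body["name"]
        store[new_uri] = body
        return 201, new_uri       # 201 Created, with the new URI
    if method == "GET":           # retrieve; no side effects
        return (200, store[uri]) if uri in store else (404, None)
    if method == "PUT":           # update/replace resource state
        store[uri] = body
        return 200, uri
    if method == "DELETE":        # remove the resource
        return (204, store.pop(uri)) if uri in store else (404, None)
    return 405, None              # 405 Method Not Allowed
```

For example, `handle("POST", "/users", {"name": "Robert"})` creates `/users/Robert`, and a subsequent `DELETE` on that URI removes it.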
An unfortunate design flaw inherent in many Web APIs is in the use of HTTP methods for unintended purposes. The request URI in an HTTP GET request, for example, usually identifies one specific resource. Or the query string in a request URI includes a set of parameters that defines the search criteria used by the server to find a set of matching resources. At least this is how the HTTP/1.1 RFC describes GET. But there are many cases of unattractive Web APIs that use HTTP GET to trigger something transactional on the server—for instance, to add records to a database. In these cases the GET request URI is not used properly or at least not used RESTfully. If the Web API uses GET to invoke remote procedures, it looks like this:

GET /adduser?name=Robert HTTP/1.1
It's not a very attractive design because the Web method above supports a state-changing operation over HTTP GET. Put another way, the HTTP GET request above has side effects. If successfully processed, the result of the request is to add a new user—in this example, Robert—to the underlying data store. The problem here is mainly semantic. Web servers are designed to respond to HTTP GET requests by retrieving resources that match the path (or the query criteria) in the request URI and returning them, or a representation of them, in a response—not by adding a record to a database. From the standpoint of the intended use of the protocol method, then, and from the standpoint of HTTP/1.1-compliant Web servers, using GET in this way is inconsistent.
Beyond the semantics, the other problem with using GET to trigger the deletion, modification, or addition of a record in a database, or to change server-side state in some way, is that it invites Web caching tools and search-engine crawlers to make server-side changes unintentionally, simply by crawling a link. A simple way to overcome this common problem is to move the parameter names and values on the request URI into XML tags. The resulting tags, an XML representation of the entity to create, may be sent in the body of an HTTP POST whose request URI is the intended parent of the entity (see Listings 1 and 2).

GET /adduser?name=Robert HTTP/1.1
            
POST /users HTTP/1.1
Host: myserver
Content-Type: application/xml

<?xml version="1.0"?>
<user>
  <name>Robert</name>
</user>
            

The method above is exemplary of a RESTful request: proper use of HTTP POST and inclusion of the payload in the body of the request. On the receiving end, the request may be processed by adding the resource contained in the body as a subordinate of the resource identified in the request URI; in this case the new resource should be added as a child of /users. This containment relationship between the new entity and its parent, as specified in the POST request, is analogous to the way a file is subordinate to its parent directory. The client sets up the relationship between the entity and its parent and defines the new entity's URI in the POST request.
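A client can assemble a create request like the one above with nothing more than an XML builder. This Python sketch produces the raw request text; the host name and entity structure are the article's illustrative values:

```python
# Sketch: compose the RESTful create request (POST /users) as raw
# HTTP text. "myserver" and the <user> entity are illustrative.
import xml.etree.ElementTree as ET

def build_create_user_request(name):
    user = ET.Element("user")
    ET.SubElement(user, "name").text = name
    body = ET.tostring(user, encoding="unicode")
    return (
        "POST /users HTTP/1.1\r\n"
        "Host: myserver\r\n"
        "Content-Type: application/xml\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"                      # blank line separates headers from body
        + body
    )
```

In practice you would hand the body to an HTTP library rather than format the request by hand; the point here is only that the payload travels in the body of a POST, not in the query string.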
A client application may then get a representation of the resource using the new URI, noting that at least logically the resource is located under /users, as shown in Listing 3.
GET /users/Robert HTTP/1.1
Host: myserver
Accept: application/xml

Using GET in this way is explicit because GET is for data retrieval only. GET is an operation that should be free of side effects, a property known as safety; safe methods are also idempotent, meaning that repeating the same request has no additional effect.
A similar refactoring of a Web method also needs to be applied in cases where an update operation is supported over HTTP GET, as shown in Listing 4.
GET /updateuser?name=Robert&newname=Bob HTTP/1.1

This changes the name attribute (or property) of the resource. While the query string can be used for such an operation, and Listing 4 is a simple one, this query-string-as-method-signature pattern tends to break down when used for more complex operations. Because your goal is to make explicit use of HTTP methods, a more RESTful approach is to send an HTTP PUT request to update the resource, instead of HTTP GET, for the same reasons stated above (see Listing 5).
PUT /users/Robert HTTP/1.1
Host: myserver
Content-Type: application/xml

<?xml version="1.0"?>
<user>
  <name>Bob</name>
</user>

Using PUT to replace the original resource provides a much cleaner interface that's consistent with REST's principles and with the definition of HTTP methods. The PUT request in Listing 5 is explicit in the sense that it points at the resource to be updated by identifying it in the request URI and in the sense that it transfers a new representation of the resource from client to server in the body of a PUT request instead of transferring the resource attributes as a loose set of parameter names and values on the request URI. Listing 5 also has the effect of renaming the resource from Robert to Bob, and in doing so changes its URI to /users/Bob. In a REST Web service, subsequent requests for the resource using the old URI would generate a standard 404 Not Found error.
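The rename-by-PUT behavior described above can be sketched with an in-memory store. This is illustrative only; a real service would also handle status codes, validation, and concurrent updates:

```python
# Sketch: PUT replaces the resource representation. If the update
# renames the resource, it moves to a new URI and the old URI stops
# resolving (404), as described in the article. In-memory store only.
users = {"/users/Robert": {"name": "Robert"}}

def put_user(uri, representation):
    prefix = uri.rsplit("/", 1)[0]            # e.g. "/users"
    new_uri = prefix + "/" + representation["name"]
    users.pop(uri, None)                      # old URI no longer resolves
    users[new_uri] = representation           # resource lives at its new URI
    return new_uri

def get_user(uri):
    return (200, users[uri]) if uri in users else (404, None)
```

After `put_user("/users/Robert", {"name": "Bob"})`, a GET on the old URI returns 404 and the new representation is served from /users/Bob.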
As a general design principle, it helps to follow REST guidelines for using HTTP methods explicitly by using nouns in URIs instead of verbs. In a RESTful Web service, the verbs—POST, GET, PUT, and DELETE—are already defined by the protocol. And ideally, to keep the interface generalized and to allow clients to be explicit about the operations they invoke, the Web service should not define more verbs or remote procedures, such as /adduser or /updateuser. This general design principle also applies to the body of an HTTP request, which is intended to be used to transfer resource state, not to carry the name of a remote method or remote procedure to be invoked. 

REST Web services need to scale to meet increasingly high performance demands. Clusters of servers with load-balancing and failover capabilities, proxies, and gateways are typically arranged in a way that forms a service topology, which allows requests to be forwarded from one server to another as needed to decrease the overall response time of a Web service call. Using intermediary servers to improve scalability requires REST Web service clients to send complete, independent requests—requests that include all of the data needed to fulfill them—so that the components in the intermediary servers may forward, route, and load-balance without any state being held locally between requests.
A complete, independent request doesn't require the server, while processing the request, to retrieve any kind of application context or state. A REST Web service application (or client) includes within the HTTP headers and body of a request all of the parameters, context, and data needed by the server-side component to generate a response. Statelessness in this sense improves Web service performance and simplifies the design and implementation of server-side components because the absence of state on the server removes the need to synchronize session data with an external application.
Figure 1 illustrates a stateful service from which an application may request the next page in a multipage result set, assuming that the service keeps track of where the application leaves off while navigating the set. In this stateful design, the service increments and stores a previousPage variable somewhere to be able to respond to requests for next.
Stateful services like this get complicated. In a Java Platform, Enterprise Edition (Java EE) environment stateful services require a lot of up-front consideration to efficiently store and enable the synchronization of session data across a cluster of Java EE containers. In this type of environment, there's a problem familiar to servlet/JavaServer Pages (JSP) and Enterprise JavaBeans (EJB) developers who often struggle to find the root causes of java.io.NotSerializableException during session replication. Whether it's thrown by the servlet container during HttpSession replication or thrown by the EJB container during stateful EJB replication, it's a problem that can cost developers days in trying to pinpoint the one object that doesn't implement Serializable in a sometimes complex graph of objects that constitute the server's state. In addition, session synchronization adds overhead, which impacts server performance.
Stateless server-side components, on the other hand, are less complicated to design, write, and distribute across load-balanced servers. A stateless service not only performs better, it shifts most of the responsibility of maintaining state to the client application. In a RESTful Web service, the server is responsible for generating responses and for providing an interface that enables the client to maintain application state on its own. For example, in the request for a multipage result set, the client should include the actual page number to retrieve instead of simply asking for next (see Figure 2). 
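A minimal sketch of this stateless paging contract follows; the sample data, page size, and link format are invented for illustration:

```python
# Stateless pagination sketch: the client asks for an explicit page
# number, and the response embeds the link to the next page, instead
# of the server remembering a "previousPage" per client.
ITEMS = [f"item-{i}" for i in range(1, 26)]   # 25 invented sample items
PAGE_SIZE = 10

def get_page(page):
    start = (page - 1) * PAGE_SIZE
    items = ITEMS[start:start + PAGE_SIZE]
    has_more = start + PAGE_SIZE < len(ITEMS)
    next_link = f"/items?page={page + 1}" if has_more else None
    return {"items": items, "next": next_link}  # response carries the link
```

Because each request names its page explicitly, any server in a load-balanced cluster can answer it without shared session state.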

A stateless Web service generates a response that links to the next page number in the set and lets the client do what it needs to in order to keep this value around. This aspect of RESTful Web service design can be broken down into two sets of responsibilities as a high-level separation that clarifies just how a stateless service can be maintained:
Server
  • Generates responses that include links to other resources to allow applications to navigate between related resources. This type of response embeds links. Similarly, if the request is for a parent or container resource, then a typical RESTful response might also include links to the parent's children or subordinate resources so that these remain connected.
  • Generates responses that indicate whether they are cacheable or not to improve performance by reducing the number of requests for duplicate resources and by eliminating some requests entirely. The server does this by including Cache-Control and Last-Modified (a date value) HTTP response headers.

Client application
  • Uses the Cache-Control response header to determine whether to cache the resource (make a local copy of it) or not. The client also reads the Last-Modified response header and sends back the date value in an If-Modified-Since header to ask the server if the resource has changed. This is called Conditional GET, and the two headers go hand in hand in that the server's response is a standard 304 code (Not Modified) and omits the actual resource requested if it has not changed since that time. A 304 HTTP response code means the client can safely use a cached, local copy of the resource representation as the most up-to-date, in effect bypassing subsequent GET requests until the resource changes.
  • Sends complete requests that can be serviced independently of other requests. This requires the client to make full use of HTTP headers as specified by the Web service interface and to send complete representations of resources in the request body. The client sends requests that make very few assumptions about prior requests, the existence of a session on the server, the server's ability to add context to a request, or about application state that is kept in between requests.
This collaboration between client application and service is essential to being stateless in a RESTful Web service. It improves performance by saving bandwidth and minimizing server-side application state. 
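The server side of a Conditional GET can be sketched as follows. The resource record and date handling here are simplified assumptions; a production server would also emit ETags and validate the header format:

```python
# Conditional GET sketch: compare If-Modified-Since against the
# resource's Last-Modified value and answer 304 with no body when
# the client's cached copy is still current.
from email.utils import parsedate_to_datetime

def conditional_get(resource, if_modified_since=None):
    last_modified = resource["last_modified"]
    if if_modified_since is not None:
        # Not modified since the client's date -> 304, omit the body.
        if parsedate_to_datetime(last_modified) <= parsedate_to_datetime(if_modified_since):
            return 304, None
    return 200, resource["body"]
```

The client then keeps echoing back the Last-Modified value it last saw, bypassing full transfers until the resource actually changes.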

From the standpoint of client applications addressing resources, the URIs determine how intuitive the REST Web service is going to be and whether the service is going to be used in ways that the designers can anticipate. A third RESTful Web service characteristic is all about the URIs.
REST Web service URIs should be intuitive to the point where they are easy to guess. Think of a URI as a kind of self-documenting interface that requires little, if any, explanation or reference for a developer to understand what it points to and to derive related resources. To this end, the structure of a URI should be straightforward, predictable, and easily understood.
One way to achieve this level of usability is to define directory structure-like URIs. This type of URI is hierarchical, rooted at a single path, and branching from it are subpaths that expose the service's main areas. According to this definition, a URI is not merely a slash-delimited string, but rather a tree with subordinate and superordinate branches connected at nodes. For example, in a discussion threading service that gathers topics ranging from Java to paper, you might define a structured set of URIs like this:

http://www.myservice.org/discussion/topics/{topic}
The root, /discussion, has a /topics node beneath it. Underneath that is a series of topic names, such as gossip, technology, and so on, each of which points to a discussion thread. Within this structure, it's easy to pull up discussion threads just by typing something after /topics/.
In some cases, the path to a resource lends itself especially well to a directory-like structure. Take resources organized by date, for instance, which are a very good match for using a hierarchical syntax.
This example is intuitive because it is based on rules:

http://www.myservice.org/discussion/2008/12/10/{topic}
The first path fragment is a four-digit year, the second is a two-digit month, and the third is a two-digit day. It may seem a little silly to explain it that way, but this is the level of simplicity we're after. Humans and machines can easily generate structured URIs like this because they are based on rules. Filling path parts into the slots of a syntax works well because there is a definite pattern from which to compose them:

http://www.myservice.org/discussion/{year}/{month}/{day}/{topic}
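Composing such a rule-based URI is a one-liner with the standard library. This sketch assumes the year/month/day order of the dated example above; the host and path are the article's illustrative values:

```python
# Sketch: fill the date-structured URI template from a date object.
# Host and path segments are the article's illustrative values.
from datetime import date

def topic_uri(d, topic):
    return (
        "http://www.myservice.org/discussion/"
        f"{d.year:04d}/{d.month:02d}/{d.day:02d}/{topic}"
    )
```

Because the pattern is fixed, clients can both generate these URIs and parse them back into their parts without consulting any documentation.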
Some additional guidelines to make note of while thinking about URI structure for a RESTful Web service are:
  • Hide the server-side scripting technology file extensions (.jsp, .php, .asp), if any, so you can port to something else without changing the URIs.
  • Keep everything lowercase.
  • Substitute spaces with hyphens or underscores (one or the other).
  • Avoid query strings as much as you can.
  • If the request URI names a partial path, respond with a default page or resource rather than a 404 Not Found error.
URIs should also be static so that when the resource changes or the implementation of the service changes, the link stays the same. This allows bookmarking. It's also important that the relationship between resources that's encoded in the URIs remains independent of the way the relationships are represented where they are stored. 
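Several of the guidelines above can even be enforced in code. This sketch normalizes path segments accordingly; the rules and function names are invented for illustration:

```python
# Sketch: apply the URI guidelines above -- hide server-side
# scripting extensions, keep everything lowercase, and substitute
# spaces with hyphens. Illustrative rules, not a standard library.
import re

def normalize_segment(segment):
    segment = re.sub(r"\.(jsp|php|asp)$", "", segment)  # hide .jsp/.php/.asp
    segment = segment.lower()                           # keep lowercase
    return re.sub(r"\s+", "-", segment.strip())         # spaces -> hyphens

def normalize_path(path):
    return "/".join(normalize_segment(s) for s in path.split("/"))
```

Running such a normalizer at the routing layer keeps published URIs stable even if the underlying implementation technology changes.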
A resource representation typically reflects the current state of a resource, and its attributes, at the time a client application requests it. Resource representations in this sense are mere snapshots in time. This could be a thing as simple as a representation of a record in a database that consists of a mapping between column names and XML tags, where the element values in the XML contain the row values. Or, if the system has a data model, then according to this definition a resource representation is a snapshot of the attributes of one of the things in your system's data model. These are the things you want your REST Web service to serve up.
The last set of constraints that goes into a RESTful Web service design has to do with the format of the data that the application and service exchange in the request/response payload or in the HTTP body. This is where it really pays to keep things simple, human-readable, and connected.
The objects in your data model are usually related in some way, and the relationships between data model objects (resources) should be reflected in the way they are represented for transfer to a client application. In the discussion threading service, an example of connected resource representations might include a root discussion topic and its attributes, and embed links to the responses given to that topic.
<?xml version="1.0"?>
<discussion date="{date}" topic="{topic}">
  <comment>{comment}</comment>
  <replies>
    <reply from="joe@mail.com" href="/discussion/topics/{topic}/joe"/>
    <reply from="bob@mail.com" href="/discussion/topics/{topic}/bob"/>
  </replies>
</discussion>

And last, to give client applications the ability to request a specific content type that's best suited for them, construct your service so that it makes use of the built-in HTTP Accept header, where the value of the header is a MIME type. Some common MIME types used by RESTful services are shown in Table 1.
Table 1. Common MIME types
MIME-Type    Content-Type
JSON         application/json
XML          application/xml
XHTML        application/xhtml+xml
This allows the service to be used by a variety of clients written in different languages running on different platforms and devices. Using MIME types and the HTTP Accept header is a mechanism known as content negotiation, which lets clients choose which data format is right for them and minimizes data coupling between the service and the applications that use it. 
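Server-side content negotiation can be sketched as follows. A production server would honor q-values and wildcard media ranges fully, so treat the fixed preference order here as an assumption:

```python
# Content negotiation sketch: pick a representation based on the
# client's Accept header. Minimal version: ignores q-values, checks
# our supported types in a fixed (assumed) order of preference.
SUPPORTED = ["application/json", "application/xml", "application/xhtml+xml"]

def negotiate(accept_header):
    # "text/html, application/json;q=0.9" -> ["text/html", "application/json"]
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "*/*" in accepted:
        return SUPPORTED[0]
    for mime in SUPPORTED:
        if mime in accepted:
            return mime
    return None  # no acceptable type -> 406 Not Acceptable
```

The selected MIME type would then be echoed back in the response's Content-Type header.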

REST is not always the right choice. It has caught on as a way to design Web services with less dependence on proprietary middleware (for example, an application server) than the SOAP- and WSDL-based kind. And in a sense, REST is a return to the Web the way it was before the age of the big application server, through its emphasis on the early Internet standards, URI and HTTP. As you've seen in examining the principles of RESTful interface design, XML over HTTP is a powerful interface that allows internal applications, such as Asynchronous JavaScript + XML (Ajax)-based custom user interfaces, to easily connect, address, and consume resources. In fact, the great fit between Ajax and REST has increased the amount of attention REST is getting these days.
Exposing a system's resources through a RESTful API is a flexible way to provide different kinds of applications with data formatted in a standard way. It helps to meet integration requirements that are critical to building systems where data can be easily combined (mashups) and to extend or build on a set of base, RESTful services into something much bigger. This article touches on just the basics here but hopefully in a way that has enticed you to continue exploring the subject.

What's new in WebSphere Portal V6.1: JSR 286 features

The first JSR 286 feature that we consider is portlet events. Portlet events represent a loosely coupled publish-and-subscribe communication model, in which a portlet sends out an event that is then relayed to other interested portlets by a broker component running inside the portal.
Interportlet communication has always been an important use case and, therefore, WebSphere Portal for several releases has provided a very similar communication mechanism for brokered information exchange between portlets: the WebSphere Portal cooperative portlets feature, also known as the property broker. With JSR 286 support for portlet events, the same publish and subscribe infrastructure has now been extended to support the new standardized application programming interfaces (APIs).
From an administrative perspective, there is no difference between connecting JSR 168-based cooperative portlets or JSR 286 portlets with event support: both are done through wiring administration, in which connections are defined from published information (a publishing event) of one portlet to processing logic (a processing event) in another portlet. Of course, the wiring tool needs to know about the event types that portlets can publish and receive. Therefore, you have to declare all events supported by a portlet in its portlet.xml deployment descriptor, as recommended by the Java Portlet Specification. Otherwise, you are not able to connect your portlets in WebSphere Portal V6.1, and the portlets cannot communicate.
You should provide user-readable display names for all declared events so that the wiring tool can display the wire sources and targets in a user-friendly way. JSR 286 added language-specific information that is shared among all portlets in a WAR file: the new standard defines an application resource bundle (in addition to the JSR 168 portlet resource bundles) in which translatable information, such as these event display names, is stored.
WebSphere Portal currently has no support for dynamic interconnection of portlets by simply placing them on the same page; events are exchanged only between portlets if a matching wire has been defined. Typically, pages are set up by administrators, including the wiring of portlets on the pages. Users with appropriate permissions, who can add portlets to a private view of a page, can also set up private wires that do not apply for others.
Figure 1 is a screen capture of the portlet wiring tool with the two sample portlets wired together.
The explicit wiring model allows administrators to fully control how portlets deployed in WebSphere Portal can communicate. In particular, this model allows you to define target events as global (visible across pages) and establish cross-page wires that coordinate portlets on different pages. An event from a portlet can be propagated across multiple on-page and cross-page wires. One of the cross-page wires can be additionally marked with a switch-page flag, so that, after event processing completes, the browser is redirected to the target page.
Wires are a part of the data model for pages, which implies that management functions and APIs on pages also include the contained wires: Configuration management tools such as XmlAccess or the new version 6.1 site management tool can export and import wires along with the pages. WebSphere Portal application templates, which allow you to create many instances of a given setup of portlets, pages, and business components, can set up wires as part of an application instance with communicating portlets. Finally, the public model system programming interface (SPI) for reading data model information and the public controller SPI, and the model Atom feed infrastructure (both new features of version 6.1) support reading and modifying the wiring structure of a page.
The JSR 286 specification permits portlets to send and receive complex Java objects as event payloads, as long as these payloads are serializable both as Java objects and as XML through JAXB. This capability allows the transfer of complex objects across class loaders and even servers, for example, when communicating with remote portlets that follow the Web Services for Remote Portlets (WSRP) 2.0 protocol. WebSphere Portal 6.1 supports WSRP 2.0 and allows full interoperability and event propagation between local and remote portlets.
The XML serialization mechanism (based on JAXB 2.0) is also used to convert complex objects between different class loaders. This serialization is commonly the case when custom payload types are packaged in the portlet WAR file because each Web module is deployed in its own class loader. XML serialization implies a performance overhead; therefore, WebSphere Portal attempts to pass direct object references when possible. You can take advantage of this feature by deploying your shared payload classes in a common class loader, for example, by using the WebSphere Application Server shared library concepts, so that serialization is not necessary and optimal performance can be obtained.
In general, you should use simple Java types (usually Strings) as event payloads because this approach maximizes the possibility of interconnecting portlets, even if they were not originally developed together. For portlets that were developed independently, complex event payload classes are rarely compatible.
You should improve the interoperability of your portlets by declaring compatible event names or aliases: Event names typically describe not only the data that is being exchanged, but also the activity that creates or requires the data. For example, a calendar portlet might react to a specific show_week_for_date processing event that takes a date as an input. To wire an input from another portlet to this activity, the other portlet would need to produce a show_week_for_date publishing event in the same portlet-specific namespace; an event name that probably does not make much sense in the context of, say, an order-processing portlet that is producing the date. For this purpose, JSR 286 supports alias names for events. For example, the order-processing portlet can declare an order_added_for_date publishing event with aliases show_week_for_date and show_month_for_date, so that users can wire the order selection event to different actions in the calendar.
Still, this approach implies a relatively tight coupling of both portlet designs. To maximize interoperability, you should declare an alias for each event that describes just the data type that it expects or produces; in other words, the minimum requirement for compatibility. For example, you can declare the calendar portlet processing event as shown in listing 1.
<event-definition
  xmlns:cal="http://www.acme.com/portlets/calendar"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <qname>cal:show_week_for_date</qname>
  <alias>xsd:date</alias>
  <value-type>java.util.Date</value-type>
</event-definition>
  

Using the alias name, the show_week_for_date event can be triggered by any source that produces a date, even if both connected portlets have been developed completely independently.
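For completeness, the other side of such a connection can be sketched in the same style: the order-processing portlet declares its publishing event with the same data-type alias. The namespace and event name below are invented for illustration, in the manner of Listing 1:

```xml
<event-definition
  xmlns:ord="http://www.acme.com/portlets/orders"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <qname>ord:order_added_for_date</qname>
  <alias>xsd:date</alias>
  <value-type>java.util.Date</value-type>
</event-definition>
```

Because both declarations share the xsd:date alias, the wiring tool can connect the two portlets even though their event names come from unrelated namespaces.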
Figure 2 shows the result: the new order has been added, and its date has been sent to the calendar.
The JSR 286 event propagation mechanism describes an automated interaction of portlets that takes place whenever a portlet publishes an event. The event phase of the portlet life cycle does not allow any sort of user interaction. WebSphere Portal extends this model with an interportlet communication mechanism in which propagation of information is explicitly controlled by the user: We call this mechanism the click-to-action model.
The idea is that instead of publishing an event during the action phase, a portlet can embed event information into its markup using special HTML constructs for enabling an event source as live text. This event source then becomes an active hotspot in the browser: When clicked, the event source dynamically collects all matching processing targets on the page into a menu that is displayed to the user. The user can then select a processing action for the information that is executed. See figure 3.
The mechanism for providing an event source is different from JSR 286 in this model (semantic HTML markup instead of a Java call), but JSR 286 event processing fits well into this model as a processing target. Therefore, JSR 286 portlets that define processing events are automatically made available as click-to-action targets in WebSphere Portal V6.1. You can easily combine both programming models by providing JSR 286 source events and live text markup in your portlet. Note that click-to-action, because it is based on HTML markup only, can be used to communicate between all sorts of components that are represented as markup on a portal page. For example, you can add tagging for event sources in content management information that is displayed on a page and then send the information to a JSR 286 portlet on the same page. For more details on click-to-action support, see the WebSphere Portal V6.1 Information Center.
Finally, we want to point out that the JSR 286 event support in WebSphere Portal is fully interoperable with the cooperative portlet feature supported in previous releases that we have mentioned earlier: Cooperative JSR 168 portlets can send information that is received as processing events by JSR 286 portlets, and JSR 286 portlets can publish events that are propagated to portlets using the cooperative portlet model. This interoperability allows a smooth migration or extension of existing systems with coordinated portlets onto the new portlet standard, thus protecting customer investments.
Besides portlet events, public render parameters represent an alternative way for coordination between portlets. Although both mechanisms allow exchanging information between portlets, they differ in several aspects. We first take a look at the technical details of public render parameters and their implementation in WebSphere Portal, and then we provide some guidelines to aid you in choosing the most appropriate technology for portlet communication.
As noted in the first part of this article, from a programming perspective, a public render parameter is handled almost identically to an ordinary (private) render parameter: The portlet can set and read this parameter using the same API methods that JSR 168 introduced for private render parameters. From a programmer's point of view, the important difference is that a public render parameter is declared in the portlet.xml deployment descriptor and therefore becomes an external interface of the portlet. See listing 2.
 <public-render-parameter
  xmlns:dm="http://www.acme.com/portlets/doc-mgmt">
  <identifier>selected-doc</identifier>
  <qname>dm:doc</qname>
 </public-render-parameter>

This portlet.xml fragment declares that the portlet code uses a render parameter with the portlet-specific identifier selected-doc and that this render parameter can be shared externally under the global (and, we hope, unique) name http://www.acme.com/portlets/doc-mgmt:doc. If a second portlet also declares a public render parameter with a global name http://www.acme.com/portlets/doc-mgmt:doc (regardless of the portlet-specific identifier that it uses) it can now share the values of this render parameter. Note that WebSphere Portal V6.1 stores the namespace and QName that you’ve declared in the portlet.xml in the URL, and thus you should make these as short as possible.
WebSphere Portal V6.1 does not require any administrative tasks to set up two portlets for sharing public parameters. The simple fact that they both use the same global name is enough; just place the two portlets on a page, and they will start interacting.
Actually, the collaboration works even across pages. By default, all public render parameters set by portlets are placed in the global scope. That fact means that you can interact with one portlet that uses a public parameter, then switch to another portlet on a different page that is using the same public parameter. When you view the second portlet for the first time, it is immediately set up with the information that was provided by the first portlet. This approach makes public render parameters a great tool for scenarios in which you have several portlets, even on different pages, that can all display information related to some global key such as a customer ID. By treating this global key as a public render parameter in all portlets, they will coordinate automatically.
Obviously, there are cases in which this sort of global information sharing is not desired. A common example is the case where you have two pairs of collaborating portlets such as a navigator and a viewer on separate pages. You want the navigator on page 1 to control the viewer on the same page but without affecting the viewer on the other page. In WebSphere Portal V6.1, you can control this behavior by limiting the sharing scope for public parameters to a page. To do so, go to Edit page settings for that page and set param.sharing.scope (under Advanced options - I want to set parameters) to a non-empty value such as scope1. All portlets that are placed on that page now use their own shared values for their declared render parameters, so they can still share information, but they cannot affect portlets on other pages. See figure 4.
When you set the page setting to the same value scope1 for another page, that page also becomes part of the same sharing scope. Generally, two portlets share the value for a public render parameter if and only if the following conditions are met:
  • They declare the parameter with the same global name in their portlet.xml deployment descriptor.
  • They are placed on the same page or on pages that have the same value for the param.sharing.scope setting.
The global sharing scope that we saw at the beginning is simply the special case in which the page setting is empty.
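The two conditions above amount to a simple predicate. The following plain-Java sketch models the sharing rule; the class and method names here are ours for illustration, not a WebSphere Portal API:

```java
import java.util.Objects;

// Hypothetical model of the sharing rule; not a WebSphere Portal API.
class PortletPlacement {
    final String paramQName;     // global name declared in portlet.xml
    final String sharingScope;   // the page's param.sharing.scope ("" = global)

    PortletPlacement(String paramQName, String sharingScope) {
        this.paramQName = paramQName;
        this.sharingScope = sharingScope;
    }

    // Two portlets share a value if and only if they declare the same
    // global name and their pages belong to the same sharing scope.
    static boolean shares(PortletPlacement a, PortletPlacement b) {
        return Objects.equals(a.paramQName, b.paramQName)
            && Objects.equals(a.sharingScope, b.sharingScope);
    }
}

public class SharingRuleDemo {
    public static void main(String[] args) {
        PortletPlacement nav1 = new PortletPlacement("dm:doc", "scope1");
        PortletPlacement view1 = new PortletPlacement("dm:doc", "scope1");
        PortletPlacement view2 = new PortletPlacement("dm:doc", "scope2");

        System.out.println(PortletPlacement.shares(nav1, view1)); // true
        System.out.println(PortletPlacement.shares(nav1, view2)); // false
    }
}
```

The navigator and viewer on page 1 (same scope) share the parameter; the viewer on the other page (different scope) does not.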
The idea of render parameters in the Java Portlet Specification 1.0 was to give portlets an API in which they can store information about their internal navigational state and to allow portals to place this information in the URL. This concept means that render parameters can provide the expected user behavior inside portlets for browser operations such as bookmarking and back-button. Each URL for a portal page correctly restores portlet internal navigational state such as user selection, if that state is represented as render parameters. Also, HTTP proxy caches can correctly cache different states of the same portal page depending on the URL.
WebSphere Portal has supported such rich bookmarkable URLs since version 5.1, and consequently public render parameters are also part of this URL information in version 6.1. They are, for example, stored as part of portal bookmarks. In fact, any URL in WebSphere Portal can contain public render parameters, and you can use product-specific URL generation functions, such as the UrlGeneration JSP tag, to set public render parameters for portlets.
As we have seen from the preceding discussion, public render parameters can be regarded as a more lightweight communication alternative compared to portlet events. The following lists contrast some of their respective features to help you decide which mechanism is more appropriate for your use case.
Public render parameters have the following features:
  • They do not usually require explicit coding but only a declaration in the portlet.xml deployment descriptor.
  • They are limited to simple string values.
  • They do not require explicit administration to set up coordination.
  • They cause no performance overhead as the number of portlets sharing information grows.
  • They are updated by URL changes such as jumping to a bookmark.
  • They can be set from links encoded in portal themes and skins.
  • They can be set on a link, created with product specific APIs, that leads from one portlet to another portlet on a different page.
Portlet events have the following features:
  • They require explicit portlet code to send and receive.
  • They can contain complex information.
  • They allow fine-grained control by setting up different sorts of wires between portlets (on-page or cross-page, public or private).
  • They can trigger cascaded updates with different information. For example, portlet A can send event X to portlet B, which in turn sends a different event Y to portlet C.
  • They cause increasing processing overhead as the number of communication links grows.
  • They must be initiated by some explicit user interaction (normally, by clicking an action link in a portlet), and they cannot be used to set up a coordinated view when first jumping to a page.
  • They can interoperate with the cooperative portlet communication features provided by previous versions of WebSphere Portal.
Both mechanisms allow you to couple data exchange with a page switch. For events, you can define page-switching cross-page wires, as explained above. For public render parameters, you can use product-specific APIs in one portlet to generate a link to another portlet on a different page and set a public render parameter for the target.
Of course, you can even combine both techniques; for example, you can declare a processing event in your portlet that sets a render parameter and at the same time declares this parameter as public, so that information can be received both ways.
This discussion should help you determine which feature is more appropriate for a given use case. As a general rule, use public render parameters where you can, and use portlet events for more complex cases for which render parameters are not sufficient.
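One practical consequence of the string-only limitation of public render parameters: a small structured value can still be shared if you flatten it into a single string. The following sketch is a hypothetical helper (not a portal API) that URL-encodes a map of fields into one parameter value and restores it. Remember that WebSphere Portal stores these values in the URL, so keep them short:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: flattens a small map into one string so it can
// travel as a single public render parameter value, and restores it.
public class ParamCodec {

    static String encode(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (sb.length() > 0) sb.append('&');
            sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8))
              .append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    // Assumes every pair contains an '=' (true for output of encode above).
    static Map<String, String> decode(String value) {
        Map<String, String> fields = new LinkedHashMap<String, String>();
        for (String pair : value.split("&")) {
            int eq = pair.indexOf('=');
            fields.put(URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8),
                       URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8));
        }
        return fields;
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<String, String>();
        doc.put("id", "42");
        doc.put("title", "Q3 report");
        String flat = encode(doc);
        System.out.println(flat);          // id=42&title=Q3+report
        System.out.println(decode(flat));  // {id=42, title=Q3 report}
    }
}
```

For anything larger than a key or two, a portlet event carrying a serializable payload is the better fit, as the comparison above suggests.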
We have now covered the portlet coordination features introduced by JSR 286. They are the most important innovations in the specification and also the features that depend most on the portal implementation, which is responsible for brokering the information between portlets.
Most of the other new programming features in JSR 286, such as portlet filters, are defined by the specification in detail and have no dependencies on a particular portal implementation, so there is no need to discuss WebSphere Portal specific topics.
In the following sections, we cover a few specific details about various new JSR 286 features that can be helpful to know when you are programming JSR 286 portlets on WebSphere Portal V6.1.
Resource serving gives you full control over all aspects of the HTTP protocol. WebSphere Portal writes out all response properties that you specify on a resource response as HTTP headers, so you can control language, content type, and other information for the provided content. The flip side is that, in contrast to normal page requests, the portal does not provide any default header information for the response; all information must be explicitly set during resource serving.
WebSphere Portal implements the different cache levels PAGE, PORTLET, and FULL that are defined by the specification. As noted before, WebSphere Portal uses rich URLs that encode the full navigational state of the page. Resource URLs with the default PAGE cacheability contain a lot of specific information about other components in the portal, which in many cases is not needed within the resource request.
If you want to take advantage of HTTP caching for resource requests, be sure to set resource URL cacheability to the highest level (the least information) that is actually required to process the resource request, to generate consistent URLs and improve HTTP cacheability. Also, to make a resource request cacheable, you need to explicitly set the cache control information on the response so that the portal can generate caching headers. Unlike normal render requests, the default caching definitions in portlet.xml deployment descriptor do not apply to resource requests.
JSR 286 adds API methods for reading and writing cookie properties to portlet request and response, but leaves it open to portal implementations about how these cookies are stored and handled. WebSphere Portal directly translates cookie properties into actual HTTP cookies. If you do not explicitly specify a cookie path, the default is set to the URL context of the portal, so the cookie can be correctly received back by future portal requests but not by other Web applications on the same server.
Cookies are not within a namespace, so they can be shared between portlets and, if required, also shared with other Web applications. Cookies therefore provide an alternative mechanism for coordination between portlets that can be useful in some circumstances. New cookies that have been set by a portlet are visible to all portlets in subsequent life-cycle phases of the same client request, and also in later requests unless the client decides to discard them.
Setting cookies in the render phase is currently not supported by WebSphere Portal V6.1. As the client response is already committed during the render phase, these cookies are not transmitted to the client and thus are lost on subsequent requests.
Container runtime options allow the portlet to request a specific runtime behavior from the portal and portlet container. WebSphere Portal V6.1 supports the following container runtime options:
  • javax.portlet.escapeXml (defined by JSR 286) to avoid the default XML escaping of URLs generated by JSP tags.
  • javax.portlet.actionScopedRequestAttributes (defined by JSR 286) to retain portlet request attributes across request boundaries.
  • com.ibm.portal.public.session (product specific) to indicate that a portlet needs a session to operate correctly. It requests that the portal create a session cookie whenever a page containing the portlet is accessed, even if no user login exists.
All of these runtime options represent workarounds for code that does not play well with the concepts of the portlet specification but should nevertheless be supported in WebSphere Portal. The latter two runtime options can also cause performance degradation on heavily used portals. Therefore, try to avoid these runtime options and preferably write your code in a way that makes them unnecessary.
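If your code does require one of these options, it is declared per portlet in the portlet.xml deployment descriptor. The following fragment shows the JSR 286 syntax (the option and value shown are just an example):

```xml
<portlet>
   <!-- other portlet settings -->
   <container-runtime-option>
      <name>javax.portlet.actionScopedRequestAttributes</name>
      <value>true</value>
   </container-runtime-option>
</portlet>
```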
With the JSR 168 API for URL generation, portlets always had to convert URL objects to a string using the PortletURL.toString() method before writing them out into markup. Because of the rich nature and considerable size of URLs in WebSphere Portal, this string conversion can negatively affect performance for portlets that create large numbers of URLs.
JSR 286 adds a write() method that allows you to stream URL objects directly to the response writer instead, avoiding the creation of temporary string objects. Use this method for writing out URLs. Unlike the toString() method, it also automatically provides the correct XML-escaping of URLs that is required by the XML and HTML specifications. Similar considerations apply to portlet URL tags in JavaServer™ Pages: The tag syntax that directly writes out the URL is preferred over the tag syntax that stores the generated URL in a temporary string variable.
Although WebSphere Portal V6.1 provides a fully compliant implementation of JSR 286 and supports all major features, a few optional aspects of the standard are not supported. For details, refer to the WebSphere Portal Information Center.
Like the first version of the Java Portlet API, the JSR 286 specification was developed in close cooperation with the OASIS committee that defined the Web Services for Remote Portlets (WSRP) standard. As a result, both specifications are closely aligned and represent the same programming model, only for different protocols. JSR 286 defined how local portlets interact with a Java-based portal; WSRP 2.0 defines how remote portlets interact with a portal that supports SOAP-based Web services.
WebSphere Portal V6.1 combines support for both specifications. As a result, the major programming features of JSR 286, especially portlet coordination and resource serving, also work for remote portlets. That means that you can, for example, deploy a JSR 286 portlet on one portal installation and then consume it remotely from a different portal installation, and it continues to work as if it were installed locally.
With its support for JSR 286, the new WebSphere Portal release provides a range of new features that make portlet programming more powerful, most importantly by supporting versatile, standardized mechanisms for interportlet communication. The specification has intentionally left some aspects of these new functions open for product-specific implementations. This allowance applies particularly to interportlet communication. The specification clearly defines how portlets are programmed to exchange information, but it does not define when this information exchange actually takes place in a portal environment, or by which means portlets must be connected to control the information exchange.

Retrieving URL parameters from JSR 168 portlets using WebSphere services

The general approach you see in this article is:
  1. Intercept the user's request with a servlet filter.
  2. In the servlet filter, grab the parameter and store it temporarily in a dynamic cache.
  3. In the portlet, grab the parameter from the dynamic cache.
That's the "50,000 foot" block-diagram version of the approach. Let's see how to implement it.
You start by creating the cache. Stefan Hepper and Stephan Hesmer talk about using a dynamic cache in portlets in the developerWorks article titled "Caching data in JSR 168 portlets with WebSphere Portal V5.1." (See http://www.ibm.com/developerworks/websphere/library/techarticles/0707_lynn/0707_lynn.html#resources for a link; in that article, see the section titled Leveraging the WebSphere dynacache infrastructure.) They offer a couple of different ways to create and set up the dynacache. The dynamic cache infrastructure is pretty nifty and, if you have a few moments, I'd suggest taking a look at it for any of your caching needs.
For the task at hand, you can just use the Application Server's administration console to enable and create the cache:
  1. Log in to the admin console. In WebSphere Application Server v6, the URL is something like: http://<server>:9060/ibm/console. The port could be different for your environment. If you don't know the port, ask your system administrator. Or, if you have access to the command line, you can grep the serverindex.xml file usually found in the WebSphere install directory under:
    AppServer/profiles/wp_profile/config/cells/<cellname>/nodes/<nodename>/
    

    Look for "WC_adminhost_secure" as an endPointName or try this command (on one line):
    grep -A 1 "WC_adminhost_secure" 
    <path>/AppServer/profiles/wp_profile/config/cells/<cellname>/nodes/
    <nodename>/serverindex.xml
    

    (If anyone reading this article knows an easier way to find out which port the admin server is running on, send me email. I'll update the article and lavish kudos upon you.)
  2. Now, make sure that the Dynamic Cache Service is enabled. Go to Servers => Application servers, and choose WebSphere_Portal.

    Figure 2. Select the WebSphere_Portal server
    Figure 2. Select the WebSphere_Portal server

  3. Next, select Container services => Dynamic Cache Service.

    Figure 3. Select the Dynamic Cache Service
    Figure 3. Select the Dynamic Cache Service

  4. You should see that the Dynamic Cache Service is enabled at server startup; if not, enable it.

    Figure 4. Enable the Dynamic Cache Service
    Figure 4. Enable the Dynamic Cache Service

  5. The service is enabled. So now let's create the cache. Expand Resources => Cache instances => Object cache instances.

    Figure 5. Select Object cache instances
    Figure 5. Select Object cache instances

  6. Click New.
  7. Fill in the required fields and frob (modify) any of the other fields, if you like. See the WebSphere Application Server Information Center, listed in http://www.ibm.com/developerworks/websphere/library/techarticles/0707_lynn/0707_lynn.html#resources, for help with the various settings. That way, you can make an informed decision for your environment. Here's what I set:
    • Name: Parameter Cache
    • JNDI name: services/cache/catalog/parameters
    • Checked User listener context
    • Unchecked Dependency ID support
    The names are arbitrary. Just make sure you set it to something fairly unique and then remember it for when you create your servlet filter and portlet code.
  8. Click OK, and then save the configuration. You have a freshly minted cache to work with.

    Figure 6. Your cache should show up in the list of available caches
    Figure 6. Your cache should show up in the list of available caches

  9. Again, remember the JNDI name or write it down. You will need the JNDI name to do a JNDI lookup in your code. So, either make note of it now or leave this page up in the admin console.
There is an important issue to consider when using the dynamic cache: because it is a cache, it can fill up, at which point nothing more can be stored. Make sure your cache size is large enough for your application. The range is from 100 to 200,000 cache entries, so do a little math: multiply the expected number of concurrent hits by the number of parameters you'll be caching, then arbitrarily double it. Of course, if you really want to find a better number, look in the WebSphere Application Server Info Center and search for "dynamic cache size"; you'll find a list of performance advisors, monitors, viewers, and troubleshooting tips to assist you in picking the right cache size.
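The rule of thumb above is just arithmetic. As a sketch (the class name and sample numbers are ours):

```java
// Rough dynamic cache sizing per the rule of thumb above;
// the sample numbers are examples only.
public class CacheSizeEstimate {

    // WebSphere allows a dynamic cache size between 100 and 200,000 entries.
    static final int MIN_ENTRIES = 100;
    static final int MAX_ENTRIES = 200000;

    static int estimate(int concurrentHits, int paramsPerRequest) {
        int raw = concurrentHits * paramsPerRequest * 2; // "arbitrarily double it"
        return Math.max(MIN_ENTRIES, Math.min(MAX_ENTRIES, raw));
    }

    public static void main(String[] args) {
        // e.g. 500 expected concurrent hits, 4 cached parameters each
        System.out.println(estimate(500, 4)); // 4000
    }
}
```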
Now that you have a cache in which you can place the parameters, you create the servlet filter that will stash the parameters into the cache. Later, your portlet can retrieve them from the cache.
I used IBM Rational® Software Architect to create my servlet filter; you could use it, Rational Application Developer, or another development environment. The goal is to create a JAR file with your filter class in it.
You first need to create a servlet filter with the code shown in Listing 2, which I describe in some detail below.
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.Filter;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;

import com.ibm.websphere.cache.DistributedMap;

public class ParameterFilter implements Filter {
  
  private DistributedMap map = null;
  
  public ParameterFilter() {…}
  public void destroy() {…}

  public void init(FilterConfig arg0) throws ServletException {
    try {
      InitialContext ic = new InitialContext();
      this.map = (DistributedMap)
               ic.lookup("services/cache/catalog/parameters");
      ic.close();
    }catch(NamingException ne) {
      System.out.println("NamingException error");
      System.out.println(ne.getMessage());
    }
  }
…
}

The first task in the init method is to use the JNDI name to look up the cache you created earlier. Assuming you remember what you used in step 7 above, use it here in the InitialContext.lookup method.
The real juicy part of the ServletFilter is the doFilter method.
public class ParameterFilter implements Filter {
…
  public void doFilter(javax.servlet.ServletRequest request,  
                       javax.servlet.ServletResponse response,
                       javax.servlet.FilterChain chain) 
     throws java.io.IOException, javax.servlet.ServletException{
    
     HttpServletRequest httpRequest = (HttpServletRequest)request;
     HttpSession session = httpRequest.getSession();
     Enumeration parms = httpRequest.getParameterNames();
     while(parms.hasMoreElements()) {
       StringBuffer key = new StringBuffer(
                                  (String)parms.nextElement());
       String value = httpRequest.getParameter(key.toString());
       //mangle the key so that it's unique for this session
       key.append(".");
       key.append(session.getId());
       map.put(key.toString(), value);
       System.out.println("Parameter Filter: "+key+" -- "+value);
     }

     // forward the request to the next filter in the chain
     if (chain != null){
       chain.doFilter(request, response);
     }
   }
}

The doFilter method gets all the parameters from the HttpServletRequest and puts each one into the DistributedMap. The parameter names are mangled with the session ID so that they are unique for each session; otherwise, you could end up overwriting another user's data in the map. This code moves all the parameters; if you would prefer to target only certain parameters, you can easily make the changes. That task is left as an optional exercise for the reader.
I'll reemphasize that this is a cache: because we are caching all the parameters that are passed through the URL, the cache could fill up. So, while I leave the exercise of targeting specific parameters to the reader, you should implement such a mechanism when you build on what you learn in this article.
For illustrative purposes, the example code includes a System.out.println statement so that you can look in the portal SystemOut.log file to see your filter in action. You might notice that this code does not handle multiple parameters of the same name. That is, if you had a URL like http://<server>/wps/portal?parm=foo&parm=bar you'd end up overwriting parm in the dynamic cache. You could overcome this restriction fairly easily in several different ways. One way would be to put parameters in the dynamic cache assuming there are multiples; that is, use dynamic cache names that are indexed. For example: parm.0.sessionID=foo, parm.1.sessionID=bar, and so on. Then, you would need to pull them out of the cache in a similar fashion. Another way would be to comma-separate them in the cache. For example: parm.sessionID=foo,bar. There are many ways to marshal the data; it is just a matter of choosing what is right for your application.
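As a concrete sketch of the indexed-key variant, the following plain-Java code uses a HashMap in place of the DistributedMap; the key scheme and method names are ours:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the indexed-key workaround for repeated URL parameters,
// using a plain HashMap to stand in for the DistributedMap.
public class MultiValueDemo {

    // Store parm.0.<sessionId>, parm.1.<sessionId>, ... one entry per value.
    static void putAll(Map<String, String> cache, String name,
                       String sessionId, String[] values) {
        for (int i = 0; i < values.length; i++) {
            cache.put(name + "." + i + "." + sessionId, values[i]);
        }
    }

    // Read values back in index order until a key is missing.
    static List<String> getAll(Map<String, String> cache, String name,
                               String sessionId) {
        List<String> values = new ArrayList<String>();
        for (int i = 0; ; i++) {
            String v = cache.get(name + "." + i + "." + sessionId);
            if (v == null) break;
            values.add(v);
        }
        return values;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<String, String>();
        // e.g. ?parm=foo&parm=bar for one hypothetical session ID
        putAll(cache, "parm", "session1", new String[] {"foo", "bar"});
        System.out.println(getAll(cache, "parm", "session1")); // [foo, bar]
    }
}
```

In the real filter, the same putAll logic would run over httpRequest.getParameterValues(name) with the DistributedMap as the cache.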
When you are finished coding (or in this case copying the code into) your ServletFilter class, build this code into a JAR file.
Now, you are ready to install the filter.
  1. Place the JAR file you created above in the shared/app directory under the portal install directory:
    ${portalserver}/shared/app
    

    On the Linux® server, I used:
    /opt/ibm/WebSphere/PortalServer/shared/app
     

  2. Configure the filter in WebSphere Portal's web.xml file:
    ${appserver}/profiles/wp_profile/config/cells/${server}/applications/wps.ear/
       deployments/wps/wps.war/WEB-INF 
    

    On Linux, I used:
    /opt/ibm/WebSphere/AppServer/profiles/wp_profile/config/cells/icat28/
    applications/wps.ear/deployments/wps/wps.war/WEB-INF

  3. Add the following code to the web.xml file:
    <filter>
       <filter-name>Parameter Filter</filter-name>
       <filter-class>com.ibm.catalog.filters.ParameterFilter</filter-class>
    </filter>

    <filter-mapping>
       <filter-name>Parameter Filter</filter-name>
       <url-pattern>/myportal/*</url-pattern>
    </filter-mapping>
    <filter-mapping>
       <filter-name>Parameter Filter</filter-name>
       <url-pattern>/portal/*</url-pattern>
    </filter-mapping>
    

    This code does two things. First, it defines the filter in the Application Server. Second, it tells the Application Server to call the ParameterFilter class before passing the request on to WebSphere Portal. Because the URL filter mapping covers both "/portal/" and "/myportal/", the filter gets called whether or not you are logged in. (The "/myportal/" pattern is the logged-in portal URL.)
  4. Restart the portal server.
  5. After WebSphere Portal is up, you can test the filter. Access the portal by typing something like: http://<servername>/wps/portal?parm=quux
  6. Then, look in the SystemOut.log file in the portal server's log directory. You should see a line that looks similar to the following, (assuming, of course, that you put a System.out.println statement in your ServletFilter code):
    [7/7/07 16:30:24:177 EDT] 0000007c SystemOut     O Parameter Filter: 
       parm.dl9vOW6IhS68o0GqTjZS7QQ -- quux
    

This didn't work the first time for me either. If it worked for you, YaY! Then I guess this article did its job. For the rest of us, here are the problems I ran into and their fixes.
  1. Error 500: some sort of problem with the filter. For some reason your filter blew up; it happens to all of us. In my case, the first time I ran it, I had changed from using a String for my dynamic cache key to a StringBuffer but forgot to change the map.put code. Inspect your code; it shouldn't be terribly long, and whatever is broken should be easy to spot. If that doesn't work, capture all exceptions and output them to a log file.
  2. My filter never seems to get executed. In my case, I defined the filter in the wrong place, so it never got executed. Go back and look at which web.xml file you modified. It's the one in the profiles/wp_profile/config directory, NOT the one in the profiles/wp_profile/installedApps directory! I know it seems like a silly mistake, but it happens.
You can use any old portlet for this part. What you will do is pretty non-invasive. You add a few lines to the init method and then add a getParameter method.
  1. Add the following lines of code to your init method:
    public abstract class ParameterPortlet extends GenericPortlet {
      
      private DistributedMap parameterMap = null;
    
      public void init() throws PortletException{
        super.init();
        try {
          InitialContext ic = new InitialContext();
          parameterMap = (DistributedMap) 
                 ic.lookup("services/cache/catalog/parameters");
          ic.close();
        } catch (NamingException e) {
          System.out.println("NamingException in the portlet");
        }
      }
      …
    }

    This code simply looks up the same cache that you defined at the beginning of this article and assigns the DistributedMap to a variable.
  2. Next, add the getParameter method.
    public abstract class ParameterPortlet extends GenericPortlet {
      …
      protected String getParameter(RenderRequest request, String name){
          
          StringBuffer key = new StringBuffer(name);
          key.append(".");
          key.append(request.getPortletSession().getId());
          String parm = (String)parameterMap.get(key.toString());
          System.out.println("Portlet parm: "+key+" -- "+parm);
          parameterMap.remove(key.toString());
          return parm;
      }
    
      …
    }
    

    This code simply grabs the parameter out of the DistributedMap and returns it. It removes the parameter from the map once this method is called; you may or may not want to do that. It just depends on what you plan to do with the parameters. In this case, I moved them to a request attribute so I'd have further access to them in the rest of the portlet. This is easy to do by adding these few lines of code to the getParameter method:
    …     
    String parm = (String)parameterMap.get(key.toString());
    if(null != parm){
        request.setAttribute(name, parm);
    }
    …

That's all there is to the portlet, other than using the parameters. You could implement other methods in addition to getParameter. Perhaps you need to mirror all the getParameter-type methods. Maybe you want this to be more transparent and don't care where your parameters come from. You could use the getParameter method above as a wrapper around the request.getParameter method to look in the request and then in the dynamic cache for parameters.
Here's one example of a super-duper getParameter method:
protected String getParameter(RenderRequest request, String name){
      
    StringBuffer key = new StringBuffer(name);
    key.append(".");
    key.append(request.getPortletSession().getId());
    String parm = (String)parameterMap.get(key.toString());
    if(null == parm) {
      parm = request.getParameter(name);
    }
    if(null == parm) {
      parm = (String)request.getAttribute(name);
    }
    if(null != parm) {
      request.setAttribute(name, parm);
    }
    System.out.println("Portlet parm: "+key+" -- "+parm);
    parameterMap.remove(key.toString());
    return parm;
}

You might want to change the order of precedence; for example, a portlet specific parameter might be more important than a URL parameter.
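If you do reorder the precedence, it can help to make the chain explicit. The following plain-Java sketch shows a reorderable lookup; the maps stand in for the dynamic cache, request parameters, and request attributes, and all names are ours:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of an explicit, reorderable precedence chain; the three maps
// stand in for the dynamic cache, request parameters, and attributes.
public class PrecedenceDemo {

    // Return the first non-null value found, scanning sources in order.
    static String lookup(List<Map<String, String>> sources, String name) {
        for (Map<String, String> source : sources) {
            String value = source.get(name);
            if (value != null) return value;
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<String, String>();
        Map<String, String> requestParams = new HashMap<String, String>();
        Map<String, String> attributes = new HashMap<String, String>();

        cache.put("query", "from-url");
        requestParams.put("query", "from-portlet");

        // The order used above: cache first, then request, then attributes.
        System.out.println(lookup(
            Arrays.asList(cache, requestParams, attributes), "query")); // from-url

        // Reordered: the portlet-specific parameter wins over the URL parameter.
        System.out.println(lookup(
            Arrays.asList(requestParams, cache, attributes), "query")); // from-portlet
    }
}
```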
So far, your portlets can read URL parameters, but that doesn't do you much good unless you can get to the page on which the portlet resides. Those of you who know about URL Mappings and Custom Unique Names can skip to the Conclusion, if you so desire.
The problem with what we have done so far is that unless your portlet is on the default portal page, you still have a very long URL on which to add parameters. You can use URL Mapping to get a URL that points directly to your page.
Here's what you need to do:
  1. Login to WebSphere Portal as an administrator.
  2. Go to the Administration page.
  3. Select Portal Settings => Custom Unique Names => Pages.

    Figure 7. Navigate to Custom Unique Names for Pages
    Figure 7. Navigate to Custom Unique Names for Pages

  4. Search for the page that contains your portlet.
  5. Click the pencil icon and give the page a unique name.

    Figure 8. Set a unique name for your page
    Figure 8. Set a unique name for your page

  6. Now that your page has a unique name, you can set a URL mapping for that page. In my case, I wanted a URL that points directly to my Results page, and I wanted something easy like http://<servername>/wps/portal/Results. You can specify what you want by selecting Portal Settings => URL Mapping, and clicking the New Context button.
  7. Choose a label; in this case, I chose Results, as you can see in the URL above.
  8. Now, click the Edit mapping button (Edit mapping button) in the same row as the Context label you chose.
  9. Find and select the page with the unique name you selected in step 5. You don't really need a custom Unique Name, but it sure is easier than remembering that your page's Unique Identifier is "6_28NFOKG10OD4602PH9OATU2083".
  10. Save everything and you are done.
Your page should now be available by just tacking on the Context label you chose. If you add parameters onto the end of that URL, your portlet will pick them up. For example, you could pass a query parameter to the "Results" page above to show the results of that query:
http://<servername>/wps/portal/Results?query=search+me