
Using JavaScript hashCode to enable Cocoon caching of POST requests

I've just faced an issue with Cocoon caching of POST requests. Let me describe the use case first. We use a custom XQueryGenerator to execute XQuery code against the Sedna XML Database and then process the XML results in the Cocoon pipeline. For the sake of performance, I configured pipeline caching with an expiration timeout of 60 seconds for all XQuery invocations:
<map:pipeline id="cached-services" type="expires" internal-only="true">
  <map:parameter name="cache-expires" value="60"/>
  <map:parameter name="cache-key" 
                 value="{request:sitemapURI}?{request:queryString}"/>

  <map:match pattern="cached-internal-xquery/**">
    <map:generate src="cocoon:/xquery-macro/{1}" type="queryStringXquery">
      <map:parameter name="contextPath" value="{request:contextPath}"/>
    </map:generate>
    <map:transform src="xslt/postprocessXqueryResults.xslt" type="saxon"/>
    <map:serialize type="xml"/>
  </map:match>
</map:pipeline>
So you can see that both the request's sitemap URI and its query string form the cache key. This works perfectly until you want to send the XQuery parameters via POST instead of GET: the query string is then empty, and therefore identical, for all POST requests. As a result, whichever POST request arrives first has its results cached and served for every subsequent one, and the caching breaks it all.
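To make the collision concrete, here is a minimal sketch of how the cache-key template degenerates for POST requests (the cacheKey helper is hypothetical and simply mirrors the {request:sitemapURI}?{request:queryString} expression from the sitemap):

// Hypothetical helper mirroring the cache-key expression above
function cacheKey(sitemapURI, queryString) {
    return sitemapURI + "?" + queryString;
}

cacheKey("cached-internal-xquery/basictype_tree", "id=1&id=2");
// GET  -> "cached-internal-xquery/basictype_tree?id=1&id=2" (unique per query)
cacheKey("cached-internal-xquery/basictype_tree", "");
// POST -> "cached-internal-xquery/basictype_tree?" (the same key for every request)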

You may wonder why we need POST requests to load XML data at all. The reason is that we cannot predict how many request parameters there will be, as they are generated from a list of identifiers like this:
// id_list is a Collection of identifiers to be sent as request parameters
var postData = id_list.stringJoin(
        function(object) { return "id=" + object },
        "&"
);
    
var uri = "xquery/basictype_tree";

// This sends an asynchronous request and
// inserts its results into the containerId element.
new SimpleContainerTransaction(
    {
        "uri": uri, "containerId": "treenode-details-container",
        "method": "POST", "data": postData
    }
).execute();
Here SimpleContainerTransaction is part of a custom YUI3-based Transaction utility.
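Since stringJoin is also one of our custom utilities, here is an illustrative plain-JavaScript equivalent of the call above, assuming id_list behaves like an ordinary array (a sketch, not the actual library code; note it also URL-encodes each identifier, which the original call may or may not do):

// Build "id=1&id=2&id=3" from a list of identifiers
var postData = id_list
    .map(function(id) { return "id=" + encodeURIComponent(id); })
    .join("&");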

Now it's time to fix the issue. The obvious approach is to generate a fake GET parameter in addition to the meaningful POST parameters. This fake parameter will be a hash of the POST parameters, so that identical requests get identical hash values and, consequently, identical cache keys. As soon as we implement this, the caching should work perfectly for this use case as well.

As we generate the POST parameter string in JavaScript, I googled for JavaScript hash implementations and discovered a nice overview of possible JavaScript hash solutions. I adapted the first one and incorporated it into our project's JS library:
String.prototype.hashCode = function() {
    var charCode, hash = 0;
    if (this.length === 0) return hash;
    for (var i = 0; i < this.length; i++) {
        charCode = this.charCodeAt(i);
        // hash * 31 + charCode, the same scheme as Java's String.hashCode()
        hash = ((hash << 5) - hash) + charCode;
        hash = hash & hash; // Convert to a signed 32-bit integer
    }
    return hash;
};
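A quick illustrative check of its behavior (assuming the prototype extension above is loaded):

var a = "id=1&id=2".hashCode();
var b = "id=1&id=2".hashCode();
var c = "id=2&id=1".hashCode();

console.log(a === b); // true: identical strings always hash identically
console.log(a === c); // false: a different parameter order yields a different string, hence a different hash

Note that the result is a signed 32-bit integer and may be negative, which is still perfectly valid as a query string value. Also, since parameter order affects the hash, two logically identical requests whose parameters are joined in different orders will occupy separate cache entries; that only costs cache efficiency, not correctness, and sorting id_list before joining would avoid it.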
With all String objects now extended with the hashCode function, let's fix the caching issue by appending the hash of the POST parameters as a GET parameter to the URL:
var uri = "xquery/basictype_tree?hash=" + postData.hashCode();
That's it, the caching works fine again.
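One small caveat: the snippet above assumes the URI does not yet contain a query string. If it might, a slightly more defensive variant could pick the separator accordingly (the withHashParam helper is hypothetical, shown for illustration only):

// Append the hash correctly whether or not the URI
// already contains a query string
function withHashParam(uri, postData) {
    var separator = uri.indexOf("?") === -1 ? "?" : "&";
    return uri + separator + "hash=" + postData.hashCode();
}

var uri = withHashParam("xquery/basictype_tree", postData);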
