
Cocoon refactorings

I've been maintaining a complex Cocoon application for a couple of years now. Unfortunately, as code gets older, it requires more and more maintenance time unless you keep it clean and neat from the beginning. I've finally found time to refactor the project gradually, and I'll try to keep it that way. In this article I'm going to review the steps I've taken to improve the code and build quality.

Remove duplicated resources
I started by removing duplicated and unused resources (mostly images and icons). Many of them were duplicated across several Cocoon blocks. So I moved the common resources into a single shared block and modified the references in the other blocks accordingly. For this I added the following sitemap rule to all blocks:
<map:match pattern="shared/resource/external/**">
  <map:read src="servlet:shared:/resource/external/{1}"/>
</map:match>
This single improvement decreased the build time by 30% (about 10 seconds). In addition, I refactored the CSS files by extracting a common.css shared by all blocks, leaving only block-specific CSS rules in each block.
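For completeness, the shared block itself needs a rule that actually serves these files. A minimal sketch of what that could look like, assuming the shared block keeps the files under a resource/external directory (the paths are assumptions based on the rule above):

```xml
<!-- In the shared block's sitemap: serve the physical files
     that other blocks request via servlet:shared:/resource/external/** -->
<map:match pattern="resource/external/**">
  <map:read src="resource/external/{1}"/>
</map:match>
```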

Remove duplicated resources - update from 18th June
I've found a much more elegant way to achieve the same result. It uses Cocoon's ResourceExistsSelector, a selector that is widely used with various sitemap patterns. Here is the code that I added to all block sitemaps:
<map:match pattern="resource/external/**">
  <map:select type="resource-exists">
    <!-- Serve the block's own copy if it exists... -->
    <map:when test="resource/external/{1}">
      <map:read src="resource/external/{1}"/>
    </map:when>
    <!-- ...otherwise fall back to the shared block -->
    <map:otherwise>
      <map:read src="servlet:shared:/resource/external/{1}"/>
    </map:otherwise>
  </map:select>
</map:match>
Compared with the method described above, this one exposes the resources under the same block-relative URL as in the shared block. That prevents issues such as broken background-image references in shared CSS files, where you want a single relative URL that is valid for all blocks.

Extract sub-sitemaps
The next step was extracting sub-sitemaps using Cocoon mounts (see the official documentation). I had this one in mind for a long time. Several blocks had good use cases for it, such as separating a file-generating pipeline from a test pipeline:
<map:match pattern="file/**">
  <map:mount uri-prefix="file" src="sitemap-file.xmap"/>
</map:match>

<map:match pattern="test/**">
  <map:mount uri-prefix="test" src="sitemap-test.xmap"/>
</map:match>
Ideally, you should plan for this while designing your application, so you can also benefit from auto-mounting and dynamic mounting.
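As an illustration of auto-mounting, a sketch of the wildcard rule Cocoon supports (the exact attributes may differ in your Cocoon version; when src points to a directory, Cocoon looks for a sitemap.xmap inside it):

```xml
<!-- Auto-mount: any top-level path segment is treated as a directory
     containing its own sitemap.xmap -->
<map:match pattern="*/**">
  <map:mount uri-prefix="{1}" src="{1}/" check-reload="yes"/>
</map:match>
```

With such a rule, adding a new sub-sitemap requires no change to the parent sitemap at all.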

Further steps
There is much more to improve; check the community sources for further ideas.

