
Using Jetty 6 to run a Cocoon block

This article explains how to set up the maven-jetty-plugin version 6 for rapid development of Cocoon applications (this part is based on the official tutorial about the cocoon-maven-plugin). The article also describes an interesting pitfall reported here by Robby Pelssers.

Modify project pom file
In order to enable Jetty support, you need to add the following plugins to your project's pom file:
<!-- Prepares block resources and classes for Jetty plugin -->
<plugin>
  <groupId>org.apache.cocoon</groupId>
  <artifactId>cocoon-maven-plugin</artifactId>
  <version>1.0.0</version>
  <executions>
    <execution>
      <phase>compile</phase>
      <goals>
        <goal>prepare</goal>
      </goals>
      <configuration>
        <useConsoleAppender>true</useConsoleAppender>
      </configuration>
    </execution>
  </executions>
</plugin>

<!-- Runs current cocoon block as a webapp using ReloadingClassLoader -->
<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <version>6.1.21</version>
  <configuration>
    <webAppSourceDirectory>${project.build.directory}/rcl/webapp</webAppSourceDirectory>
    <contextPath>/</contextPath>
    <systemProperties>
      <systemProperty>
        <name>org.apache.cocoon.mode</name>
        <value>dev</value>
      </systemProperty>
    </systemProperties>
  </configuration>
</plugin>
See the details in this cocoon tutorial. The only difference is that I updated the plugin versions. It was not really necessary, but I did it in an attempt to solve the issue mentioned below. I would not advise updating maven-jetty-plugin to versions 6.1.22-6.1.26, as they truncate stack traces when something goes wrong, which makes it much harder to understand issues. Moreover, updating the version may require adding a new dependency to your project, so don't do it unless you really need to (or add this dependency to the webapp block with provided scope to exclude it from the final build):
<dependency>
  <groupId>org.apache.cocoon</groupId>
  <artifactId>cocoon-block-deployment</artifactId>
  <version>1.2.1</version>
  <scope>runtime</scope>
</dependency>

Set up rcl.properties file
The idea and syntax are well described in this cocoon tutorial; it only lacks examples. Let's say we have two blocks, search and shared, where search depends on shared. If you want to test the search block while being able to change resources in the shared block, you should add the shared classes to your rcl.properties:
org.lagivan.prototype.search.service%classes-dir=./target/classes

org.lagivan.prototype.shared.service%classes-dir=../shared/target/classes
%exclude-lib=org.lagivan.prototype:shared

Run jetty plugin
Running Jetty from Maven is really easy:
mvn jetty:run
The details can be found on the plugin documentation page.
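If the default port 8080 is already occupied on your machine, you can make the plugin listen elsewhere by adding a connector to the maven-jetty-plugin configuration shown above. This is only a minimal sketch; the port number 8888 is just an example:
<configuration>
  <!-- existing webAppSourceDirectory, contextPath and systemProperties stay as shown above -->
  <connectors>
    <!-- NIO connector listening on a non-default port -->
    <connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
      <port>8888</port>
    </connector>
  </connectors>
</configuration>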

Issue with ContextLoaderListener
Here I'm sharing the results of my investigation into an issue with the following stack trace:
2012-05-10 16:56:18.978::WARN:  Failed startup of context org.mortbay.jetty.plugin.Jetty6PluginWebAppContext@61b2e165{/,C:\workspace\search\target\rcl\webapp}
java.lang.RuntimeException: Cannot invoke listener org.springframework.web.context.ContextLoaderListener@1ef0a6e8
 at org.apache.cocoon.tools.rcl.wrapper.servlet.ReloadingListener.invoke(ReloadingListener.java:190)
 at org.apache.cocoon.tools.rcl.wrapper.servlet.ReloadingListener.contextInitialized(ReloadingListener.java:213)
 at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:548)
Please check the following points:
  • Check any "Caused by:" exceptions in the stack trace. That may be quite enough if the issue is simple. If you don't see any caused-by exceptions, they may have been cut off by the problem mentioned above, so try another maven-jetty-plugin version (e.g. 6.1.21).
  • Now if you have a BeanCreationException like this:
Caused by: org.springframework.beans.factory.BeanCreationException: 
Error creating bean with name 'datasheetFileSearcher' defined in URL 
[jar:file:/C:/Users/lagivan/.m2/repository/org/lagivan/prototype/datasheet/1.0-SNAPSHOT/datasheet-1.0-SNAPSHOT.jar!/META-INF/cocoon/spring/datasheet-application-context.xml]: 
Error setting property values; 
nested exception is org.springframework.beans.PropertyBatchUpdateException; 
nested PropertyAccessExceptions (1) are:
PropertyAccessException 1: org.springframework.beans.TypeMismatchException: 
Failed to convert property value of type [org.lagivan.prototype.SpiderImpl] to required type [org.lagivan.prototype.Spider] for property 'spider'; 
nested exception is java.lang.IllegalArgumentException: 
Cannot convert value of type [org.lagivan.prototype.SpiderImpl] to required type [org.lagivan.prototype.Spider] for property 'spider': 
no matching editors or conversion strategy found
  • It means you've hit the same issue as I did, and here is my explanation. Let's say, similarly to the previous part, we have three blocks: search, datasheet and shared, where search depends on both datasheet and shared, and datasheet also depends on shared. If you want to debug the shared block and add its classes to the rcl.properties file, you have to add the datasheet block classes as well. Otherwise two copies of each class from the shared block will be loaded: one by the standard class loader from shared.jar and another by the ReloadingClassLoader from your classes directory. Thus, the correct solution is to add the classes of all dependent cocoon blocks:
org.lagivan.prototype.search.service%classes-dir=./target/classes

org.lagivan.prototype.datasheet.service%classes-dir=../datasheet/target/classes
%exclude-lib=org.lagivan.prototype:datasheet

org.lagivan.prototype.shared.service%classes-dir=../shared/target/classes
%exclude-lib=org.lagivan.prototype:shared
  • If you're still out of luck and facing another issue, debug the sources mentioned in your stack trace. In order to do this, I had to add the maven plugins and some cocoon libraries as dependencies of the failing block; a sketch follows below.
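For example, to let your IDE resolve classes that appear in the trace (such as ReloadingListener), the cocoon artifacts mentioned in this article can be declared as provided-scope dependencies of the failing block, so they are available while debugging but excluded from the final build. This is only a sketch; check the stack trace and your local repository to find out which artifacts actually contain the failing classes:
<!-- Added temporarily for debugging only; remove when finished -->
<dependency>
  <groupId>org.apache.cocoon</groupId>
  <artifactId>cocoon-maven-plugin</artifactId>
  <version>1.0.0</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.cocoon</groupId>
  <artifactId>cocoon-block-deployment</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>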
