My first Play Framework application

My first Play Framework application has recently gone live. This time I used the Java version of the framework; next time I may finally move on to Scala. Nevertheless, I've learned a lot and will try to share some of that knowledge here. The application manages data stored in the Dydra graph database (RDF & SPARQL). It's a thick-client application, meaning that data loading happens in the client-side JavaScript layer via JSON requests, while routing and user authentication are handled by Play Framework. I've also made use of the RequireJS support in Play Framework for dynamic JS module loading. For the UI I've chosen the well-known YUI library. More details follow below.
  1. Application architecture.
  2. User model and authentication.
  3. Dydra database layer and SPARQL client.
  4. JavaScript logic and YUI.
  5. RequireJS module loading.
Application architecture
Like any web application, this one can be described in terms of the MVC pattern. Play Framework encourages following this pattern by providing the Ebean ORM with support for JPA annotations, Model/Controller class hierarchies and a powerful Scala-based template engine. Having played enough with Spring MVC's request mapping annotations, I found it great that Play Framework introduces a central configuration file for all the HTTP routing in the application - the conf/routes file.
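For illustration, a routes file for this kind of application might look roughly like the sketch below (the paths and action methods are made up for the example, except the standard assets mapping that Play generates by default):
# Illustrative conf/routes sketch (paths and action methods are hypothetical)
GET     /                           controllers.Home.index()
GET     /login                      controllers.Application.login()
POST    /login                      controllers.Application.authenticate()
GET     /logout                     controllers.Application.logout()
# Map static resources from the /public folder to the /assets URL path
GET     /assets/*file               controllers.Assets.at(path="/public", file)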

That was the generic design of a Play Framework-based application. My application, however, deviates from it: most of the controller logic is written in JavaScript, and only the user authentication is implemented with the default approach. By the way, for this purpose Play Framework provides security helpers with Java annotations that you'll see below.

User model and authentication
I've borrowed the implementation of the user model and authentication from the official Zentasks sample application; its tutorial is a great way to get a quick start. Then I've customized it for my case. For example, I've added an Unsecured authenticator to allow anonymous users to preview the home page without being redirected to the unauthorized page.
import play.mvc.Http.Context;
import play.mvc.Security;

// Authenticator that never rejects a request, so anonymous users are allowed
public class Unsecured extends Security.Authenticator {
    @Override
    public String getUsername(Context ctx) {
        String name = ctx.session().get("name");
        // A null result would trigger redirection to the unauthorized page,
        // so return "" for anonymous users instead
        return name == null ? "" : name;
    }
}
Then the home page controller class looks like:
@Security.Authenticated(Unsecured.class)
public class Home extends Controller {
    // Action handlers here
}
Finally, I've benefited from the database evolutions support, which allows managing database schema changes in a simple way. Moreover, you can configure the evolutions to be applied automatically, without pressing the "Apply this script now!" button. So my database configuration looks like:
# Using default H2 database in the embedded mode
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:db/play"
# Database evolutions are applied automatically
applyEvolutions.default=true
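For reference, the evolution scripts themselves live under conf/evolutions/default and are split into Ups and Downs sections. A minimal sketch for a user table (the column names here are illustrative, not my actual schema) could look like:
# conf/evolutions/default/1.sql - illustrative evolution script
# --- !Ups
create table user (
  email       varchar(255) not null primary key,
  name        varchar(255) not null,
  password    varchar(255) not null
);

# --- !Downs
drop table if exists user;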

Dydra database layer and SPARQL client
The main functional part of the application's model is based on the Dydra RDF database. Although there already exists a JPA-like Play Framework plugin for RDF databases called Imperium, I decided it was overkill for my use case and opted for a client-side JavaScript SPARQL client. For this purpose I've borrowed the sparql-client.js implementation (a part of the SKOSjs application).

So I've ended up using cross-domain JavaScript requests to interact with the database. By default such requests are forbidden for security reasons, so browsers will complain. However, as long as the server sends an Access-Control-Allow-Origin response header (which the Dydra database does), they work with some restrictions. First, I could not get HTTP POST requests to work cross-domain, so I used HTTP GET for SPARQL 1.1 Update queries. Second, although Dydra offers several authentication options, I've ended up using the "API Authentication Key as a Query String Parameter" approach.
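To make the request shape concrete, here is a minimal sketch of such a cross-domain SPARQL GET using YUI's IO utility; the endpoint URL and the auth_token parameter name are assumptions for the example rather than values taken from the Dydra documentation:
// Minimal sketch of a cross-domain SPARQL GET (endpoint and parameter name are assumptions)
YUI().use('io-base', 'json-parse', function (Y) {
    var endpoint = "https://dydra.com/ACCOUNT/REPOSITORY/sparql"; // hypothetical endpoint
    var query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
    // Both the SPARQL query and the API key are passed as query string parameters
    var url = endpoint
        + "?query=" + encodeURIComponent(query)
        + "&auth_token=" + encodeURIComponent("YOUR_API_KEY");
    Y.io(url, {
        method: 'GET',
        headers: { 'Accept': 'application/sparql-results+json' },
        on: {
            success: function (txId, response) {
                var data = Y.JSON.parse(response.responseText);
                // data.results.bindings contains the result rows
            },
            failure: function () {
                alert("Connectivity issue occurred");
            }
        }
    });
});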

JavaScript logic and YUI
I've once again reused the YUI3-based Transaction utility and built a number of child transaction classes that encapsulate sending SPARQL requests to the RDF database and rendering the results to the user. Here I'll show just the basic SparqlTransaction, which builds a SPARQL query from a template and parameters, sends it to the RDF database and returns the resulting JSON data for further processing. Afterwards, the data can be presented in a YUI DataTable or in any other way. However, that code still requires a lot of refactoring, so I may show more examples later in another post.
// Basic SPARQL transaction extends ContainerTransaction
function SparqlTransaction(config) {
    var Y = config.Y;  // keep a local reference to the YUI instance
    Y_transaction.ContainerTransaction.call(this, config);

    // this function is called when the SPARQL request returns some data;
    // it is meant to be overridden by the caller
    this.onDataLoad = function(config, data) {};

    // override the onComplete function
    this.onComplete = function (txId, response) {
        // response already contains SPARQL query
        // but it needs to be parametrized
        var query = formatQuery(response.responseText, config.queryParams);
        // sending SPARQL query HTTP GET request with success/failure callbacks
        config.client.select(query, function (data, caller) {
            hideLoadingImage(config.containerId);
            caller.onDataLoad(config, data);
        }, function () {
            hideLoadingImage(config.containerId);
            alert("Connectivity issue occurred")
        }, this);
    };

    function hideLoadingImage(containerId) {
        if (containerId) Y.one("#" + containerId).get('childNodes').remove();
    }
}

// Substitutes the placeholders {n} with the corresponding parameters
function formatQuery(queryString, parameters) {
    parameters = parameters || [];
    for (var i = 0; i < parameters.length; i++) {
        queryString = queryString.replace(
            new RegExp('\\{' + i + '\\}', 'g'), parameters[i]);
    }
    return queryString;
}
Here is how I've created and invoked the SPARQL transaction to show some response to the user:
var transaction = new Common.SparqlTransaction({
    "Y": Y,
    "uri": query_url,
    "queryParams": [id],
    "client": client
});
transaction.onDataLoad = function(config, data) {
    var props = data.results.bindings[0];
    // showing props values to the user here
};
transaction.execute();
To conclude, the SPARQL transaction sends an AJAX request to the server to read the SPARQL query resource, then the query is parametrized and sent to the RDF database using the SPARQL client. Finally, the JSON results of the SPARQL request are rendered to the user.
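As a concrete illustration of the parametrization step, the SPARQL query resource fetched from the server can contain {n} placeholders which formatQuery then fills in with the queryParams values; the template below is a made-up example:
// Illustrative query template as it could arrive in response.responseText
var template = "SELECT ?p ?o WHERE { <{0}> ?p ?o }";
// With queryParams = ["http://example.org/resource/1"] the resulting query is
// SELECT ?p ?o WHERE { <http://example.org/resource/1> ?p ?o }
var query = formatQuery(template, ["http://example.org/resource/1"]);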

RequireJS module loading
Play Framework supports one more cool feature - JavaScript module loading based on the RequireJS library. Here you can find hints and examples explaining how to make use of this feature. It can be extremely useful, especially in my case where most of the controller logic is written in JavaScript. However, I still have some issues to resolve, as combining resources and minification do not seem to work properly yet. This is because I've used YUI and have tried to integrate it with RequireJS via the UseYUI plugin. It will be a good topic for another post as soon as I resolve the issues.
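As a rough sketch of the direction, a page-level RequireJS setup that loads the YUI seed as a shimmed dependency might look like the snippet below; the module paths and the bootstrapping approach are assumptions for illustration, not the final UseYUI-based configuration:
// main.js - illustrative RequireJS bootstrap (paths and setup are assumptions)
require.config({
    paths: {
        // YUI seed file; the version and URL are chosen just for the example
        yui: 'http://yui.yahooapis.com/3.9.1/build/yui/yui-min'
    },
    // The YUI seed is not an AMD module, so expose its global via a shim
    shim: {
        yui: { exports: 'YUI' }
    }
});

require(['yui'], function (YUI) {
    // Create a YUI sandbox and run the page-level controller logic inside it
    YUI().use('node', 'io-base', function (Y) {
        // page logic goes here
    });
});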
