Handling data issues with XQuery

This post is about good practices for handling data issues with XQuery. We store a huge amount of XML data in the Sedna XML database, and synchronization issues with external systems occasionally lead to XML data issues. One possible consequence is that an XQuery function receives an input parameter of an unexpected type. A limitation here is that Sedna supports only XQuery 1.0 and not XQuery 3.0. As a result, try/catch expressions are not available for Sedna, which makes handling such issues harder and nastier. BTW, a while ago I created a feature request for Sedna to add XQuery 3.0 support - you're welcome to upvote it!
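For comparison, here is roughly what the XQuery 3.0 approach would look like if Sedna supported it. This is a sketch only: err:XPTY0004 is the type error described below, and the fallback behavior (returning an empty sequence) is just one possible choice.

```
(: XQuery 3.0 only - NOT available in Sedna, shown for comparison :)
try {
    vp:getMapSubtitle($vp)
}
catch err:XPTY0004 {
    (: swallow the type error and return an empty subtitle :)
    ()
}
```

Since this syntax is unavailable in XQuery 1.0, the rest of the post works around it with an explicit if condition.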

Issue description
Here is the initial XQuery function, which simply returns the subtitle tag value:
declare function vp:getMapSubtitle($vp as element(value-proposition)?) as xs:string? {
    data($vp/topicmeta/subtitle)
};
We faced a data issue where two value-proposition elements arrived as the input parameter $vp. It resulted in the following Java exception and Sedna error message:
org.xmldb.api.base.XMLDBException: SEDNA Message: ERROR XPTY0004
It is a type error if, during the static analysis phase, an expression is found to 
have a static type that is not appropriate for the context in which the expression 
occurs, or during the dynamic evaluation phase, the dynamic type of a value does 
not match a required type as specified by the matching rules in 2.5.4 SequenceType 
Details: Error in function call. Return value does not match the required type. 
Expected type is [element(value-proposition)?], more than one item is given.
Query line: 7, column:1
It's a pretty good message: it contains all the details required to locate the failing code, including the query line. However, it still takes time to navigate there, to confirm that it was indeed a data issue, and finally to sort it out. It would also be unwise to widen the input parameter type to accept any number of elements, as that would silently ignore the data issues. So the problem here is to reduce the amount of time wasted on unnecessary (and probably repeated) investigations.

That's why it makes sense to catch the error with an if condition as a workaround and raise an error with a sufficient data-issue description. Here is the new code:
declare function vp:getMapSubtitle($vp as element(value-proposition)*) as xs:string? {
    if (count($vp) < 2)
    then data($vp/topicmeta/subtitle)
    else
        let $vp_ids := for $single_vp in $vp return data($single_vp/@id)
        return error(xs:QName("vp_getMapSubtitle"),
            concat("More than one VP provided: ", string-join($vp_ids, ", ")))
};
As a result, in case of the frequent data issue with multiple VPs, we'll see the following message in the Java application logs:
org.xmldb.api.base.XMLDBException: SEDNA Message: ERROR vp_getMapSubtitle
More than one VP provided: vp_ACTT4S-800E, vp_BT136B-600E
This approach saves plenty of time, allowing us to resolve the data issue right away. Still, it's far from perfect, as the message is hidden in the application logs. So the next step will be to show a proper message to the end user and to set up a log checker that notifies the responsible parties about the data issue.
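The same guard can also be factored into a reusable helper so that other single-element accessors don't have to repeat the boilerplate. A minimal sketch follows; the helper name vp:assert-single and the reliance on an @id attribute for diagnostics are assumptions for illustration, not part of the original code:

```
(: raise a named error listing the @id of each offending element :)
declare function vp:assert-single($items as element()*, $err as xs:string) as element()? {
    if (count($items) < 2)
    then $items
    else
        let $ids := for $item in $items return data($item/@id)
        return error(xs:QName($err),
            concat("More than one element provided: ", string-join($ids, ", ")))
};

(: usage: the original accessor shrinks to a single expression :)
declare function vp:getMapSubtitle($vp as element(value-proposition)*) as xs:string? {
    data(vp:assert-single($vp, "vp_getMapSubtitle")/topicmeta/subtitle)
};
```

Since path steps over an empty sequence yield an empty sequence, the happy path still returns an optional string as before.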
