
Connection to Amazon Neptune endpoint from EKS during development

This short article describes how to connect to an Amazon Neptune database endpoint from your PC during development. Amazon Neptune is a fully managed graph database service from Amazon. For security reasons, direct connections to Neptune are not allowed: you cannot attach a public IP address or a load balancer to the service. Access is instead restricted to the VPC where Neptune runs, so applications must be deployed in the same VPC to reach the database. That's a great idea for production, but it makes it very difficult to develop, debug and test applications locally. The instructions below will help you create a tunnel to the Neptune endpoint, assuming you use Amazon EKS - a managed Kubernetes service from Amazon. As a side note, if you don't use EKS, the same tunnelling idea can be implemented with a bastion server. In Kubernetes we'll create a dedicated proxying pod.
  1. Prerequisites.
  2. Setting up a tunnel.
  3. Usage.
Prerequisites
  • Kubectl should be installed and configured locally to connect to your EKS cluster.
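  • Optionally, verify the tooling from your terminal (the AWS CLI is also needed for the commands in the next section):
    kubectl version --client
    aws --version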
Setting up a tunnel
  1. Log in to AWS from the command line. There are various ways to do it, but I'd recommend looking at the AWS Vault command-line tool, which helps manage MFA connections to AWS. Once it's set up, run:
    aws-vault exec PROFILE
  2. Save kubeconfig for your cluster:
    aws eks update-kubeconfig --name CLUSTER
    or switch to the cluster if it was saved earlier:
    kubectl config use-context arn:aws:eks:eu-west-1:ACCOUNT_ID:cluster/CLUSTER
  3. Check which pods are running:
    kubectl get pods -n NAMESPACE
  4. Create a neptune-proxy pod that uses some socat magic to forward incoming requests from its port 8182 (the default Neptune port) to the configured Neptune endpoint:
    kubectl run neptune-proxy --image=alpine/socat --port=8182 -n NAMESPACE --command -- /bin/sh -c 'socat tcp-l:8182,fork,reuseaddr tcp:NEPTUNE_CLUSTER.cluster-ro-qwerty.eu-west-1.neptune.amazonaws.com:8182'
    The pod will keep running and can be reused for later connections. If it's no longer needed, delete it explicitly (see the cleanup command after these steps).
  5. Start port forwarding from localhost to the neptune-proxy pod:
    kubectl port-forward neptune-proxy 8182:8182 -n NAMESPACE
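When you're done, the neptune-proxy pod created in step 4 can be removed (a one-liner assuming the same NAMESPACE placeholder as above):
    kubectl delete pod neptune-proxy -n NAMESPACE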
Usage
At this point you should be able to connect to the Neptune cluster endpoint from localhost:
  • Check status from command line:
    curl https://localhost:8182/status -ks
  • You can configure your application in dev mode to connect to the https://localhost:8182/sparql endpoint.
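    For example, a quick SPARQL smoke test over the tunnel (a sketch, assuming a SPARQL workload and that the port forwarding from the previous section is running):
    curl -ks https://localhost:8182/sparql --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'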
However, you might have to deal with an "invalid" certificate issue: the connection uses HTTPS, but the certificate served by Amazon will not match localhost. The issue can be worked around by enabling insecure mode with a flag (e.g. -k for curl or --no-check-certificate for wget) or, if you happen to use Apache HttpClient, by plugging in a NoopHostnameVerifier.
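With Apache HttpClient 4.5 a development-only client might look like the following (a minimal sketch; the class name NeptuneDevClient is hypothetical, and relaxing certificate checks like this is only acceptable for local development):

    // Development-only HTTP client: trusts the served certificate chain and
    // skips hostname verification, since the Neptune certificate won't match localhost.
    import org.apache.http.client.methods.CloseableHttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.conn.ssl.NoopHostnameVerifier;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.ssl.SSLContextBuilder;
    import org.apache.http.util.EntityUtils;

    public class NeptuneDevClient {
        public static void main(String[] args) throws Exception {
            try (CloseableHttpClient client = HttpClients.custom()
                    .setSSLContext(new SSLContextBuilder()
                            // dev only: trust whatever certificate the tunnel presents
                            .loadTrustMaterial(null, (chain, authType) -> true)
                            .build())
                    // dev only: ignore the localhost/Neptune hostname mismatch
                    .setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE)
                    .build();
                 CloseableHttpResponse response = client.execute(
                         new HttpGet("https://localhost:8182/status"))) {
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }

Remember to switch back to the default, strict hostname verification for production builds.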
