
Connection to Amazon Neptune endpoint from EKS during development

This short article describes how to connect to an Amazon Neptune database endpoint from your PC during development. Amazon Neptune is a fully managed graph database service from Amazon. For security reasons, direct connections to Neptune are not allowed, so it's impossible to attach a public IP address or a load balancer to that service. Instead, access is restricted to the VPC where Neptune is set up, meaning applications must be deployed in the same VPC to reach the database. That's a great idea for production, but it makes it very difficult to develop, debug and test applications locally. The instructions below will help you create a tunnel to the Neptune endpoint, assuming you use Amazon EKS, the managed Kubernetes service from Amazon. As a side note, if you don't use EKS, the same tunneling idea can be implemented using a bastion server. In Kubernetes, we'll create a dedicated proxying pod.
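In short, traffic will flow like this:
    localhost:8182 → kubectl port-forward → neptune-proxy pod (socat) → Neptune endpoint:8182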
  1. Prerequisites.
  2. Setting up a tunnel.
  3. Usage.
Prerequisites
  • Kubectl should be installed and configured locally to connect to your EKS cluster.
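To quickly verify the setup, both of these standard kubectl commands should succeed against your cluster (assuming your user has read access to it):
    kubectl config current-context
    kubectl get nodes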
Setting up a tunnel
  1. Log in to AWS from the command line. There are various ways to do it, but I'd recommend taking a look at the AWS Vault command line tool, which helps manage MFA connections to AWS. Once it's set up, you'll need to run:
    aws-vault exec PROFILE
  2. Save kubeconfig for your cluster:
    aws eks update-kubeconfig --name CLUSTER
    or switch to the cluster if it was saved earlier:
    kubectl config use-context arn:aws:eks:eu-west-1:ACCOUNT_ID:cluster/CLUSTER
  3. Check which pods are running:
    kubectl get pods -n NAMESPACE
  4. Create a neptune-proxy pod which, with a bit of socat magic, will proxy requests arriving on port 8182 (the default Neptune port) to the configured Neptune endpoint (a commented breakdown of the socat options follows these steps):
    kubectl run neptune-proxy --image=alpine/socat --port=8182 -n NAMESPACE --command -- /bin/sh -c 'socat tcp-l:8182,fork,reuseaddr tcp:NEPTUNE_CLUSTER.cluster-ro-qwerty.eu-west-1.neptune.amazonaws.com:8182'
    The pod will keep running and serve future connections; delete it explicitly once it's no longer required (see the cleanup command after these steps).
  5. Start port forwarding from localhost to the neptune-proxy pod:
    kubectl port-forward neptune-proxy 8182:8182 -n NAMESPACE
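For reference, here is the socat invocation from step 4 with its options spelled out (the endpoint hostname is the same placeholder as above):
    # tcp-l:8182   -- listen on TCP port 8182 (tcp-l is short for tcp-listen)
    # fork         -- handle each incoming connection in a separate child process
    # reuseaddr    -- set SO_REUSEADDR so the listener can be restarted quickly
    socat tcp-l:8182,fork,reuseaddr tcp:NEPTUNE_CLUSTER.cluster-ro-qwerty.eu-west-1.neptune.amazonaws.com:8182
When the tunnel is no longer needed, clean up by deleting the pod:
    kubectl delete pod neptune-proxy -n NAMESPACE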
Usage
At this point you should be able to connect to the Neptune cluster endpoint from localhost:
  • Check the status from the command line:
    curl https://localhost:8182/status -ks
  • You can configure your application in dev mode to connect to the https://localhost:8182/sparql endpoint.
However, you might have to deal with an "invalid" certificate issue: you have to use HTTPS, but the certificate served by Amazon will not match localhost. This can be worked around by enabling insecure mode with a flag (e.g. for the curl/wget utilities) or by using NoopHostnameVerifier if you happen to use Apache HttpClient.
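Putting it all together, a quick smoke test of the SPARQL endpoint through the tunnel could look like this (the query itself is only an illustration; -k makes curl skip certificate verification, and wget users can pass --no-check-certificate for the same effect):
    curl -ks https://localhost:8182/sparql --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 10'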
