Remotely connect to JMX behind an AWS Load Balancer

JMX is useful for gathering various information about a running Java application. Mostly it is used to gain detailed data about CPU consumption (where are the hotspots in my code?) or memory allocation (what is eating up my memory?). Especially when a system is under load and behaves ‘unexpectedly’, that information is valuable. Here are the steps you have to take in order to get remote access to JMX when you host your servers in the Amazon cloud behind a load balancer.

1. Associate Public IP Address

The servers created in your Elastic Beanstalk environment need to have a public IP address. To do this, make sure you’ve selected the following option in your Elastic Beanstalk environment’s configuration:

Elastic Beanstalk → Your Application → Your Environment → Configuration →  VPC → Associate Public IP Address

Be aware that exposing the instances behind a load balancer undermines its purpose of having only one public endpoint. By making all your instances public, you open them up to attacks of any kind!

2. Change server startup parameters

A quick search on Stack Overflow reveals the required arguments you have to add to your Java application’s startup:

-Dcom.sun.management.jmxremote.port=PORT
-Dcom.sun.management.jmxremote.rmi.port=PORT
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=PUBLIC_IP

In a previous post I’ve described how to pass these arguments to the Java application (add them as JAVA_OPTS), and choosing a PORT is trivial. However, you should use the same port for .port and .rmi.port to minimize the number of ports you have to open in the security group later on.

Now what about the PUBLIC_IP? If you leave out this parameter, JMX will most likely bind to the private IP address and you won’t be able to access it from outside. My first attempt was to check the instance’s environment variables to see if the server’s public IP was added there. Unfortunately, it was not. My second attempt was to expose an environment variable JMX_IP utilizing .ebextensions like this:

commands:
  01-setip:
    command: export JMX_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)

The idea was to export an environment variable JMX_IP and then reference it in the JAVA_OPTS like so: -Djava.rmi.server.hostname=${JMX_IP}. That did not work either. I’m not sure why (maybe you know? my guess is that each command runs in its own shell, so the exported variable does not survive into the process that starts the application), but I ended up changing the script that builds my deployment artifact:

This results in the following run.sh file, which is executed during deployment:
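The gist with the actual run.sh is no longer embedded in this post. A sketch of what such a script looks like, with the jar location and port as placeholders (my assumptions, not the original values), is:

```shell
#!/bin/bash
# Fetch the instance's public IP from the EC2 metadata service.
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

JMX_PORT=9010  # placeholder: use the PORT you chose above

exec java \
  -Dcom.sun.management.jmxremote.port=${JMX_PORT} \
  -Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT} \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=${PUBLIC_IP} \
  -jar /var/app/app.jar
```

The jar path (/var/app/app.jar) is a placeholder for wherever your deployment artifact ends up on the instance.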

As you can see, the public IP address is fetched from the instance’s metadata URL and added as an option to the Java process. If you added the startup parameters from above, the server should start JMX and expose it on the public IP address.

3. Open JMX port in security group

In the final step, you have to open a port to be able to connect to your server. Before we proceed, be aware that we disabled any authentication! That means that as soon as the port is open, anyone who knows the IP of your server can access it. Therefore, either close the port as soon as you’ve finished your work and/or add SSL to your JMX configuration!

To open the port, I recommend creating a new security group under

EC2 → Security Groups → Create Security Group

in your VPC. Add an inbound rule to open the TCP port you’ve chosen for JMX (if you decided to use a different port for rmi, it has to be opened as well) and add your IP address as the source. Note the security group’s ID (e.g. sg-abcdefg) and add it to your Elastic Beanstalk configuration:

Elastic Beanstalk → Your Application → Your Environment → Configuration → Instances → EC2 security groups

Finally, you should be able to connect to the EC2 instances using jvisualvm by adding a JMX connection to PUBLIC_IP:PORT.


vertx-jooq-async 0.4 released!

vertx-jooq-async, the (un)beloved child of vertx-jooq, has been released in version 0.4! This version finally implements insertReturningPrimaryAsync in all VertxDAOs*.

In addition, the complex code in the DAOs’ default methods has been abstracted and moved to vertx-jooq-async-shared. That means less code duplication and fewer points of failure – a move that will very likely be made in vertx-jooq soon as well.

Finally, the vertx dependency has been bumped to 3.5 (and with it, support for RxJava2).

* Unfortunately, this only works for MySQL and numeric auto-increment keys. That is because the implementation is based on io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl, and only the MySQL variant returns the generated ID.

Convert List<CompletableFuture> to CompletableFuture<List>

Sometimes I find myself in the situation where I have to perform some asynchronous tasks and then perform another asynchronous task once all of them have completed. As always in such situations, I searched Stack Overflow for a how-to, and the top-rated answer suggests the following solution:
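The gist originally embedded here is gone; the widely cited Stack Overflow answer boils down to combining CompletableFuture.allOf with join, roughly like this (class and method names are my own):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

class Futures {
    // Completes when all given futures have completed, collecting their results in order.
    static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> futures) {
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join) // safe: allOf guarantees completion
                        .collect(Collectors.toList()));
    }
}
```

The join calls cannot block here because thenApply only runs after allOf has completed every future.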

That solution totally works and it is good. However, if you often deal with Streams, a more functional approach would be neat. So I started coding a Collector that does this operation for me in one go. I won’t go into detail about how a Collector works, but this blog post helped me a lot in understanding it.

Finally, I ended up with this solution, which I’ve uploaded to GitHub.

And here is how you would use it:
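The gist with the Collector is no longer embedded here. A self-contained sketch of the idea, together with example usage (all names are my own; the real version lives in the linked GitHub repo), could look like this:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collector;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class FutureCollector {
    // Collects a Stream of CompletableFutures into a single CompletableFuture
    // that completes once all collected futures have completed.
    static <T> Collector<CompletableFuture<T>, List<CompletableFuture<T>>, CompletableFuture<List<T>>> toFutureList() {
        return Collector.of(
                ArrayList::new,    // supplier: gather the futures into a list
                List::add,         // accumulator: add each future
                (left, right) -> { // combiner: only used by parallel streams
                    left.addAll(right);
                    return left;
                },
                futures -> CompletableFuture
                        .allOf(futures.toArray(new CompletableFuture[0]))
                        .thenApply(v -> futures.stream()
                                .map(CompletableFuture::join) // all done at this point
                                .collect(Collectors.toList())));
    }

    public static void main(String[] args) {
        // Usage: collect a stream of futures in one go.
        CompletableFuture<List<Integer>> all = Stream
                .of(CompletableFuture.completedFuture(1),
                    CompletableFuture.completedFuture(2))
                .collect(toFutureList());
        System.out.println(all.join()); // [1, 2]
    }
}
```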

Happy coding!


Update

Obviously it was late yesterday ^^ The solution I posted was hidden in another answer with fewer upvotes. It suggests using Collectors.collectingAndThen together with the sequence method above. In my opinion this is cleaner than my approach of writing the Collector on my own (DRY principle). The final solution is posted below; it contains another Collector factory method that can be used if you’re not interested in the results, or if the CompletableFutures to collect are of type Void.
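The final gist is also missing from this page. Reconstructed from the description above (names are my own), the collectingAndThen-based version plus the Void variant might look like this:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collector;
import java.util.stream.Collectors;

class CompletableFutureCollectors {
    // Waits for all futures and collects their results in encounter order.
    static <T> Collector<CompletableFuture<T>, ?, CompletableFuture<List<T>>> toFutureList() {
        return Collectors.collectingAndThen(Collectors.toList(),
                CompletableFutureCollectors::sequence);
    }

    // Variant for when you don't care about the results
    // (e.g. the collected futures are of type CompletableFuture<Void>).
    static <T> Collector<CompletableFuture<T>, ?, CompletableFuture<Void>> allComplete() {
        return Collectors.collectingAndThen(Collectors.toList(),
                futures -> CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])));
    }

    // The sequence method from the top-rated answer.
    static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> futures) {
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join)
                        .collect(Collectors.toList()));
    }
}
```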

vertx-jooq 2.4 released

Note: Unfortunately, the dependencies on rxjava1 haven’t been removed completely in 2.4.0. RX users, please check out version 2.4.1.

I’m happy to announce the release of vertx-jooq 2.4. The main driver behind this release was the dependency upgrade to vertx 3.5 and thus the change of the rx-dependency in vertx-jooq-rx from rxjava to rxjava2. In addition, the JSON key names of the generated POJO are now the same as their database counterparts. This change has been announced here. Last but not least, a bug has been fixed when dealing with JsonObjectConverter or JsonArrayConverter.

Converting a jOOQ-Record into a vertx JsonObject

In vertx-jooq, there are a couple of ways to convert a jOOQ Record into a vertx JsonObject. First, check out the straightforward way, which involves two steps: load the POJO using a generated DAO and then convert it with the toJson method.

It turns out, however, that on the way to a JsonObject a lot of copying is going on: first the SELECT statement is mapped to a Record, which is then converted to a POJO and finally into a JsonObject. The code below is copied from the DAOImpl class, which is extended by every VertxDao:

If you’ve enabled the generation of interfaces in vertx-jooq (which is the default), then you could convert the Record directly into a JsonObject without converting to POJO first.

The major disadvantage of this solution is that you have to write the SELECT statement by yourself. On the other hand, you save the creation of the POJO, which is a plus. But what if we’re joining on another table so we cannot directly map into a generated Record? See this example:

Because the fetched Record is dynamic, we cannot simply call toJson on it. Instead we call the fetchOneMap method which “returns at most one resulting record as a name/value map.” Luckily, JsonObject has a constructor taking a Map<String,Object> which is exactly the same generic type returned (it even returns the same Map implementation that JsonObject uses under the hood: LinkedHashMap). It is noteworthy that this could even be done with pure jOOQ and vertx (leaving the execution aside). Some caveats though:

  1. No conversion happens. If you use a custom generator to generate the schema objects and have overwritten handleCustomTypeToJson / handleCustomTypeFromJson, you’re probably in bad shape. If one of the fields you’re fetching is one of those types, this conversion is bypassed.
  2. Because the constructor makes no checks and takes the map ‘as is’ without copying, you might end up with illegal JSON types in the JsonObject.
  3. The Map‘s key will have the name of the database column it is representing. This differs from the default-setting in vertx-jooq, which uses the name of the POJO’s member variable representing that column and thus would produce different JSON compared to the toJson method for the same Record/POJO. Starting with 2.3.5 however this can be changed so the keys are rendered with the same syntax.

Lessons learned

When simple CRUD is not enough and you have to fetch joined Records and convert the results into a JsonObject, you should change the syntax of how the JsonObject's keys are rendered in the POJO (caveat 3 above). This is the first step in making both ways of generating JSON interoperable.

Things get trickier if you have to deal with conversion of custom JSON types. If your app/service is mostly reading data, you could handle conversion already at the jOOQ level so your converted type is already a valid JsonObject type. For example, if you’re using a SQL DateTime field “lastUpdated” at database level, just write a converter that formats the DateTime as a String. In turn, both the generated POJO and the fetchOneMap/fetchMaps methods will return a String for the “lastUpdated” entry and produce the same results.

This can become a problem when those converted values need to be altered by the same app: a) it is more convenient and less error-prone to set a java.time.LocalDateTime object instead of dealing with Strings and b) some of the types may have special SQL-functions (e.g. dateAdd) which you cannot use any longer for that type.

Conclusion

Surprisingly, there is no golden rule. First, I recommend changing the syntax of how JSON keys are rendered so they match the database column names (I will most likely make that the default behavior in one of the next releases). This means you can use the fetchOneMap/fetchMaps methods most of the time to produce JSON. When dealing with custom types, you should check how frequently those are updated by your app. If you’re facing a read-only app, write custom jOOQ converters for these types. If these types are altered quite often, you should stick to their actual type and handle the conversion into a proper JSON value on your own.

vertx-jooq 2.3.5 gives you control over the generated JSON

One of the goodies of vertx-jooq is that it allows you to automatically convert POJOs and jOOQ Records from and into JSON objects. Until now, however, the key of the generated JSON field was fixed to the name of the POJO member that represents that column (which remains the default). Starting with version 2.3.5, this behavior can be changed in different ways:

  1. Subclass the VertxGenerator of your choice and override the getJsonKeyName method.
  2. Subclass the VertxGeneratorStrategy of your choice and override the getJsonKeyName method.
  3. Set a different delegate for the VertxGeneratorStrategy (see also #6) and change the way the POJO’s member names are rendered.

Imagine you want the JSON keys to be exactly the same as their database column counterparts. Here is how you would do it in all three different methods:

Option 1:

Option 2:

Option 3:

Whatever way you prefer, you also need to alter your code generation configuration and set the correct Generator/GeneratorStrategy. Check out the GitHub page for a how-to, or if you’re new to vertx-jooq.

Hello async my old friend

Let me present you vertx-jooq-async: the world’s first fully async and type-safe SQL code generation and execution tool for VertX™. Async? Wasn’t it asynchronous before? It was, but what the code did was wrap JDBC calls in Vertx’s executeBlocking method, which just shifts the blocking code onto another thread (see also).


With this release, however, things have changed. jOOQ is still used for generating code and creating type-safe queries, but the code is executed using vertx-mysql-postgresql-client. That library in turn is based on a non-blocking driver for MySQL and Postgres databases, so this time it is really non-blocking.

Although some stuff is still missing, you should go and check out the GitHub page. Now.