1. TROUBLESHOOTING SPRING’S RESTTEMPLATE REQUESTS TIMEOUT
Spring’s RestTemplate is one of the options for making client HTTP requests to endpoints. It facilitates communication with HTTP servers, handles the connections, and transforms XML, JSON, … request / response payloads to / from POJOs via HttpMessageConverter.
By default RestTemplate doesn’t use a connection pool to send requests to a server. It uses a SimpleClientHttpRequestFactory that wraps a standard JDK HttpURLConnection, taking care of opening and closing the connection for each request.
But what if an application needs to send a large number of requests to a server? Wouldn’t it be a waste of effort to open and close a connection for each request sent to the same host? Wouldn’t it make sense to use a connection pool, the same way a JDBC connection pool is used to interact with a DB or a thread pool is used to execute tasks concurrently?
In this post I’ll cover configuring RestTemplate to use a connection pool via a pooled implementation of the ClientHttpRequestFactory interface, run a load test using JMeter, troubleshoot request timeouts and reconfigure the connection pool.
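As a preview, a pooled setup might look like the following sketch, assuming Apache HttpClient 4.x is on the classpath; the pool sizes are illustrative, not the exact values the post arrives at:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class PooledRestTemplateFactory {

    public static RestTemplate pooledRestTemplate() {
        // Connections are shared across requests instead of opening a new
        // HttpURLConnection per call
        PoolingHttpClientConnectionManager connectionManager =
                new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(50);           // illustrative pool size
        connectionManager.setDefaultMaxPerRoute(20); // illustrative per-host limit

        CloseableHttpClient httpClient = HttpClients.custom()
                .setConnectionManager(connectionManager)
                .build();

        // RestTemplate delegates connection handling to the pooled factory
        return new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));
    }
}
```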
1. CONFIGURING TOMCAT TO LISTEN ON MULTIPLE PORTS USING SPRING BOOT
Today I landed on a Stack Overflow question about how to access the JavaMelody UI using a port other than the one used by the APIs. I went ahead and answered the question with some initial code I had for another blog post I’m working on, but then decided to make it its own post.
While this could certainly be accomplished by adding a couple of Spring Boot configuration properties, management.context-path and management.port, to application.yml for instance, and reusing the same path used for the actuator endpoints (/admin/health, …), there might be some cases where having the servlet container listen on a port other than the two mentioned earlier makes sense, maybe for monitoring, traffic analysis, firewall rules, etc.
In this post I’ll add support for configuring embedded Tomcat to listen on multiple ports and configure JavaMelody to exclusively use one of those ports to display its reports using Spring Boot.
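A minimal sketch of the additional-connector approach, assuming Spring Boot 1.x’s embedded Tomcat API; the extra port number is illustrative, and the post covers the full, property-driven version:

```java
import org.apache.catalina.connector.Connector;
import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MultiPortTomcatConfiguration {

    @Bean
    public EmbeddedServletContainerFactory servletContainer() {
        TomcatEmbeddedServletContainerFactory tomcat =
                new TomcatEmbeddedServletContainerFactory();
        // Extra connector so embedded Tomcat also listens on 8444 (illustrative port)
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setPort(8444);
        tomcat.addAdditionalTomcatConnectors(connector);
        return tomcat;
    }
}
```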
1. CENTRALIZED AND VERSIONED CONFIGURATION USING SPRING CLOUD CONFIG SERVER AND GIT
The previous blog post covered Service Registration and Discovery as an infrastructure service used in a microservice architecture. In this post I’ll review another infrastructure microservice, the Configuration management service.
Configuration is used to prevent hard-coding values in applications. But it’s also used to specify what is expected to differ between deployment environments (qa, staging, production, …), such as host names, mail and database credentials, and other backing services. The latter type of configuration is identified as one of the elements of The Twelve-Factor App.
The benefit of this practice is that it promotes building only one artifact, a single binary to be executed in all environments. You should never have to build a different executable for each environment; doing so will most likely cause troubleshooting and debugging headaches.
As part of the Spring Cloud Series, this post describes Spring Cloud Config’s server and client usage, the configuration storage options, and its relation to the Discovery server in registration-first and configuration-first approaches.
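To sketch the client side, an application is typically pointed at the Config server through its bootstrap configuration; a minimal, illustrative fragment (the application name and server URL are placeholders):

```yaml
# bootstrap.yml of a config client
spring:
  application:
    name: demo-service            # maps to demo-service.yml in the Git-backed repo
  cloud:
    config:
      uri: http://localhost:8888  # Spring Cloud Config server (placeholder URL)
```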
1. MICROSERVICES REGISTRATION AND DISCOVERY USING SPRING CLOUD, EUREKA, RIBBON AND FEIGN
This blog post reviews an implementation pattern used in the microservice architectural style: Service Registration and Discovery. The Discovery service and the Configuration service, the latter discussed in a separate post, are most likely the first infrastructure microservices needed when implementing such an architecture.
Why is the Discovery service needed in a microservice architecture? Because if a running instance of service X needs to send requests to a running instance of service Y, it first needs to know the host and port service Y runs and listens on. One might think service X could get this information from some static configuration file, but that approach isn’t very practical in an elastic distributed setup where services are dynamically (de)provisioned, triggered either manually or by some condition such as an auto-scaling policy or a failure.
Continuing with the Spring Cloud Series covered in this blog, this post discusses Spring Cloud Netflix’s Service Discovery component, a Spring-friendly, embeddable Eureka server that service instances can register with and through which clients can discover those instances.
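To illustrate the registration side, a service usually only needs a logical name and the Eureka server’s location; a minimal, illustrative fragment (the defaultZone URL is a placeholder):

```yaml
# application.yml of a registering service
spring:
  application:
    name: service-y               # logical name clients discover it by
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/  # Eureka server (placeholder)
```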
1. INTEGRATION TESTING USING SPRING BOOT, POSTGRES AND DOCKER
The goal of integration testing is to verify that the interactions between different parts of the system work well.
Consider this simple example:
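A sketch of such a test, consistent with the checks listed below; the DAO method name findAll and the test class name are assumptions:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class ActorDaoIT {

    @Autowired
    private ActorDao actorDao; // resolved from the dependency injection container

    @Test
    public void shouldFindAllActors() {
        // Hits the real backend database, exercising credentials and ORM mapping
        assertEquals(200, actorDao.findAll().size()); // actor table has exactly 200 rows
    }
}
```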
Successful run of this integration test validates that:
- Class attribute actorDao is found in the dependency injection container.
- If there are multiple implementations of ActorDao interface, the dependency injection container is able to sort out which one to use.
- Credentials needed to communicate with the backend database are correct.
- Actor class attributes are correctly mapped to database column names.
- actor table has exactly 200 rows.
This trivial integration test catches possible issues that unit testing wouldn’t be able to find. It comes at a cost though: a backend database needs to be up and running. If the resources used by the integration test also include a message broker or a text-based search engine, instances of such services would need to be running and reachable. As can be seen, extra effort is needed to provision and maintain VMs / servers / … for integration tests to interact with.
In this blog entry, a continuation of the Spring Cloud Series, I’ll show and explain how to implement integration testing using Spring Boot, Postgres and Docker: pulling Docker images, starting container(s), running the DAO-related tests against one or multiple Docker containers and disposing of them once the tests are completed.
1. DOCUMENTING MULTIPLE REST API VERSIONS USING SPRING BOOT, JERSEY AND SWAGGER
A few days ago I was completing the accompanying source code for the Microservices using Spring Boot, Jersey, Swagger and Docker blog entry and found some problems while adding documentation support to multiple implementation versions of the “Hello” JAX-RS resource endpoints, where the version information was passed in the URL or in the Accept header.
Recapping the details, I stated my belief that Swagger’s BeanConfig singleton instantiation approach wasn’t going to work. The reason I believe so is that a BeanConfig instance dynamically creates only one Swagger definition file, while one definition file would be required for each version. Yes, there are some “hacks” to get away with it, like removing the endpoint implementations where the version is passed in the Accept header, so that a single Swagger definition file includes the documentation for every endpoint implementation version. I don’t think this is the right answer though.
This post and its accompanying source code tackle this issue and demonstrate how to generate multiple Swagger definition files to feed Swagger UI so it can display JAX-RS API documentation for multiple implementation versions.
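To illustrate the direction, one BeanConfig can be created per API version, each scanning its own resource package toward its own definition file; a rough sketch with illustrative package names (wiring each instance to a separate Swagger context is the part the post works out):

```java
import io.swagger.jaxrs.config.BeanConfig;

public class SwaggerMultiVersionSetup {

    public static void configure() {
        // One BeanConfig, and therefore one definition file, per API version
        BeanConfig v1 = new BeanConfig();
        v1.setVersion("1.0");
        v1.setResourcePackage("com.example.rest.v1"); // illustrative package
        v1.setScan(true);

        BeanConfig v2 = new BeanConfig();
        v2.setVersion("2.0");
        v2.setResourcePackage("com.example.rest.v2"); // illustrative package
        v2.setScan(true);
    }
}
```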
1. MICROSERVICES USING SPRING BOOT, JERSEY, SWAGGER AND DOCKER
Having used Spring for some years, I was impressed by how easy it is to develop Spring-based apps using Spring Boot, an opinionated framework favoring convention over configuration, and even more impressed by how easy it is to build and use common components of distributed systems using Spring Cloud, which is built on top of Spring Boot.
Microservices have been a hot topic for a couple of years now, defined as a software architectural style that composes applications from a set of small, collaborating services, each implementing a specific purpose.
This and the other posts in this series are more of a hands-on experience once it has been decided to implement a solution using this pattern. This post won’t cover the trade-offs of Microservices versus Monolith First, as discussed here and here, or the Don’t Start with a Monolith school of thought. Please follow these links or browse the REFERENCES section if you are also interested in these concepts and debates.
1. FIXING THE com.sun.tools.javac.Main IS NOT ON THE CLASSPATH APACHE ANT COMPILATION ERROR
A few days ago I was setting up my development environment, which involved executing some Apache Ant tasks. I had installed the latest Java 7, jdk1.7.0_80 to be exact, and Ant 1.8.4, required by the project.
I attempted to compile the project and got this output:
1. TROUBLESHOOTING CPU SPIKES IN JAVA APPLICATIONS
Possible causes of high CPU usage in Java apps might be related to:
- The Garbage Collector (GC) executing Major or Full collections too frequently without freeing much memory, as a result of a memory leak in one of the applications served by the servlet container or a leak in the servlet container itself.
- The GC often executing Major or Full collections (similar to the previous cause) because the application in fact needs more memory.
- Issues within the application, like resource contention, long-running jobs, expensive computations, …
The intention of this entry is to document a simple process to troubleshoot high CPU usage in Java apps once it has been observed that it’s not related to GC.
2. FIND THE JVM PROCESS
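One common sequence uses standard JDK and Linux tooling; 12345 below stands in for the real PID found in the first step:

```shell
# 1. Find the JVM process id (jps ships with the JDK):
#      jps -l
# 2. Show per-thread CPU usage for that process:
#      top -H -p 12345
# 3. Convert the hottest thread id (from top) to hex, since jstack
#    prints native thread ids ("nid=0x...") in hexadecimal:
printf '0x%x\n' 12345
# 4. Take a thread dump and locate the hot thread by its hex nid:
#      jstack 12345 | grep 'nid=0x3039'
```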
1. TECH.ASIMIO.NET = AWS S3 + JEKYLL + JENKINS
Creating a blog had been on my TODO list for too long; being too ambitious prevented me from just getting it up. WordPress or Drupal + phpCAS, or a Java-based blog/CMS + Jasig CAS for Single Sign-On between the blog and Asimio.net; it was all too complex and time-consuming.
Earlier this year I read Soft Skills: The software developer’s life manual, which I found to be a really interesting book, and that was it: I decided to start a blog. I took a different route than what was suggested though; it had to be simple and fun, and I wanted to learn something new in the process. After a day of quick research, I decided to use Jekyll to generate a static blog and host it on Amazon S3, since I’m already using a couple of AWS services with Asimio.net.
In this post I’ll detail how to accomplish this and, optionally, how to use Jenkins to implement a Continuous Deployment approach that automatically deploys the blog when new posts become available. It will also serve me as a short how-to guide in case I decide to create a static site again.