Java Ninja Chronicles By Norris Shelton

Things I learned in the pursuit of code

Logback is the new logging standard. It is built upon the Simple Logging Facade for Java (SLF4J) API. This API allows you to code to a standard API, then easily switch out the implementation as you wish.
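
For example, application code only ever references the SLF4J API. The class below is a hypothetical illustration (not from the original post):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleService {
    // The logger comes from the SLF4J facade; whichever binding is on the
    // classpath (Logback here) performs the actual logging.
    private static final Logger logger = LoggerFactory.getLogger(ExampleService.class);

    public void doWork() {
        logger.info("Working on {}", "something");
    }
}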

This blog entry will cover several examples of things that are common in a Logback configuration file.


Maven dependencies for Logback

Including the following Maven dependency will pull in Logback and the SLF4J API that it needs.

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>${logback.version}</version>
</dependency>

I also like to combine all of the logging from dependent libraries into my logs. To do this, you need to ensure that any dependencies that themselves depend on commons-logging have it excluded. An example is below:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <exclusions>
        <exclusion>
            <artifactId>commons-logging</artifactId>
            <groupId>commons-logging</groupId>
        </exclusion>
    </exclusions>
</dependency>

If you do that, you can then pull in the SLF4J bridge that implements the commons-logging API but funnels all of the log messages through the standard SLF4J API, which can then log via Logback. This library is included via:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
    <version>${slf4j.version}</version>
</dependency>
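
With jcl-over-slf4j on the classpath, existing commons-logging calls keep working without code changes; the bridge supplies the commons-logging classes and routes everything to SLF4J. A hypothetical sketch:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LegacyComponent {
    // This looks like plain commons-logging, but the Log implementation is
    // supplied by jcl-over-slf4j, so the message is delivered to SLF4J and
    // ultimately written by Logback.
    private static final Log log = LogFactory.getLog(LegacyComponent.class);

    public void doWork() {
        log.info("this message flows through SLF4J to Logback");
    }
}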

The same can be done for Log4J.

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>log4j-over-slf4j</artifactId>
    <version>${slf4j.version}</version>
</dependency>

Configuration file name and location

When Logback starts up, it looks for a file on the classpath named logback.xml. If you are using Maven, the file will normally be located in src/main/resources. If you have a separate test logging configuration, you can place a file named logback-test.xml in src/test/resources. Logback looks for the test file first and will use that configuration in lieu of the regular logback.xml.


Basic configuration file attributes

The basic Logback configuration file is enclosed by a configuration element.

<configuration>
...
</configuration>

The configuration element has several attributes.

  • debug – instructs Logback to dump status data. This does NOT affect any logging levels.
    <configuration debug="true">
    ...
    </configuration>
    
  • scan – instructs Logback to scan for changes in its configuration file and automatically load the new configuration. By default, Logback scans the configuration file for changes once every minute. To change how often the configuration file is scanned for changes, also include the scanPeriod attribute.
    <configuration scan="true">
    ...
    </configuration>
    
  • scanPeriod – this can be used in conjunction with scan and modifies how often the Logback configuration file is scanned for changes (every 30 seconds in the example below). Valid units of measure are milliseconds, seconds, minutes and hours. Milliseconds are assumed if no unit of time is specified.
    <configuration scan="true" scanPeriod="30 seconds">
    ...
    </configuration>
    

Logging components

Logback configuration files are mainly composed of three things: appender, root and logger elements.

Appenders

Appender elements define the various logging targets. Common appenders are:

  • ConsoleAppender – this logs to the system console.
  • FileAppender – this logs to a file.
  • RollingFileAppender – this logs to a file and then rolls to an archive file by some scheme. Common examples are daily and by file size.
  • SMTPAppender – this collects log statements into an email and mails them to one or more recipients.

Roots

The root element defines which messages will be sent to the enclosed appenders. It can contain one or more appender-ref elements. Here are some examples:

This says to log messages to the appender identified by CONSOLE at the INFO level or higher.

<root level="INFO">
    <appender-ref ref="CONSOLE"/>
</root>

This says to log messages to the appenders identified by ROLLING_FILE and SMTP at the WARN level or higher.

<root level="WARN">
    <appender-ref ref="ROLLING_FILE"/>
    <appender-ref ref="SMTP"/>
</root>

Loggers

Logger elements offer a way to override, for a portion of the logger name hierarchy, the level set on the root element. Take the following example.

<logger name="com.javaninja" level="INFO"/>

<root level="WARN">
    <appender-ref ref="ROLLING_FILE"/>
    <appender-ref ref="SMTP"/>
</root>

This says that ROLLING_FILE and SMTP should show log messages at the WARN level or higher. However, the logger element says that any logger whose name begins with com.javaninja should log at the INFO level, regardless of the root level.
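
The matching works because logger names are conventionally the fully qualified class name; a hypothetical class under that package illustrates it:

package com.javaninja.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class BetService {
    // This logger is named "com.javaninja.service.BetService", so it inherits
    // the INFO level from the <logger name="com.javaninja"/> element above.
    private static final Logger logger = LoggerFactory.getLogger(BetService.class);

    public void placeBet() {
        logger.info("bet placed");   // logged: INFO is enabled for com.javaninja
        logger.debug("bet details"); // suppressed: DEBUG is below INFO
    }
}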

You can also specify an appender within a logger:

<logger name="com.javaninja" level="INFO">
    <appender-ref ref="ROLLING_FILE"/>
</logger>

<root level="WARN">
    <appender-ref ref="ROLLING_FILE"/>
    <appender-ref ref="SMTP"/>
</root>

This says to set the logging level of com.javaninja to INFO and to attach the ROLLING_FILE appender to that logger. Note that logger additivity is on by default, so these messages will still propagate up to the root appenders as well; add additivity="false" to the logger element if you want them to go only to ROLLING_FILE. This one I have never actually used, but thought it would be good to show the flexibility.


Logging pattern

The logging pattern determines what is logged for your statements. This is a good time to include things like the application that is logging and the hostname that you are actually running on.

This is a logging pattern that I like:

%d{yyyy-MM-dd HH:mm:ss}|${HOSTNAME}|my-app|%-5level|%msg ||%class:%line %n

This pattern is pipe-delimited (|). The various items being logged are as follows:

  • %d{yyyy-MM-dd HH:mm:ss} – this logs the date in the specified format.
  • ${HOSTNAME} – this logs the hostname that the application is running on.
  • my-app – this is a name that I use to identify my application. This is useful if I’m running multiple apps on the same machine and they are logging to the same log or to the console. Another way to do this is via %contextName, which logs the value of the logging context name (the default value is default). This value is set via
    <contextName>my-app</contextName>
    
  • %-5level – this displays the logging level (e.g. DEBUG, INFO, WARN or ERROR). The -5 allocates 5 characters for the field, even if the value only takes 4 characters.
  • %msg – this is the actual message being logged.
  • %class:%line – this logs the name of the class, including the full package, and the line number in the class that logged the message. No more text searching for a message to see where it came from and having to make a guess when the same message is logged in more than one place.
  • %n – this outputs a newline character at the end of the line.
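
Put together, a logging call renders one pipe-delimited line; the values below are made up to show the shape:

logger.warn("Balance check failed");
// With the pattern above, this would produce something like:
// 2016-02-04 09:15:27|app-host-01|my-app|WARN |Balance check failed ||com.javaninja.service.BetService:42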

Logging to Console

Logging is done via appenders. An example console appender is below.

<configuration>
	<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
		<encoder>
			<pattern>%d{yyyy-MM-dd HH:mm:ss}|${HOSTNAME}|%contextName|%-5level|%msg ||%class:%line %n</pattern>
		</encoder>
	</appender>
	<!-- the console defaults to INFO level-->
	<root level="INFO">
		<appender-ref ref="CONSOLE"/>
	</root>
</configuration>

Here I am saying to log with the specified logging pattern and to set the root logging level to INFO, with the appender-ref pointing at CONSOLE placed within the root element. This means that all log messages of severity INFO and higher will be logged; any log messages below INFO will not be logged.


Writing to a file

FileAppender

Writing to a file is as simple as using the FileAppender:

<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>javaninja.log</file>
    <append>true</append>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss}|${HOSTNAME}|%contextName|%-5level|%msg ||%class:%line %n</pattern>
    </encoder>
  </appender>
        
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>

The file element indicates the current logging file name and location. With that said, you rarely have a real need to write to a huge file that just grows and grows and grows. The more likely usage is a RollingFileAppender. This is a file appender that archives its older contents according to some schedule. By day and by size are the two most common schedules.

Personally, I hate when a file is rolled by size. All that does is force a developer to wade through files to determine which one has the logging they need. Generally, you see this when there is too much being logged. I prefer to filter the log messages appropriately, then roll the file by day.

RollingFileAppender

A RollingFileAppender must have a RollingPolicy (what to do) and a TriggeringPolicy (when to do it). Note that, with time-based triggering policies, rollover is not driven by a timer but by logging events: when a logging event is received, a check is made to determine whether rolling should occur. In common usage, you will never see the difference.

I like to use TimeBasedRollingPolicy, which performs both of these functions. The TimeBasedRollingPolicy rolling schedule is inferred from the value of the fileNamePattern enclosed within its tag. Meaning, if you give it a date pattern that includes the day, it will roll per day. If you give it a date pattern that contains only the year and month, it will roll monthly. If you use a date/time pattern that includes the hour, the file will roll per hour. There is also a maxHistory element that indicates the number of archive files to keep; Logback will automatically delete the oldest archive files, and any enclosing directories created by the fileNamePattern, once that limit is exceeded.

<configuration>
    <appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/opt/server/log/javaninja.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>/opt/server/log/archive/javaninja.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 90 days' worth of history -->
            <maxHistory>90</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss}|${HOSTNAME}|%contextName|%-5level|%msg ||%class:%line %n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="ROLLING_FILE" />
    </root>
</configuration>

The active logging file will be kept in /opt/server/log and it will be named javaninja.log. When the first log message of a new day is logged, the contents of that file will be moved to /opt/server/log/archive and the date will be added to the file name. On the 91st day and thereafter, the oldest file will be deleted.

SMTPAppender

Sending an email is something that I like to do with exceptions in production. It is a fast way to surface problems and catch them as they happen. The SMTPAppender takes the normal values that you would expect to send an email.

  • SMTPHost – the location of the SMTP host. It is commonly localhost, but it could be a shared mail server.
  • To – to whom the email should be sent. This element can be repeated if there are multiple recipients.
  • From – this is the value of the email sender. It usually doesn’t have to be a real account. If your SMTP server requires authentication, then it may need to be a real account and include a password.
  • Subject – the subject value of the email. Variable substitution can be used. I usually follow the pattern in the Logback documentation and include the host that the log message came from, the name of the logger (package and class) and the message. If you have multiple apps on that host, it may be useful to put in an indicator to note the app that the messages originated from.

You can use a PatternLayout, which takes a pattern just like the other appenders, but I prefer the HTMLLayout. With this comes the caveat that the data will be included in an HTML table. The layout automatically puts each specifier (logging item) into its own column. This means that you should not include separators; if you do, the separators will be placed in their own table columns and waste space. Generally what I do is use my usual pattern, but remove the pipe (|) characters, spaces and newlines.

You can also use the cyclicBufferTracker to batch emails. This is useful if you have a high volume of errors and want to receive them in batches of 5, 10, 100 or any other number of messages at a time. The messages are held until the bufferSize is reached, then all of the held messages are sent in the same email. I prefer to use as small a number as possible: if you set it too high, let’s say 100, the first block of messages may be held for a long time until the threshold is reached.

<configuration>
	<appender name="SMTP" class="ch.qos.logback.classic.net.SMTPAppender">
		<SMTPHost>localhost</SMTPHost>
		<to>norris.shelton@javaninja.com</to>
		<from>errors@javaninja.com</from>
		<subject>${HOSTNAME} - %logger{20} - %m</subject>
		<layout class="ch.qos.logback.classic.html.HTMLLayout">
			<!--
                NOTE: HTML layout generates a separate column for each specifier.
                Adding separators will cause columns with only the separator
            -->
			<Pattern>%d{yyyy-MM-dd HH:mm:ss}${HOSTNAME}%contextName%-5level%msg%class:%line</Pattern>
		</layout>
		<cyclicBufferTracker class="ch.qos.logback.core.spi.CyclicBufferTracker">
			<!-- hold the email and send 5 log entries per email -->
			<bufferSize>5</bufferSize>
		</cyclicBufferTracker>
	</appender>

</configuration>

Conclusion

This covers most of the common things that are possible in Logback. There are tons of options to use for the configuration that accomplish pretty much anything that you would need to do in a logging framework.

February 4th, 2016

Posted In: java ninja, Javaninja, jcl-over-slf4j, log4j-over-slf4j, logback, Logging, Logging configuration


Update

Well, that was quick. It looks like there is now an easy way to do this. I don’t know if this is a Spring 4 feature, but I’m using Spring 4.2.4.

You can use @PersistenceContext to inject an EntityManager directly from the LocalContainerEntityManagerFactoryBean.

@PersistenceContext(unitName = "entityManagerFactory")
private EntityManager entityManager;

Note that the unitName was specified for the @PersistenceContext. In this case, it is the id of the LocalContainerEntityManagerFactoryBean.

You don’t need the SharedEntityManagerBean.

==============================================================================

When working with Springframework, it is common to define an entity manager factory like the following.

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
      p:dataSource-ref="mySqlDs"
      p:packagesToScan="com.javaninja.cam"
      p:persistenceUnitName="persistenceUnit"
      p:jpaVendorAdapter-ref="jpaVendorAdapter">
	<property name="jpaProperties">
		<props>
			<prop key="hibernate.show_sql">false</prop>
			<prop key="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</prop>
		</props>
	</property>
</bean>

It is common for Data Access Objects (DAOs) to inject the EntityManager via

@PersistenceContext(name = "entityManager")
private EntityManager entityManager;

In this case, injecting that way will not work because you don’t have an EntityManager bean. To create an EntityManager for your factory, use the SharedEntityManagerBean as follows:

<bean id="entityManager" class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
	<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

Now that you have an EntityManager, you can inject it via

@PersistenceContext(name = "entityManager")
private EntityManager entityManager;

The PersistenceContext annotation takes one of two properties.

  • name is used when an EntityManager is being specified
  • unitName is used when a PersistenceUnit name is being specified

January 23rd, 2016

Posted In: Java, java ninja, Javaninja, Spring


We had a need to include information about the current Git version of the files in the build for the purposes of troubleshooting the deploy process.

I stumbled upon a plugin that provides that functionality.

maven git commit id plugin

Here is a list of the values that it makes available.

git.build.user.email
git.build.host
git.dirty
git.remote.origin.url
git.closest.tag.name
git.commit.id.describe-short
git.commit.user.email
git.commit.time
git.commit.message.full
git.build.version
git.commit.message.short
git.commit.id.abbrev
git.branch
git.build.user.name
git.closest.tag.commit.count
git.commit.id.describe
git.commit.id
git.tags
git.build.time
git.commit.user.name

These values are made available to replace placeholders in the resources of your specified directory that use the proper key surrounded with ${}. An example:

<constructor-arg index="0" value="${git.commit.id.abbrev}"/>

Here are the steps to using it.

Step 1 – Enable filtering on the resources directory

<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
            <includes>
                <include>**/*.properties</include>
                <include>**/*.xml</include>
            </includes>
        </resource>
    </resources>
</build>

Step 2 – Include the plugin

<plugins>
    <plugin>
        <groupId>pl.project13.maven</groupId>
        <artifactId>git-commit-id-plugin</artifactId>
        <version>2.2.0</version>
        <executions>
            <execution>
                <goals>
                    <goal>revision</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <!--
                If you'd like to tell the plugin where your .git directory is,
                use this setting, otherwise we'll perform a search trying to
                figure out the right directory. It's better to add it explicitly IMHO.
            -->
            <dotGitDirectory>${project.basedir}/.git</dotGitDirectory>
        </configuration>
    </plugin>
</plugins>

Additional configuration

To have the plugin generate a file with all of its values, instead of replacing values in your source, use the generateGitPropertiesFile configuration. The file will be created in whatever directory you specified in the resources element in Step 1.

<configuration>
    <!-- this is false by default, forces the plugin to generate the git.properties file -->
    <generateGitPropertiesFile>true</generateGitPropertiesFile>
</configuration>

An example of a generated git.properties is below.

#Generated by Git-Commit-Id-Plugin
#Thu Jan 14 13:32:07 EST 2016
git.build.user.email=norris.shelton@java.ninja.com
git.build.host=cdimac0001.java.ninja.com
git.dirty=true
git.remote.origin.url=https\://github.java.ninja.com/my_project.git
git.closest.tag.name=
git.commit.id.describe-short=4b3fea6-dirty
git.commit.user.email=norris.shelton@java.ninja.com
git.commit.time=14.01.2016 @ 11\:19\:21 EST
git.commit.message.full=r18.i1 - US12193 - Fixing package scan in Spring Config files
git.build.version=3.17.0
git.commit.message.short=r18.i1 - US12193 - Fixing package scan in Spring Config files
git.commit.id.abbrev=4b3fea6
git.branch=R18tc
git.build.user.name=Norris Shelton
git.closest.tag.commit.count=
git.commit.id.describe=4b3fea6-dirty
git.commit.id=4b3fea6863da76d37980527b0cc28310f88d7540
git.tags=
git.build.time=14.01.2016 @ 13\:32\:07 EST
git.commit.user.name=Norris.Shelton

By default, the plugin will search for your .git directory. The plugin documentation recommends explicitly setting it. If you are using a multi-module project, you will need to point your dotGitDirectory to a directory above what it expects.

<configuration>
    <!--
        If you'd like to tell the plugin where your .git directory is,
        use this setting, otherwise we'll perform a search trying to
        figure out the right directory. It's better to add it explicitly IMHO.
    -->
    <dotGitDirectory>${project.basedir}/../.git</dotGitDirectory>
</configuration>

To instruct the plugin to work in verbose mode, specify the following configuration.

<configuration>
    <!-- false is default here, it prints some more information during the build -->
    <verbose>true</verbose>
</configuration>

January 15th, 2016

Posted In: Git, GitHub, Java, java ninja, Javaninja, Maven


To use CXF 3.x as your RESTful interface, use the following dependency.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxrs</artifactId>
    <version>${cxf.version}</version>
</dependency>

To use CXF 3.x as your REST client, use the following dependency.

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-rs-client</artifactId>
    <version>${cxf.version}</version>
</dependency>
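
As a quick smoke test of the client dependency, CXF’s WebClient can issue a request in a few lines. This is a sketch with a made-up endpoint, not from the original post:

import org.apache.cxf.jaxrs.client.WebClient;

public class CxfClientExample {
    public static void main(String[] args) {
        // WebClient ships with cxf-rt-rs-client; the URL here is hypothetical.
        String json = WebClient.create("http://localhost:8080/services")
                               .path("accounts/42")
                               .accept("application/json")
                               .get(String.class);
        System.out.println(json);
    }
}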

January 13th, 2016

Posted In: CXF, Java, java ninja, Javaninja, json


I was migrating some old Jackson code to Jackson 2 (FasterXML). I ran into code that told Jackson to ignore unknown properties programmatically.

ObjectMapper mapper = new ObjectMapper();
mapper.getDeserializationConfig().set(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);

The new way to do this in Jackson 2 (FasterXML) is

ObjectMapper mapper = new ObjectMapper();
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
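
If you would rather not touch the mapper, Jackson 2 also supports the same behavior per class with an annotation; the DTO below is my own illustration:

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

@JsonIgnoreProperties(ignoreUnknown = true)
public class AccountDto {
    // Unknown JSON fields are silently skipped when deserializing this type,
    // without changing the shared ObjectMapper configuration.
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}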

January 13th, 2016

Posted In: Java, java ninja, Javaninja, json, xml


I’m used to writing JUnit test classes with my Spring classes. I was working on some very old code that had no Spring at all. It was just a normal class that needed to be tested. I didn’t include any of the annotations that I normally have at the top of a test class, and IntelliJ couldn’t see the tests. When I ran the class with Maven, it did run the tests, but it also ran the tests annotated with @Ignore. What the heck? I had to do some digging and saw that you need to add @RunWith to the class declaration.

@RunWith(BlockJUnit4ClassRunner.class)
public class W2GPdfTest {
    // normal test methods
}

Problem solved.

December 17th, 2015

Posted In: Java, java ninja, Javaninja, JUnit


A common logging configuration is a log file per day. Building off of Logback Configuration File Change, let’s determine what it would take to also add a daily rolling file. How do you do this with logback?

In this example, I turned on the automatic scanning of the logback configuration file and set it to check every 5 minutes. This is a good compromise between being able to change the logging configuration on the fly and preventing increased load on the production file system.

I also created several variables to make customizing the configuration for individual developers’ workstations easy.

In this example, I used a RollingFileAppender with a TimeBasedRollingPolicy to roll the file once per day. Logback keeps 30 days of files around and doesn’t use compression. To enable compression, change the fileNamePattern inside the rollingPolicy to end in .gz or .zip instead of .log.

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="5 minutes">

    <property name="contextName" value="bonus-services"/>
    <!--
        The directory that the log files will be created in.  If the directory does not exist, Logback will give an
        error.  Logback will continue to log to the console, but no log file will be created.  Some developers will
        prefer it this way so that they can see the log output in the console, but don't have to worry about log files
        on their local machines.

        To create a symbolic link (/opt/tomcat) to your actual tomcat installation:
        sudo ln -s ~/apache-tomcat-8.0.28 /opt/tomcat
    -->
    <property name="loggingDir" value="/opt/tomcat/logs"/>
    <property name="encoderPattern"
              value="%d{HH:mm:ss.SSS}|%-5level|${HOSTNAME}|${contextName}|%msg ||%class:%line %xException{full} %n"/>


    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- encoders are assigned the type ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
        <encoder>
            <pattern>${encoderPattern}</pattern>
        </encoder>
    </appender>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${loggingDir}/${contextName}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>${contextName}.%d{yyyy-MM-dd}.log</fileNamePattern>

            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>

        <encoder>
            <pattern>${encoderPattern}</pattern>
        </encoder>
    </appender>

    <logger name="com.cdi" level="INFO"/>
    <!-- Show info on rest calls -->
    <logger name="org.springframework.web.client" level="DEBUG"/>
    <logger name="org.springframework.http.converter" level="DEBUG"/>

    <root level="WARN">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

December 15th, 2015

Posted In: java ninja, Javaninja, logback


Springframework beans don’t play well with static methods. It’s always a pain when you have a static method that needs Spring injection. I had a static class with a static initializer that loaded data from a properties file, which was then accessed via static methods for each property. I needed to move away from the properties file and towards retrieving the data from the database via another bean that was a Spring bean.

The static class was PropertyUtil. The Spring bean that retrieved the data from the database was PropertiesBean.

To make this work, I made the following changes:

  • @Component – This was added to the class to make it into a Spring bean
  • Added an @Autowired property for PropertiesBean – This makes the PropertiesBean available via Spring injection
  • Added a static PropertiesBean – This serves as the bridge between the injected instance bean and the static accessors
  • Removed static from init and added @PostConstruct – This means that the method will be executed after the bean is constructed, but before the bean is placed into service

The flow for the bean’s creation is as follows. It is instantiated like a regular Spring bean. The @Autowired properties are injected, then the @PostConstruct method is executed. That method copies the reference from the injected bean to the static reference, making the data available via static methods/properties.

This is what the final code looked like:

@Component
public class PropertyUtil {

    private static PropertiesBean propertiesBeanStatic;

    @Autowired
    private PropertiesBean propertiesBean;

    public static String myValue = null;

    @PostConstruct
    public void init(){
        propertiesBeanStatic = propertiesBean;
        myValue = propertiesBeanStatic.getProperty("myProperty");
    }
}

December 15th, 2015

Posted In: Java, java ninja, Javaninja, Spring


We had a need to add request header information to every REST call that we submitted to a specific system. We used the Springframework RestTemplate to perform our REST calls. The RestTemplate provides a nice, easy way to modify all outbound requests via the ClientHttpRequestInterceptor interface.

The contents of the interceptor are:

package com.javaninja.core.spring;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

import java.io.IOException;

/**
 * Interceptor for the Rest calls.  The interceptor adds the System-Key that is required in order to authenticate.
 */
public class RestTemplateInterceptor implements ClientHttpRequestInterceptor {
	private Logger logger = LoggerFactory.getLogger(getClass());

    @Value("#{properties.systemKey}")
	private String systemKey;

	/**
	 * The interceptor adds the System-Key header that they are looking for on their end.
	 *
	 * Intercept the given request, and return a response. The given {@link ClientHttpRequestExecution} allows the
	 * interceptor to pass on the request and response to the next entity in the chain.
	 * <p>
	 * <p>A typical implementation of this method would follow the following pattern:
	 * <ol>
	 *     <li>Examine the {@linkplain HttpRequest request} and body</li>
	 *     <li>Optionally {@linkplain org.springframework.http.client.support.HttpRequestWrapper wrap} the request to filter HTTP attributes.</li>
	 *     <li>Optionally modify the body of the request.</li>
	 * 	   <li><strong>Either</strong>
	 * 	   <ul>
	 * 	       <li>execute the request using {@link ClientHttpRequestExecution#execute(HttpRequest, byte[])},</li>
	 * 	       <strong>or</strong>
	 * 	       <li>do not execute the request to block the execution altogether.</li>
	 * 	   </ul>
	 * 	   <li>Optionally wrap the response to filter HTTP attributes.</li>
	 * </ol>
	 * @param request   the request, containing method, URI, and headers
	 * @param body      the body of the request
	 * @param execution the request execution
	 * @return the response
	 * @throws IOException in case of I/O errors
	 */
	@Override
	public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution)
	throws IOException {
		HttpHeaders headers = request.getHeaders();
		logger.debug("System-Key : {}", systemKey);
		headers.add("System-Key", systemKey);
		return execution.execute(request, body);
	}
}

Configuring the interceptor is fairly easy. It involves two steps. The first step is to declare the list of interceptors.

    <util:list id="interceptors">
          <bean id="vendorRestTemplateInterceptor" class="com.javaninja.core.spring.RestTemplateInterceptor"/>
    </util:list>

The next step is to associate the interceptors with the desired RestTemplate.

    <bean id="vendorRestTemplate" class="org.springframework.web.client.RestTemplate"
         p:interceptors-ref="interceptors"/>

Once you have that, you can use the RestTemplate as you normally would. Every outgoing request will have the System-Key header added to it automatically. As a note, in that application we had a RestTemplate with an interceptor that was used to communicate with the vendor, and another RestTemplate without an interceptor that was used to communicate with an internal system.
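
Calling through the template then looks like any other RestTemplate usage; the endpoint and response type below are hypothetical:

@Autowired
@Qualifier("vendorRestTemplate")
private RestTemplate vendorRestTemplate;

public VendorAccount lookupAccount(String id) {
    // The interceptor adds the System-Key header; the caller never sees it.
    return vendorRestTemplate.getForObject("http://vendor.example.com/accounts/{id}",
                                           VendorAccount.class, id);
}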

December 9th, 2015

Posted In: Java, java ninja, Javaninja, Spring


We’ve all been there. Hot deploys are so convenient, but they consume memory via ClassLoader leakage. You eventually run out of memory and your server dies.

java.lang.OutOfMemoryError: PermGen space

Tomcat leak prevention

Tomcat helped us a lot when it came out with its memory leak prevention. This has been our mainstay tool since Tomcat 6. At times, even this wasn’t enough to prevent the dreaded permgen errors. http://wiki.apache.org/tomcat/MemoryLeakProtection

We have all seen a log message similar to the following, thanks to Tomcat:

02-Dec-2015 14:50:41.197 WARNING [RMI TCP Connection(6)-127.0.0.1] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesJdbc The web application [bonus-services] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered.

Java 8 says goodbye to PermGen

Java 8 helped a lot by removing PermGen and replacing it with Metaspace, which by default is limited only by the available native memory. This wasn’t so much a fix as it was making the memory container much larger. Even this has its limits.

java.lang.OutOfMemoryError: Metadata space
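
Metaspace defaults to being limited only by available native memory, so a leak can consume quite a lot before failing. If you want a hard cap and an earlier, more diagnosable failure, Java 8 lets you bound it explicitly (the value here is arbitrary):

-XX:MaxMetaspaceSize=512m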

Mattias Jiderhamn’s ClassLoader Leak Prevention library

Enter the https://github.com/mjiderhamn/classloader-leak-prevention library. This does many of the things that Tomcat does, along with some others. It uses an Apache 2 license. There is a servlet context listener that listens for context creation and context destruction. This allows it to perform its work, all for your benefit.

It is very easy to integrate into your webapp. The first step is to add the Maven dependency.

<dependency>
    <groupId>se.jiderhamn</groupId>
    <artifactId>classloader-leak-prevention</artifactId>
    <version>1.15.2</version>
</dependency>

Then you need to add the listener as the first listener in your web.xml.

<listener>
    <description>https://github.com/mjiderhamn/classloader-leak-prevention</description>
    <listener-class>se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor</listener-class>
</listener>

When your webapp starts up, you will see the following as logging:

ClassLoaderLeakPreventor: Settings for se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor (CL: 0xcb6c98f):
ClassLoaderLeakPreventor:   stopThreads = true
ClassLoaderLeakPreventor:   stopTimerThreads = true
ClassLoaderLeakPreventor:   executeShutdownHooks = true
ClassLoaderLeakPreventor:   threadWaitMs = 5000 ms
ClassLoaderLeakPreventor:   shutdownHookWaitMs = 10000 ms
ClassLoaderLeakPreventor: Initializing context by loading some known offenders with system classloader

When you hot deploy your webapp, you will see logging similar to the following:

ClassLoaderLeakPreventor: se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor shutting down context by removing known leaks (CL: 0xcb6c98f)
ClassLoaderLeakPreventor: Looping 5 RMI Targets to find leaks
ClassLoaderLeakPreventor: Looping 5 RMI Targets to find leaks
ClassLoaderLeakPreventor: Internal registry of java.beans.PropertyEditorManager not found
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Prototype beans currently in creation with value null will be made stale for later expunging from Thread[http-nio-8080-exec-1,5,main]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Transactional resources with value null will be made stale for later expunging from Thread[http-nio-8080-exec-1,5,main]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Transaction synchronizations with value null will be made stale for later expunging from Thread[http-nio-8080-exec-1,5,main]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Prototype beans currently in creation with value null will be made stale for later expunging from Thread[http-nio-8080-exec-1,5,main]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Transactional resources with value null will be made stale for later expunging from Thread[RMI TCP Connection(idle),5,RMI Runtime]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Transaction synchronizations with value null will be made stale for later expunging from Thread[RMI TCP Connection(idle),5,RMI Runtime]
ClassLoaderLeakPreventor: Custom ThreadLocal of type org.springframework.core.NamedThreadLocal: Prototype beans currently in creation with value null will be made stale for later expunging from Thread[RMI TCP Connection(idle),5,RMI Runtime]
ClassLoaderLeakPreventor: Since Java 1.6+ is used, we can call public static final void java.util.ResourceBundle.clearCache(java.lang.ClassLoader)
ClassLoaderLeakPreventor: Releasing web app classloader from Apache Commons Logging

Notice that it uses System.out to log, because logging libraries are common causes of leaked ClassLoaders.

JVM settings

I picked this up from Michael Barnes. You will need to specify the following to make the garbage collector work correctly.

-XX:+UseG1GC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
  • -XX:+UseG1GC – tells the JVM to use the Garbage First (G1) garbage collector, introduced in Java 7.
  • -XX:+CMSClassUnloadingEnabled – Tells the GC that class unloading is enabled.
  • -XX:+CMSPermGenSweepingEnabled – Tells the GC that it should enable permanent generation sweeping. Note: the JVM will give you a message stating that this flag is unnecessary. Ignore the message, because it will not work without it.

Note that two of the settings are for the Concurrent Mark and Sweep garbage collector. It isn’t documented, but these flags do indeed work.

December 3rd, 2015

Posted In: Java, java ninja, Javaninja

