Java Ninja Chronicles By Norris Shelton

Things I learned in the pursuit of code

Servlet 3.1 is the latest and greatest Servlet API version. I needed to figure out the correct Maven dependencies for a Servlet 3.1 container, specifically Tomcat 8. The task was made more difficult because the Java Servlet dependency coordinates tend to change over time. It would be much better if they were consistent, so that all one had to do was change the version numbers. Or better yet, why can't I include the dependency for Expression Language 3.0 and have it pull in all of its transitive dependencies?

The Tomcat documentation had the following:

Tomcat    Servlet API    JSP API    JSTL API    Expression Language API
8.0       3.1            2.3        1.2         3.0

It took a bit of tinkering, but I think I have the Maven dependencies for Servlet 3.1 worked out.

        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>javax.servlet-api</artifactId>
            <version>3.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>javax.servlet.jsp</groupId>
            <artifactId>javax.servlet.jsp-api</artifactId>
            <version>2.3.1</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>javax.el</groupId>
            <artifactId>javax.el-api</artifactId>
            <version>3.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>jstl</groupId>
            <artifactId>jstl</artifactId>
            <version>1.2</version>
        </dependency>

Most of the Servlet 3.1 dependencies are packaged by Tomcat 8 and should have their scope marked as provided. The JSTL library, however, is not packaged by Tomcat and cannot be scoped as provided. If you do mark it as provided, you will get the following exception when you try to use JSTL:

HTTP Status 500 - Handler processing failed; nested exception is java.lang.NoClassDefFoundError: javax/servlet/jsp/jstl/core/Config

type Exception report

message Handler processing failed; nested exception is java.lang.NoClassDefFoundError: javax/servlet/jsp/jstl/core/Config

description The server encountered an internal error that prevented it from fulfilling this request.

exception

org.springframework.web.util.NestedServletException: Handler processing failed; nested exception is java.lang.NoClassDefFoundError: javax/servlet/jsp/jstl/core/Config
	org.springframework.web.servlet.DispatcherServlet.triggerAfterCompletionWithError(DispatcherServlet.java:1302)
	org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:977)
	org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)
	org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:968)
	org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:859)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
	org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:844)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
	org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
root cause

java.lang.NoClassDefFoundError: javax/servlet/jsp/jstl/core/Config
	org.springframework.web.servlet.support.JstlUtils.exposeLocalizationContext(JstlUtils.java:101)
	org.springframework.web.servlet.view.JstlView.exposeHelpers(JstlView.java:135)
	org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:142)
	org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:303)
	org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1243)
	org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1027)
	org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:971)
	org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)
	org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:968)
	org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:859)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
	org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:844)
	javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
	org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
note The full stack trace of the root cause is available in the Apache Tomcat/8.0.30 logs.

Apache Tomcat/8.0.30

By using the above Maven dependencies, I was able to write a JSP page that used both Expression Language (EL) and JSTL Core Tags. All that is left now is to get to coding the next great webapp that is going to change the world.
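A minimal page along those lines might look like the sketch below. The taglib URI is the standard JSTL 1.2 core URI; the `cars` request attribute and the markup are illustrative, not from the original post.

```jsp
<%@ page contentType="text/html;charset=UTF-8" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<body>
    <%-- Expression Language (EL): reads the "name" request parameter --%>
    <p>Hello, ${param.name}!</p>

    <%-- JSTL Core Tags: iterates an illustrative "cars" request attribute --%>
    <c:forEach var="car" items="${cars}">
        <p><c:out value="${car}"/></p>
    </c:forEach>
</body>
</html>
```

If the dependencies are wrong, the page will fail at translation time (for the taglib) or render `${...}` literally (if EL is not enabled), so this page doubles as a quick smoke test.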

April 12th, 2016

Posted In: Glassfish, Java, java ninja, Javaninja, JSP, Servlet Spec, Tomcat


Spring has a caching abstraction that makes it very easy to add caching to your code without making a lot of changes.

I will use this blog post to provide an example that uses Spring Java configuration to demonstrate the configuration and usage of the basic cache methods.
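As a preview of where this is headed, here is a minimal sketch of that Java configuration, assuming Spring 4.x and Ehcache 2.x on the classpath. The class names and the getCar method are illustrative; the "cars" cache is the one defined in the ehcache.xml shown later in the post.

```java
// A minimal sketch, not the post's exact code: @EnableCaching turns on the
// abstraction, and the bean adapts Ehcache's CacheManager for Spring.
import net.sf.ehcache.CacheManager;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
public class CacheConfig {

    // Wraps the Ehcache CacheManager (configured from ehcache.xml on the
    // classpath) so Spring's caching annotations can use it.
    @Bean
    public EhCacheCacheManager cacheManager() {
        return new EhCacheCacheManager(CacheManager.getInstance());
    }
}

// Illustrative service: the first call for a given id executes the method;
// subsequent calls with the same id are served from the "cars" cache.
@Service
class CarService {
    @Cacheable("cars")
    public String getCar(long id) {
        return "car-" + id; // stand-in for an expensive lookup
    }
}
```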

Maven Dependencies

The following Maven dependencies are needed to enable the Spring Caching abstraction.

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>4.2.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache</artifactId>
    <version>2.10.1</version>
</dependency>

My entire pom.xml file is below:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.javaninja</groupId>
    <artifactId>spring-ehcache</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context-support</artifactId>
            <version>4.2.4.RELEASE</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-logging</artifactId>
                    <groupId>commons-logging</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
            <version>2.10.1</version>
        </dependency>

        <!--
            Needed for the EqualsBuilder and HashBuilder
        -->
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.4</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>4.2.4.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>

        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.1.5</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>1.7.16</version>
        </dependency>
    </dependencies>


    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>

    </build>

</project>

Ehcache configuration

Next you will need an ehcache.xml file to configure your cache. An example file is located here.

I defined a cache named cars. It holds up to 1,000 entries that never expire. If the cache fills up, entries are evicted according to the memoryStoreEvictionPolicy, which is LRU by default. My cache is configured as follows:

    <cache name="cars" maxEntriesLocalHeap="1000" eternal="true"/>

My entire ehcache.xml (which contains explanations for the various options) is below:

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../config/ehcache.xsd">

    <!--<ehcache xsi:noNamespaceSchemaLocation="ehcache.xsd"-->
         <!--updateCheck="true"-->
         <!--monitoring="autodetect"-->
         <!--dynamicConfig="true"-->
         <!--xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">-->
    <!--
        DiskStore configuration
        =======================

        The diskStore element is optional. To turn off disk store path creation, comment out the diskStore
        element below.

        Configure it if you have disk persistence enabled for any cache or if you use
        unclustered indexed search.

        If it is not configured, and a cache is created which requires a disk store, a warning will be
         issued and java.io.tmpdir will automatically be used.

        diskStore has only one attribute - "path". It is the path to the directory where
        any required disk files will be created.

        If the path is one of the following Java System Property it is replaced by its value in the
        running VM. For backward compatibility these should be specified without being enclosed in the ${token}
        replacement syntax.

        The following properties are translated:
        * user.home - User's home directory
        * user.dir - User's current working directory
        * java.io.tmpdir - Default temp file path
        * ehcache.disk.store.dir - A system property you would normally specify on the command line
          e.g. java -Dehcache.disk.store.dir=/u01/myapp/diskdir ...

        Subdirectories can be specified below the property e.g. java.io.tmpdir/one

        -->
    <diskStore path="java.io.tmpdir"/>
    <!--
        TransactionManagerLookup configuration
        ======================================
        This class is used by ehcache to look up the JTA TransactionManager used in an application
        using an XA enabled ehcache. If no class is specified then DefaultTransactionManagerLookup
        will find the TransactionManager in the following order

         *GenericJNDI (i.e. jboss, where the property jndiName controls the name of the
                        TransactionManager object to look up)
         *Bitronix
         *Atomikos

        You can provide your own lookup class that implements the
        net.sf.ehcache.transaction.manager.TransactionManagerLookup interface.
        -->
    <transactionManagerLookup class="net.sf.ehcache.transaction.manager.DefaultTransactionManagerLookup"
                              properties="jndiName=java:/TransactionManager"
                              propertySeparator=";"/>
    <!--
        CacheManagerEventListener
        =========================
        Specifies a CacheManagerEventListenerFactory which is notified when Caches are added
        or removed from the CacheManager.

        The attributes of CacheManagerEventListenerFactory are:
        * class - a fully qualified factory class name
        * properties - comma separated properties having meaning only to the factory.

        Sets the fully qualified class name to be registered as the CacheManager event listener.

        The events include:
        * adding a Cache
        * removing a Cache

        Callbacks to listener methods are synchronous and unsynchronized. It is the responsibility
        of the implementer to safely handle the potential performance and thread safety issues
        depending on what their listener is doing.

        If no class is specified, no listener is created. There is no default.
        -->
    <cacheManagerEventListenerFactory class="" properties=""/>
    <!--
        TerracottaConfig
        ========================
        (Enable for Terracotta clustered operation)

        Note: You need to install and run one or more Terracotta servers to use Terracotta clustering.
        See http://www.terracotta.org/web/display/orgsite/Download.

        Specifies a TerracottaConfig which will be used to configure the Terracotta
        runtime for this CacheManager.

        Configuration can be specified in two main ways: by reference to a source of
        configuration or by use of an embedded Terracotta configuration file.

        To specify a reference to a source (or sources) of configuration, use the url
        attribute.  The url attribute must contain a comma-separated list of:
        * path to Terracotta configuration file (usually named tc-config.xml)
        * URL to Terracotta configuration file
        * <server host>:<port> of running Terracotta Server instance

        Simplest example for pointing to a Terracotta server on this machine:
        <terracottaConfig url="localhost:9510"/>

        This element has two attributes "rejoin" and "wanEnabledTSA", which can take values of either "true" or "false":
        <terracottaConfig rejoin="true" wanEnabledTSA="true" url="localhost:9510" />

        By default, these attributes are false.

        Without rejoin, if the Terracotta Server is restarted the client cannot connect back to the
        server. When enabled, this allows the client to connect to the new cluster without the
        need to restart the node.

        When wanEnabledTSA is true, the client will wait for the WAN Orchestrator to provide the
        list of WAN enabled caches. Once the Orchestrator is up and running then the client will proceed
        to create the clustered data structures.

        Example using a path to Terracotta configuration file:
        <terracottaConfig url="/app/config/tc-config.xml"/>

        Example using a URL to a Terracotta configuration file:
        <terracottaConfig url="http://internal/ehcache/app/tc-config.xml"/>

        Example using multiple Terracotta server instance URLs (for fault tolerance):
        <terracottaConfig url="host1:9510,host2:9510,host3:9510"/>

        To embed a Terracotta configuration file within the ehcache configuration, simply
        place a normal Terracotta XML config within the <terracottaConfig> element.

        Example:
        <terracottaConfig>
            <tc-config>
                <servers>
                    <server host="server1" name="s1"/>
                    <server host="server2" name="s2"/>
                </servers>
                <clients>
                    <logs>app/logs-%i</logs>
                </clients>
            </tc-config>
        </terracottaConfig>

        For more information on the Terracotta configuration, see the Terracotta documentation.
        -->
    <!--<terracottaConfig url="localhost:9510"/>-->
    <!--
        Cache configuration
        ===================

        The following attributes are required.

        name:
        Sets the name of the cache. This is used to identify the cache. It must be unique.

        maxEntriesLocalHeap:
        Sets the maximum number of objects that will be created in memory.  0 = no limit.
        In practice no limit means Integer.MAX_SIZE (2147483647) unless the cache is distributed
        with a Terracotta server in which case it is limited by resources.

        maxEntriesLocalDisk:
        Sets the maximum number of objects that will be maintained in the DiskStore
        The default value is zero, meaning unlimited.

        eternal:
        Sets whether elements are eternal. If eternal,  timeouts are ignored and the
        element is never expired.

        The following attributes and elements are optional.

        maxEntriesInCache:
        This feature is applicable only to Terracotta distributed caches.
        Sets the maximum number of entries that can be stored in the cluster. 0 = no limit.
        Note that clustered cache will still perform eviction if resource usage requires it.
        This property can be modified dynamically while the cache is operating.

        overflowToOffHeap:
        (boolean) This feature is available only in enterprise versions of Ehcache.
        When set to true, enables the cache to utilize off-heap memory
        storage to improve performance. Off-heap memory is not subject to Java
        GC. The default value is false.

        maxBytesLocalHeap:
        Defines how many bytes the cache may use from the VM's heap. If a CacheManager
        maxBytesLocalHeap has been defined, this Cache's specified amount will be
        subtracted from the CacheManager. Other caches will share the remainder.
        This attribute's values are given as <number>k|K|m|M|g|G for
        kilobytes (k|K), megabytes (m|M), or gigabytes (g|G).
        For example, maxBytesLocalHeap="2g" allots 2 gigabytes of heap memory.
        If you specify a maxBytesLocalHeap, you can't use the maxEntriesLocalHeap attribute.
        maxEntriesLocalHeap can't be used if a CacheManager maxBytesLocalHeap is set.

        Elements put into the cache will be measured in size using net.sf.ehcache.pool.sizeof.SizeOf
        If you wish to ignore some part of the object graph, see net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf

        maxBytesLocalOffHeap:
        This feature is available only in enterprise versions of Ehcache.
        Sets the amount of off-heap memory this cache can use, and will reserve.

        This setting will set overflowToOffHeap to true. Set explicitly to false to disable overflow behavior.

        Note that it is recommended to set maxEntriesLocalHeap to at least 100 elements
        when using an off-heap store, otherwise performance will be seriously degraded,
        and a warning will be logged.

        The minimum amount that can be allocated is 128MB. There is no maximum.

        maxBytesLocalDisk:
        As for maxBytesLocalHeap, but specifies the limit of disk storage this cache will ever use.

        timeToIdleSeconds:
        Sets the time to idle for an element before it expires.
        i.e. The maximum amount of time between accesses before an element expires
        Is only used if the element is not eternal.
        Optional attribute. A value of 0 means that an Element can idle for infinity.
        The default value is 0.

        timeToLiveSeconds:
        Sets the time to live for an element before it expires.
        i.e. The maximum time between creation time and when an element expires.
        Is only used if the element is not eternal.
        Optional attribute. A value of 0 means that an Element can live for infinity.
        The default value is 0.

        diskExpiryThreadIntervalSeconds:
        The number of seconds between runs of the disk expiry thread. The default value
        is 120 seconds.

        diskSpoolBufferSizeMB:
        This is the size to allocate the DiskStore for a spool buffer. Writes are made
        to this area and then asynchronously written to disk. The default size is 30MB.
        Each spool buffer is used only by its cache. If you get OutOfMemory errors consider
        lowering this value. To improve DiskStore performance consider increasing it. Trace level
        logging in the DiskStore will show if put back ups are occurring.

        clearOnFlush:
        whether the MemoryStore should be cleared when flush() is called on the cache.
        By default, this is true i.e. the MemoryStore is cleared.

        memoryStoreEvictionPolicy:
        Policy would be enforced upon reaching the maxEntriesLocalHeap limit. Default
        policy is Least Recently Used (specified as LRU). Other policies available -
        First In First Out (specified as FIFO) and Less Frequently Used
        (specified as LFU)

        copyOnRead:
        Whether an Element is copied when being read from a cache.
        By default this is false.

        copyOnWrite:
        Whether an Element is copied when being added to the cache.
        By default this is false.

        Cache persistence is configured through the persistence sub-element.  The attributes of the
        persistence element are:

        strategy:
        Configures the type of persistence provided by the configured cache.  This must be one of the
        following values:

        * localRestartable - Enables the RestartStore and copies all cache entries (on-heap and/or off-heap)
        to disk. This option provides fast restartability with fault tolerant cache persistence on disk.
        It is available for Enterprise Ehcache users only.

        * localTempSwap - Swaps cache entries (on-heap and/or off-heap) to disk when the cache is full.
        "localTempSwap" is not persistent.

        * none - Does not persist cache entries.

        * distributed - Defers to the <terracotta> configuration for persistence settings. This option
        is not applicable for standalone.

        synchronousWrites:
        When set to true, write operations on the cache do not return until after the operation's data has been
        successfully flushed to the disk storage.  This option is only valid when used with the "localRestartable"
        strategy, and defaults to false.

        The following example configuration shows a cache configured for localTempSwap restartability.

        <cache name="persistentCache" maxEntriesLocalHeap="1000">
            <persistence strategy="localTempSwap"/>
        </cache>

        Cache elements can also contain sub elements which take the same format of a factory class
        and properties. Defined sub-elements are:

        * cacheEventListenerFactory - Enables registration of listeners for cache events, such as
          put, remove, update, and expire.

        * bootstrapCacheLoaderFactory - Specifies a BootstrapCacheLoader, which is called by a
          cache on initialisation to prepopulate itself.

        * cacheExtensionFactory - Specifies a CacheExtension, a generic mechanism to tie a class
          which holds a reference to a cache to the cache lifecycle.

        * cacheExceptionHandlerFactory - Specifies a CacheExceptionHandler, which is called when
          cache exceptions occur.

        * cacheLoaderFactory - Specifies a CacheLoader, which can be used both asynchronously and
          synchronously to load objects into a cache. More than one cacheLoaderFactory element
          can be added, in which case the loaders form a chain which are executed in order. If a
          loader returns null, the next in chain is called.

        * copyStrategy - Specifies a fully qualified class which implements
          net.sf.ehcache.store.compound.CopyStrategy. This strategy will be used for copyOnRead
          and copyOnWrite in place of the default which is serialization.

        Example of cache level resource tuning:
        <cache name="memBound" maxBytesLocalHeap="100m" maxBytesLocalOffHeap="4g" maxBytesLocalDisk="200g" />


        Cache Event Listeners
        +++++++++++++++++++++

        All cacheEventListenerFactory elements can take an optional property listenFor that describes
        which events will be delivered in a clustered environment.  The listenFor attribute has the
        following allowed values:

        * all - the default is to deliver all local and remote events
        * local - deliver only events originating in the current node
        * remote - deliver only events originating in other nodes

        Example of setting up a logging listener for local cache events:

        <cacheEventListenerFactory class="my.company.log.CacheLogger"
            listenFor="local" />


        Search
        ++++++

        A <cache> can be made searchable by adding a <searchable/> sub-element. By default the keys
        and value objects of elements put into the cache will be attributes against which
        queries can be expressed.

        <cache>
            <searchable/>
        </cache>


        An "attribute" of the cache elements can also be defined to be searchable. In the example below
        an attribute with the name "age" will be available for use in queries. The value for the "age"
        attribute will be computed by calling the method "getAge()" on the value object of each element
        in the cache. See net.sf.ehcache.search.attribute.ReflectionAttributeExtractor for the format of
        attribute expressions. Attribute values must also conform to the set of types documented in the
        net.sf.ehcache.search.attribute.AttributeExtractor interface

        <cache>
            <searchable>
                <searchAttribute name="age" expression="value.getAge()"/>
            </searchable>
        </cache>


        Attributes may also be defined using a JavaBean style. With the following attribute declaration
        a public method getAge() will be expected to be found on either the key or value for cache elements

        <cache>
            <searchable>
                <searchAttribute name="age"/>
            </searchable>
        </cache>

        In more complex situations you can create your own attribute extractor by implementing the
        AttributeExtractor interface. Providing your extractor class is shown in the following example:

        <cache>
            <searchable>
                <searchAttribute name="age" class="com.example.MyAttributeExtractor"/>
            </searchable>
        </cache>

        Use properties to pass state to your attribute extractor if needed. Your implementation must provide
        a public constructor that takes a single java.util.Properties instance

        <cache>
            <searchable>
                <searchAttribute name="age" class="com.example.MyAttributeExtractor" properties="foo=1,bar=2"/>
            </searchable>
        </cache>

        If you intend to use dynamic attribute extraction (see net.sf.ehcache.Cache.registerDynamicAttributesExtractor) then
        you need to enable it as follows:

        <cache>
            <searchable allowDynamicIndexing="true"/>
        </cache>


        Cache Exception Handling
        ++++++++++++++++++++++++

        By default, most cache operations will propagate a runtime CacheException on failure. An
        interceptor, using a dynamic proxy, may be configured so that a CacheExceptionHandler can
        be configured to intercept Exceptions. Errors are not intercepted.

        It is configured as per the following example:

          <cacheExceptionHandlerFactory class="com.example.ExampleExceptionHandlerFactory"
                                          properties="logLevel=FINE"/>

        Caches with ExceptionHandling configured are not of type Cache, but are of type Ehcache only,
        and are not available using CacheManager.getCache(), but using CacheManager.getEhcache().


        Cache Loader
        ++++++++++++

        A default CacheLoader may be set which loads objects into the cache through asynchronous and
        synchronous methods on Cache. This is different to the bootstrap cache loader, which is used
        only in distributed caching.

        It is configured as per the following example:

            <cacheLoaderFactory class="com.example.ExampleCacheLoaderFactory"
                                          properties="type=int,startCounter=10"/>

        Element value comparator
        ++++++++++++++++++++++++

        These two cache atomic methods:
          removeElement(Element e)
          replace(Element old, Element element)

        rely on comparison of cached elements value. The default implementation relies on Object.equals()
        but that can be changed in case you want to use a different way to compute equality of two elements.

        This is configured as per the following example:

        <elementValueComparator class="com.company.xyz.MyElementComparator"/>

        The MyElementComparator class must implement the net.sf.ehcache.store.ElementValueComparator
        interface. The default implementation is net.sf.ehcache.store.DefaultElementValueComparator.


        SizeOf Policy
        +++++++++++++

        Control how deep the SizeOf engine can go when sizing on-heap elements.

        This is configured as per the following example:

        <sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort"/>

        maxDepth controls how many linked objects can be visited before the SizeOf engine takes any action.
        maxDepthExceededBehavior specifies what happens when the max depth is exceeded while sizing an object graph.
         "continue" makes the SizeOf engine log a warning and continue the sizing. This is the default.
         "abort"    makes the SizeOf engine abort the sizing, log a warning and mark the cache as not correctly tracking
                    memory usage. This makes Ehcache.hasAbortedSizeOf() return true when this happens.

        The SizeOf policy can be configured at the cache manager level (directly under <ehcache>) and at
        the cache level (under <cache> or <defaultCache>). The cache policy always overrides the cache manager
        one if both are set. This element has no effect on distributed caches.

        Transactions
        ++++++++++++

        To enable transactions on a cache, set the transactionalMode attribute:

        transactionalMode="xa" - high performance JTA/XA implementation
        transactionalMode="xa_strict" - canonically correct JTA/XA implementation
        transactionalMode="local" - high performance local transactions involving caches only
        transactionalMode="off" - the default, no transactions

        If set, all cache operations will need to be done through transactions.

        To prevent users keeping references on stored elements and modifying them outside of any transaction's control,
        transactions also require the cache to be configured copyOnRead and copyOnWrite.

        CacheWriter
        ++++++++++++

        A CacheWriter can be set to write to an underlying resource. Only one CacheWriter can be
        configured per cache.

        The following is an example of how to configure CacheWriter for write-through:

            <cacheWriter writeMode="write-through" notifyListenersOnException="true">
                <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                                    properties="type=int,startCounter=10"/>
            </cacheWriter>

        The following is an example of how to configure CacheWriter for write-behind:

            <cacheWriter writeMode="write-behind" minWriteDelay="1" maxWriteDelay="5"
                         rateLimitPerSecond="5" writeCoalescing="true" writeBatching="true" writeBatchSize="1"
                         retryAttempts="2" retryAttemptDelaySeconds="1">
                <cacheWriterFactory class="net.sf.ehcache.writer.TestCacheWriterFactory"
                                    properties="type=int,startCounter=10"/>
            </cacheWriter>

        The cacheWriter element has the following attributes:
        * writeMode: the write mode, write-through or write-behind

        These attributes only apply to write-through mode:
        * notifyListenersOnException: Sets whether to notify listeners when an exception occurs on a writer operation.

        These attributes only apply to write-behind mode:
        * minWriteDelay: Set the minimum number of seconds to wait before writing behind. If set to a value greater than 0,
          it permits operations to build up in the queue. This is different from the maximum write delay in that by waiting
          a minimum amount of time, work is always being built up. If the minimum write delay is set to zero and the
          CacheWriter performs its work very quickly, the overhead of processing the write behind queue items becomes very
          noticeable in a cluster since all the operations might be done for individual items instead of for a collection
          of them.
        * maxWriteDelay: Set the maximum number of seconds to wait before writing behind. If set to a value greater than 0,
          it permits operations to build up in the queue to enable effective coalescing and batching optimisations.
        * writeBatching: Sets whether to batch write operations. If set to true, writeAll and deleteAll will be called on
          the CacheWriter rather than write and delete being called for each key. Resources such as databases can perform
          more efficiently if updates are batched, thus reducing load.
        * writeBatchSize: Sets the number of operations to include in each batch when writeBatching is enabled. If there are
          fewer entries in the write-behind queue than the batch size, the current queue length is used instead.
        * rateLimitPerSecond: Sets the maximum number of write operations to allow per second when writeBatching is enabled.
        * writeCoalescing: Sets whether to use write coalescing. If set to true and multiple operations on the same key are
          present in the write-behind queue, only the latest write is done, as the others are redundant.
        * retryAttempts: Sets the number of times an operation is retried in the CacheWriter; the retries happen after the
          original operation fails.
        * retryAttemptDelaySeconds: Sets the number of seconds to wait before retrying a failed operation.

        Pinning
        +++++++

        Use this element when data should remain in the cache regardless of resource constraints.
        Unexpired entries can never be flushed to a lower tier or be evicted.

        This element has a required attribute (store) to specify which data tiers the cache should be pinned to:
        * localMemory: Cache data is pinned to the local heap (or off-heap for BigMemory Go and BigMemory Max).
        * inCache: Cache data is pinned in the cache, which can be any tier in which cache data is stored.

        Example:
            <pinning store="inCache"/>

        Cache Extension
        +++++++++++++++

        CacheExtensions are a general purpose mechanism to allow generic extensions to a Cache.
        CacheExtensions are tied into the Cache lifecycle.

        CacheExtensions are created using the CacheExtensionFactory, which has a
        <code>createCacheExtension()</code> method that takes a Cache and properties as
        parameters. It can thus call back into any public method on Cache, including, of
        course, the load methods.

        Extensions are added as per the following example:

             <cacheExtensionFactory class="com.example.FileWatchingCacheRefresherExtensionFactory"
                                 properties="refreshIntervalMillis=18000, loaderTimeout=3000,
                                             flushPeriod=whatever, someOtherProperty=someValue ..."/>

        Cache Decorator Factory
        +++++++++++++++++++++++

        Cache decorators can be configured directly in ehcache.xml. The decorators will be created and added to the CacheManager.
        It accepts the name of a concrete class that extends net.sf.ehcache.constructs.CacheDecoratorFactory
        The properties will be parsed according to the delimiter (default is comma ',') and passed to the concrete factory's
        <code>createDecoratedEhcache(Ehcache cache, Properties properties)</code> method along with the reference to the owning cache.

        It is configured as per the following example:

            <cacheDecoratorFactory
          class="com.company.DecoratedCacheFactory"
          properties="property1=true ..." />

        Distributed Caching with Terracotta
        +++++++++++++++++++++++++++++++++++

        Distributed Caches connect to a Terracotta Server Array. They are configured with the <terracotta> sub-element.

        The <terracotta> sub-element has the following attributes:

        * clustered=true|false - indicates whether this cache should be clustered (distributed) with Terracotta. By
          default, if the <terracotta> element is included, clustered=true.

        * copyOnRead=true|false - indicates whether cache values are deserialized on every read or if the
          materialized cache value can be re-used between get() calls. This setting is useful if a cache
          is being shared by callers with disparate classloaders or to prevent local drift if keys/values
          are mutated locally without being put back in the cache.

          The default is false.

        * consistency=strong|eventual - Indicates whether this cache should have strong consistency or eventual
          consistency. The default is eventual. See the documentation for the meaning of these terms.

        * synchronousWrites=true|false

          Synchronous writes (synchronousWrites="true")  maximize data safety by blocking the client thread until
          the write has been written to the Terracotta Server Array.

          This option is only available with consistency=strong. The default is false.

        * concurrency - the number of segments that will be used by the map underneath the Terracotta Store.
          It is optional and has a default value of 0, which means that defaults will be used based on the internal
          Map underneath the store.

          This value cannot be changed programmatically once a cache is initialized.

        The <terracotta> sub-element also has a <nonstop> sub-element to allow configuration of cache behavior if a
        distributed cache operation cannot be completed within a set time, or in the event of a clusterOffline message.
        If this element does not appear, nonstop behavior is off.

        <nonstop> has the following attributes:

        *  enabled="true" - defaults to true.

        *  timeoutMillis - An SLA setting; if a cache operation takes longer than the allowed milliseconds, it will time out.

        *  searchTimeoutMillis - If a cache search operation in nonstop mode takes longer than the allowed milliseconds, it will time out.

        *  immediateTimeout="true|false" - Whether to time out immediately on receipt of a ClusterOffline event, which
           indicates that communications with the Terracotta Server Array were interrupted.

        <nonstop> has one sub-element, <timeoutBehavior> which has the following attribute:

        *  type="noop|exception|localReads|localReadsAndExceptionOnWrite" - What to do when a timeout has occurred. Exception is the default.
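
        An example enabling nonstop behavior with explicit settings (the values shown are illustrative only):
            <terracotta>
                <nonstop enabled="true" timeoutMillis="5000" immediateTimeout="true">
                    <timeoutBehavior type="localReads"/>
                </nonstop>
            </terracotta>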

        Simplest example to indicate clustering:
            <terracotta/>

        To indicate the cache should not be clustered (or remove the <terracotta> element altogether):
            <terracotta clustered="false"/>

        To indicate the cache should be clustered using "eventual" consistency mode for better performance:
            <terracotta clustered="true" consistency="eventual"/>

        To indicate the cache should be clustered using synchronous-write locking level:
            <terracotta clustered="true" synchronousWrites="true"/>
        -->
    <!--
        Default Cache configuration. These settings will be applied to caches
        created programmatically using CacheManager.add(String cacheName).
        This element is optional; if it is not present, calling
        CacheManager.add(String cacheName) will throw a CacheException.

        The defaultCache has an implicit name "default" which is a reserved cache name.
        -->
    <defaultCache maxEntriesLocalHeap="0" eternal="false" timeToIdleSeconds="1200" timeToLiveSeconds="1200">
        <terracotta/>
    </defaultCache>

    <cache name="cars" maxEntriesLocalHeap="1000" eternal="true"/>

    <!--
        Sample caches. The following are example caches. Remove these before use.
        -->
    <!--
        Sample cache named sampleCache1
        This cache contains a maximum in memory of 10000 elements, and will expire
        an element if it is idle for more than 5 minutes and lives for more than
        10 minutes.

        If there are more than 10000 elements it will overflow to the
        disk cache, which in this configuration will go to wherever java.io.tmpdir is
        defined on your system. On a standard Linux system this will be /tmp.
        -->
<!--
    <cache name="sampleCache1"
           maxEntriesLocalHeap="10000"
           maxEntriesLocalDisk="1000"
           eternal="false"
           diskSpoolBufferSizeMB="20"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           memoryStoreEvictionPolicy="LFU"
           transactionalMode="off">
        <persistence strategy="localTempSwap"/>
    </cache>
-->
    <!--
        Sample cache named sampleCache2
        This cache has a maximum of 1000 elements in memory. There is no overflow to disk, so 1000
        is also the maximum cache size. Note that when a cache is eternal, timeToLive and
        timeToIdle are not used and do not need to be specified.
        -->
<!--
    <cache name="sampleCache2" maxEntriesLocalHeap="1000" eternal="true" memoryStoreEvictionPolicy="FIFO"/>
-->
    <!--
        Sample cache named sampleCache3. This cache overflows to disk. The disk store is
        persistent between cache and VM restarts. The disk expiry thread interval is set to 10
        minutes, overriding the default of 2 minutes.
        -->
<!--
    <cache name="sampleCache3"
           maxEntriesLocalHeap="500"
           eternal="false"
           overflowToDisk="true"
           diskPersistent="true"
           timeToIdleSeconds="300"
           timeToLiveSeconds="600"
           diskExpiryThreadIntervalSeconds="1"
           memoryStoreEvictionPolicy="LFU"></cache>
-->
    <!--
        Sample Terracotta clustered cache named sampleTerracottaCache.
        This cache uses Terracotta to cluster the contents of the cache.
        -->
<!--
    <cache name="sampleTerracottaCache"
           maxBytesLocalHeap="10m"
           eternal="false"
           timeToIdleSeconds="3600"
           timeToLiveSeconds="1800">
        <terracotta/>
    </cache>
-->
    <!--
          Sample xa enabled cache named xaCache
        <cache name="xaCache"
               maxEntriesLocalHeap="500"
               eternal="false"
               timeToIdleSeconds="300"
               timeToLiveSeconds="600"
               diskExpiryThreadIntervalSeconds="1"
               transactionalMode="xa_strict">
        </cache>
        -->
    <!--
          Sample copy on both read and write cache named copyCache
          using the default ReadWriteSerializationCopyStrategy (explicitly configured here as an example).
          The class could be any implementation of net.sf.ehcache.store.compound.CopyStrategy.
        <cache name="copyCache"
               maxEntriesLocalHeap="500"
               eternal="false"
               timeToIdleSeconds="300"
               timeToLiveSeconds="600"
               diskExpiryThreadIntervalSeconds="1"
               copyOnRead="true"
               copyOnWrite="true">
            <copyStrategy class="net.sf.ehcache.store.compound.ReadWriteSerializationCopyStrategy" />
        </cache>
        -->
    <!--
          Sample, for Enterprise Ehcache only, demonstrating a tiered cache with in-memory, off-heap and disk stores.
          In this example the in-memory (on-heap) store is limited to 10,000 items ... which for example for 1k items
          would use 10MB of memory, the off-heap store is limited to 4GB and the disk store is unlimited in size.
        <cache name="tieredCache"
               maxEntriesLocalHeap="10000"
               eternal="false"
               timeToLiveSeconds="600"
               overflowToOffHeap="true"
               maxBytesLocalOffHeap="4g"
               diskExpiryThreadIntervalSeconds="1">
            <persistence strategy="localTempSwap"/>
         </cache>
        -->
    <!--
          Sample, for Enterprise Ehcache only, demonstrating a restartable cache with in-memory and off-heap stores.
        <cache name="restartableCache"
               maxEntriesLocalHeap="10000"
               eternal="true"
               overflowToOffHeap="true"
               maxBytesLocalOffHeap="4g">
             <persistence strategy="localRestartable"/>
         </cache>
         -->
</ehcache>

Spring Configuration

There are three things that are needed to enable the Spring caching abstraction when using Spring Java configuration.

  • @EnableCaching – Enables Spring’s annotation-driven cache management capability, similar to the support found in Spring’s XML namespace.
  • CacheManager – Spring’s central cache manager SPI
  • EhCacheManagerFactoryBean – FactoryBean that exposes an EhCache CacheManager instance

Below is the code:

package com.javaninja.spring.ehcache;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

/**
 * @author norris.shelton
 */
@Configuration
@ComponentScan
@EnableCaching
public class SpringContext {

    /**
     * Creates a Spring cache manager.
     * @return spring cache manager
     */
    @Bean
    public CacheManager cacheManager() {
        return new EhCacheCacheManager(ehCacheManagerFactoryBean().getObject());
    }

    /**
     * Creates the Ehcache manager factory bean.  Sets the shared property to true so that other users in this
     * classloader can access the same CacheManager instance.
     * @return Shared Ehcache manager factory bean
     */
    @Bean
    public EhCacheManagerFactoryBean ehCacheManagerFactoryBean() {
        EhCacheManagerFactoryBean ehCacheManagerFactoryBean = new EhCacheManagerFactoryBean();
        ehCacheManagerFactoryBean.setShared(true);
        return ehCacheManagerFactoryBean;
    }
}

Service Code

Below is the service class that I added caching to. Note the @Cacheable annotation. The simplest @Cacheable usage takes the name of the cache to use for the method. The method parameters form the basis for the cache key, and the return object becomes the cache value. I added some logging so that we can use the log to verify that the method was not entered when there was a cache hit.

package com.javaninja.spring.ehcache;

import org.apache.commons.lang3.RandomStringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

import java.util.LinkedList;
import java.util.List;

/**
 * @author norris.shelton
 */
@Service
public class CarService {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private List<Car> cars = new LinkedList<>();

    /**
     * Creates a new car.
     * @return Car object with values populated.
     */
    public Car createCar() {
        Car car = new Car();
        car.setMake("Make-" + RandomStringUtils.randomAlphabetic(10));
        car.setModel("Model-" + RandomStringUtils.randomAlphabetic(10));
        car.setVin("Vin-" + RandomStringUtils.randomAlphanumeric(30));

        // Add to my list of all cars
        cars.add(car);
        logger.info("added car to list {}", car);

        return car;
    }

    @Cacheable(value = "cars")
    public Car getCar(String vin) {
        logger.info("inside ");
        Car car = null;
        for (Car car1 : cars) {
            logger.info("iterating over car");
            if (car1.getVin().equals(vin)) {
                car = car1;
                logger.info("found car {}", car1);
            }
        }
        return car;
    }
}
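
By default, Spring derives the cache key from all of the method parameters. If getCar later gained additional parameters, the key could be pinned to the VIN explicitly via @Cacheable's key attribute, which takes a SpEL expression. Below is a sketch of such a variation (it is not part of the project above, and findCarByVin is a hypothetical helper standing in for the loop):

@Cacheable(value = "cars", key = "#vin")
public Car getCar(String vin) {
    // same lookup logic as shown above; only the cache key derivation changes
    return findCarByVin(vin);  // hypothetical helper, for illustration only
}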

Maven Test Dependencies

I added the following dependencies to enable the proper test support.

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>4.2.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>

JUnit Test

I created a JUnit test class. It pre-loads all of the cars into the service’s car list. The test calls the method twice. The first time, the cache is empty and every item has to be iterated over. The second time, the cache is fully warmed and there is no iteration.

package com.javaninja.spring.ehcache;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

/**
 * @author norris.shelton
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = SpringContext.class)
public class TestCarService {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private CarService carService;

    private String vin;

    /**
     * Creates cars to look for.
     */
    @Before
    public void before() {
        Car car;
        for (int i = 0; i < 10; i++) {
            car = carService.createCar();
            assertNotNull(car);
            assertNotNull(car.getMake());
            assertNotNull(car.getModel());
            assertNotNull(car.getVin());

            // store the value of the last car
            vin = car.getVin();
        }
    }

    /**
     * Tests the get car method.
     * @throws Exception
     */
    @Test
    public void testGetCar() throws Exception {
        // the first time the get cars method is called, the cache is empty
        logger.info("beginning of first run");
        Car car = carService.getCar(vin);  // find the last car added
        assertNotNull(car);
        assertEquals(vin, car.getVin());
        logger.info("end of first run");

        // the second time the get cars method is called, the cache is fully populated
        logger.info("beginning of second run");
        car = carService.getCar(vin);  // find the last car added
        assertNotNull(car);
        assertEquals(vin, car.getVin());
        logger.info("end of second run");
    }
}

Below is the log for the test run. You can clearly see that during the first run, the list of cars has to be iterated over. During the next run, the method is not entered at all. The method call results in a cache hit.

10:07:53.156 [main] INFO  c.j.spring.ehcache.TestCarService - beginning of first run
10:07:53.160 [main] INFO  c.j.spring.ehcache.CarService - inside 
10:07:53.160 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.160 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.160 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.160 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - iterating over car
10:07:53.161 [main] INFO  c.j.spring.ehcache.CarService - found car com.javaninja.spring.ehcache.Car@5456afaa[make=Make-WmlGJCSagi,model=Model-dYdOzzPNHi,vin=Vin-zfXislLcOjxlPWMNBHy6RnmvS5xAWy]
10:07:53.163 [main] INFO  c.j.spring.ehcache.TestCarService - end of first run
10:07:53.163 [main] INFO  c.j.spring.ehcache.TestCarService - beginning of second run
10:07:53.163 [main] INFO  c.j.spring.ehcache.TestCarService - end of second run

Summary

You now have the information needed to enable the base Spring cache abstraction using Ehcache as the implementation.

The example project for this blog entry is located on GitHub at sheltonn / spring-ehcache

February 29th, 2016

Posted In: Ehcache, Java, java ninja, Javaninja, Spring


Spring Data JPA is my favorite way to interact with the database in Java. It makes short work of defining queries. I get to spend my time on business logic instead of keeping the compiler happy. However, I didn’t know how to use Spring Data JPA when I had two entity managers. It boiled down to a couple of minor changes. Normally, you would add the following to your XML to enable the repositories.

<jpa:repositories base-package="com.twinspires.cam"/>

This doesn’t work here because Spring can’t find an EntityManager bean named entityManager; in my example, there are two entity manager factories. Here is how they are defined:

<bean id="rwEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
	  p:dataSource-ref="readWriteDataSource"
	  p:packagesToScan="com.javaninja.jpa"
	  p:jpaVendorAdapter-ref="jpaVendorAdapter"/>

<bean id="roEntityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
	  p:dataSource-ref="readOnlyDataSource"
	  p:packagesToScan="com.javaninja.jpa"
	  p:jpaVendorAdapter-ref="jpaVendorAdapter"/>

This requires that I have two jpa:repositories entries like the following:

<jpa:repositories base-package="com.twinspires.cam" entity-manager-factory-ref="roEntityManagerFactory"/>
<jpa:repositories base-package="com.twinspires.cam" entity-manager-factory-ref="rwEntityManagerFactory"/>

Now that I have two entity managers wired up over the same packages and classes, I have another problem: I need to indicate in my code which entity manager to use for a given call. I do this by adding @Transactional with the name of the transaction manager. As an extra note, when I declare a transaction manager and set the entity manager factory it works on, I also add a qualifier, which creates an alias for that transaction manager. This allows me to say that the ro transaction manager applies to this code instead of spelling out roTransactionManager. Here are the transaction manager declarations:

<bean id="rwTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
	<property name="entityManagerFactory" ref="rwEntityManagerFactory" />
	<qualifier value="rw"/>
</bean>
<bean id="roTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
	<property name="entityManagerFactory" ref="roEntityManagerFactory" />
	<qualifier value="ro"/>
</bean>

This makes it easy to use the alias in my repository:

@Transactional(value = "ro")
Customer findByThisAndThat(String thisValue, String thatValue);

February 27th, 2016

Posted In: Java, java ninja, Javaninja, Spring, Spring Data, Spring Data JPA


I had previously come up with a Spring JUnit configuration for Spring Batch that worked pretty well, described here. However, I wanted the ability to use transactions to roll back my test data for test repeatability. After much tinkering, this is what I came up with.

Spring Batch Test Maven Dependencies

I added the following dependencies to write the Spring Batch tests.

<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-test</artifactId>
    <version>${spring.batch.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>${spring.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

Spring Batch Test Utilities

Spring Batch provides the JobLauncherTestUtils to make it easier to test jobs. The gist of a job test class is:

JobExecution jobExecution = jobLauncherTestUtils.launchJob();

To test a step, you provide the step name to the launchStep method.

JobExecution jobExecution = jobLauncherTestUtils.launchStep("step1");

My Spring test context defines the JobLauncherTestUtils bean.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <import resource="classpath:applicationContext.xml"/>

    <bean id="jobLauncherTestUtils" class="org.springframework.batch.test.JobLauncherTestUtils"/>
</beans>

Job and Step Test

My Spring JUnit test class is fairly simple; it is no different from any other job or step test class. NOTE: I was not able to get transactions to work for a job or a step.

package com.javaninja.batch;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.test.JobLauncherTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import java.util.List;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

/**
 * @author norris.shelton
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class TestJobAndStep {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void testJob() throws Exception {
        commonAssertions(jobLauncherTestUtils.launchJob());
    }

    @Test
    public void testStep1() throws Exception {
        commonAssertions(jobLauncherTestUtils.launchStep("step1"));
    }

    private void commonAssertions(JobExecution jobExecution) {
        assertNotNull(jobExecution);

        BatchStatus batchStatus = jobExecution.getStatus();
        assertEquals(BatchStatus.COMPLETED, batchStatus);
        assertFalse(batchStatus.isUnsuccessful());

        ExitStatus exitStatus = jobExecution.getExitStatus();
        assertEquals("COMPLETED", exitStatus.getExitCode());
        assertEquals("", exitStatus.getExitDescription());

        List<Throwable> failureExceptions = jobExecution.getFailureExceptions();
        assertNotNull(failureExceptions);
        assertTrue(failureExceptions.isEmpty());
    }
}

Testing Step-Scope Components

I had more success with transactions when testing step-scoped components like the JPA-related reader and writer. By using the StepScopeTestExecutionListener in combination with the TransactionalTestExecutionListener, I was able to get transactions to work correctly.

The JpaPagingItemReader has the following methods that you need to be concerned with:

  • open – opens the input source.
  • read – reads the data.
  • close – closes the entity manager.

The JpaItemWriter provides the write method that handles all of the writing duties, including flushing the data.

The reader and writer test class is:

package com.javaninja.batch;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.database.JpaItemWriter;
import org.springframework.batch.item.database.JpaPagingItemReader;
import org.springframework.batch.test.MetaDataInstanceFactory;
import org.springframework.batch.test.StepScopeTestExecutionListener;
import org.springframework.batch.test.StepScopeTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
import org.springframework.test.context.transaction.TransactionalTestExecutionListener;
import org.springframework.transaction.annotation.Transactional;

import java.util.LinkedList;
import java.util.List;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;

/**
 * @author norris.shelton
 */
@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class,
                         StepScopeTestExecutionListener.class,
                         TransactionalTestExecutionListener.class})
@Transactional
@ContextConfiguration
public class TestReaderAndWriter {

    @Autowired
    private JpaPagingItemReader<CamAffiliateEntity> itemReader;

    @Autowired
    private JpaItemWriter<CamAffiliateEntity> itemWriter;

    @Test
    public void testReader() {
        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
        int count = 0;
        try {
            count = StepScopeTestUtils.doInStepScope(execution, () -> {
                int numStates = 0;
                itemReader.open(execution.getExecutionContext());
                CamAffiliateEntity camAffiliateEntity;
                try {
                    while ((camAffiliateEntity = itemReader.read()) != null) {
                        assertNotNull(camAffiliateEntity);
                        assertNotNull(camAffiliateEntity.getAffiliateId());
                        assertNotNull(camAffiliateEntity.getName());
                        assertNotNull(camAffiliateEntity.getChannelId());
                        numStates++;
                    }
                } finally {
                    try {
                        itemReader.close();
                    } catch (ItemStreamException e) {
                        fail(e.toString());
                    }
                }
                return numStates;
            });
        } catch (Exception e) {
            fail(e.toString());
        }
        assertEquals(12, count);
    }

    @Test
    public void testWriter() throws Exception {
        List<CamAffiliateEntity> usStateEntities = new LinkedList<>();
        CamAffiliateEntity usStateEntity;
        for (int i = 0; i < 100; i++) {
            usStateEntity = new CamAffiliateEntity();
            usStateEntity.setAffiliateId(i);
            usStateEntity.setName("TEST-DELETE-" + i);
            usStateEntity.setChannelId(13);  // test
            usStateEntities.add(usStateEntity);
        }

        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
        StepScopeTestUtils.doInStepScope(execution, () -> {
            itemWriter.write(usStateEntities);
            return null;
        });
    }
}

Summary

This setup provided me with the ability to run my Spring Batch JPA project’s reader and writer tests repeatedly without test data-related problems.

The entire project used to write this blog entry is located on GitHub at sheltonn / spring-batch-jpa

February 19th, 2016

Posted In: hibernate, Integration Tests, Java, java ninja, Javaninja, JUnit, Spring, Spring Batch, Test Driven Development, Unit Tests


This is a simple Spring Batch project. This implementation will read from a database table and write to a database table via JPA.

Maven Dependencies

The Maven dependency for Spring Batch is:

<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-core</artifactId>
    <version>${spring.batch.version}</version>
</dependency>

Please note that Spring Batch depends on the Spring Framework. Spring Batch 3.0.6 corresponds to Spring 4.0.5. Attempting to use a newer version of Spring will cause runtime errors.

You will also need dependencies for communicating with the database.

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-entitymanager</artifactId>
    <version>5.0.7.Final</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-dbcp2</artifactId>
    <version>2.1</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.38</version>
</dependency>

Below is my entire pom.xml.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.javaninja</groupId>
    <artifactId>spring-batch-jpa</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <!--This is the version of Spring that Spring Batch uses-->
        <spring.version>4.0.5.RELEASE</spring.version>
        <spring.batch.version>3.0.6.RELEASE</spring.batch.version>
        <slf4j.version>1.7.13</slf4j.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.batch</groupId>
            <artifactId>spring-batch-core</artifactId>
            <version>${spring.batch.version}</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-logging</artifactId>
                    <groupId>commons-logging</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-orm</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
            <version>5.0.7.Final</version>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-dbcp2</artifactId>
            <version>2.1</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-logging</artifactId>
                    <groupId>commons-logging</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>

        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.4</version>
        </dependency>

        <!--
            Logging
        -->
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.1.3</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
        </dependency>

        <!--
            Testing
        -->
        <dependency>
            <groupId>org.springframework.batch</groupId>
            <artifactId>spring-batch-test</artifactId>
            <version>${spring.batch.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.3</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Spring and Spring Batch Context

Job Configuration

You will need the standard JobRepository, JobLauncher and TransactionManager (See Database/JPA below).

<!--This repository is only really intended for use in testing and rapid prototyping-->
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"/>
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher"
      p:jobRepository-ref="jobRepository"/>

JpaPagingItemReader

The reading of data from the database via JPA is handled by a JpaPagingItemReader. The JpaPagingItemReader requires the following:

  • EntityManagerFactory – manages the entities (see JPA configuration below).
  • queryString – defines the query used to read the entities. In this case, it reads all entities of a given type.

The configuration for the JpaPagingItemReader is:

<!-- ItemReader which reads data from the database -->
<bean id="itemReader" class="org.springframework.batch.item.database.JpaPagingItemReader"
      p:entityManagerFactory-ref="entityManagerFactory"
      p:queryString="SELECT s FROM CamAffiliateEntity s"/>
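
Under the covers, the JpaPagingItemReader issues the query in pages (the default page size is 10) rather than loading every row at once. Conceptually, the paging loop looks something like the following plain-Java sketch; the names here are illustrative and are not Spring Batch internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class PagingReaderSketch {

    // (offset, pageSize) -> one page of results; stands in for the JPA query.
    private final BiFunction<Integer, Integer, List<String>> pageQuery;
    private final int pageSize;
    private List<String> currentPage = new ArrayList<>();
    private int nextOffset = 0;   // offset of the next page to fetch
    private int indexInPage = 0;

    public PagingReaderSketch(BiFunction<Integer, Integer, List<String>> pageQuery, int pageSize) {
        this.pageQuery = pageQuery;
        this.pageSize = pageSize;
    }

    // Returns the next item, fetching a new page when the current one is
    // exhausted; returns null at end of input, like ItemReader.read().
    public String read() {
        if (indexInPage >= currentPage.size()) {
            currentPage = pageQuery.apply(nextOffset, pageSize);
            nextOffset += pageSize;
            indexInPage = 0;
            if (currentPage.isEmpty()) {
                return null;
            }
        }
        return currentPage.get(indexInPage++);
    }
}
```

The paging is what keeps memory usage flat when the table is large; each chunk of the step only ever has one page of entities in play.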

JpaItemWriter

The writing of the JPA data is handled by JpaItemWriter. The JpaItemWriter also requires an EntityManagerFactory (see JPA configuration below).

The configuration for the JpaItemWriter is:

<!-- ItemWriter which writes the data to the database -->
<bean id="itemWriter" class="org.springframework.batch.item.database.JpaItemWriter"
    p:entityManagerFactory-ref="entityManagerFactory"/>

JPA Configuration

You will also need the following to configure the JPA support that reads and writes the records:

  • EntityManagerFactory – manages the entities.
  • JpaVendorAdapter – exposes vendor-specific JPA properties.
  • DataSource – manages database connections.
  • TransactionManager – manages transactions.

Spring Batch requires a transaction manager and so does Spring’s JPA support. In this case, I’m using the same transaction manager for both. I’m not sure if this is correct, but it appears to work.

<bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close"
      p:driverClassName="com.mysql.jdbc.Driver"
      p:url="jdbc:mysql://database.javaninja.com/batch?zeroDateTimeBehavior=convertToNull"
      p:username="javaninja"
      p:password="javaninja"/>
<!-- JPA EntityManagerFactory configuration -->
<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
      p:dataSource-ref="dataSource"
      p:packagesToScan="com.javaninja.batch"
      p:jpaVendorAdapter-ref="jpaVendorAdapter"/>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"/>

The entire Spring context is below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans   http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <import resource="classpath:META-INF/spring/batchContext.xml"/>

    <context:component-scan base-package="com.javaninja.batch"/>


    <!--This repository is only really intended for use in testing and rapid prototyping-->
    <bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"/>

    <bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher"
          p:jobRepository-ref="jobRepository"/>



    <bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close"
          p:driverClassName="com.mysql.jdbc.Driver"
          p:url="jdbc:mysql://database.javaninja.com/batch?zeroDateTimeBehavior=convertToNull"
          p:username="javaninja"
          p:password="javaninja"/>

    <!-- JPA EntityManagerFactory configuration -->
    <bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>

    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
          p:dataSource-ref="dataSource"
          p:packagesToScan="com.javaninja.batch"
          p:jpaVendorAdapter-ref="jpaVendorAdapter"/>

    <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"/>

    <!-- ItemReader which reads data from the database -->
    <bean id="itemReader" class="org.springframework.batch.item.database.JpaPagingItemReader"
          p:entityManagerFactory-ref="entityManagerFactory"
          p:queryString="SELECT s FROM CamAffiliateEntity s"/>

    <!-- ItemWriter which writes the data to the database -->
    <bean id="itemWriter" class="org.springframework.batch.item.database.JpaItemWriter"
        p:entityManagerFactory-ref="entityManagerFactory"/>

</beans>

Job Configuration

The XML configuration for the JPA batch job is pretty much the same as my other examples.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:batch="http://www.springframework.org/schema/batch"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd">

    <batch:job id="carJob">
        <batch:step id="step1">
            <batch:tasklet>
                <batch:chunk reader="itemReader" writer="itemWriter" commit-interval="10"/>
            </batch:tasklet>
        </batch:step>
    </batch:job>

</beans>

The entire project used to write this blog is located on GitHub sheltonn / spring-batch-jpa

February 19th, 2016

Posted In: Java, java ninja, Javaninja, Spring, Spring Batch

Leave a Comment

In a previous logback blog entry Hibernate Logging (e.g. JBOSS logging) I showed how to send the JBoss logging to your logback logging configuration.

One thing kept bothering me. If you turned on showSql, the SQL statements were not captured by your logging. I was fishing through the code and discovered why. Turning on showSql actually sets a variable named logToStdout, and just as the name says, the code logs to STDOUT.

if ( logToStdout ) {
	System.out.println( "Hibernate: " + statement );
}

This results in logging like the following:

Hibernate: insert into cam_affiliate (Channel_ID, Name, Affiliate_ID) values (?, ?, ?)

Logging Sql statements via Logback

A better way is to NOT set showSql to true, but to add the following logger to your Logback configuration.

<!-- Displays the Hibernate SQL statements in your log instead of STDOUT like showSql does-->
<logger name="org.hibernate.SQL" level="DEBUG"/>

This displays logging similar to:

2016-02-19 09:09:30|DEBUG|insert into cam_affiliate (Channel_ID, Name, Affiliate_ID) values (?, ?, ?) ||org.hibernate.engine.jdbc.spi.SqlStatementLogger:92 

February 19th, 2016

Posted In: hibernate, hibernate logging, Java, java ninja, Javaninja, jboss logging, jcl-over-slf4j, log4j-over-slf4j, logback, Logging, Logging configuration, slf4j

Leave a Comment

I’m used to configuring springframework entity managers with a map for the jpaProperties, like the following:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
    p:dataSource-ref="dataSource"
    p:packagesToScan="com.javaninja.batch"
    p:persistenceUnitName="persistenceUnit"
    p:jpaVendorAdapter-ref="jpaVendorAdapter">
  <property name="jpaProperties">
  	<props>
		<prop key="hibernate.show_sql">false</prop>
		<prop key="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</prop>
		<prop key="jadira.usertype.autoRegisterUserTypes">true</prop>
	</props>
    </property>
</bean>

I’m a big fan of Spring P-notation, but didn’t know how to supply a map with it. By using a combination of P-notation and util:properties, I was able to provide the map data without having to use the verbose notation needed for maps.

    <util:properties id="jpaProperties">
        <prop key="hibernate.show_sql">false</prop>
        <prop key="hibernate.dialect">org.hibernate.dialect.MySQL5InnoDBDialect</prop>
        <prop key="jadira.usertype.autoRegisterUserTypes">true</prop>
    </util:properties>

    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
          p:dataSource-ref="dataSource"
          p:packagesToScan="com.javaninja.batch"
          p:persistenceUnitName="persistenceUnit"
          p:jpaVendorAdapter-ref="jpaVendorAdapter"
          p:jpaProperties-ref="jpaProperties"/>
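
Under the hood, util:properties just exposes a java.util.Properties instance as a bean, so the configuration above is roughly equivalent to building the following in plain Java (the class name here is only for illustration):

```java
import java.util.Properties;

public class JpaPropertiesExample {

    // Builds the same key/value pairs that the util:properties bean exposes.
    public static Properties jpaProperties() {
        Properties props = new Properties();
        props.setProperty("hibernate.show_sql", "false");
        props.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5InnoDBDialect");
        props.setProperty("jadira.usertype.autoRegisterUserTypes", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(jpaProperties().getProperty("hibernate.dialect"));
        // prints "org.hibernate.dialect.MySQL5InnoDBDialect"
    }
}
```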

February 19th, 2016

Posted In: Java, java ninja, Javaninja, Spring

Tags: ,

Leave a Comment

Spring-Batch provides many types of readers and writers. The previous article, Spring-Batch – Reading and Writing XML provided the configuration needed to read and write XML. In this installment, I will present the configuration needed to read and write CSV files.

Maven Dependencies

You will need the Spring-Batch dependency.

<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-core</artifactId>
    <version>${spring.batch.version}</version>
</dependency>

You do not need a library for the CSV functionality.

The entire pom.xml looks like:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.javaninja</groupId>
    <artifactId>spring-batch-csv</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <!--This is the version of Spring that Spring Batch uses-->
        <spring.version>4.0.5.RELEASE</spring.version>
        <spring.batch.version>3.0.6.RELEASE</spring.batch.version>
        <slf4j.version>1.7.13</slf4j.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.batch</groupId>
            <artifactId>spring-batch-core</artifactId>
            <version>${spring.batch.version}</version>
            <exclusions>
                <exclusion>
                    <artifactId>commons-logging</artifactId>
                    <groupId>commons-logging</groupId>
                </exclusion>
            </exclusions>
        </dependency>

        <!--
            Logging
        -->
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>1.1.3</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>log4j-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
        </dependency>

        <!--
            Testing
        -->
        <dependency>
            <groupId>org.springframework.batch</groupId>
            <artifactId>spring-batch-test</artifactId>
            <version>${spring.batch.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.3</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Spring and Spring-Batch Context

Job Configuration

You will need the standard JobRepository, JobLauncher and TransactionManager.

<bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager"/>
<!--This repository is only really intended for use in testing and rapid prototyping-->
<bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"
      p:transactionManager-ref="transactionManager"/>
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher"
      p:jobRepository-ref="jobRepository"/>

FlatFileItemReader

The reading of a CSV is handled by a FlatFileItemReader. You will also need the following to configure the ItemReader to read the lines of CSV data into objects:

  • LineMapper – Maps lines in a flat file to objects.
  • LineTokenizer – Splits lines of text by a delimiter.
  • FieldSetMapper – Maps fields into an object.

The Spring XML configuration for the ItemReader looks like:

<bean id="fieldSetMapper" class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper"
      p:targetType="com.javaninja.batch.Car"/>
<bean id="delimitedLineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer"
      p:names="make, model, color, doors"/>
<bean id="lineMapper" class="org.springframework.batch.item.file.mapping.DefaultLineMapper"
      p:lineTokenizer-ref="delimitedLineTokenizer"
      p:fieldSetMapper-ref="fieldSetMapper"/>
<!-- ItemReader which reads data from CSV file -->
<bean id="csvItemReader" class="org.springframework.batch.item.file.FlatFileItemReader"
      p:resource="classpath:cars-input.csv"
      p:lineMapper-ref="lineMapper"/>
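
Conceptually, the DelimitedLineTokenizer splits each line on the delimiter and the BeanWrapperFieldSetMapper copies the named fields onto a Car bean. A simplified plain-Java sketch of that pipeline (the Car class here is a stand-in for the one in the project):

```java
public class CsvLineMapperSketch {

    // Stand-in for the project's Car bean.
    public static class Car {
        String make, model, color;
        int doors;
    }

    // Mimics DelimitedLineTokenizer + BeanWrapperFieldSetMapper for the
    // field order configured above: make, model, color, doors.
    public static Car mapLine(String line) {
        String[] tokens = line.split(",");
        Car car = new Car();
        car.make = tokens[0].trim();
        car.model = tokens[1].trim();
        car.color = tokens[2].trim();
        car.doors = Integer.parseInt(tokens[3].trim());
        return car;
    }

    public static void main(String[] args) {
        Car car = mapLine("Honda,Civic,blue,4");
        System.out.println(car.make + " " + car.doors);  // prints "Honda 4"
    }
}
```

The real classes add error handling (wrong token counts, type conversion) that this sketch skips, but the division of labor is the same.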

FlatFileItemWriter

The writing of the CSV file is handled by a FlatFileItemWriter. You will also need the following to configure the item writer to write the objects as lines of CSV data:

  • LineAggregator – Maps the object into a string representation.
  • FieldExtractor – Extracts the specified properties of a bean in the specified order.

The Spring XML configuration for the ItemWriter looks like:

<bean id="fieldExtractor" class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor"
      p:names="make, model, color, doors"/>
<bean id="lineAggregator" class="org.springframework.batch.item.file.transform.DelimitedLineAggregator"
      p:fieldExtractor-ref="fieldExtractor"/>
<!-- ItemWriter which writes the data in CSV format -->
<bean id="csvItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter"
      p:resource="file:csv/cars.csv"
      p:lineAggregator-ref="lineAggregator"/>
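
On the writing side, the BeanWrapperFieldExtractor pulls the named properties off each bean and the DelimitedLineAggregator joins them with the delimiter (a comma by default). A minimal plain-Java sketch of the same idea (not the Spring Batch implementation):

```java
public class CsvLineAggregatorSketch {

    // Mimics BeanWrapperFieldExtractor + DelimitedLineAggregator for the
    // configured field order: make, model, color, doors.
    public static String aggregate(String make, String model, String color, int doors) {
        return String.join(",", make, model, color, Integer.toString(doors));
    }

    public static void main(String[] args) {
        System.out.println(aggregate("Honda", "Civic", "blue", 4));
        // prints "Honda,Civic,blue,4"
    }
}
```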

The entire Spring context looks like:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans   http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <import resource="classpath:META-INF/spring/batchContext.xml"/>

    <context:component-scan base-package="com.javaninja.batch"/>

    <bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager"/>

    <!--This repository is only really intended for use in testing and rapid prototyping-->
    <bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"
          p:transactionManager-ref="transactionManager"/>

    <bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher"
          p:jobRepository-ref="jobRepository"/>



    <bean id="fieldSetMapper" class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper"
          p:targetType="com.javaninja.batch.Car"/>

    <bean id="delimitedLineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer"
          p:names="make, model, color, doors"/>

    <bean id="lineMapper" class="org.springframework.batch.item.file.mapping.DefaultLineMapper"
          p:lineTokenizer-ref="delimitedLineTokenizer"
          p:fieldSetMapper-ref="fieldSetMapper"/>

    <!-- ItemReader which reads data from CSV file -->
    <bean id="csvItemReader" class="org.springframework.batch.item.file.FlatFileItemReader"
          p:resource="classpath:cars-input.csv"
          p:lineMapper-ref="lineMapper"/>



    <bean id="fieldExtractor" class="org.springframework.batch.item.file.transform.BeanWrapperFieldExtractor"
          p:names="make, model, color, doors"/>

    <bean id="lineAggregator" class="org.springframework.batch.item.file.transform.DelimitedLineAggregator"
          p:fieldExtractor-ref="fieldExtractor"/>

    <!-- ItemWriter which writes the data in CSV format -->
    <bean id="csvItemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter"
          p:resource="file:csv/cars.csv"
          p:lineAggregator-ref="lineAggregator"/>

</beans>

Job configuration

The XML configuration for the CSV batch job is pretty much the same as any other example.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:batch="http://www.springframework.org/schema/batch"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd">

    <batch:job id="carJob">
        <batch:step id="step1">
            <batch:tasklet>
                <batch:chunk reader="csvItemReader" writer="csvItemWriter" commit-interval="1000"/>
            </batch:tasklet>
        </batch:step>
    </batch:job>

</beans>

Running a job works the same way as in the XML example, Spring-Batch – Reading and Writing XML.

The entire project used to write this blog is located on GitHub sheltonn / spring-batch-csv

February 18th, 2016

Posted In: Java, java ninja, Javaninja, Spring Batch

Leave a Comment

Once you have a Spring Batch application written, how do you test it?

This is a follow-on to Spring-Batch – Reading and Writing XML.

Testing a Spring Batch Job or a Step

I used the following dependencies to write the Spring Batch tests.

<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-test</artifactId>
    <version>${spring.batch.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>${spring.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>

Spring Batch provides JobLauncherTestUtils to make it easier to test jobs. The gist of a job test class is:

JobExecution jobExecution = jobLauncherTestUtils.launchJob();

To test a step, you provide the step name to the launchStep method.

JobExecution jobExecution = jobLauncherTestUtils.launchStep("step1");

My Spring test context defines the JobLauncherTestUtils bean.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <import resource="classpath:applicationContext.xml"/>

    <bean id="jobLauncherTestUtils" class="org.springframework.batch.test.JobLauncherTestUtils"/>
</beans>

My Spring Junit test class is fairly simple.

package com.javaninja.batch;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.test.JobLauncherTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import java.util.List;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

/**
 * @author norris.shelton
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class TestJobAndStep {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void testJob() throws Exception {
        commonAssertions(jobLauncherTestUtils.launchJob());
    }

    @Test
    public void testStep1() throws Exception {
        commonAssertions(jobLauncherTestUtils.launchStep("step1"));
    }

    private void commonAssertions(JobExecution jobExecution) {
        assertNotNull(jobExecution);

        BatchStatus batchStatus = jobExecution.getStatus();
        assertEquals(BatchStatus.COMPLETED, batchStatus);
        assertFalse(batchStatus.isUnsuccessful());

        ExitStatus exitStatus = jobExecution.getExitStatus();
        assertEquals("COMPLETED", exitStatus.getExitCode());
        assertEquals("", exitStatus.getExitDescription());

        List<Throwable> failureExceptions = jobExecution.getFailureExceptions();
        assertNotNull(failureExceptions);
        assertTrue(failureExceptions.isEmpty());
    }
}

Testing Spring Batch Step-scope objects (readers and writers)

Spring Batch provides a StepScopeTestExecutionListener that allows step-scoped beans to be injected into your test class via the normal @Autowired.

Readers and writers have three methods that you need to be concerned with.

  • open – Opens the input or output source.
  • read or write – Reads or writes the data, respectively.
  • close – Flushes (for writers) and closes the source.

Spring Batch provides a MetaDataInstanceFactory to create a step execution with default parameters.

Spring Batch also provides a StepScopeTestUtils class to assist in testing readers and writers with the objects that would be in scope during their step. To write the test, you call the doInStepScope method, passing it a Callable. I implemented mine as a lambda expression.

Please note that when you call the read method, it will read one item. When you call the write method, it will write the entire collection of provided data.

This all comes together in my test class.

package com.javaninja.batch;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.item.ItemStreamException;
import org.springframework.batch.item.xml.StaxEventItemReader;
import org.springframework.batch.item.xml.StaxEventItemWriter;
import org.springframework.batch.test.MetaDataInstanceFactory;
import org.springframework.batch.test.StepScopeTestExecutionListener;
import org.springframework.batch.test.StepScopeTestUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

import java.util.LinkedList;
import java.util.List;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

/**
 * @author norris.shelton
 */
@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners({DependencyInjectionTestExecutionListener.class, StepScopeTestExecutionListener.class})
@ContextConfiguration
public class TestReaderAndWriter {

    @Autowired
    private StaxEventItemReader<Car> itemReader;

    @Autowired
    private StaxEventItemWriter<Car> itemWriter;

    @Test
    public void testReader() {
        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
        int count = 0;
        try {
            count = StepScopeTestUtils.doInStepScope(execution, () -> {
                int numCars = 0;
                itemReader.open(execution.getExecutionContext());
                Car car;
                try {
                    while ((car = itemReader.read()) != null) {
                        assertNotNull(car);
                        assertNotNull(car.getMake());
                        assertNotNull(car.getModel());
                        assertNotNull(car.getColor());
                        assertTrue(car.getDoors() > 0);
                        numCars++;
                    }
                } finally {
                    try { itemReader.close(); } catch (ItemStreamException e) { fail(e.toString()); }
                }
                return numCars;
            });
        } catch (Exception e) {
            fail(e.toString());
        }
        assertEquals(100000, count);
    }

    @Test
    public void testWriter() throws Exception {
        List<Car> cars = new LinkedList<>();
        Car car;
        for (int i = 1; i < 10001; i++) {
            car = new Car();
            car.setMake("make" + i);
            car.setModel("model" + i);
            car.setColor("color" + i);
            car.setDoors(i);
            cars.add(car);
        }

        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
        StepScopeTestUtils.doInStepScope(execution, () -> {
            itemWriter.open(execution.getExecutionContext());
            itemWriter.write(cars);
            itemWriter.close();
            return null;
        });
    }
}

February 18th, 2016

Posted In: Java, java ninja, Javaninja, JUnit, Spring Batch, Unit Tests

2 Comments

In Spring-Batch – Reading and Writing XML there was a note on the JobRepository in the src/main/resources/applicationContext.xml. The note clearly states that the class that was used isn't ready for prime time. The problem is that this job repository is stored in memory only, which is a problem if a job has to be restarted. As a reference, here is the snippet around the JobRepository.

    <!--This repository is only really intended for use in testing and rapid prototyping-->
    <bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean"
          p:transactionManager-ref="transactionManager"/>

If that isn’t the real deal, then what is?

Spring-Batch JobRepositoryFactoryBean

The MapJobRepositoryFactoryBean extends AbstractJobRepositoryFactoryBean. The only other class that extends it is JobRepositoryFactoryBean, which uses a database to store the job-related information. This is how it is declared:

<bean id="jobRepository"
    class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="transactionManager" ref="transactionManager" />
    <property name="databaseType" value="mysql" />
</bean>
  • dataSource is a standard JDBC DataSource pointing at the database that will hold the batch metadata.
  • transactionManager is a standard database transaction manager.
  • databaseType indicates the type of database. See org.springframework.batch.support.DatabaseType for the supported values, which can be entered in lower-case. If the property is omitted, the factory bean will attempt to detect the type from the DataSource metadata.
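For comparison, the same bean can be declared in Java configuration. This is a sketch, assuming dataSource and transactionManager beans are already defined elsewhere in the context:

```java
import javax.sql.DataSource;

import org.springframework.batch.core.repository.support.JobRepositoryFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class JobRepositoryConfig {

    // Java-config equivalent of the XML jobRepository bean above.
    // Assumes dataSource and transactionManager are defined elsewhere.
    @Bean
    public JobRepositoryFactoryBean jobRepository(DataSource dataSource,
                                                  PlatformTransactionManager transactionManager) {
        JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
        factory.setDataSource(dataSource);
        factory.setTransactionManager(transactionManager);
        factory.setDatabaseType("mysql");
        return factory;
    }
}
```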

Spring Batch Schema tables

Another thing you will need to do is add the database-related tables to store the information.

<jdbc:initialize-database data-source="dataSource">
    <jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
    <jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
</jdbc:initialize-database>
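If you prefer to run the scripts from code instead of XML, Spring's ResourceDatabasePopulator can do the same job. A minimal sketch, assuming a configured dataSource:

```java
import javax.sql.DataSource;

import org.springframework.core.io.ClassPathResource;
import org.springframework.jdbc.datasource.init.DatabasePopulatorUtils;
import org.springframework.jdbc.datasource.init.ResourceDatabasePopulator;

public class BatchSchemaInitializer {

    // Runs the drop and create scripts against the given DataSource,
    // matching what the <jdbc:initialize-database> element does at startup.
    public static void initialize(DataSource dataSource) {
        ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
        populator.addScript(new ClassPathResource(
                "org/springframework/batch/core/schema-drop-mysql.sql"));
        populator.addScript(new ClassPathResource(
                "org/springframework/batch/core/schema-mysql.sql"));
        DatabasePopulatorUtils.execute(populator, dataSource);
    }
}
```

Dropping the tables on every startup is only appropriate for development; in production you would run the create script once and leave the drop script out.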

The scripts follow the pattern schema-drop-{databaseType}.sql and schema-{databaseType}.sql. There are scripts for many different databases, located in the spring-batch-core JAR under org/springframework/batch/core.

The following database types are supported and have scripts:

  • db2 – IBM DB2
  • derby – Apache Derby
  • h2 – H2
  • hsqldb – HSQL Database Engine
  • mysql – MySQL
  • oracle – Oracle (10g)
  • postgresql – PostgreSQL
  • sqlf – Pivotal SQLFire
  • sqlite – SQLite
  • sqlserver – Microsoft SQL Server
  • sybase – Sybase

There is your answer.

February 18th, 2016

Posted In: Java, java ninja, Javaninja, Spring Batch

