
Coherence

A zipped version of my Coherence project is attached below.

To install Coherence, download the zip (currently ofm_coherence_generic_12.1.2.0.0_disk1_1of1.zip) from the Oracle Coherence download page. Please note that you will need an Oracle Technology Network userid / password (or will have to register to get one).

Unzipping will result in one jar file (currently coherence_121200.jar). To run the installer, type:

java -jar coherence_121200.jar

and follow the instructions.

In my case, the folder after installation looked like this:

The jar that needs to be added to your local repository can be found under the coherence/lib folder and is called coherence.jar.

Coherence is probably one of the more complex projects to set up, so please follow the instructions below carefully (miss a single step and your results will vary).


Project setup

As can be seen from the pom.xml, the three dependencies are jquantlib, coherence and testng. Coherence and jquantlib were added to the local repository manually.

IMPORTANT: Make sure that every single Run/Debug configuration includes the following VM options:
-Dtangosol.coherence.ttl=0 -Dtangosol.coherence.clusteraddress=238.11.21.33

Setting the time to live (TTL) to zero means that no packets from this cluster will reach any other cluster on your network. But that alone is not enough: you are still susceptible to receiving packets from other clusters.

To guard against this latter issue, specify a multicast address (these typically fall in the 224.x.x.x to 239.x.x.x range).

If you do not include these parameters when running tests within this project, you will not connect to the running cluster and will simply run the tests within your own local cluster.
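For ad-hoc runs outside the IDE, the same two isolation settings can also be applied in code, provided this happens before any Coherence class is loaded. A minimal sketch (the class and method names are mine, not the project's):

```java
// The same two isolation settings as the -D VM options, applied in code.
// Must run before any Coherence class is loaded.
class ClusterIsolation {
    static void isolate() {
        // TTL 0: multicast packets from this cluster never leave the machine
        System.setProperty("tangosol.coherence.ttl", "0");
        // Private multicast address: ignore packets from other clusters
        System.setProperty("tangosol.coherence.clusteraddress", "238.11.21.33");
    }

    public static void main(String[] args) {
        isolate();
        System.out.println("ttl=" + System.getProperty("tangosol.coherence.ttl"));
    }
}
```

The -D flags on the run configuration remain the safer option, since they are guaranteed to be set before class loading.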

Server setup

A default cache server needs to be set up and running before this unit test can be run. The zip includes my run configurations (see .idea\runConfigurations), as shown below



You can similarly set up the command line tool, with the Main class being com.tangosol.net.CacheFactory (obviously, give the config a different name, such as "command")

Once you start the cluster by running the above config, the presence of these two lines should tell you that the correct config is being picked up from the classpath:
2013-11-12 18:00:25.301/1.502 Oracle Coherence 12.1.2.0.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/W:/java/pricingmetrics/coherence/target/classes/tangosol-coherence-override.xml"
2013-11-12 18:00:25.807/2.008 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/W:/java/pricingmetrics/coherence/target/classes/pricing-cache-config.xml"

The member set should show only one member:
MasterMemberSet(
  ThisMember=Member(Id=1, Timestamp=2013-11-12 18:00:30.367, Address=192.168.0.6:8088, MachineId=17675, Location=site:,machine:raj-winlaptop,process:5536, Role=IntellijRtExecutionAppMain)
  OldestMember=Member(Id=1, Timestamp=2013-11-12 18:00:30.367, Address=192.168.0.6:8088, MachineId=17675, Location=site:,machine:raj-winlaptop,process:5536, Role=IntellijRtExecutionAppMain)
  ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2013-11-12 18:00:30.367, Address=192.168.0.6:8088, MachineId=17675, Location=site:,machine:raj-winlaptop,process:5536, Role=IntellijRtExecutionAppMain)
    )

and it should end with this line:

Started DefaultCacheServer...

Configuration files

NOTE: XML-based configuration, though still supported, has been superseded by features such as programmatic (code-based) configuration. I have stuck to XML as that is the approach I am most familiar with.

There are three important configuration files required to run this project: the operational override (tangosol-coherence-override.xml), the cache configuration (pricing-cache-config.xml) and the POF configuration (pricing-pof-config-internal.xml). All sit under the resources directory and are therefore automatically added to the classpath (this being a Maven project).

The override file has this entry, which points to the cache config file:

                <param-value system-property="tangosol.coherence.cacheconfig">pricing-cache-config.xml</param-value>

The cache config defines two caches, marketdata and tradedata, both backed by a distributed service:

    
        <cache-mapping>
            <cache-name>marketdata</cache-name>
            <scheme-name>distributedMarketData</scheme-name>
        </cache-mapping>
        <cache-mapping>
            <cache-name>tradedata</cache-name>
            <scheme-name>distributedTradeData</scheme-name>
        </cache-mapping>

The distributed service defines a backing map, which in turn defines an eviction policy and a unit calculator (as a local-scheme).

The cache config also specifies the serializer being used and its associated config file:

        <serializer system-property="tangosol.coherence.serializer">
            <instance>
                <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                <init-params>
                    <init-param>
                        <param-type>string</param-type>
                        <param-value>pricing-pof-config-internal.xml</param-value>
                    </init-param>
                </init-params>
            </instance>
        </serializer>

Every object that needs to be stored in the backing map needs an entry in this POF config file.
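For illustration, a POF-serializable cache value might look like this. The class and its fields are hypothetical, not taken from the attached project, and compiling it requires coherence.jar on the classpath:

```java
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import java.io.IOException;

// Hypothetical POF-serializable value; needs a matching <user-type>
// entry (type-id plus class-name) in the pof-config file.
public class MarketDataPoint implements PortableObject {
    private String ticker;
    private double price;

    public MarketDataPoint() {
        // POF deserialization requires a public no-arg constructor
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        // Property indices must match writeExternal exactly
        ticker = in.readString(0);
        price  = in.readDouble(1);
    }

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, ticker);
        out.writeDouble(1, price);
    }
}
```

The property indices (0, 1) form part of the wire format, so they must stay stable across versions of the class.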

Data Population

To reiterate what was said in Project setup, make sure you have added the following VM options to all your run configurations:
-Dtangosol.coherence.ttl=0 -Dtangosol.coherence.clusteraddress=238.11.21.33

I have used simple cache "puts" to put the data into the respective caches (I did not use putAll, nor anything more sophisticated such as an entry processor). This is because I did not want to bias the system, as all the other data stores were also being analyzed in plain vanilla, out-of-the-box mode.
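Such a put, at its simplest, looks like the sketch below. The keys and values are made up for illustration; running it needs coherence.jar, the VM options above and a running DefaultCacheServer:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Plain-vanilla puts into the two caches defined in the cache config.
// Keys and values here are illustrative, not the project's actual types.
public class PopulateCaches {
    public static void main(String[] args) {
        NamedCache marketData = CacheFactory.getCache("marketdata");
        marketData.put("EUR/USD|1998-05-15", 1.2345);

        NamedCache tradeData = CacheFactory.getCache("tradedata");
        tradeData.put("TRADE-0001", "notional=1000000;ccy=USD");

        CacheFactory.shutdown();   // leave the cluster cleanly
    }
}
```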

For market data
  • testInsetTestMarketData - will insert one test data entry (for effective date 15-May-1998)
  • testInsertMarketDataForDates - will insert market data for two other dates
For trade data
  • testInsertTestTradeData - will insert the one test trade (for effective date 15-May-1998)
  • testNextTradeData - will insert a trade with random parameters (within tolerance)

Querying and Filtering

Though CohQL (the Coherence Query Language) is now available for SQL-like queries, I have stuck to the traditional route of Filters to query the cache.

Coherence Filters help to identify the cache items that are of interest. I have used ValueExtractors in conjunction with Filters.
One of the ongoing issues with Coherence is that a date is represented internally as a timestamp. So even though I inserted a market data object for 15-May-1998, internally some random time-of-day elements were added to it. This meant that the EqualsFilter returned no values.


This forced me to use a BetweenFilter instead.
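The workaround can be sketched as follows, querying for the whole of one business day rather than a single instant. The getEffectiveDate accessor is an assumption on my part, not the project's actual method, and the sketch needs coherence.jar plus a running cluster:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.BetweenFilter;
import java.util.Calendar;
import java.util.Date;
import java.util.Set;

// Query for one business day with a BetweenFilter, so stray
// time-of-day components no longer cause misses.
public class QueryByDay {
    public static void main(String[] args) {
        Calendar c = Calendar.getInstance();
        c.clear();                             // zero all time fields
        c.set(1998, Calendar.MAY, 15);         // midnight, 15-May-1998
        Date from = c.getTime();
        c.add(Calendar.DAY_OF_MONTH, 1);
        c.add(Calendar.MILLISECOND, -1);       // last instant of the day
        Date to = c.getTime();

        NamedCache cache = CacheFactory.getCache("marketdata");
        BetweenFilter filter = new BetweenFilter(
                new ReflectionExtractor("getEffectiveDate"), from, to);
        Set results = cache.entrySet(filter);  // entries within the day
        System.out.println("matches: " + results.size());

        CacheFactory.shutdown();
    }
}
```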


If a "pure" date object is required (without locale complications, so 15-May-1998 means 15-May-1998 in Hong Kong, New York and London alike), then I strongly suggest the use of a bespoke Date object within your project. You can extract it and convert to Java time or Joda time (or any other time library of your choice).
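Such a bespoke date might be as small as the sketch below: just year, month and day, with no time zone or time-of-day, converted to java.util.Date only at the edges. The class name and layout are illustrative, not from the project:

```java
// A locale-free calendar date: year/month/day only, so 15-May-1998
// means the same thing everywhere. Illustrative sketch.
final class TradeDate implements java.io.Serializable, Comparable<TradeDate> {
    private final int year, month, day;

    TradeDate(int year, int month, int day) {
        this.year = year; this.month = month; this.day = day;
    }

    // Convert to java.util.Date (midnight, default zone) only when needed
    java.util.Date toJavaDate() {
        java.util.Calendar c = java.util.Calendar.getInstance();
        c.clear();                        // zero out all time fields
        c.set(year, month - 1, day);      // Calendar months are 0-based
        return c.getTime();
    }

    @Override public int compareTo(TradeDate o) {
        return (year * 10000 + month * 100 + day)
             - (o.year * 10000 + o.month * 100 + o.day);
    }

    @Override public boolean equals(Object o) {
        return o instanceof TradeDate && compareTo((TradeDate) o) == 0;
    }

    @Override public int hashCode() { return year * 10000 + month * 100 + day; }

    @Override public String toString() { return year + "-" + month + "-" + day; }
}
```

With equals and hashCode defined purely on the three fields, an EqualsFilter over such a value would behave predictably, unlike the timestamp-backed dates above.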


Raj Subramani,
13 Nov 2013, 10:03