Power Of Java 8 Default Method In Spring Controller


Imagine your application has had, for quite a long time, a separate controller for each domain object (EntityOneController, EntityTwoController, etc.), each with its own hierarchy. Now a common endpoint has to be added to every controller incrementally, with everything the same except for a few small details.


Let's say the common endpoint exposes the Swagger details for each entity. One option would be to write the same logic in each controller one by one, but that would duplicate a lot of code across controllers.


Java 8 Default Method To The Rescue

Here is the Java 8 interface with a default method, meant to be implemented by Spring @Controller classes. As can be seen, it carries the common implementation, while implementors can still provide their own behavior.



public interface SwaggerController {

  String getName(String foo);

  @RequestMapping(value = "/swagger/{foo}", method = RequestMethod.GET)
  default String hyperschema(Model model, @PathVariable String foo) {
    model.addAttribute("name", getName(foo));
    return "welcome";
  }
}

Here is one of the implementations:

public class EntityOneController extends EntityOneBaseController implements SwaggerController {

  @Override
  public String getName(String name) {
    return "Entity1 " + name;
  }
}

Endpoint URL would be /entity1/swagger/{foo}

Here is another implementation:

public class EntityTwoController extends EntityTwoBaseController implements SwaggerController {

  @Override
  public String getName(String foo) {
    return "Entity2 " + foo;
  }
}

Endpoint URL would be /entity2/swagger/{foo}

We have achieved the same functionality, without duplicating code!



Fixed Delay Scheduling with Quartz


With this (fixed delay) kind of scheduling there is always the same fixed delay between the termination of one execution and the commencement of the next, as shown in the following image.


Java supports this kind of scheduling out of the box through java.util.Timer and java.util.concurrent.ScheduledExecutorService; however, achieving a fixed delay using Quartz is not that straightforward (especially when misfires are considered).
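
For comparison, here is how the JDK achieves a fixed delay out of the box with ScheduledExecutorService (a minimal sketch; the 2-second delay and the task body are just illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayDemo {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // scheduleWithFixedDelay measures the delay from the END of one run
        // to the START of the next, which is exactly the fixed-delay semantics
        scheduler.scheduleWithFixedDelay(
                () -> System.out.println("tick"),
                0, 2, TimeUnit.SECONDS);
        Thread.sleep(5000);   // let it tick a few times
        scheduler.shutdownNow();
    }
}
```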

One might consider the easy approach of rescheduling the job from within its own execute method.


Here is the implementation; you would keep the reschedule method as a static method in a utility class.


However, a better approach would be to use Quartz listeners:


Here is the implementation of the listener


Here is the usage :


Here is the code for FixedDelayJobData.java and FixedDelayJobListener.java

Scalable Java Thread Pool Executor

Ideally, from any thread pool executor, the expectation would be the following:

  • An initial set of threads (the core pool size) is created up front to handle the load.
  • If the load increases, more threads are created to handle it, up to the maximum (max pool size).
  • Once the max pool size is reached, tasks are queued up.
  • If a bounded queue is used and the queue is full, a rejection policy kicks in.
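
For reference, each of these expectations maps onto a constructor argument of java.util.concurrent.ThreadPoolExecutor (the sizes below are just illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSetup {
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                4,                                         // core pool size: initial set of threads
                16,                                        // max pool size: upper bound under load
                60, TimeUnit.SECONDS,                      // idle timeout for threads beyond core
                new ArrayBlockingQueue<>(100),             // bounded task queue
                new ThreadPoolExecutor.CallerRunsPolicy()  // rejection policy when the queue is full
        );
    }
}
```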

The following diagram depicts the case where only the initial threads are created to handle tasks (when load is very low).


As more tasks come in, more threads are created to handle the load (the task queue is still empty), as long as the total number of threads is less than the max pool size.


The task queue starts filling up once the total number of tasks exceeds the total number of threads (initial + extended).


Unfortunately, Java's ThreadPoolExecutor (TPE) is biased towards queuing rather than spawning new threads: after the initial core threads get occupied, tasks get added to the queue, and only after the queue reaches its limit (which can happen only for a bounded queue) are extra threads spawned. If the queue is unbounded, extended threads are never spawned at all, as depicted in the following image.


1 ==> Initial core threads are created to handle the load.

2 ==> Once there are more tasks than core threads, the queue starts filling up to store the tasks.

3 ==> Once the queue is full, extended threads are created.

Here is the code in TPE which causes this behavior:
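
The bias is easy to observe with a small self-contained demo: with an unbounded queue, the pool never grows beyond its core size, no matter how many tasks are pending.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueBiasDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>()); // unbounded queue
        CountDownLatch hold = new CountDownLatch(1);
        for (int i = 0; i < 10; i++) {
            tpe.execute(() -> {
                try { hold.await(); } catch (InterruptedException ignored) { }
            });
        }
        // despite 10 pending tasks and a max of 8, only the 2 core threads exist
        System.out.println("pool size: " + tpe.getPoolSize());   // prints "pool size: 2"
        System.out.println("queued: " + tpe.getQueue().size());  // prints "queued: 8"
        hold.countDown();
        tpe.shutdown();
    }
}
```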


There are a couple of workarounds:

Work Around #1

Set corePoolSize and maximumPoolSize to the same value, and set allowCoreThreadTimeOut to true.


Pros:

  • No coding hack required.


Cons:

  • There is no real caching of threads, as threads get created and terminated quite often.
  • There is no proper scalability.
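
A minimal sketch of this workaround (the pool size and timeout are just illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SameSizePool {
    public static ThreadPoolExecutor create(int size) {
        // core == max, so the pool eagerly grows to 'size' threads before queuing
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(
                size, size, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // allow even the "core" threads to time out, so the pool shrinks when idle;
        // the price is that threads are created and destroyed quite often
        tpe.allowCoreThreadTimeOut(true);
        return tpe;
    }
}
```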

Work Around #2

  • Override the offer method of a delegating TransferQueue so that it tries to hand the task directly to a free worker thread, and returns false if no thread is waiting.
  • Implement a custom RejectedExecutionHandler that always adds the task to the queue.


Refer to this implementation for more detail.
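
A simplified sketch of these two pieces using only stdlib classes (the linked implementation handles more edge cases):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerPool {
    public static ThreadPoolExecutor create(int core, int max) {
        // offer() succeeds only when an idle worker is already waiting to take
        // the task; otherwise it returns false, making the executor spawn a thread
        LinkedTransferQueue<Runnable> queue = new LinkedTransferQueue<Runnable>() {
            @Override
            public boolean offer(Runnable task) {
                return tryTransfer(task);
            }
        };
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(core, max, 60, TimeUnit.SECONDS, queue);
        // once max threads exist and offer() failed, execute() rejects the task;
        // the handler's only job is to really enqueue it (so it cannot be customized)
        tpe.setRejectedExecutionHandler((task, executor) -> {
            try {
                executor.getQueue().put(task);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        return tpe;
    }
}
```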


Pros:

  • The TransferQueue ensures that threads are not unnecessarily created and transfers the work directly to a waiting thread.


Cons:

  • A customized rejection handler cannot be used, since it is already used to insert the tasks into the queue.

Work Around #3

Use a custom queue (a TransferQueue) and override its offer method to do the following:

  1. Try to transfer the task directly to a waiting thread (if any).
  2. If that fails and the max pool size has not been reached, make the executor create an extended thread by returning false from offer.
  3. Otherwise, insert the task into the queue.


Refer to this implementation for more detail.
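
A simplified sketch of such a queue (the executor reference it needs is the cyclic dependency noted among the cons):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.ThreadPoolExecutor;

public class ScalingQueue<E> extends LinkedTransferQueue<E> {
    private ThreadPoolExecutor executor; // set after construction: the cyclic dependency

    public void setExecutor(ThreadPoolExecutor executor) {
        this.executor = executor;
    }

    @Override
    public boolean offer(E task) {
        if (tryTransfer(task)) {
            return true;                         // 1. handed directly to a waiting thread
        }
        if (executor.getPoolSize() < executor.getMaximumPoolSize()) {
            return false;                        // 2. make the executor spawn an extended thread
        }
        return super.offer(task);                // 3. really enqueue the task
    }
}
```

Wiring it up: create the queue, construct the executor with it, then call setExecutor. Since the queue itself falls back to enqueuing at max pool size, the executor's RejectedExecutionHandler stays free for genuine rejection logic.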


Pros:

  • The TransferQueue ensures that threads are not unnecessarily created and transfers the work directly to a thread waiting on the queue.
  • A custom rejection handler can be used.


Cons:

  • There is a cyclic dependency between the queue and the executor.

Work Around #4

Use a custom thread pool executor specially dedicated to this purpose. It uses LIFO scheduling, as described in Systems @ Facebook scale.

Lightweight Workflow Like Execution Using Dexecutor

Dexecutor can easily be used for workflow-like cases, as depicted in the following diagram.


A Dexecutor instance is created using a DexecutorConfig, which in turn requires an ExecutionEngine and a TaskProvider. The default implementation of ExecutionEngine uses an ExecutorService, so let's create a Dexecutor instance first (source code can be found here):

private static ExecutorService buildExecutor() {
  ExecutorService executorService = Executors.newFixedThreadPool(ThreadPoolUtil.ioIntesivePoolSize());
  return executorService;
}

private Dexecutor<String, Boolean> buildDexecutor(final ExecutorService executorService) {
  DexecutorConfig<String, Boolean> config = new DexecutorConfig<>(executorService, new WorkFlowTaskProvider());
  return new DefaultDexecutor<>(config);
}

The TaskProvider comes into action when it is time to execute a task. For this example we have a simple implementation, WorkFlowTaskProvider:

public class WorkFlowTaskProvider implements TaskProvider<String, Boolean> {

  private final Map<String, Task<String, Boolean>> tasks = new HashMap<String, Task<String, Boolean>>() {

    private static final long serialVersionUID = 1L;

    {
      put(TaskOne.NAME, new TaskOne());
      put(TaskTwo.NAME, new TaskTwo());
      put(TaskThree.NAME, new TaskThree());
      put(TaskFour.NAME, new TaskFour());
      put(TaskFive.NAME, new TaskFive());
      put(TaskSix.NAME, new TaskSix());
      put(TaskSeven.NAME, new TaskSeven());
    }
  };

  @Override
  public Task<String, Boolean> provideTask(final String id) {
    return this.tasks.get(id);
  }
}

For simplicity we have implemented a Task for each of the tasks (1..7); those can be found here. Most of the task implementations are the same, except for TaskTwo (if task 2's result is TRUE then tasks 3 and 4 are executed, otherwise task 5 is executed) and TaskFive (if task 5 is executed, i.e. not skipped, then task 6 is executed).


TaskFive (as well as TaskThree, TaskFour and TaskSix) overrides the shouldExecute() method to signal whether the task should be executed or skipped.


The next step is to build the graph:


If WorkFlowApplication is executed, following output can be observed.

Output if TaskTwo result is false

Executing TaskOne , result : true
Executing TaskTwo , result : false
Executing TaskFive , result : true
Executing TaskSix , result : true
Executing TaskSeven , result : true

Output if TaskTwo result is true

Executing TaskOne , result : true
Executing TaskTwo , result : true
Executing TaskFour , result : true
Executing TaskThree , result : true
Executing TaskSeven , result : true




Take Migration Process To Next Level Using Dexecutor

You have a data migration process which updates the application from version X to X+1 by running migration scripts (each script consists of a sequence of instructions) sequentially, to bring the application to a desired state.


The synchronous process causes delays, leading to unproductive wait times and dissatisfied users. The process needs to decrease script execution time by running tasks in parallel wherever applicable to reach the desired state.

Driving Forces

The following are the driving forces behind Dexecutor:

  • Supports parallel execution, and may conditionally fall back to sequential execution (if such logic is provided)
  • Ultra light (Version 1.1.1 is 44KB)
  • Ultra fast
  • Distributed Execution supported
  • Immediate/Scheduled Retry logic supported
  • Non-terminating behaviour supported
  • Conditionally skip the task execution


Incorporate Dexecutor into your script-execution logic, and additionally distribute the execution using Infinispan, Hazelcast or Ignite. Here is a sample application which demonstrates this functionality; fork it and have fun 🙂

Dexecutor can easily be used in this case by adding algorithmic logic on top of it that builds the graph based on table names. Let's assume the following scripts:

Script 1 ==> operates on tables t1 and t2 and takes 5 minutes
Script 2 ==> operates on tables t1 and t3 and takes 5 minutes
Script 3 ==> operates on tables t2 and t4 and takes 5 minutes
Script 4 ==> operates on tables t5 and t6 and takes 5 minutes
Script 5 ==> operates on tables t5 and t7 and takes 5 minutes
Script 6 ==> operates on tables t6 and t8 and takes 5 minutes

Normally these scripts are executed sequentially as follows.

Script 1  5 minutes
Script 2  5 minutes
Script 3  5 minutes
Script 4  5 minutes
Script 5  5 minutes
Script 6  5 minutes

Total time 30 minutes 

In the sequential case, the total execution time would be 30 minutes. However, if we could parallelize the script execution while making sure the scripts still run in the right order, we could decrease the total execution time to just 10 minutes.

       +----------+                       +----------+
       | Script 1 |                       | Script 4 |             ==> 5 minutes
  +----+----------+--+               +----+----------+-----+
  |                  |               |                     |
  |                  |               |                     |
+-----v----+   +-----v----+     +----v-----+        +------v---+
| Script 2 |   | Script 3 |     | Script 5 |        | Script 6 |   ==> 5 minutes
+----------+   +----------+     +----------+        +----------+

Total Time 10 minutes
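
The heart of such a scheme is a table-overlap rule: a script must run after every earlier script that touches one of its tables. Here is a plain-Java sketch of that rule (illustrative only; in the real setup this logic lives in MigrationTasksExecutor and feeds the graph-building API exposed by Dexecutor):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MigrationGraphBuilder {

    /** A script depends on every earlier script that shares at least one table with it. */
    public static Map<String, Set<String>> dependencies(LinkedHashMap<String, Set<String>> scriptTables) {
        Map<String, Set<String>> deps = new LinkedHashMap<>();
        List<Map.Entry<String, Set<String>>> scripts = new ArrayList<>(scriptTables.entrySet());
        for (int i = 0; i < scripts.size(); i++) {
            Set<String> parents = new LinkedHashSet<>();
            for (int j = 0; j < i; j++) {
                if (!Collections.disjoint(scripts.get(i).getValue(), scripts.get(j).getValue())) {
                    parents.add(scripts.get(j).getKey());  // shared table => must run after
                }
            }
            deps.put(scripts.get(i).getKey(), parents);
        }
        return deps;
    }
}
```

For the six scripts above, this rule yields exactly the two trees in the diagram: Script 2 and Script 3 depend on Script 1, while Script 5 and Script 6 depend on Script 4.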

Using Dexecutor, we just have to write the algorithm which builds the graph using the API exposed by Dexecutor; the rest is taken care of by Dexecutor. MigrationTasksExecutor implements that algorithm, considering the SQL statements in the migration scripts. Since the table names in the SQL play a crucial role in building the graph, we need an efficient, ultra-light and ultra-fast library to extract table names from SQL; hence we use sql-table-name-parser. Use it by adding the following dependency to your POM:


And of course, Dexecutor should be added as a dependency as well:


The graph that would be built from the migration scripts is the following.



As can be seen here, nodes base1, base3 and base4 run in parallel, and once one of them finishes, its children are executed. For example, when node base1 finishes, its children base2 and app3-1 are executed, and so on.

Notice that for node app2-4 to start, app1-4 and app2-1 must finish, similarly for node app3-2 to start, app3-1 and app2-4 must finish.

Just run this class to see how things proceed.


We can indeed run dependent and independent tasks in an easy and reliable way with Dexecutor.


Multi-Node Distributed Execution Using Hazelcast and Dexecutor

We will try to execute Dexecutor in distributed mode using Hazelcast. For the demo we will set up multiple Hazelcast nodes on a single machine.

Refer to Introducing Dexecutor for an introduction to Dexecutor and to understand the problem we will solve in a distributed fashion. In short:

We will distribute the execution of Dexecutor tasks across Hazelcast compute nodes on a single machine.

To do that, one of the nodes acts as the master and submits the tasks to the Hazelcast compute nodes, where they are executed by the other nodes using Dexecutor.

Here are the steps to do that :

Step 1: Add dexecutor-hazelcast dependency


Step 2: Get an Instance of Hazelcast IExecutorService from Hazelcast

 Config cfg = new Config();
 HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
 IExecutorService executorService = instance.getExecutorService("test");

Step 3 : Create Dexecutor using IExecutorService

if (isMaster) {
  DefaultDependentTasksExecutor<Integer, Integer> dexecutor = newTaskExecutor(executorService);
}

private DefaultDependentTasksExecutor<Integer, Integer> newTaskExecutor(IExecutorService executorService) {
  DependentTasksExecutorConfig<Integer, Integer> config = new DependentTasksExecutorConfig<Integer, Integer>(
      new HazelcastExecutionEngine<Integer, Integer>(executorService), new SleepyTaskProvider());
  return new DefaultDependentTasksExecutor<Integer, Integer>(config);
}

private static class SleepyTaskProvider implements TaskProvider<Integer, Integer> {

  public Task<Integer, Integer> provideTask(final Integer id) {
    return new HazelcastTask(id);
  }
}

Step 4: Execution

Open three terminals and execute the following :

Terminal #1

 mvn test-compile exec:java -Djava.net.preferIPv4Stack=true -Dexec.mainClass="com.github.dexecutor.hazelcast.Node" -Dexec.classpathScope="test" -Dexec.args="s node-A"

Terminal #2

 mvn test-compile exec:java -Djava.net.preferIPv4Stack=true -Dexec.mainClass="com.github.dexecutor.hazelcast.Node" -Dexec.classpathScope="test" -Dexec.args="s node-B"

Terminal #3

 mvn test-compile exec:java -Djava.net.preferIPv4Stack=true -Dexec.mainClass="com.github.dexecutor.hazelcast.Node" -Dexec.classpathScope="test" -Dexec.args="m node-C"

Here is the Execution

Here is the Complete Node Implementation