Tuesday, May 31, 2011

Maven Releases - To Run or Not to Run (tests)

Firstly, I think Maven is great (yeah, I said it). Yes, it can be a bit of a pain in the backside at times, but who/what isn't? Now that I've got that off my chest, a quick look at Maven releases; specifically, running tests during Maven releases.

Maven has a release plugin that can be used to produce and publish release artifacts. When you have a Maven-based project, simply running;

mvn release:prepare release:perform

will present a series of questions and end up producing and publishing release artifacts for your project(s). Leaving the details of the 'prepare' and 'perform' phases out of this write-up (they are sufficiently documented elsewhere), this post will dwell on running tests during these releases.

When one runs the above, Maven runs all tests by default. This is because both phases run goals that execute tests: 'prepare' by default executes the 'clean verify' goals, while 'perform' by default executes 'deploy'. This means if you have hundreds of tests, they will run twice.

While this seems completely normal, it can be argued that once preparation has been done, i.e. all source code compiled, tests passed, SCM tags created, it really isn't necessary to run the same tests again. While there are compelling reasons why you might still want to run them, this was one of the things that was bugging us when we ran Maven releases. Especially when you run prepare and perform together, it seems reasonable to be able to avoid running the tests a second time. When a project has hundreds of integration/functional tests that get executed as part of a release, this means a lot of time spent running tests that we already know have passed.

And it so happens the release plugin does provide the capability to avoid tests running during the perform phase if one wants to do so. It's all down to the plugin configuration.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.1</version>
    <configuration>
        <goals>deploy -Dmaven.test.skip=true</goals>
    </configuration>
</plugin>

The above configuration makes sure that tests don't run as part of the 'perform' phase of your Maven release, thereby saving you however many minutes it takes to run them. While this might not be everyone's preferred choice (not running tests in the perform phase), the plugin is configurable enough to play how we want it to play when the second test run is seen as avoidable.
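If changing the pom isn't desirable, the same effect can (as far as I know) be had ad hoc from the command line, since the release plugin exposes an 'arguments' parameter that is passed on to the forked build run by 'perform';

mvn release:prepare release:perform -Darguments="-Dmaven.test.skip=true"

Worth noting that -Dmaven.test.skip=true skips both test compilation and execution, while -DskipTests (Surefire 2.4+) compiles the tests but skips running them.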

Thursday, February 24, 2011

Multiple Web Applications - One Spring Context

It's normal to have multiple web applications deployed as a complete solution that serves a product or enterprise. And if these web applications use Spring, it's normal to see a separate Spring context associated with each of these web applications. This is because although Spring beans are in fact singletons, they are not "single"tons per VM under multiple class-loaders. When multiple web applications are deployed (typically in a J2EE container or servlet container), each web application has its own class-loader. And when the Spring beans under each of these web applications get loaded, they are singletons only within that class-loader.

While this is perfectly acceptable, it would be nice (and beneficial) to have these Spring beans be singletons across all web applications. This would mean the web applications share the same Spring context. Spring does provide this capability out of the box, and it is easily configurable.

One pre-requisite to enable this capability, however, is to have all web applications packaged and deployed as an EAR. This means the deployment container needs to be a J2EE container (like WebLogic, JBoss, GlassFish etc.) and not simply a servlet container like Tomcat.

The key to this deployment model is the class-loader hierarchy that we get from an EAR deployment. An EAR, which would typically contain multiple web applications (WAR files) and a shared application library (with multiple JAR files), works on a class-loader hierarchy. Below the standard system/bootstrap class-loaders of the application server, an EAR has a top-level class-loader (call this the application class-loader) and a bunch of class-loaders as children; these child class-loaders are associated with the web applications. And it's standard in an EAR deployment to package all JAR files shareable by all web applications under a directory to which the class-path is set via the META-INF/MANIFEST.MF file. All classes loaded by the application class-loader are visible to the web application class-loaders. But if a web application contains any JAR files under its own lib directory, they won't be accessible to the application class-loader, and certainly not to the other web applications within the EAR.

So with regard to sharing Spring beans, this means we can place the JAR files for all the Spring beans in the shared application library. Since these get loaded by the application class-loader, each web application has access to them, hence resulting in a shared Spring context - not really. For Spring, this alone is not enough to achieve a shared context.
A web application is Spring-configured through its web.xml, either using a ContextLoaderListener or a ContextLoaderServlet (depending on the servlet API implemented by your container). Typically we'd use the 'contextConfigLocation' parameter to specify the location of our Spring bean configurations;


<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/my-application-servlet.xml</param-value>
</context-param>
Assuming the above Spring configuration consists of beans specific to the web application's concerns (validators, controllers), to have a bunch of beans share a single Spring context we use the 'locatorFactorySelector' and 'parentContextKey' parameters. We simply add the following to our web.xml(s);


<context-param>
    <param-name>locatorFactorySelector</param-name>
    <param-value>classpath:common-beans.xml</param-value>
</context-param>

<context-param>
    <param-name>parentContextKey</param-name>
    <param-value>commonContext</param-value>
</context-param>
The above means you would have a file called common-beans.xml on the classpath of the web application, with the following bean configured;



<beans>
    <bean id="commonContext"
          class="org.springframework.context.support.ClassPathXmlApplicationContext">
        <constructor-arg>
            <list>
                <value>classpath:service-beans.xml</value>
            </list>
        </constructor-arg>
    </bean>
</beans>
The above configuration defines the 'commonContext' bean. This bean is a 'ClassPathXmlApplicationContext', a convenience class used to build application contexts. The list passed into the constructor is a list of configuration file locations, from which the bean definitions are loaded.
With a configuration like the one above in each web application of an EAR, all web applications will share the same beans configured through the 'commonContext' bean.
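For illustration, here's a minimal sketch (with a hypothetical FooService bean assumed to be defined in service-beans.xml) of how code in any of the WARs would get hold of a shared bean; a lookup on the web application's own context falls through to the shared parent:

// inside a servlet of any web application in the EAR
WebApplicationContext ctx =
        WebApplicationContextUtils.getRequiredWebApplicationContext(getServletContext());

// 'fooService' is not defined in this WAR's own context; it is resolved
// from the shared parent context built from the 'commonContext' bean.
FooService fooService = (FooService) ctx.getBean("fooService");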

This approach can bring a few advantages to your deployment architecture, maintenance and development;

From a deployment architecture point of view,
  • If using Hibernate in the mix of things, we can benefit from a single SessionFactory. If caching is configured, one cache serves all web applications, saving heap usage.
  • Reduces the number of classes to load, saving on your permgen space.

From a maintenance and development point of view,
  • Common beans can be configured in one place, in one configuration file used by all. There's no need to duplicate the bean declarations across multiple Spring configuration files.

As mentioned previously, this only works if the deployment model is EAR-based. If the web applications are deployed as individual WAR files, there's no class-loader hierarchy with which to achieve the above.

While the decision on whether 'to EAR or not' is a separate set of notes, if EAR packaging is decided on for an application, Spring does provide the capability to reap benefits from the decision.

Useful links on the subject;
http://download.oracle.com/javaee/1.4/tutorial/doc/Overview5.html
http://java.sun.com/developer/technicalArticles/Programming/singletons/

Saturday, January 29, 2011

Spring transactions readOnly - What's it all about

When using Spring transactions, it's stated that the 'readOnly' attribute allows the underlying data layers to perform optimisations.
When using Spring transactions in conjunction with Hibernate via the HibernateTransactionManager, this translates to an optimisation applied on the Hibernate Session. When persisting data, a Hibernate session works based on the FlushMode set on it. A FlushMode defines the flushing strategy that synchronises database state with session state.
Let's look at a class with transactions demarcated as follows;
@Transactional(propagation = Propagation.REQUIRED)
public class FooService {
    
    public void doFoo(){
        //doing foo
    }

    public void doBar() {
        // doing bar
    }
}
The class FooService is demarcated with a transaction boundary, and all operations in FooService will have the transactional attributes specified by the @Transactional annotation at the class level. An operation within a transaction (or starting a transaction) sets the session FlushMode to AUTO and is identified as a read-write transaction. This means the session state is flushed from time to time to make sure the transaction doesn't suffer from stale state.
However, if doBar() in the above example simply performed some read operations through Hibernate, we wouldn't want Hibernate trying to flush the session state. The way to tell Hibernate not to do this is through the FlushMode. In this instance the above example turns out as follows;
@Transactional(propagation = Propagation.REQUIRED)
public class FooService {

    public void doFoo() {
        // do foo
    }

    @Transactional(readOnly = true)
    public void doBar() {
        // do bar
    }
}
The above change to doBar() forces the session FlushMode to be set to NEVER. This means Hibernate won't try to synchronise the session state within the scope of the session used in this method. After all, it would be a waste to perform session synchronisation for a read operation. One thing to note about this configuration is that we are still running within a transaction, by virtue of the @Transactional annotation.
However, this only holds if doBar() is called by a client that has not already initiated or participated in a transaction. In other words, if doBar() is called within doFoo() (which has started a transaction), then the readOnly aspect won't have any effect on the FlushMode. This is because @Transactional uses Propagation.REQUIRED as the default propagation strategy, so in this instance doBar() participates in the transaction started by doFoo(), thereby not overriding any of its transaction attributes.
If for some reason doBar() still needs readOnly applied within an existing read-write transaction, then the propagation strategy for doBar() needs to be set to Propagation.REQUIRES_NEW. This forces the existing transaction to be suspended and a new transaction to be created, which also sets the FlushMode to NEVER. Once it exits the new transaction, it continues the first transaction with the FlushMode set back to AUTO (following the transaction propagation model). I can't think of a scenario that would need such a configuration though.

While the readOnly attribute can also provide hints to the underlying JDBC drivers, where supported, the implications of this attribute vary based on the underlying persistence framework in use (Hibernate, Spring JPA, TopLink or even raw JDBC). The above details aren't necessarily true for all approaches; they are only valid in Spring+Hibernate land.

Another way of looking at all of this is to ask 'do we really need to spawn a transaction and then mark it as read-only for a pure read operation at all?'. The answer is probably 'no', and yes, it's best to avoid transactions for pure read operations. However, readOnly can come in handy depending on the approach one picks to demarcate transactions. For example, take a service implementation that has its transactional properties defined at class level, configured for read-write transactions. If the majority of the operations of this service share these transactional properties, it makes sense to apply them at class level. Now if a couple of operations are actually read operations, the readOnly attribute comes in handy for configuring just those operations as @Transactional(readOnly = true). This is still not perfect, given the premise of not creating a transaction for a read operation. So for this example, another configuration might be @Transactional(propagation = Propagation.SUPPORTS, readOnly = true), which means a new transaction will not be created if one does not exist (relatively more efficient) while still having readOnly applied for other optimisations (possibly in the drivers).
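As a sketch of that last configuration (the method and DAO names here are made up for illustration);

@Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
public List<Foo> findAllFoos() {
    // joins an existing transaction if the caller started one,
    // otherwise runs without creating a new transaction
    return fooDao.findAll();
}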

It all boils down to the fact that Spring offers the readOnly attribute as an option. But the when, where and why of using it is entirely up to the developer, depending on the design to which an application is built. So it is quite useful to know what the 'readOnly' attribute is all about in order to put it to proper use.

Monday, January 24, 2011

Too many open files

I've had my share of dreaded moments with the 'Too many open files' exception. When you see 'java.io.IOException[java.net.SocketException]: Too many open files' it can be somewhat tricky to find out what exactly is causing it. But not so tricky if you know what needs to be done to get to the root cause.

First up though, what's this exception really telling us?
When a file/socket is accessed on Linux, a file descriptor is created for the process performing the operation. This information is available under /proc/[process_id]/fd. The number of file descriptors allowed is, however, restricted. Now if a file/socket is accessed and the stream used to access it is not closed properly, we run the danger of exhausting the limit of open files. This is when we see 'Too many open files' cropping up in our logs.
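A quick way to keep an eye on the count against the limit (assuming a Linux box) is;

shell>ls /proc/[process_id]/fd | wc -l
shell>ulimit -n

The first gives the number of descriptors currently held by the process, the second the soft limit applicable in the current shell.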
The fix will vary based on what the root cause turns out to be. It's easy if it's an error in your own code base (simply because you can fix your own code easily - at least in theory) and harder if it's in a third-party library, or worse, the JDK (not so bad if it's documented though).

So what do we do when this is upon us?
As with any other thing, find the root cause and cure it.
In order to find the root cause of this exception, the first thing to find out is which files are open and how many of them there are. The 'lsof' command is your friend here;
shell>lsof -p [process_id]
(To state the obvious the process id is the pid of your java process)
The output of the above can be 'grep'd to find out which files are repeated and keep increasing in number as the application runs.
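For example, one way (of many) to see which files appear most often in that output is something along the lines of;

shell>lsof -p [process_id] | awk '{print $NF}' | sort | uniq -c | sort -rn | head

which counts the occurrences of each file name (the last column of lsof's output) and lists the biggest offenders first.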

Once we know the file(s), and if that file/socket is accessed within our own code, it's a no-brainer: go fix the code. This could be something as simple as not closing a stream properly, like so;
public void doFoo() throws IOException {
    Properties props = new Properties();
    // the stream returned by getResourceAsStream() is never closed
    props.load(Thread.currentThread().getContextClassLoader().getResourceAsStream("file.txt"));
}
The stream above is not closed, resulting in an open file handle against file.txt.
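A fixed version (this was the Java 6 era, so a finally block rather than try-with-resources) would be along these lines;

public void doFoo() throws IOException {
    Properties props = new Properties();
    InputStream in = Thread.currentThread().getContextClassLoader()
            .getResourceAsStream("file.txt");
    try {
        props.load(in);
    } finally {
        if (in != null) {
            in.close(); // releases the underlying file descriptor
        }
    }
}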
If it's third-party code and you have access to the source, you'll have to put yourself through the somewhat painful process of finding the buggy code in order to work out what can be done about it.
In some situations, third-party software will require increasing the hard and soft limits applicable to the number of file descriptors. This can be done in /etc/security/limits.conf by changing the hard and soft values like so;
*   hard   nofile   4096
*   soft   nofile   4096
Usage of '*' (all users) is best replaced by the specific user or group to which the change applies.
Some helpful information about lsof

Saturday, September 11, 2010

Not one thread, but many threads to do my work

It's often the case that our code executes in a single thread of execution. Take for example a web application running on a servlet container. Sure, the container is multi-threaded, but as far as one request is concerned, it's one thread that executes all the code in its path to complete the request.
This is the norm and is usually sufficient for most cases. Then there are certain use cases where, once the application is put together (or while designing it, if one pays good attention to design, that is), you feel that parts of the logic are simply waiting for their turn of thread execution although they are perfectly capable of running without any dependency on each other.

Take for example a search solution that searches multiple data sources in order to present the results to the end-user. Assume that the data sources are one or more relational databases, an index of some sort (think Lucene) and an external search service. The search solution is expected to take in a user's search term, search across all of these data sources and provide an aggregated set of results. And that's exactly what the code would do: search each data source in turn, perhaps wrap the results in a common value object, and aggregate them so that they are ready for the user. So in the typical single-threaded execution path, the thread would execute each search one at a time and finally run them through the aggregation. Concurrently speaking though, this seems a bit wasteful. The searches against the data sources have no dependency on each other. The only dependent element in this execution path is the aggregation, which needs all the results to do its magic. This means if we run all the searches concurrently and then pass the results to the aggregator, we have a good chance of saving some time.
The first thing that dawns on one about this approach is - multiple threads. We want the search to spawn multiple threads, each searching one of the data sources, and pass the results to the aggregation logic; hence achieving concurrent execution of the searches, the benefits of which can be very noticeable.

Messing around with multiple threads, however, is something most avoid doing due to the inherent complexities. Thread pooling, thread termination, synchronisation, deadlocks and resource sharing are a few of those complexities that a developer focused on application development might tend to stay away from. But with the inclusion of the concurrency utilities in the JDK, things got simpler. Thanks to Doug Lea, the java.util.concurrent package - which includes out-of-the-box thread pooling, queues, mutexes, efficient thread initialisation and destruction, efficient locking etc - is basically a comprehensive set of building blocks for application developers to easily introduce concurrency into the application flow.

While the above use case is an ideal candidate for adopting a concurrent execution model, the simple framework at https://code.google.com/p/concurrent-it/ is a simple usage of the API showing how to achieve concurrency. This was put together to run tasks concurrently for various use cases in a recent project. The code is quite straightforward, and the test provided along with it does nothing but compute a Fibonacci series and factorials. The computations, however, are deliberately heavy so that the difference in response times is noticeable.

Touching on the gist of it, the main building blocks of the java.util.concurrent package employed are briefly discussed here.

ExecutorService:
An ExecutorService provides a means of initiating, managing and shutting down threads. The JDK provides several implementations of this, and a factory utility called Executors provides the necessary utility methods to create them. The API documentation of the different types of ExecutorService is a good point of reference (as always). Typically, gaining access to an ExecutorService would be as follows;

ExecutorService execService = Executors.newCachedThreadPool(); 
This would give you an ExecutorService which has a cached thread pool.

Callable:
The Callable interface is similar to the Runnable interface (in the thread API), so implementors of this interface can run on a different thread. The differentiator from Runnable is that a Callable returns a value and may throw an exception (if there's one to be thrown, of course).

public class Processor implements Callable<AsyncResult> {

    // implementing the contracted method
    public AsyncResult call() throws ConcurrentException {
        // implementation placed here
        return new AsyncResult(); // placeholder result
    }
}

Like the run() method of the Runnable interface, the Callable interface has a call() method that implementors are contracted to implement. As per the example above, our implementation returns an AsyncResult when call() is invoked, and call() is invoked by the ExecutorService. This implementation can be thought of as a task that you want executed in its own thread.

Putting it all together:
Now we've got our task that we want invoked in a separate thread (the Callable implementation) and we've got an ExecutorService that dishes out threads and manages their execution. To kick it all off, all we have to do is;

List<Processor> tasks = // build a new list of Processor objects

List<Future<AsyncResult>> results = execService.invokeAll(tasks);


The invokeAll(Collection) method takes care of submitting all tasks (Processor objects in our case) for execution across multiple threads concurrently. As mentioned above, the call() method is invoked to kick off the execution on each of those threads.

Future:
The next important thing about the invokeAll(Collection) call is its return value - Future objects. A Future is a holder of the result of an asynchronous computation. This is quite a powerful concept whereby we can take control of the results of multiple threads. A simple get() method provides the means of retrieving the desired result. As listed above, the type of the result is whatever the call() method returns. The get() method will block if necessary. Thus the result of each task is accessible and can be put to use thereafter.

Putting it all together:
A quick look at how all of these lines relate to the main thread of execution and the multiple threads spawned by the ExecutorService;

List<Processor> tasks = // build a new list of Processor objects // main thread
List<Future<AsyncResult>> results = execService.invokeAll(tasks); // main thread spawns multiple threads

for (Future<AsyncResult> result : results) {
    AsyncResult res = result.get(); // main thread; would block if necessary
    // do something with the result
}
That's it. Using these building blocks from the API, we have a nice, straightforward way to invoke our work concurrently. While the usage of the API is simple, applying concurrency needs to be done carefully. It needs to be thought about while designing the application, and load and stress testing of thread and resource utilisation is a must. Also, once implemented, a run through a profiler to see how each thread behaves should be on the list of things to do as well. Worth noting too is that the tooling support (i.e. debugging) is not quite mature yet for the concurrent programming model.

The example contains a test that demonstrates the usage of the simple framework, accessible here - https://code.google.com/p/concurrent-it/. The test in the example shows perhaps the easiest advantage of using the java.util.concurrent package, which is speed-up through parallelism.
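For completeness, here's a minimal, self-contained sketch (class and data source names are made up) of the search use case discussed above, using nothing but the JDK;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentSearch {

    // one task per data source; each runs on its own thread
    static class SearchTask implements Callable<List<String>> {
        private final String dataSource;

        SearchTask(String dataSource) {
            this.dataSource = dataSource;
        }

        public List<String> call() throws Exception {
            // a real implementation would query the data source here
            List<String> results = new ArrayList<String>();
            results.add("result from " + dataSource);
            return results;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService execService = Executors.newCachedThreadPool();
        try {
            List<SearchTask> tasks = new ArrayList<SearchTask>();
            tasks.add(new SearchTask("database"));
            tasks.add(new SearchTask("lucene-index"));
            tasks.add(new SearchTask("external-service"));

            // all searches run concurrently; invokeAll blocks until all are done
            List<Future<List<String>>> futures = execService.invokeAll(tasks);

            // aggregation happens back on the main thread
            List<String> aggregated = new ArrayList<String>();
            for (Future<List<String>> future : futures) {
                aggregated.addAll(future.get());
            }
            System.out.println(aggregated);
        } finally {
            execService.shutdown(); // let the pool's threads terminate
        }
    }
}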

Tuesday, March 16, 2010

ThreadLocal - What's it for

ThreadLocal provides a mechanism for storing objects for the current thread of execution. What this means is that you can stick an object into a ThreadLocal and expect it to be available at any point during the executing thread. This comes in handy when you have to make some object available between different layers (i.e. presentation->business->data access) without making it part of the API. For example, implementations of cross-cutting concerns such as security and transactions make use of ThreadLocal to access transactional and security contexts between layers, without cluttering the application APIs with non-application-specific details.


What happens in the ThreadLocal
Each thread holds a custom hash map (ThreadLocalMap) whose keys are ThreadLocal instances and whose values are the stored objects. This custom map is internal and is not exposed as part of the API. So it's basically a hash map carried along with the current thread of execution. That's all there is to it, in a nutshell, as to what goes on inside ThreadLocal.
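A tiny stand-alone example makes the per-thread isolation obvious; each thread reads back only what it stored itself:

public class ThreadLocalDemo {

    private static final ThreadLocal<String> NAME = new ThreadLocal<String>();

    public static void main(String[] args) {
        Runnable task = new Runnable() {
            public void run() {
                NAME.set(Thread.currentThread().getName());
                // prints this thread's own value, never the other thread's
                System.out.println(NAME.get());
            }
        };
        new Thread(task, "thread-1").start();
        new Thread(task, "thread-2").start();
    }
}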


How to make use of a ThreadLocal
A ThreadLocal is generally used through a custom class which holds it. The custom class has a static field of type ThreadLocal and generally exposes static accessors to store and retrieve objects to and from it.
Assume you want to store a FooContext object in the ThreadLocal as part of your application.
Following is an example of a basic implementation;

public class ThreadLocalHolder {

    private static ThreadLocal<FooContext> store = new ThreadLocal<FooContext>();

    public static void set(FooContext context) {
        store.set(context);
    }

    public static FooContext get() {
        return store.get();
    }
}

To see how this is used, consider a business layer object called FooBusinessObject and a DAO called BarDao. You want some method in the business object to initialise a FooContext, do some work and call the DAO. You then want to be able to access the FooContext you created without making it part of the DAO's API.

public class FooBusinessObject {

    private BarDao barDao;

    public void doFoo() {
        // Some logic that builds a FooContext
        FooContext context = new FooContext();

        // sticks the context into the ThreadLocal.
        ThreadLocalHolder.set(context); 

        // Do more work....
        barDao.processBar();
    }
}

public class BarDao {

    public void processBar() {
        // Get the context from the thread-local
        FooContext context = ThreadLocalHolder.get();

        // do some work...
    }
}  

That's all that's needed to store and retrieve objects in the ThreadLocal. ThreadLocal is used in many frameworks and is a popular choice for sharing cross-cutting objects during the execution of a thread.


Is there any danger involved?
There are potential dangers you could face by using ThreadLocals though. However, these dangers are not caused by merely using ThreadLocal, but by not using it properly (as with many other things as well). The issues become apparent when we work in a managed environment, such as an application server that maintains a thread pool dishing out threads for incoming requests. One request gets hold of a thread, finishes the work involved, and the application server then reclaims the thread back into the pool. Now, if you stuck an object into the ThreadLocal, it doesn't get removed when the thread returns to the pool unless it is explicitly cleaned out. So you've now leaked the state of a finished request into another executing thread.
So, as part of using this properly, the ThreadLocal needs to be cleared out before the execution ends. This is ideally done in a finally block; rather than just setting the value to null, ThreadLocal's remove() method takes the entry out for the current thread altogether.
The addition below to the ThreadLocalHolder exposes a clear method that cleans out the ThreadLocal for this thread.

public class ThreadLocalHolder {

    private static ThreadLocal<FooContext> store = new ThreadLocal<FooContext>();

    public static void set(FooContext context) {
        store.set(context);
    }

    public static FooContext get() {
        return store.get();
    }

    public static void clear() {
        store.remove(); // removes this thread's entry entirely
    }
}
And this is cleared as follows;

public class FooBusinessObject {

    private BarDao barDao;

    public void doFoo() {
        try {
            // Some logic that builds a FooContext
            FooContext context = new FooContext();

            // sticks the context into the ThreadLocal.
            ThreadLocalHolder.set(context);            

            // Do more work....
            barDao.processBar();
        }
        finally {
            // clean up
            ThreadLocalHolder.clear();
        }
    }
}

ThreadLocal is a very useful technique for providing per-thread contextual information. Used properly, it's a valuable method that will come in handy where applicable.

Friday, March 5, 2010

Spring Security - A peek under the hood

When I was introduced to Spring Security last year, I wanted to know what goes on under the hood. So I went to the best source for the information I needed - the source code. While I did my read-through and run-time step-through of the framework, I made notes, partially in my notebook and partially in my head (not ideal, I know). This knowledge has since helped me with some customisation I'm currently working on, so I thought I should do a brain (and notepad) dump so that I can come back to it if needed again. So here goes..

Basics on configuring spring security
Configuring spring security for a web-application;
- Add the DelegatingFilterProxy filter in your web.xml.
Configure this filter with the filter name 'springSecurityFilterChain'. The reason for this is discussed briefly later in this rant.
- Enable the filter for all requests. You can choose certain URL patterns not to go through the filter.
Even if you don't do this, the Spring Security configuration allows skipping security for certain URL patterns.
- Create a separate Spring configuration which holds all the security configuration.
It's cleaner and easier to maintain the security configuration if it's kept separate.
- Make sure the Spring Security configuration is loaded as part of the context.
- Configure the Spring Security configuration to suit your requirements.
At this point I'm going to leave the configuration details out of these notes. Instead, we take a turn towards some of the internals of what goes on under the hood of the framework.

The gist of it
The concept of the framework is to;
define a set of roles (or authorities),
define a set of resources (in the form of URL patterns) and
apply restrictions to the resources based on the permissions defined. The permissions are based on the roles (or authorities) applicable to a particular user and whether those roles (or authorities) have access to a resource or not.
This could be thought of as the 50k-foot view of what Spring Security provides. Obviously there's much more that the framework provides, but I'd like to think that this is the gist of it.
Now, getting into a bit of detail...

Some analogy to start off
At the core of Spring Security, servlet filters and the HttpSession are employed to work its magic. If accessing a secured web resource is analogous to going into a secured room of a building through a few doors, and coming back to the starting point through the same route, then the filters would be the doors. Some action is performed going through and coming out of each of these doors, and these actions remember what happened at the doors by holding it in some sort of log record - this would be the HttpSession. Finally, a check is done on whether the user is allowed to get in or not, and the user is sent back along the same route with a success or failure response.
Hence, IMHO, to understand what Spring Security does, it's useful to know what these filters are, and how and what they do.

What's the DelegatingFilterProxy
The entry point to the Spring Security framework is one filter - DelegatingFilterProxy. The filter itself doesn't have a lot going on. As the name describes, it's just a 'delegating' filter; on initialisation it looks for a target bean to delegate the filter chain to. DelegatingFilterProxy supports an init-param called 'targetBeanName', which defines the bean it should delegate to. This is of course optional to configure; by default it uses the filter-name given in the filter config for this filter. Internally it delegates to the FilterChainProxy bean, which it finds by means of the filter name - hence the filter name needs to be 'springSecurityFilterChain'. Once the delegate is worked out, the delegation happens..

What happens when the delegation begins
The FilterChainProxy gets to work when the delegation begins. It uses a pre-initialised map which holds information about the filters to execute, and the filter chain continues by invoking all the filters defined in that map. Spring Security registers a pre-defined set of filters for the chain by default. This can be changed if needed, but if the defaults are accepted, filters including the following will be in the chain of execution;

  • FilterChainProxy
  • SecurityContextPersistenceFilter
  • SecurityContextHolderAwareRequestFilter
  • ExceptionTranslationFilter
  • ChannelProcessingFilter
So the obvious curiosity at this point would be to know who/what initialises this map of default filters. The answer lies in the HttpSecurityBeanDefinitionParser.parse(Element element, ParserContext parserContext) method, which is part of the Spring Security configuration components. This rather long method registers the above filters (as well as the FilterChainProxy itself).
Now that the FilterChainProxy has kicked off the chain, each filter performs its responsibilities. The internals of all these filters are best understood by looking at them. However, there's one particular filter worth discussing - the SecurityContextPersistenceFilter. This filter has a special place in the framework. Before diving into it though, there's a more important component to pay attention to: the SecurityContext.

What is the SecurityContext
The SecurityContext is a simple yet powerful interface that provides a mechanism to store and access authentication details during an executing thread. Spring Security populates and depends on the SecurityContext at its core. When a person is authenticated, their authentication details are held in the SecurityContext. The simple interface sets and returns an Authentication instance, which extends the characteristics of Java security's Principal. The application can decide what gets stored in the Authentication instance.
An explanation of the SecurityContext isn't complete without mention of the SecurityContextHolder.
As the definition states, the SecurityContext represents the minimum security information associated with the current thread of execution. That minimum security information is the Authentication object it exposes, as discussed above. The SecurityContextHolder holds the responsibility of attaching a SecurityContext to the current thread of execution. It does this by employing a ThreadLocal. The SecurityContextHolder provides (as of the time of writing) three modes/strategies for attaching a SecurityContext to the current thread of execution;
  • ThreadLocal
  • InheritableThreadLocal
  • Global
ThreadLocal; as it suggests, uses a standard ThreadLocal from the JDK. ThreadLocalSecurityContextHolderStrategy is a standard ThreadLocal-based implementation that holds the SecurityContext for the current thread. This is also the default strategy used if none is explicitly provided.
InheritableThreadLocal; as the API defines it, is an InheritableThreadLocal-based implementation (InheritableThreadLocalSecurityContextHolderStrategy). It provides sharing and overriding of values between parent and child threads where applicable.
Global; is a static-field-based implementation (GlobalSecurityContextHolderStrategy). Hence the SecurityContext held by this strategy is shared by all instances in the JVM. As the code documentation states, this is generally used for rich client applications.
The SecurityContextHolder employs one of these strategies to attach the SecurityContext to the thread. It provides an API to create, store and clean up the SecurityContext for whichever strategy is used. And the strategy employed is initialised only once (unless someone explicitly sets the strategy) for the JVM.
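As a quick illustration (assuming the Spring Security 3 package layout, org.springframework.security.core.*), application code typically reads the current user's details off the holder like so;

// anywhere on the thread serving the authenticated request
Authentication auth = SecurityContextHolder.getContext().getAuthentication();
if (auth != null) {
    String username = auth.getName(); // the Principal's name
}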
Now that the SecurityContext is dealt with, we shift attention back to the SecurityContextPersistenceFilter.

Back to the SecurityContextPersistenceFilter
As discussed above, this filter is invoked in the filter chain process, and importantly, is invoked first. It's important that it is, as it sets up the core needed for the others to do their work. This filter makes sure the SecurityContextHolder has a SecurityContext attached to the thread.
It also uses the HttpSession (via an HttpSessionSecurityContextRepository) to hold the SecurityContext in the user's HttpSession, so that the SecurityContext can be shared between requests. The repository implementation that manipulates the session creates a session if one does not currently exist, based on the configuration attribute 'allowSessionCreation', which defaults to true. As a result, you might end up with a lot of HttpSession objects in your heap. This usually is not a problem, but if heap space is a concern then you might want to consider turning off new session creation by setting 'allowSessionCreation' to 'false'. However, this opens up several other impacts, and one would have to ensure that anything using the SecurityContextHolder does not depend on the SecurityContext being persisted between web requests. Either way, applications should ensure that only the minimum data is held in the Authentication instances so that the session won't get too heavy.
So this filter focuses on persisting the SecurityContext through the SecurityContextHolder for the rest of the filters/components to do their work. One might raise a concern about the approach of storing these values in the HttpSession and in thread-locals. As discussed above, choosing not to store it in the HttpSession is an option, but one that should be used cautiously. As for the ThreadLocal, concerns would pile up in situations where thread pools are employed to serve web requests in an application server, regarding the cleansing of these ThreadLocals. Spring Security takes care of this by ensuring that ThreadLocals are only used within the lifetime of the thread serving a request; the populated ThreadLocal is cleaned up before the thread returns to the pool.
So, one by one in the chain, these filters perform their duties to enforce security for the application. Understanding these filters makes it easy to trace down problems and provide your own extensions. May the source be with you!!!