woensdag 30 oktober 2013

Trailing slashes in Jersey REST WebResources

For the REST purists among us, a resource ending with a slash is not the same as a resource without one. However, the difference between url://xxxx.com/a and url://xxxx.com/a/ is usually ignored: the trailing slash is simply dropped.

When you try to map a resource with a trailing slash using the standard @Path annotation, both methods get mapped to the same endpoint, causing an exception.

Glassfish answers during the deployment with:
SEVERE: Following issues have been detected: WARNING: A resource model has ambiguous (sub-)resource method for HTTP method GET and input mime-types as defined by @Consumes and @Produces annotations at Java methods ... These two methods produces and consumes exactly the same mime-types and therefore their invocation as a resource methods will always fail.
In Glassfish, which uses Jersey under the hood, the slash is also ignored. But there is a workaround: you can use a regex in the @Path annotation to map resources ending with a slash. In the example at the end, I use a regex to map the paths to the methods. But there is something you should know about the path-matching algorithm. The mapping algorithm uses the following rules:
The JAX-RS specification has defined strict sorting and precedence rules for matching URI expressions and is based on a most specific match wins algorithm. The JAX-RS provider gathers up the set of deployed URI expressions and sorts them based on the following logic:
  1. The primary key of the sort is the number of literal characters in the full URI matching pattern. The sort is in descending order.
  2. The secondary key of the sort is the number of template expressions embedded within the pattern, i.e., {id} or {id : .+}. This sort is in descending order.
  3. The tertiary key of the sort is the number of nondefault template expressions. A default template expression is one that does not define a regular expression, i.e., {id}.
In the following example, test2 is matched before test1 because its pattern is more specific (it contains an extra template expression). We check the trailing slash first; when that fails, the request falls back to test1.
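A minimal sketch of what test1 and test2 might look like (the class name and return values are mine; only the @Path patterns matter):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class SlashResource {

    // test1 matches /test without a trailing slash and acts as the fallback.
    @GET
    @Path("/test")
    @Produces(MediaType.TEXT_PLAIN)
    public String test1() {
        return "without trailing slash";
    }

    // The extra template expression makes this pattern more specific, so it is
    // sorted before test1 and tried first; its regex only matches when the
    // trailing slash is really present.
    @GET
    @Path("/test{slash: /}")
    @Produces(MediaType.TEXT_PLAIN)
    public String test2() {
        return "with trailing slash";
    }
}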
 

woensdag 28 augustus 2013

MySQL YEARWEEK() function in Java (ISO 8601)

It is not unusual to use MySQL's YEARWEEK() function to create identifiers for weeks within years. The problem is well known: you need to store the week number for a certain year, and storing only the week number decouples your data from the real year. You need to store the year too, and that's where YEARWEEK() kicks in. But beware, there are some pitfalls!

MySQL

The YEARWEEK() function gives us something like 201304 for the fourth week of 2013. But that's only half the story. Problems arise when you want to know the week number for 31.12.2012. This could be 201253 or 201301, depending on how you count: the week can start on a Monday or a Sunday, and the first week of the year may or may not be required to contain at least four days.

There is an agreed standard for the week calculation: ISO 8601. The first week of the year must contain at least four days, and weeks start on a Monday. See http://en.wikipedia.org/wiki/ISO_8601 for more information.

Unfortunately, MySQL uses mode 0 for the week calculation by default, which is not the ISO 8601 norm. You must set MySQL to mode 3 (via default_week_format) or pass the mode as a parameter to the YEARWEEK() function.

Mode | First day of week | Range | Week 1 is the first week …
0    | Sunday            | 0-53  | with a Sunday in this year
1    | Monday            | 0-53  | with more than 3 days this year
2    | Sunday            | 1-53  | with a Sunday in this year
3    | Monday            | 1-53  | with more than 3 days this year
4    | Sunday            | 0-53  | with more than 3 days this year
5    | Monday            | 0-53  | with a Monday in this year
6    | Sunday            | 1-53  | with more than 3 days this year
7    | Monday            | 1-53  | with a Monday in this year

So, the following problems are solved:

SELECT YEARWEEK('2012-12-31'); gives us 201253 (the default mode 0).
SELECT YEARWEEK('2012-12-31', 3); gives us 201301, which adheres to ISO 8601.

Ok, this problem seems solved. The database now has the dates correct. But if I want to query the week, I cannot always rely on the database to calculate the correct week number for me. Sure, I could do round trips to the server, sending a date and receiving the correct week number, but that's slow.

Java

Let's try to rebuild YEARWEEK() in Java using the ISO 8601 norm. The Calendar class is not the solution we're looking for. Sure, you can get the week, but you can't get the correct year for that week when you're in ISO 8601 mode. For example, for 2012-12-31 you get week 01 but year 2012, resulting in 201201 for the last week of the year, which is of course incorrect!

The Joda-Time library helps us out and provides the solution. For legacy reasons, the API works with a Calendar object. Joda-Time gives us the correct week of the year and the correct year for that week (even though the calendar year is different).

I wrote a test-class with the Java-method to generate the values.
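A minimal sketch of such a method, assuming Joda-Time on the classpath (class and method names are mine), could look like this:

import java.util.Calendar;

import org.joda.time.DateTime;

public final class YearWeek {

    // ISO 8601 equivalent of MySQL's YEARWEEK(date, 3), e.g. 201301 for 2012-12-31.
    public static int yearWeek(final Calendar calendar) {
        final DateTime dateTime = new DateTime(calendar.getTimeInMillis());
        // getWeekyear() is the week-based year, which can differ from getYear()
        // at the start or end of a calendar year.
        return dateTime.getWeekyear() * 100 + dateTime.getWeekOfWeekyear();
    }
}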


I've tested the results against MySQL's YEARWEEK for 12 years and all seems to work fine!

dinsdag 30 juli 2013

The best DROP-script for Mysql!

There are many situations in which you want to drop all tables in a MySQL database. You can easily drop the schema, but then all the permissions are lost too. There are several solutions out there, but they all require some manual effort. I wanted to get rid of that once and for all and constructed this easy script. With the following evil shell script you can easily drop all tables in your database.

Premise: it groups on table_catalog, which may not always be the correct grouping for your database. Please feel free to adapt!

Just create a text file like drop.sh and give it the appropriate rights to execute. Then paste the following content into the file and save it.
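A sketch of how such a script could look; the information_schema query and the mysql flags are my assumptions, so adapt them to your setup:

#!/bin/bash
# Usage: ./drop.sh <user> <password> <database>
# Builds one DROP TABLE statement per catalog (the GROUP BY table_catalog premise
# mentioned above) and pipes it back into mysql, leaving the schema and its grants intact.
SQL="SELECT CONCAT('SET FOREIGN_KEY_CHECKS=0; DROP TABLE IF EXISTS ', GROUP_CONCAT(table_name), ';') FROM information_schema.tables WHERE table_schema = '$3' GROUP BY table_catalog"
mysql -u"$1" -p"$2" -N -e "$SQL" | mysql -u"$1" -p"$2" "$3"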


Use ./drop.sh root root my_db et voilà... all tables are gone.

Of course, things can be improved. I need to check the parameters and add some documentation...

vrijdag 26 juli 2013

Tuning suspended AsyncResponse and Thread-pools in Glassfish 4

I am experimenting with Glassfish 4 in order to prepare moving some applications from J2EE 6 to J2EE 7. Glassfish 4 works with J2EE 7 and introduces some new concepts. The one we are investigating today is the use of @Asynchronous and @Suspended in REST resources.

The use of the asynchronous annotations is pretty well specified for beans but there are some pitfalls when it comes to REST services. Let's go through an example and check the behavior of a REST service with asynchronous methods as we go.

The use case is the following. We define a Resource and one Bean. The resource is a typical REST-resource with only one GET-method. The bean is a normal, stateless session bean which performs a long running task. This bean is pretty straightforward. We do not annotate the methods of the bean as asynchronous.
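A minimal sketch of such a bean (class and method names are mine; the ten-second sleep matches the delay mentioned below):

import java.util.logging.Logger;

import javax.ejb.LocalBean;
import javax.ejb.Stateless;

@Stateless
@LocalBean
public class LongTaskBean {

    private static final Logger LOG = Logger.getLogger(LongTaskBean.class.getName());

    // A long running task; note that this method is NOT annotated as asynchronous.
    public String doLongTask() {
        LOG.info("do a long task in thread: " + Thread.currentThread().getName());
        try {
            Thread.sleep(10000L);
        } catch (final InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}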




The first resource we build is pretty simple. We create an @Stateless resource and inject a @Suspended AsyncResponse. AsyncResponse takes care of the asynchronous response when the results become available.
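A sketch of such a resource, assuming it delegates to the bean above (names are mine):

import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Stateless
@Path("/longtask")
public class LongTaskResource {

    @EJB
    private LongTaskBean longTaskBean;

    // The request thread is suspended; the work runs on the EJB thread pool
    // and the response is resumed when the result is available.
    @GET
    @Asynchronous
    public void longTask(@Suspended final AsyncResponse asyncResponse) {
        asyncResponse.resume(longTaskBean.doLongTask());
    }
}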


When we open the browser and point it to the URL of this resource, we will see that the request is blocked for 10 seconds! But that is nothing new. Now, the question is how many threads are available to serve these requests. We are using a stateless bean, so we use the thread pool of the ejb-container. This has some implications. With the http-thread-pool we would see basically the same behavior, but we are not interested in the request thread here. I want to scale out the ejb-pool, and adding beans to the pool is apparently not enough.

When you press F5 continuously in the browser, you will see something like "INFO: do a long task in thread: __ejb-thread-pool1" in the log. This number counts up and exceeds the number of threads in the http-thread-pool thanks to AsyncResponse and @Suspended. But you will see (in a freshly installed domain) that the thread pool does not exceed the limit of 16, even though you have a maximum of 64 beans in your pool. We need to fine-tune the thread pool of the ejb-container. But you won't find any properties for it in the administration console; you need to add them yourself. Open the domain.xml of your domain and add the following lines:
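For example, with the pool sizes used in the next paragraphs, the properties inside the <ejb-container> element would look something like this:

    <property name="thread-core-pool-size" value="10"></property>
    <property name="thread-max-pool-size" value="20"></property>
    <property name="thread-queue-capacity" value="25"></property>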

Now, rerun your application. You will see that the thread pool goes up to 10 when you press F5 in the browser without holding it down. It seems to stagnate at 10, although you specified a max-pool-size of 20. When you continuously press F5 you will suddenly see the threads go up to 20 before a java.util.concurrent.RejectedExecutionException is thrown. Nice, but what the hell happened?

Let's dig deeper in the documentation of the thread pools:

thread-core-pool-size: Specifies the number of core threads in the EJB container’s common thread pool. The default value is 16. Great, there we have our number 16. Setting this to 10 or 100 will change the actual number of threads doing the work.

thread-max-pool-size: Specifies the maximum number of threads in the EJB container’s common thread pool. The default value is 32. Nice, so increasing this to 100 gives us up to 100 usable threads? Yes and no. You have to consider the default value of thread-queue-capacity.

thread-queue-capacity: Specifies the size of the thread pool queue, which stores new requests if more than thread-core-pool-size threads are running. The default value is Integer.MAX_VALUE.

Here the confusion starts. The default queue capacity is way too high: pressing F5 will never fill a queue of Integer.MAX_VALUE entries, so the pool never grows beyond the core size. You must limit the queue capacity before the thread pool scales up to max-pool-size threads. In our example, the pool starts to scale once 25 requests are waiting, and it scales up to 20 threads. When all threads are used in parallel and the queue is full, the container throws the exception.

In the past SUN declared correctly: "That is exactly how it is supposed to behave. First the threads grow to coreSize, then the queue is used, then *if* the queue fills up then the number of threads expands from coreSize to maxSize. Hence if you use an unbounded queue the last part never happens. This is all described in the documentation. If you want an unbounded queue but more threads then increase the core size. Otherwise consider whether a bounded queue is more suitable to your needs."
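The behaviour described in that quote is that of java.util.concurrent.ThreadPoolExecutor. With the values used above, the EJB container's pool behaves roughly like this sketch:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {

    public static void main(final String[] args) {
        final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10,                                     // thread-core-pool-size
                20,                                     // thread-max-pool-size
                60L, TimeUnit.SECONDS,                  // keep-alive for the extra threads
                new ArrayBlockingQueue<Runnable>(25));  // thread-queue-capacity
        // Requests 1-10 start core threads, 11-35 wait in the queue,
        // 36-45 trigger extra threads up to the maximum of 20, and a 46th
        // concurrent request is rejected with a RejectedExecutionException.
    }
}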

Some extra information can be found here https://java.net/jira/browse/GLASSFISH-17735 and http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html.

woensdag 8 mei 2013

Mounting Samba-share in Ubuntu

My DVD-reader gave up on me some time ago. I have two laptops with Ubuntu and wanted to use the DVD-reader from the other laptop on the first one. You can easily share the DVD-drive on one laptop using the properties from the file explorer. On the other side, you can see your laptop and the shared drive. But but but, you can't access the drive from some applications because they do not support Samba directly (which is rather evident). So, you need to mount the drive on a local directory. Just type:

sudo mount -t cifs -o username=yourname,password=yourpassword //your-ip-here/sharename mymount

The mymount directory must exist. Just create one with mkdir. I had trouble using the hostname instead of the IP. With the direct IP address, everything worked as planned. When you look into your mymount-directory you will see the content of your share. You can use this directory in every application.

Ubuntu 12.10, 13.04 problem with libdvdnav4

I recently updated to 12.10 and made the jump to 13.04 shortly after that. Unfortunately, Handbrake refused to scan DVDs due to an error in libdvdnav. Apparently something went wrong during the distribution upgrade. To get libdvdnav and Handbrake working again, just run:

sudo /usr/share/doc/libdvdread4/install-css.sh

Restart Handbrake and your DVDs can be read again....

maandag 15 april 2013

Moving on....

I am currently moving my old blog entries to this new blog. Some old posts will not be transferred because time and software also moved on. The old blog address will be abandoned and disappear in the near future.

ActiveMQ and Network Interfaces

When you are using ActiveMQ as a message broker, the need can arise to bind the queue listener to a specific network-interface. Although this is not very common and some argue that the OS should do the routing, you can face certain situations in which your software decides which network-interface SHOULD or MUST be used for the communication.

The reason could be that some communication channels are expensive and the machine needs to switch network interfaces on the fly because of your business logic. When the servers are not under your control, you cannot simply change the OS routing tables.

There is a simple trick in ActiveMQ to select the network-interface to use, although you cannot find it in the manual -- but it is in the code. Just use the following scheme for the binding:

tcp://remote-queue-ip:61616@network-interface-ip:61616

You can also use other protocols, of course.
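In Java this could look as follows (a minimal sketch; the IP addresses are placeholders and the URI is simply handed to the connection factory):

import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class BoundConnection {

    public static void main(final String[] args) throws Exception {
        // The part before the '@' is the remote broker, the part after it
        // the local network interface the connection should be bound to.
        final ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://192.168.1.10:61616@192.168.2.20:61616");
        final Connection connection = factory.createConnection();
        connection.start();
    }
}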

A basic outline of a workflow system using J2EE and CDI


For one of my customers, we needed a very simple workflow framework which coordinates the flow between methods inside a J2EE6 application on Glassfish. The business required us to implement the following simple logic:
  1. Create a job.
  2. When step 1 was successful, start the job (which affects some thousand objects) and send an email.
  3. End the job when step 2 was successful.
  4. Mark the job as failed when one of the steps couldn't be completed due to the implemented business-logic.
  5. Provide a handler to handle exceptions.
Using a fully fledged workflow system was way over the top for this problem. The workflow will also remain largely unchanged; and when it does change, the code needs to be altered too. We needed to come up with a simple workflow mechanism which is bound to transitions and methods.
So, first we decided to create an observer to change the transitions of the workflow steps. This is basically a very simple method which alters the status and stores it in the database. The next step was to create some annotations to add to the methods which need to be executed during a certain step. We annotate our workflow methods as follows:

    @Asynchronous
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @Workflow
    @WorkflowStep(from = Status.IDLE, to = Status.PROGRESS)
    public void startJob(
            @WorkflowActionHandler
            @Observes(notifyObserver = Reception.ALWAYS,
            during = TransactionPhase.AFTER_SUCCESS)
            final JobWorkflowEvent event) {}

This straightforward approach is very clean and simple. Basically it says that this method will be executed on the transition from Status.IDLE to Status.PROGRESS. Although we use an enum here, you could take any arbitrary integer. Using the CDI annotations we get some additional power: this method will only be executed after the success of the previous step. Here you can combine the full power of CDI with some basic workflow concepts to create a nice system.
Now remains the problem of the transition handling. The transitions are handled in an interceptor which is marked by the @Workflow annotation.

    @Interceptor @Workflow
    public class WorkflowInterceptor {
    }

This interceptor does not do much more than changing the transition state when the method pinged by CDI has the correct workflow-steps. It uses some reflection to figure that out and allows access to the method. 
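A sketch of what the interceptor body could look like; the @WorkflowStep lookup and the getStatus()/setStatus() accessors on JobWorkflowEvent are illustrative assumptions, not the original code:

import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@Interceptor @Workflow
public class WorkflowInterceptor {

    @AroundInvoke
    public Object handleTransition(final InvocationContext ctx) throws Exception {
        // Read the transition declared on the intercepted method via reflection.
        final WorkflowStep step = ctx.getMethod().getAnnotation(WorkflowStep.class);
        // We assume the observed event is the first (and only) parameter.
        final JobWorkflowEvent event = (JobWorkflowEvent) ctx.getParameters()[0];
        if (step == null || event.getStatus() != step.from()) {
            // The job is not in the expected state; skip the method.
            return null;
        }
        final Object result = ctx.proceed();
        // Store the new status after the method has run successfully.
        event.setStatus(step.to());
        return result;
    }
}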
To handle exceptions, we introduce a new annotation:

    @Asynchronous  
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    @Workflow @WorkflowException
    public void excJob(
            @WorkflowActionHandler 
            @Observes(notifyObserver = Reception.ALWAYS, 
            during = TransactionPhase.AFTER_FAILURE) 
            final JobWorkflowEvent event) {}

Whenever a step fails unexpectedly, control is transferred to this method, where certain actions can be executed. We need this annotation to prevent other methods from picking this event up.
Using CDI together with some custom annotations really did the job and works fine.

JPA, Criteria and custom objects


Suppose you have a SQL statement in which you select custom fields and/or the results of some functions. To make it more interesting, we want to combine this with a GROUP BY statement and an ordering. Let us for example say you want to execute the following query in MySQL:

SELECT uuid, GROUP_CONCAT(field1, ':', field2), 
  MIN(ss_date_started), 
  MAX(ss_date_ended) 
FROM my_table 
WHERE key = 'xxx' 
GROUP BY group_key 
ORDER BY id ASC

The functions GROUP_CONCAT, MIN and MAX are available in MySQL. The intention of the GROUP_CONCAT is to get some information out of all the grouped rows without issuing additional selects. The question is how we can create the criteria for this query and how we can retrieve the results.
 
We want the results to be stored in the following object:
 
public class MyGroup implements Serializable { 

    private String uuid;
    private Map<String, String> map = new HashMap<>();
    private Date dateStarted;
    private Date dateEnded; 

    public MyGroup() {
    }
}

In the map we want to store the GROUP_CONCAT strings as a map to avoid other SELECTs. First of all, we get the CriteriaBuilder and the root of the query. The MyEntityForTheTable is the entity which is mapped to the table. This object will act as the base to map our fields.

final CriteriaBuilder cb = 
  this.getEntityManager().getCriteriaBuilder();

final CriteriaQuery< ... > cq = ...; // we'll talk about this later

final Root<MyEntityForTheTable> root = 
  cq.from(MyEntityForTheTable.class); 
The WHERE clause is pretty simple:
cq.where(cb.equal(root.get(MyEntityForTheTable_.key), key));
The GROUP BY and the ORDER BY are even simpler:
cq.groupBy(root.get(MyEntityForTheTable_.groupKey)).
   orderBy(cb.asc(root.get(MyEntityForTheTable_.id)));

Now we need to construct the function calls. We extract the MIN and MAX date out of the group by writing the following expressions:

cb.function("MIN", Date.class, 
    root.get(MyEntityForTheTable_.dateStarted))

cb.function("MAX", Date.class, 
   root.get(MyEntityForTheTable_.dateEnded)) 
The GROUP_CONCAT in MySQL can be written as:
cb.function("GROUP_CONCAT", byte[].class, 
    root.get(MyEntityForTheTable_.field1), 
    cb.literal(":"), 
    root.get(MyEntityForTheTable_.field2))

First of all, we cannot take String.class to read out the results. MySQL returns the string as bytes, so we need to map it to a byte[]. The following arguments are the parameters for the function. The first and the last are the two columns. We also want to have a colon between the two values, which can be achieved with the cb.literal(String) function.
Now we need to execute the query and map the results to an object. We can use the construct() method to instantiate a new object inside our query. Unfortunately, our main domain object does not have the proper constructor, so we wrap the POJO class in a new class and add the specific constructor.
 
public static class WrappedGroup extends MyGroup {

  public WrappedGroup() { }

  public WrappedGroup(
     final String uuid,
     final byte[] serviceStatus,
     final Date dateStarted,
     final Date dateEnded) { }
}
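As an illustration, the conversion inside this constructor could look roughly like this (it assumes MyGroup exposes setters and a getMap() accessor, which are not shown above, and that GROUP_CONCAT uses its default comma separator between rows):

  public WrappedGroup(
     final String uuid,
     final byte[] concatenated,
     final Date dateStarted,
     final Date dateEnded) {
     setUuid(uuid);
     setDateStarted(dateStarted);
     setDateEnded(dateEnded);
     // GROUP_CONCAT delivers "field1:field2,field1:field2,..." as bytes.
     final String value = new String(concatenated, java.nio.charset.StandardCharsets.UTF_8);
     for (final String pair : value.split(",")) {
         final String[] parts = pair.split(":", 2);
         getMap().put(parts[0], parts[1]);
     }
  }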

We make sure that this constructor has the byte[] parameter. In this constructor we can convert the byte array to a string; then you can convert the string to a map of your choice. So, we are almost done. Here is the complete code:

final CriteriaBuilder cb = 
  this.getEntityManager().getCriteriaBuilder();

final CriteriaQuery<WrappedGroup> cq = 
  cb.createQuery(WrappedGroup.class);

final Root<MyEntityForTheTable> root = 
  cq.from(MyEntityForTheTable.class); 
And the select part looks like:
cq.select(cb.construct(WrappedGroup.class,
   root.get(MyEntityForTheTable_.uuid), 
   cb.function("GROUP_CONCAT", byte[].class, 
      root.get(MyEntityForTheTable_.field1), 
      cb.literal(":"), 
      root.get(MyEntityForTheTable_.field2)),
   cb.function("MIN", Date.class, 
      root.get(MyEntityForTheTable_.dateStarted)),
   cb.function("MAX", Date.class, 
      root.get(MyEntityForTheTable_.dateEnded)))); 

We then use a TypedQuery to get the results, and we are done:

final TypedQuery<? extends MyGroup> query =
   this.getEntityManager().createQuery(cq).
   setFirstResult(startResult).
   setMaxResults(maxResults);

The resulting list from query.getResultList() can be cast to List<MyGroup>.

Et voilà, we have a very dynamic query which we can easily refactor and adapt.

Startup script for Ubuntu


When you need to start a service (like Glassfish or Play) in Ubuntu, you can take this script as a template:


#! /bin/sh
case "$1" in
    start)
 ... execute a shell script here ...
        ;;
    stop)
       ... execute a shell script here ...
        ;;
    restart)
       ... execute a shell script here ...
        ;;
esac
exit 0

Make sure to make your scripts executable. In order to get them running during startup, add a link in the init.d directory and register the service script with a priority (which priority to assign depends on your system; take care here, I just took 01 as an example).

cd /etc/init.d
sudo ln -sf  ~/your-script.sh your-script.sh
sudo update-rc.d your-script.sh defaults 01 01

Reboot and you'll see that your services get started.

Setting the UTF8 character set in Ubuntu or Debian in EC2


EC2 AMIs sometimes have the wrong character set. This can be very annoying in, for example, the German region, where characters like ä and ö are displayed incorrectly. Issue the following commands to switch your Ubuntu/Debian server to the correct character set:

Make sure these variables are set:

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8

And don't forget to reconfigure the character set.

locale-gen en_US.UTF-8
dpkg-reconfigure locales

ImageMagick, jMagick and Glassfish on a EC2 machine


Recently I had to roll out a REST-Webservice on Glassfish which used jMagick as a library. Now, ImageMagick is a C++ library and is available in Java through JNI. The installation on a standard EC2-Ubuntu AMI is pretty straightforward once you know the tricks. I executed the following steps to get things up and running:
  1. Make an EC2 instance, attach an EBS volume and install JDK7 and GFv3.2.1 (these are the versions which I installed)
     
  2. Install ImageMagick using sudo yum install ImageMagick. You can't use apt-get or aptitude since Amazon prefers yum.
     
  3. Next thing is to install jMagick. You can download the latest from ftp://ftp.imagemagick.org/pub/ImageMagick/java/. Use wget to get the RPM (32 or 64 bit - I used the 64bit version).
    wget ftp://ftp.imagemagick.org/pub/ImageMagick/java/jmagick-6.4.0-3.x86_64.rpm
  4. Install the RPM using sudo yum install jmagick-6.4.0-3.x86_64.rpm
     
  5. When all goes well, you should be able to see a handful of Magick-files using ls /usr/lib64/libM* (which were installed in step 2) and of course the Java library ls /usr/lib64/libJMagick.so
  6. There should also be an accompanying JAR file for the library. Check if /usr/lib64/jmagick-6.4.0.jar is there.
     
  7. Now we get to the Glassfish part. Suppose your domain is called "domain1". Copy the JAR into the /lib directory of your domain so it will get loaded during the startup of Glassfish.
     
  8. Point your browser to the admin-console or open the domain.xml. You need to add a JVM-parameter to get this running. In the console go to the configurations / server-config / JVM-settings and then the JVM-options tab and add the following line:

    -Djmagick.systemclassloader=no


    or add in the domain.xml the line

    <jvm-options>-Djmagick.systemclassloader=no</jvm-options>
    This line prevents jMagick from using the system class loader, as you might have expected. Not adding this line leads to errors.
     
  9. Restart your domain and deploy the application. The System.loadLibrary("JMagick"); in Magick.java runs as planned. (http://jmagick.svn.sourceforge.net/viewvc/jmagick/trunk/src/magick/Magick.java?revision=91&view=markup)
We can now redeploy our application in Glassfish at will because the class is loaded by Glassfish during startup. If it is not, you can expect exceptions on redeployment when the jMagick JAR is in your project: the class loader will tell you that the classes are already loaded when you deploy a second time.
PS: when running Maven tests outside the container, do not forget to specify -Djava.library.path=/your-path-to/libJMagick.so.
PS2: if you install JMagick using a repository, then the JMagick.so files are stored in the /usr/lib64/jni directory. Copy the libJMagick.so file to /usr/lib64 and restart your domain. There is no need to add java.library.path to the JVM parameters in the domain.xml (something which apparently doesn't function well).

Copying symlinks


JDK7 offers a whole new set of methods to work with symbolic links. In pre-JDK7 times this was pretty difficult, and you often had to execute bare-bones copy commands through the runtime. There is one method I missed in the API: you can open or follow a link, or test whether a file is a link, but you cannot "copy" the link itself. You need to resolve the original link and create a new link, as illustrated in the class below.

The class could surely be optimized, but it gives you an idea on how the symbolic links function.

public class LinkFileVisitor extends SimpleFileVisitor<Path> {

    private Path fromPath;

    private Path toPath;

    public LinkFileVisitor(final Path fromPath, final Path toPath) {
        this.fromPath = fromPath;
        this.toPath = toPath;
    }

    public static void copy(final Path from, final Path to)
            throws IOException {

        // make sure it is a directory
        if (!Files.isDirectory(from)) {
            throw new IllegalArgumentException(
                    String.format("%s is not a directory", from.toString()));
        }

        Files.walkFileTree(from, new HashSet<FileVisitOption>(),
                Integer.MAX_VALUE, new LinkFileVisitor(from, to));

    }

    @Override
    public FileVisitResult preVisitDirectory(final Path dir,
            final BasicFileAttributes attrs)
            throws IOException {
        Path targetPath = toPath.resolve(fromPath.relativize(dir));
        if (!Files.exists(targetPath)) {
            Files.createDirectory(targetPath);
        }
        return FileVisitResult.CONTINUE;
    }

    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
            throws IOException {
         if (Files.isSymbolicLink(file)) {
            Path link = toPath.resolve(fromPath.relativize(file));
            if (Files.exists(link, LinkOption.NOFOLLOW_LINKS)) {
                Files.delete(link);
            }
            Files.createSymbolicLink(link, Files.readSymbolicLink(file));
        } else {
            Files.copy(file, toPath.resolve(fromPath.relativize(file)),
                    StandardCopyOption.REPLACE_EXISTING);
        }
        return FileVisitResult.CONTINUE;
    }
}
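Using the visitor is then a one-liner (the paths are placeholders; copy() throws IOException):

    LinkFileVisitor.copy(Paths.get("/data/source"), Paths.get("/data/target"));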






Is a @Singleton bean really a singleton?


Well, as you might have expected from the title, the answer is "yeah well, not really". A singleton in the J2EE world is not a singleton as you know it from the normal Java world. In a non-J2EE environment you typically write something like this to generate a singleton:

public class MySingleton {

     final private static MySingleton instance = new MySingleton();

     protected String name = "whatever";

     private MySingleton() {}

     public static MySingleton instance() {
          return instance;
     }
}

You can get exactly one instance of this bean and you could access the fields directly or through getter and setters. There is only one instance of this object and all information is stored there. This pattern is well known to every programmer. The demand was great to port this pattern to the J2EE world. A new type of bean was invented, the Singleton bean. The idea behind all this is to provide a single instance of a particular bean within your application. You can construct a singleton bean with the following code:

     @LocalBean
     @Singleton
     @Startup
     public class MySingleton {

     }

First we declare it as a local bean, and we load it during startup. The @Singleton annotation makes a singleton out of this class. But is this class really instantiated only once when using the @LocalBean annotation? No, it is instantiated twice! Let's test this by adding a static field to the bean.

    static int i=1;

    public MySingleton() {
        System.out.println("INSTANCE: " + i++);
    } 

You will see in the logs that you get two different instances. If you print out the reference of the class, you will notice that one of the instances is a proxy object. The proxy object is instantiated too, so the constructor is executed again. When you write some kind of initialisation logic inside the constructor, you will run into unwanted behaviour because your code executes twice where you expect the constructor to run only once. So, don't use the constructor to set up your data, but use the @PostConstruct annotation to execute a method with your initialisation logic in it.

    @PostConstruct
    public void init() {
     ... do your init here ....
    }         

You will see that the proxy object does not execute this code (which is obvious of course because the code is not in the constructor anymore). Another pitfall might be that during some kind of rapid prototyping action you store some state-data in a non-private field in the singleton or a stateful session bean. When you do not provide a getter/setter for the field, the field-value of the injected bean will always be null because the proxy gets injected and not the "real" instance. The proxy object does not store state. Let's test this:

     @LocalBean
     @Singleton
     @Startup
     public class MySingleton {
         protected String name = "whatever";
         .....
         public String getName() {
              return this.name;
         }
     }

We inject this bean into another session bean:

     @LocalBean
     @Stateless
     public class MyTest {

          @EJB private MySingleton mySingleton;

          public void test() {
               // mySingleton.name --> null (proxy)
               // mySingleton.getName() --> "whatever" (real object)
          }
     }
The injected reference is the proxy object without state. Getting the value of the field directly will always return null when it is not initialised; the real state is maintained in the singleton. You can get the "real" data when you use the method getName(), because the proxy object forwards your method call and not the field access. This is the reason why you should not access fields directly on session beans.
Well, as you might have expected, it is really a bad idea to get the field from a singleton or other bean directly (in general, it is an anti-pattern in J2EE). Try to encapsulate your data with nice getters and setters and keep in mind that your objects can get dynamic proxies depending on the annotations you add to the class.