Thursday, December 24, 2015

NTFS Data Recovery - a very nice tool

I stumbled across NTFS Data Recovery while trying to recover a PST file from MS Outlook. The file had been truncated to zero length after a system crash and I wanted to see if I could recover it. In the end, I didn't get the file back, mainly because I found a backup of the PST file and didn't need it anymore; however, I must say that thanks to the generosity of LSoft Technologies Inc (http://www.file-recovery.com), which offers a free version for download, I was able to learn a lot about file systems. In addition, the program came with some great reading material on how to recover files, in particular a document called "How to recover NTFS (Freeware Guide)", which can be found under the "Documentation" item of the program's installed menu.

My use case was as follows:

Start Active@ File Recovery



Press the "Search" button in the middle of the menu bar.


Enter the file format (*.pst), press "Find" and then navigate to the file using the explorer-like view. From here, one can inspect either the MFT record or the raw data of the file.


When inspecting the file's data, one should refer to Microsoft's documentation about PST files (Outlook Personal Folders (.pst) File Format). The aforementioned document about file recovery contains some very helpful information about MFT records.

Below is the beginning of the raw data. Notice the marker "!BDN" at the beginning of the PST's header.



One thing before starting your attempt to recover your lost file: I would read the "How to recover NTFS (Freeware Guide)" first. There are some "Don'ts" you should know about before starting, like not installing any new software onto the same drive.
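As an aside, the "!BDN" magic bytes at the start of a PST header can also be checked programmatically. Below is a minimal sketch (the class name and command-line usage are my own, not part of any recovery tool):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PstMagicCheck {

    // A PST file starts with the 4-byte marker "!BDN" (0x21 0x42 0x44 0x4E).
    static final byte[] MAGIC = "!BDN".getBytes(StandardCharsets.US_ASCII);

    public static boolean looksLikePst(Path file) throws IOException {
        try (InputStream in = Files.newInputStream(file)) {
            byte[] header = new byte[MAGIC.length];
            int read = in.read(header);
            return read == MAGIC.length && java.util.Arrays.equals(header, MAGIC);
        }
    }

    public static void main(String[] args) throws IOException {
        if (args.length > 0) {
            // Point this at a candidate recovered file.
            System.out.println(looksLikePst(Paths.get(args[0])));
        }
    }
}
```

Of course, a valid marker only tells you the header was recovered; the rest of the file may still be damaged.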

Sunday, November 8, 2015

Practical Tips and Tricks with Spring Integration (part one): attempted transcription of a webinar

Learning Spring Integration by looking at various messaging system XML configurations, in particular ones that make heavy use of the namespace features, can be instructive but at the same time frustrating, because one cannot easily develop a solid intuition about how the internals of the messaging system work. Looking at the underlying implementation often helps, but can be somewhat discouraging as well: one often gets the feeling of trying to look at an elephant through a keyhole, because understanding code is difficult without knowing the intentions and thoughts of the developers who wrote it.

What follows below are notes, or an attempted transcription, of the first few minutes of a Pivotal webinar by Oleg Zhurakousky called "Practical Tips and Tricks with Spring Integration", which I found incredibly helpful and full of amazing insights. Spring already has a lot of great documentation and probably doesn't need any more, especially from an outsider, but while listening to the webinar for the first time, I realized that every sentence Oleg said was packed with more meaning than I could absorb in real time. So I replayed it several hundred times and came up with the following crude copy of his explanations.

Error Handling related to messaging systems. 
2:42 
Messaging systems, like any other systems, can produce errors. Very often the process of handling those errors relies on messaging itself. The first example demonstrates this by showing how an error channel can be defined on the components that serve as entry points into a messaging flow: gateways and inbound adapters.

At this point one should be clear about the distinction between components that serve as entry points into messaging flows (gateways and channel adapters) and components that serve as message handlers within a flow, namely service activators, transformers, filters, etc.

3:23 
A question which is often asked is: why isn’t there an error-channel attribute on message handlers? An analogy which helps in understanding this is that of exception handling, where a messaging flow is equivalent to a Java block of code wrapped in a try-catch, where the try serves as the entry point into the block of code and each line of code within it is analogous to a message handler. An exception can happen anywhere within the try block, and when it happens, control is delegated to the catch block, which is essentially the error channel. Giving each message handler its own error-channel attribute, or its own error handling routine, would be like supplying each statement within a try-catch block with its own try-catch, which would only complicate the picture.
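In plain Java, the analogy the speaker draws looks roughly like this (the handler methods and the hash-mark compensation are borrowed from the first sample described below; the class itself is just my illustration):

```java
public class TryCatchAnalogy {

    // Each statement inside the try corresponds to one message handler in
    // the flow; the catch block corresponds to the error channel.
    static String runFlow(String payload) {
        try {                              // entry point (gateway / inbound adapter)
            String split = split(payload);     // handler 1 (e.g. a splitter)
            return filter(split);              // handler 2 (e.g. a filter)
        } catch (RuntimeException e) {         // error channel
            return "#" + payload + "#";        // compensating transformer
        }
    }

    static String split(String s) {
        return s.trim();
    }

    static String filter(String s) {
        if (s.length() <= 4) {
            throw new RuntimeException("rejected: " + s);
        }
        return s;
    }
}
```

A rejected payload comes back wrapped in hash marks instead of the exception propagating to the caller, which is exactly the behavior the error channel provides in the messaging sample.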

Essentially, components that serve as entry points into messaging flows are like try-catch blocks and also mark the boundaries, or scopes, of the messaging flows. Try-catch blocks can be arranged one after another, sequentially, or nested within each other to produce different behaviors. This can be accomplished in message flows as well, and later it will be shown how to partition or segment message flows to achieve what is accomplished with traditional try-catch error handling.

4:56 Sample one.
The first sample configuration has a messaging gateway, our ErrorDemoGateway; when we invoke a method on the gateway, the gateway sends the message to the inputChannel, whose subscriber is a splitter, which splits the message and sends the parts to the processingChannel. From the processingChannel, each message is processed by a filter, which validates that the message's payload length is greater than four. If it is, the message is allowed to proceed; if not, an exception is raised because the attribute “throw-exception-on-rejection” is set to true. A successfully validated message then goes to the loggingChannel, which is basically a logging-channel-adapter, which logs the message.

The explicit definition of the channel loggingChannel can be removed because it will be created automatically. Right now, our gateway does not define an error channel, so when we execute the code, we shall see that the exception is caught in the caller’s code, which called the interface’s method.

To avoid the caller having to deal with the exception, and to instead gracefully inform it that an error has occurred, we define a channel called “processErrorChannel” and point the gateway’s error-channel attribute to it. When an exception occurs, the gateway sees that the error-channel attribute is explicitly defined and sends the error message to the error channel, giving us a chance to compensate. In the transformer subscribed to the error channel, the payload of the original message is wrapped in hash tags to indicate that the message was in error.

Looking at the caller code, which gets the gateway from the application context and calls its method with two strings, one of which is too short, we can see from the logging output, by the presence of hash marks, that one string passes the filter and the other does not.
The exception is no longer propagated back to the caller, as it was without the explicit definition of the error channel on the gateway.
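A configuration along the lines described above might look like this. This is only a sketch: the service interface name and the SpEL expressions are my assumptions, while the channel and component names are taken from the description.

```xml
<int:gateway id="errorDemoGateway"
             service-interface="demo.ErrorDemoGateway"
             default-request-channel="inputChannel"
             error-channel="processErrorChannel"/>

<int:splitter input-channel="inputChannel" output-channel="processingChannel"/>

<int:filter input-channel="processingChannel" output-channel="loggingChannel"
            expression="payload.length() > 4"
            throw-exception-on-rejection="true"/>

<!-- its id doubles as the channel name, so no explicit loggingChannel is needed -->
<int:logging-channel-adapter id="loggingChannel"/>

<!-- wraps the failed message's original payload in hash tags -->
<int:transformer input-channel="processErrorChannel"
                 expression="'#' + payload.failedMessage.payload + '#'"/>
```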

8:56 Sample 2
Enterprise Integration Patterns (I assume the speaker was referring to the book) identifies several components which are stateful by nature and which may depend on a predetermined number of messages coming in before some action is taken. For example, you might have a flow with an aggregator that expects three messages; if an error happens in the upstream processing, the required number of messages will never reach the aggregator, which in turn will never release the messages it has received.

10:07
However, by handling the error via a message flow, it is possible to send the message, or some message describing the error, to the aggregator, thus satisfying its release requirements.
In the example, the message gets sent and is split before being passed to the filter, which will only pass one of the messages on to the aggregator; the other one will be filtered out. This means that the aggregator will be holding onto one message, waiting for another that will never come.

10:43
The application is first demonstrated unfixed: the rejection exception is thrown but the application keeps running. The speaker then shows, using jconsole, that the aggregator has only processed one message. The application is then fixed by setting the gateway’s error-channel attribute to errorChannel.
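A sketch of the second sample's flow might look like the following. All names except errorChannel are my assumptions, and the release logic of the aggregator is left at its defaults:

```xml
<!-- routing errors to errorChannel lets an error-describing message
     reach the aggregator and satisfy its release requirements -->
<int:gateway id="aggregatorDemoGateway"
             service-interface="demo.AggregatorDemoGateway"
             default-request-channel="inputChannel"
             error-channel="errorChannel"/>

<int:splitter input-channel="inputChannel" output-channel="filterChannel"/>

<!-- rejects one of the two split messages with an exception -->
<int:filter input-channel="filterChannel" output-channel="aggregatorChannel"
            expression="payload.length() > 4"
            throw-exception-on-rejection="true"/>

<int:aggregator input-channel="aggregatorChannel" output-channel="resultChannel"/>
```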

13:34 Flow Segmentation and Flow Partitioning
Using the same try-catch analogy as discussed at the beginning, let’s look at a slightly different error handling requirement. Depending on the handler that generated, or resulted in, the exception, you may not want the exception to propagate back to the original entry point of the entire flow.

Using Java’s try-catch analogy, it is as if you want to wrap part of the downstream flow in its own try-catch block. Basically, you want to create an independent flow partition, or segment, and to do this we shall use a technique that allows us to introduce a sub entry point within an existing messaging flow.

We do this by introducing a messaging gateway downstream, which is invoked by a service activator. So let’s look at how we do that; it is actually quite simple. We have a gateway just like before (now called segmentOneGateway), which already has an error channel. This gateway identifies a request channel called segmentOneChannel, which has a subscriber that is a service activator, bootstrapped with a reference to a bean called segmentOne. The bean segmentOne is actually another gateway, which is defined further downstream. The only difference here is that this gateway does not identify a service interface, as it is no longer needed: it is bootstrapped with the default interface. Once the service activator invokes the gateway segmentOne, it is as if someone else invoked a gateway and entered a sub partition, or another messaging flow, sitting within the parent messaging flow. We now have a service activator being invoked by the second gateway, segmentOne, and if it throws an exception, the exception will be sent to the error channel of the gateway that invoked it.
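The wiring described above might be sketched as follows. The names segmentOneGateway, segmentOneChannel and segmentOne come from the description; everything else is an assumption:

```xml
<!-- outer entry point with its own error channel -->
<int:gateway id="segmentOneGateway"
             service-interface="demo.SegmentOneGateway"
             default-request-channel="segmentOneChannel"
             error-channel="outerErrorChannel"/>

<!-- service activator that invokes the inner gateway, creating a sub entry point -->
<int:service-activator input-channel="segmentOneChannel" ref="segmentOne"/>

<!-- inner gateway: no service-interface, so it is bootstrapped with the default
     interface; exceptions raised downstream go to its own error channel -->
<int:gateway id="segmentOne"
             default-request-channel="innerChannel"
             error-channel="innerErrorChannel"/>

<int:service-activator input-channel="innerChannel" ref="someDownstreamService"/>
```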

TBC
https://youtu.be/RY6dNUL8k6o?t=18

Sunday, October 4, 2015

Configuring JavaMailSenderImpl for SSL as a Spring bean and with Spring Integration

Because my Windows Firewall blocked all outgoing requests on port 587, I had no option but to configure my JavaMailSenderImpl to use SSL instead of TLS. Below are the bean settings.

<beans:bean id="mailSender"
    class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <beans:property name="username" value="your_mail@gmail.com" />
    <beans:property name="password" value="your_mail" />
    <beans:property name="protocol" value="smtp" />
    <beans:property name="port" value="465" />
    <beans:property name="host" value="smtp.gmail.com" />
    <beans:property name="javaMailProperties">
        <beans:props>
            <beans:prop key="mail.smtp.ssl.enable">true</beans:prop>
            <beans:prop key="mail.smtp.auth">true</beans:prop>
            <beans:prop key="mail.smtp.socketFactory.class">javax.net.ssl.SSLSocketFactory</beans:prop>
        </beans:props>
    </beans:property>
</beans:bean>


@Autowired
@Qualifier("mailSender")
private JavaMailSender javaMailSender;

@Override
public void sendAnmeldung(Message message) throws Exception {

    MimeMessage mimemsg = javaMailSender.createMimeMessage();
    MimeMessageHelper helper = new MimeMessageHelper(mimemsg, true);

    helper.setTo("your_recipient@gmail.com");
    helper.setFrom("your_account@gmail.com");
    helper.setText(String.format("Javamail using spring JavaMailSender: %s", Calendar.getInstance().getTime()));
    helper.setSubject("test email");

    this.javaMailSender.send(mimemsg);
}
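For reference, the SSL-related javaMailProperties from the bean definition can also be assembled in plain Java (the class name here is just my own illustration):

```java
import java.util.Properties;

public class MailSslProps {

    // Builds the same SSL-related JavaMail properties as the XML bean definition.
    public static Properties sslProperties() {
        Properties props = new Properties();
        props.put("mail.smtp.ssl.enable", "true");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        return props;
    }
}
```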

The above settings can be used with Spring Integration's outbound-channel-adapter as follows (of course the prefixes util and int-mail must be properly bound to their respective namespaces):

<util:properties id="javaMailProps">
    <beans:prop key="mail.smtp.ssl.enable">true</beans:prop>
    <beans:prop key="mail.smtp.auth">true</beans:prop>
    <beans:prop key="mail.smtp.socketFactory.class">javax.net.ssl.SSLSocketFactory</beans:prop>
</util:properties>
 
<int-mail:outbound-channel-adapter id="mails"
    host="smtp.gmail.com"
    java-mail-properties="javaMailProps"
    password="your_password"
    port="465"
    username="your_username@gmail.com"/>

Thursday, October 1, 2015

InstallShield's InstallScript debugger stops working

If you find that your InstallShield (2014 Premier Version) InstallScript debugger has unexpectedly stopped working, check your MSI project's build settings (Build->Settings) and make sure that the MsiExec.exe Command-Line Arguments edit box is completely empty (as shown below).

After selecting various Log File Options, my debugger stopped working and it wasn't until I COMPLETELY cleared the options that it started working again.


Sunday, September 27, 2015

SMTP port 587 blocked by Windows firewall

Recently, I had problems trying to send emails using JavaMail and my Google Gmail account. I read in many places that it was most likely my ISP blocking my access to the service; however, I didn't believe it, because I could send emails with the same Java test program from a different computer within my network, an Apple running OS X. The Java test program I used for testing is given below and comes from the tutorialspoint website.

To see in my firewall log that the requests were being dropped, I first had to turn on firewall logging, which I did by opening a command prompt with admin rights and executing the following:

C:\>netsh advfirewall set allprofiles logging droppedconnections enable
Ok.

Then, I sent a request with the test program and saw the following in the logfile (%systemroot%\system32\LogFiles\Firewall\pfirewall.log):

2015-09-27 09:28:37 DROP UDP 192.168.178.58 239.255.255.250 52323 1900 371 - - - - - - - RECEIVE
2015-09-27 09:28:37 DROP UDP 192.168.178.58 239.255.255.250 52323 1900 357 - - - - - - - RECEIVE
2015-09-27 09:34:12 DROP TCP 192.168.178.64 66.102.1.108 58144 587 0 - 0 0 0 - - - SEND

The test program threw the following exception:

>java -classpath .;mail.jar;activation.jar SendEmailUsingGMailSMTP
Exception in thread "main" java.lang.RuntimeException: javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com,
 port: 587;
  nested exception is:
        java.net.SocketException: Permission denied: connect
        at rewards.messaging.client.SendEmailUsingGMailSMTP.main(SendEmailUsingGMailSMTP.java:64)
Caused by: javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 587;
  nested exception is:
        java.net.SocketException: Permission denied: connect
        at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1972)
..
        at javax.mail.Transport.send(Transport.java:124)
Caused by: java.net.SocketException: Permission denied: connect

I should mention that with Wireshark one will not see the requests, because they don't make it beyond the firewall; however, if one clears the local DNS cache (C:\>ipconfig /flushdns), one can see the hostname resolution request with Wireshark. This is nice because one can confirm that the destination address seen in the dropped TCP request is indeed the one associated with the program; the port number is also a good indication that one is looking at the correct request.


Another useful tool was openssl, which I have installed on my Windows laptop because I'm using Git Bash. With openssl, I could get Google's X.509 certificate using the following:

$openssl s_client -connect  smtp.gmail.com:465 -state -tls1

However, what didn't work was this:

$openssl s_client -connect  smtp.gmail.com:587 -state -tls1
Loading 'screen' into random state - done
connect: Bad file descriptor
connect:errno=10013

The dropped request was also logged in the firewall's log, similarly to what is shown above.

To troubleshoot, I used the following standard snippet of code taken from here:

import java.util.Properties;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SendEmailUsingGMailSMTP {
   public static void main(String[] args) {
      // Recipient's email ID needs to be mentioned.
      String to = "xyz@gmail.com";//change accordingly

      // Sender's email ID needs to be mentioned
      String from = "abc@gmail.com";//change accordingly
      final String username = "abc";//change accordingly
      final String password = "*****";//change accordingly

      // Assuming you are sending email through relay.jangosmtp.net
      String host = "smtp.gmail.com";

      Properties props = new Properties();
      props.put("mail.smtp.auth", "true");
      props.put("mail.smtp.starttls.enable", "true");
      props.put("mail.smtp.host", host);
      props.put("mail.smtp.port", "587");

      // Get the Session object.
      Session session = Session.getInstance(props,
      new javax.mail.Authenticator() {
         protected PasswordAuthentication getPasswordAuthentication() {
            return new PasswordAuthentication(username, password);
         }
      });

      try {
         // Create a default MimeMessage object.
         Message message = new MimeMessage(session);

         // Set From: header field of the header.
         message.setFrom(new InternetAddress(from));

         // Set To: header field of the header.
         message.setRecipients(Message.RecipientType.TO,
         InternetAddress.parse(to));

         // Set Subject: header field
         message.setSubject("Testing Subject");

         // Now set the actual message
         message.setText("Hello, this is sample for to check send "
            + "email using JavaMailAPI ");

         // Send message
         Transport.send(message);

         System.out.println("Sent message successfully....");

      } catch (MessagingException e) {
            throw new RuntimeException(e);
      }
   }
}



Friday, September 18, 2015

weak ephemeral Diffie-Hellman tomcat6


After upgrading from SUSE 10 to SUSE 11, which encompassed an OpenSSL library upgrade, some HTTPS clients like Chrome (Version 45.0.2454.85) or wget started getting the following error: "Server has a weak ephemeral Diffie-Hellman public key". No changes on the server side, namely Tomcat 6, had been made.





Attempts to solve the problem by changing the sslEnabledProtocols or sslProtocol attributes of the Connector element in the server.xml shown below were unsuccessful. Desperate actions such as updating the US_export_policy.jar and local_policy.jar did not help either.

The final solution was to limit the cipher suites by adding the ciphers attribute to the SSL-enabled connector, as shown below.

<!-- Define a SSL Coyote HTTP/1.1 Connector on port 443 -->
<Connector port="443" SSLEnabled="true"
           protocol="org.apache.coyote.http11.Http11Protocol"
           scheme="https" secure="true"
           clientAuth="want" sslProtocol="TLS"
           ciphers="TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA"

           keystoreFile="${catalina.base}/conf/%KEYSTORE%"
           keystoreType="JKS" keystorePass="%KEYSTOREPASS%"
           truststoreFile="${catalina.base}/conf/%TRUSTSTORE%"
           truststoreType="JKS" truststorePass="%KEYSTOREPASS%"
/>


Apache Tomcat's ciphers come from the underlying JVM, in particular from the JSSE. To see which ones are available, run something like the following:

import java.security.NoSuchAlgorithmException;

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class ListSupportedCiphers {

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        try {
            SSLParameters ssl = SSLContext.getDefault().getSupportedSSLParameters();

            sb.append("CipherSuites:\n");
            for (String cs : ssl.getCipherSuites()) {
                sb.append(cs).append('\n');
            }

            sb.append("\nProtocols:\n");
            for (String p : ssl.getProtocols()) {
                sb.append(p).append('\n');
            }
        } catch (NoSuchAlgorithmException e) {
            e.printStackTrace();
        }
        System.out.println(sb);
    }
}

The output should look something like this:

CipherSuites:
SSL_RSA_WITH_RC4_128_MD5
SSL_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA
SSL_RSA_WITH_3DES_EDE_CBC_SHA
SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA
SSL_RSA_WITH_DES_CBC_SHA
SSL_DHE_RSA_WITH_DES_CBC_SHA
SSL_DHE_DSS_WITH_DES_CBC_SHA
SSL_RSA_EXPORT_WITH_RC4_40_MD5
SSL_RSA_EXPORT_WITH_DES40_CBC_SHA
SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
TLS_EMPTY_RENEGOTIATION_INFO_SCSV
SSL_RSA_WITH_NULL_MD5
SSL_RSA_WITH_NULL_SHA
SSL_DH_anon_WITH_RC4_128_MD5
TLS_DH_anon_WITH_AES_128_CBC_SHA
TLS_DH_anon_WITH_AES_256_CBC_SHA
SSL_DH_anon_WITH_3DES_EDE_CBC_SHA
SSL_DH_anon_WITH_DES_CBC_SHA
SSL_DH_anon_EXPORT_WITH_RC4_40_MD5
SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA
TLS_KRB5_WITH_RC4_128_SHA
TLS_KRB5_WITH_RC4_128_MD5
TLS_KRB5_WITH_3DES_EDE_CBC_SHA
TLS_KRB5_WITH_3DES_EDE_CBC_MD5
TLS_KRB5_WITH_DES_CBC_SHA
TLS_KRB5_WITH_DES_CBC_MD5
TLS_KRB5_EXPORT_WITH_RC4_40_SHA
TLS_KRB5_EXPORT_WITH_RC4_40_MD5
TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA
TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5


Protocols:
SSLv2Hello
SSLv3
TLSv1




Tuesday, August 4, 2015

Download WSDL and include/imported XSD files

To reduce the external build dependencies of my Maven web application project, in particular for the jaxws-maven-plugin part of my project, I stored the WSDL file and all its dependencies in my project and configured the plugin's wsdlUrls to point to them, as shown below:

<configuration>
   <wsdlUrls>
         <wsdlUrl>${basedir}/src/main/resources/wsdl/SDMXQuery/SDMXQuery_1.wsdl</wsdlUrl>
   </wsdlUrls>
</configuration>

I downloaded all the files with SoapUI (Version 5.0.0) by creating a new SoapUI project with the URL of the WSDL as follows:


Then I clicked on the WSDL in the newly created project (See arrow 1 below), changed to the "WSDL Content" tab, and then pressed the small button (See arrow 2 below) with the tool tip: "Export the entire WSDL and include/imported files to a local directory". 


Then I chose the directory within my project.


After that, everything was nicely stored in the directory and could be configured in the POM file of my project as shown above.


Tuesday, July 28, 2015

jdbc driver with oracle wallets

The process of converting a JDBC connection, as configured in most Java applications, to use Oracle wallets can be confusing. These notes should provide some help and perhaps a better overview.

The first task is to create a wallet. Below, I created two sets of credentials for the same user, RIAD_JAVA. I created two because I wanted to try two types of configuration: one which uses the database connection string and one which uses a TNS alias.

#first create the wallet 
mkstore -wrl t-riad-wallet -create

#create the first of two credentials for user RIAD_JAVA. This will be used for the configuration, which needs a tnsnames.ora file.
mkstore -wrl t-riad-wallet -createCredential triad RIAD_JAVA secret2015$

#create the second set of credentials, also for RIAD_JAVA, but this time with part of the connection string taken from the Java program's externalized properties file. This configuration won't need a tnsnames.ora file.
mkstore -wrl t-riad-wallet -createCredential t-riad-dwh-db.tadnet.net:1521/triad.tst.tadnet.net         RIAD_JAVA secret2015$

Before continuing, let me give an overview of what will be shown.

1. Configuration Using thin client (jdbc:oracle:thin:)

   a.) with Database Connection String (@hostname:port/service)
   b.) with TNS alias (@anyalias which can be found in tnsname.ora)

2. Configuration Using oci (jdbc:oracle:oci:)



1. Configuration Using thin client (jdbc:oracle:thin:) 


The following jars are needed:
ojdbc6-11.3.0.jar, oraclepki.jar and ucp.jar

a.) with Database Connection String


Set up the Java application with the following connection string. This should correspond to what was used while creating the second set of credentials above.

jdbc:oracle:thin:/@t-riad-dwh-db.tadnet.net:1521/triad.tst.tadnet.net

Program must be started with the following JVM parameter:

oracle.net.wallet_location


Example:
java -cp .:/global/riad-app/tomcat/lib/ojdbc6-11.3.0.jar:./jlib/oraclepki.jar:./ucp.jar 
-Doracle.net.wallet_location=/tmp/wallet/t-riad-wallet  WalletTest


Note: no tnsnames.ora file is needed, only a copy of the wallet created above.
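A hypothetical WalletTest along these lines might look as follows. The class body and helper method are my own sketch; as in the example call above, the Oracle driver jars must be on the classpath and -Doracle.net.wallet_location must be set for the connection to succeed:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class WalletTest {

    // Credentials are deliberately absent from the URL; the wallet supplies
    // them based on the matching credential entry created with mkstore.
    static String thinUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:/@" + host + ":" + port + "/" + service;
    }

    public static void main(String[] args) {
        String url = thinUrl("t-riad-dwh-db.tadnet.net", 1521, "triad.tst.tadnet.net");
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Connected: " + !con.isClosed());
        } catch (SQLException e) {
            // Fails unless the Oracle driver jars and the wallet JVM
            // parameter are supplied as in the example call above.
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}
```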


b.) with TNS alias

The above mentioned jars will also be needed here. This time the connection string is as follows, which is what was used to create the first set of credentials above.

jdbc:oracle:thin:/@triad

Program must be started with the following JVM parameters:

oracle.net.wallet_location
oracle.net.tns_admin

The following tnsnames.ora file will be needed and should exist in the directory that oracle.net.tns_admin points to.


 triad =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = t-riad-dwh-db.tadnet.net)(Port = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = triad.tst.tadnet.net)
    )
  )

Example call:

java -cp .:./ojdbc6-11.3.0.jar:./jlib/oraclepki.jar:./ucp.jar -Doracle.net.wallet_location=/tmp/wallet/t-riad-wallet  -Doracle.net.tns_admin=/tmp/wallet WalletTest



2. Configuration Using oci (jdbc:oracle:oci:)

Here the jdbc connection string looks as follows:

jdbc:oracle:oci:/@triad


Before starting the program, the following environment variables are needed:

ORACLE_HOME=/opt/oracle/product/11.2.0;export ORACLE_HOME

LD_LIBRARY_PATH=/opt/oracle/product/11.2.0/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH

Also, a tnsnames.ora file and a sqlnet.ora file are needed.









Friday, July 24, 2015

Unified Functional Testing from Hewlett Packard behind an NTLM proxy server

A colleague of mine wanted to work with Hewlett Packard's UFT/QTP version 11 behind a proxy server which only supported NTLM. The secret to making it work, because UFT version 11 doesn't support NTLM, was to install cntlm (version 0.92.3) from SourceForge between the test suite and the proxy server on the machine hosting the HP program. After setting up the default cntlm.ini file with the proper proxy server host and port information, user name and password, cntlm was started as follows:

cntlm -c cntlm.ini -u myuser@mydomain

When UFT asked for the proxy information, localhost and 3128, the port cntlm was configured to listen on, were entered in the screen below:



Sunday, July 5, 2015

Creating a maven plugin with java 5 annotations rather than javadoc tags

Actually, creating a plugin with the Maven Plugin Tools is, thanks to the plugins themselves, relatively trivial; however, because it took me a long time to get my plugin working properly with Java 5 annotations rather than Javadoc tags, I decided to write it down.

I must confess that, sadly enough, Maven has never been a strength of mine, so it was no surprise that my plugin didn't immediately function properly. The problem I faced was in the passing of parameters to my Mojo during execution of my newly created plugin.

When converted to Java 5 annotations, the "outputDirectory" attribute of the default created Mojo was always null, even though I had the proper configuration in my host project's pom.xml:

<configuration>
    <outputDirectory>C:\Users\wnpr\Downloads</outputDirectory>
</configuration>

To make sure that we all have the same starting point, let me begin by showing how I used the Maven Integration for Eclipse plugin, M2E, to create a new Maven project as shown below.


Then I selected the "maven-archetype-mojo" as a template for my new project,


and finally gave my plugin a groupId and artifactId.


What follows is a view of the Maven project from within Eclipse. As you can see, the project already contains a Mojo class, called MyMojo, which was created for me.


Every plugin needs a plugin.xml file, which is generated by the maven-plugin-plugin and can be found in the META-INF\maven folder of the generated jar file. By default, maven-plugin-plugin generates the plugin.xml file from the Mojo Javadoc tags found in the Java code. These tags look like this:

/**
 * Goal which touches a timestamp file.
 *
 * @goal touch
 *
 * @phase process-sources
 */

If you, as I did, would rather use Java 5 annotations, such as:

@Mojo(name = "touch")
public class MyMojo extends AbstractMojo {

    @Parameter(defaultValue = "c:\\", required = true)
    public File outputDirectory;

    @Parameter(defaultValue = "Hello World!!!!", required = true)
    public String greeting;
    .
    .
}

for your plugin.xml generation, then you will have to rely on another artifact, namely maven-plugin-annotations.

To do this, you will have to add an additional dependency to your plugin project's pom.xml:

<!-- dependency for the annotations -->
<dependency>
    <groupId>org.apache.maven.plugin-tools</groupId>
    <artifactId>maven-plugin-annotations</artifactId>
    <version>3.4</version>
    <scope>provided</scope>
</dependency>


and override Maven core's default-descriptor execution phase to process-classes as follows:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-plugin-plugin</artifactId>
        <version>3.4</version>
        <executions>
          <execution>
            <id>default-descriptor</id>
            <phase>process-classes</phase>
          </execution>
          <!-- if you want to generate help goal -->
          <!-- 
          <execution>
            <id>help-goal</id>
            <goals>
              <goal>helpmojo</goal>
            </goals>
          </execution>
           -->
        </executions>
      </plugin>
      </plugins>
    </build>

It wasn't until this last change that the @Parameter annotated attributes of my Mojo were finally set properly during execution of my plugin. Both of these additions are well described here; however, when I read the instructions for the first time, I wasn't smart enough to understand them, as is often the case, and needed to waste a day before I finally came to my senses.

If you get a message like this:

Error extracting plugin descriptor: 'Goal: touch already exists in the plugin descriptor for prefix: junk
[ERROR] Existing implementation is: org.wmmnpr.junk_maven_plugin.MyMojo

then be sure to delete the old Javadoc Tags.

Good luck!

Thursday, February 26, 2015

Converting Spring boot standalone application to a web application

In the "Producing a SOAP web service" chapter of Spring's "Getting Started" guides, one creates a web service which eventually runs as a standalone application. In the section "Make the application executable", one reads:

"Although it is possible to package this service as a traditional WAR file for deployment to an external application server, the simpler approach demonstrated below creates a standalone application."

This blog post is about the changes actually needed to make the referred-to web service available in an application server.

A good starting point is to use the Maven archetype for a web application. This will provide the structure needed for a web application, i.e. the WEB-INF folder.

First, add a ContextLoaderListener to your web.xml in the WEB-INF folder in order to instantiate a Spring container which will be shared by all Servlets and Filters. Because an annotation style of configuration was desirable, an AnnotationConfigWebApplicationContext was instantiated (see the contextClass context-param below) instead of the default XmlWebApplicationContext.

<web-app ..>

 <context-param>
     <param-name>contextClass</param-name>
     <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext
  </param-value>
 </context-param>
 <context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>hello</param-value>
 </context-param>
 <!-- Creates the Spring Container shared by all Servlets and Filters -->
 <listener>
  <listener-class>org.springframework.web.context.ContextLoaderListener
 </listener-class>

 </listener>
   <!-- take special notice of the name of this servlet -->
    <servlet>
        <servlet-name>webservice</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
     <init-param>
       <param-name>transformWsdlLocations</param-name>
       <param-value>true</param-value>
     </init-param> 
    </servlet>

    <servlet-mapping>
        <servlet-name>webservice</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

For the web service, a MessageDispatcherServlet is also configured in the web.xml file, as shown above. Make sure that the "transformWsdlLocations" init-param of the web service Servlet is set to true during instantiation.

The MessageDispatcherServlet, because nothing else has been indicated in the web.xml file, expects a configuration file called "webservice-servlet.xml" (the servlet-name plus "-servlet"), which should be located in the WEB-INF folder. The contents of the file are shown below.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:sws="http://www.springframework.org/schema/web-services">
  <sws:annotation-driven/>

   <sws:dynamic-wsdl id="countries" portTypeName="CountriesPort" locationUri="/ws/countries">
       <sws:xsd location="/WEB-INF/classes/countries.xsd" />
   </sws:dynamic-wsdl>

</beans>
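Incidentally, relying on the naming convention is optional: like any Spring FrameworkServlet, the MessageDispatcherServlet also accepts an explicit "contextConfigLocation" init-param. A sketch of the alternative servlet entry (the file name is only illustrative):

```xml
<servlet>
    <servlet-name>webservice</servlet-name>
    <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
    <!-- point the servlet at an explicitly named config file
         instead of the default webservice-servlet.xml -->
    <init-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring-ws-config.xml</param-value>
    </init-param>
</servlet>
```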

With those changes done, it's now time to turn our focus to the WebServiceConfig.java class, also described in the guide. Here, the method for the bean definition "messageDispatcherServlet" should be commented out, as it's no longer needed. Everything else can stay the same.

The last thing that is needed is to add the countries.xsd file, as created in the "Getting Started Guide", to the application as a resource so that it can be found by the "countriesSchema" bean, which is instantiated in the WebServiceConfig class.

Once these steps have been completed, it should be possible to deploy the WAR to an application server like Tomcat, or to run it using the Maven "tomcat6-maven-plugin" plugin.
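For the second option, a minimal sketch of the plugin declaration for the pom (the version shown is an assumption; check for the latest):

```xml
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat6-maven-plugin</artifactId>
  <version>2.2</version>
</plugin>
```

The application can then be started from the project directory with "mvn tomcat6:run".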

The code is available here as a zip.

Good luck.

Wednesday, February 25, 2015

Parsing boring old CSV files with Java

It sometimes seems as if Java (JEE) skipped specifying a robust mechanism for handling CSV files. For instance, there is nothing like the schema descriptions (XSD or DTD) that exist for XML files. Perhaps it was just too trivial; but I've found that, despite the existence of more "robust" formats like XML and JSON, a lot of CSV files are still in use, which probably has much to do with the ubiquity of Excel. Whatever the reason, it would be nice to have a schema description for a CSV file, which could serve as a kind of interface description to bridge the gap between developers and business analysts.

I discovered something from Digital Preservation, which has created a very nice looking schema language for the CSV format; unfortunately, however, it was not well suited for Java. The library is written in Scala and comes with a Java bridge; at the time of writing this blog, though, the bridge didn't allow one to get much information about the exact errors and their positions, because the result of the parsing was returned as one long, loosely formatted string.

There are other libraries, like Jackson CSV and Super CSV from sourceforge.net, which seem less sophisticated or ambitious than Digital Preservation's; however, I wasn't impressed by them because they don't address the problem of bridging the gap between technical and business people. The building up of the CSV structure is largely done in Java code at run-time and is, for a non-Java person, almost unintelligible.

So, for my last task, which involved parsing three types of CSV files, I resorted to using the Apache Commons CSV parser and Java annotations. What I did was to create a Pojo with all the columns contained in the CSV. Then I invented some simple annotations with which I decorated the Pojo, and which the business analyst could read and understand without much difficulty.

Below is a small example of one of these Pojos. You might laugh that I passed around a Java source code file as a CSV file description, but it worked well and people didn't seem to have problems finding what was relevant for them and understanding it. At first it might look unintelligible; however, with a little patience one can read through it.

Each Java class defines a CSV line, here one with four columns, though in reality there were up to 396 columns. The columns' order is described by the @CsvColumnOrder annotation (with Java reflection it's not possible to determine the order in which fields are declared; hence the extra annotation). The expected type of each column is described within the class definition as a class attribute with further annotations like CsvChecked, CsvIsinCode, etc.


DbtPojo.java 
@CsvColumnOrder({
"IsActive",
"ExternalCode_ISIN",
"AmountIssued",
"PaymentDate"
})
public class DbtPojo {

  @CsvChecked(regex="^[YN]{1}", required=true)
   public String IsActive;

  @CsvIsinCode
  @CsvUnique 
  public String ExternalCode_ISIN;

  @CsvNumber(decimal = true)
  public String AmountIssued;

  @CsvDate
  public String PaymentDate;


}



Using reflection, it is possible during parsing to get information about the relevant Pojo fields by looking for the presence of annotations. For instance, if the second column of a line has been returned by the Apache parser, then from the CsvColumnOrder annotation one knows which field of the Pojo needs to be checked, namely "ExternalCode_ISIN". With a reference to the Java Field, one can check for the presence of a certain annotation by calling getAnnotation(annotation class reference). If the annotation is present, here CsvIsinCode, one can react appropriately for an ISIN code.
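The mechanism just described can be sketched in plain Java. The annotation names follow the post, but their exact definitions and the validate helper below are my assumptions, not the original code:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class CsvMetaDemo {

    // Hypothetical definitions of two of the annotations named in the post.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface CsvColumnOrder { String[] value(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface CsvChecked { String regex(); boolean required() default false; }

    // A two-column version of the Pojo from the post; it is never instantiated.
    @CsvColumnOrder({"IsActive", "ExternalCode_ISIN"})
    static class DbtPojo {
        @CsvChecked(regex = "^[YN]{1}", required = true)
        public String IsActive;

        public String ExternalCode_ISIN;
    }

    // Validate the raw value of the n-th column purely from the Pojo's metadata.
    static boolean validate(Class<?> pojo, int column, String raw) {
        try {
            CsvColumnOrder order = pojo.getAnnotation(CsvColumnOrder.class);
            Field field = pojo.getField(order.value()[column]);
            CsvChecked checked = field.getAnnotation(CsvChecked.class);
            if (checked == null) {
                return true;                     // no constraint declared for this column
            }
            if (raw == null || raw.isEmpty()) {
                return !checked.required();      // empty only allowed if optional
            }
            return raw.matches(checked.regex());
        } catch (NoSuchFieldException e) {
            throw new IllegalStateException("column name not found on Pojo", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(validate(DbtPojo.class, 0, "Y"));  // true
        System.out.println(validate(DbtPojo.class, 0, "X"));  // false
        System.out.println(validate(DbtPojo.class, 1, ""));   // true: column 1 has no constraint
    }
}
```

Note that DbtPojo is only ever passed around as a Class object; no instance is ever created.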

I've included the code as a zip for those who are interested in more detail.

The reader should not be misled into thinking that the CSV values from the files are parsed into instances of the Pojo described above. In fact, the classes are never even instantiated. They are only used for their annotations and field definitions; that is, for metadata purposes.