Edit[04/2011]: Since I seem to get the odd Google hit on this, it is worth pointing out that Justin Searls has since updated the plugin to include a lot more useful configuration options and has also made it available in the Maven central repo. So you can use it straight away by following the instructions.
Edit[05/2011]: And now there is also this awesome site where you can try out jasmine and even do it in CoffeeScript - all in your browser. Happy pandas all around!
As the amount of JavaScript in our project has grown quite considerably, it was time to get rid of this blind spot in our test coverage. And since I'm getting more and more used to test-driving the code I'm writing, it was especially annoying to have to regress to trial and error for the JavaScript part of our code.
I was overwhelmed by the choice of testing frameworks but eventually settled on jasmine, as that seems to be the most widely supported one with a reasonable syntax.
Since any testing that is not automated might as well not exist, it was important to be able to integrate it into our Teamcity setup. Thankfully, searching for jasmine and maven quickly pointed me to the jasmine-maven-plugin.
This plugin builds an HTML test runner that includes all JavaScript files in your project; the runner is then accessed with HtmlUnit and runs during the maven test phase, requiring no changes to our Teamcity build configuration.
Since we have a bunch of library and legacy JavaScript code that takes offence at being included directly, I needed a way to filter which files actually get included in the test. There was a slightly out-of-date fork on GitHub that included changes for configuring inclusion and exclusion patterns, which I was able to adapt to the current version (available here).
The nice thing here is that all files are still copied to the target directory and the exclusions only affect what files get loaded in the test runner which proved useful soon enough.
I wasn't really happy just expecting all files to be loaded in the tests, since that is not how it would happen in the deployed code. I'm hoping to have the test specs define which other files are required by a specific piece of functionality. Since all files get copied, it is easy enough to load them from inside the tests. I'm not quite happy with that yet, as globals don't get reset between different jasmine specs, but it should be easy enough to add that functionality at some point.
Another nice feature of the maven plugin is the inclusion of other javascript artifacts. This way you can deploy libraries like jquery to your local maven repository and have them automatically included in your test runner.
So, with that said, this is what I currently have in our maven configuration:
Most of the options are explained very well in the main jasmine-maven-plugin README. I've only changed the plugin version to our own locally deployed version and made use of the new include option. This will only include files named *_spec.js in the test runner. The specs will then load additional files themselves.
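For illustration, a sketch of roughly what such a plugin entry looks like. The coordinates are those of the upstream plugin; our build pointed at a locally deployed fork instead, so the version and the exact element names here are assumptions, not our actual configuration:

```xml
<plugin>
  <!-- upstream coordinates; we used our own locally deployed fork -->
  <groupId>com.github.searls</groupId>
  <artifactId>jasmine-maven-plugin</artifactId>
  <version>1.0.2</version>
  <configuration>
    <!-- only load files named *_spec.js into the generated test runner;
         the specs themselves load whatever other files they need -->
    <includes>
      <include>**/*_spec.js</include>
    </includes>
  </configuration>
</plugin>
```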
Friday, November 26, 2010
Wednesday, October 20, 2010
Oh, you got me there, cygwin
I wonder if there's an actual technical reason for this. Especially since the auto option cannot possibly work with those constraints. Hilarious.
$ u2d --help
u2d is part of cygutils version 1.4.4
converts the line endings of text files from
UNIX style (0x0a) to DOS style (0x0d 0x0a)
Usage: u2d [OPTION...] [input file list...]
Main options (not all may apply)
-A, --auto Output format will be the opposite of the autodetected source
format
-D, --u2d Output will be in DOS format
--unix2dos Output will be in DOS format
-U, --d2u Output will be in UNIX format
--dos2unix Output will be in UNIX format
--force Ignore binary file detection
--safe Do not modify binary files
Help options
-?, --help Show this help message
--usage Display brief usage message
--version Display version information
--license Display licensing information
Other arguments
[input file list...] for each file listed, convert in place.
If none specified, then use stdin/stdout
$ u2d -U messages_de.properties
u2d: cannot accept any conversion type argument other
than --u2d (--unix2dos, -D) when the program is called with this name
$ d2u -U messages_de.properties
messages_de.properties: done.
Thursday, September 16, 2010
Fun with firewalls
One of the applications I've been working on uses Spring Webflow and, at a few crucial points in the application, renders data into a JavaScript dialog. Spring offers a JavaScript library to help with handling AJAX requests, including support for handling redirect-on-POST inside aforementioned JavaScript dialogs.
So, in one particular case, you would click on a button, an asynchronous request would be made to enter a new view-state in Webflow and, after redirecting, the content of the response would get rendered in a JavaScript pop-up.
Eventually, one or two of our customers complained about this button not working. This was puzzling since it didn't affect the vast majority of our customers. Screenshots from the customers' machines showed no errors or gave any clue as to what was broken. We had a hunch it might have something to do with security settings but had no way to verify that or reproduce the behaviour on our end. For the time being we just gave up, as we were already considering doing away with the pop-up anyway.
A few weeks later I was digging through Spring's JavaScript code for some other reason and noticed how the asynchronous redirecting is handled. Since sending an HTTP redirect status would mean that the browser redirects the complete window, Spring checks for Ajax requests and, if it finds one, puts a special header into the response and answers with a normal 200 status. Spring's JavaScript then checks for that special header and, if it thinks the response should be rendered in a pop-up, follows the redirect by making another Ajax request and displays the response in the pop-up. It's pretty clever and gives a lot of flexibility.
Out of curiosity I wanted to know what would happen if that special header didn't make it to the browser. I fired up Fiddler and replaced our server's response with a version with Spring's headers removed. I clicked on the button and - nothing happened! I was pretty happy that I figured out a way to reproduce the problem but I still wasn't sure this was actually the cause. Because why would anyone want to filter HTTP headers?
To be certain, I changed the JavaScript to show a descriptive error message to the user in case the headers went missing. This wouldn't actually fix the functionality, but at least the user would get some feedback on clicking the button. The code eventually found its way onto our servers, and a few months later a customer mailed in a screenshot of the new error message. With a little back and forth with their IT person we found out that it was indeed the customer's firewall filtering out the headers. Eventually they changed their settings so that they could use our application without problems.
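The client-side check described above can be sketched roughly like this. This is a simplified illustration, not Spring's actual code; the header name "Spring-Redirect-URL" and the message text are assumptions:

```javascript
// Simplified sketch of checking for the special redirect header,
// not Spring's actual implementation. The header name and the
// error message are assumptions for illustration.
function resolveAjaxRedirect(getResponseHeader) {
  var redirectUrl = getResponseHeader("Spring-Redirect-URL");
  if (redirectUrl) {
    // normal case: follow the redirect with another Ajax request
    return { action: "redirect", url: redirectUrl };
  }
  // header missing (e.g. stripped by a firewall): surface an error
  // to the user instead of failing silently
  return { action: "error", message: "The server response was modified in transit." };
}
```

In the real handler the `getResponseHeader` argument would come from the XMLHttpRequest object of the completed Ajax request.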
Which really only leaves the question of 'Why?' It's understandable that non-IT companies with a small IT staff just rely on the default settings of whatever firewall package they bought. But as a developer of said firewall, what security do I gain by filtering response headers? I can see how filtering specific request headers makes sense at least in obscuring the internal network to the outside. But what harm could possibly come in a response header that couldn't otherwise also come via one of the white-listed headers or just plain via the response body?
Oh, well. There's a ticket in the SWF JIRA and it might be worth a look if you're using Webflow.
Tuesday, August 3, 2010
Windows 7, Eclipse, Mintty
I recently switched to Windows 7 at work and so had to reinstall and rethink a couple of things and learned a few new things along the way.
While I was looking for a way to pin a cygwin shell to the taskbar, I found someone mentioning mintty. If only I had known about it earlier. Finally I can have a shell in windows that does copy/paste right. It defaults to paste on middle-click and you can configure it to copy-on-select so you get a nice linuxish behaviour.
When reinstalling Eclipse I ran into quite an annoying issue that I wish had been easier to figure out. If you unpack things into Windows' Program Files folder, Windows 7 sets permissions so that you need admin privileges to change anything. This is probably a pretty good idea, I just wasn't aware of it at all.
So when I tried to install plugins in Eclipse, it couldn't put them into its installation directory and instead created a .eclipse folder in my home directory and put stuff there. And apparently a lot of plugins can't really cope with that and just silently fail.
So, if you ever wondered why, even though it says that subclipse is installed, you don't have SVN as a choice of Team Provider, that's why.
And that was some very random and lucky googling that saved me from wasting a few more hours on that problem.
Friday, April 23, 2010
Upgrading to Spring 3
So we decided it was time to upgrade our project to Spring 3. The last time I gave this a try it was a pretty bad experience because at the time I was unable to get the upgraded maven packages without switching to the OSGi names. I'm not sure whether that was an issue with our archiva server or Spring but I didn't run into it this time.
This update also included upgrading Tiles, JUnit and Spring Security.
The latter was not really required and upgrading it turned out to be quite a hassle. Though I guess that hassle had to be dealt with at some point anyway.
Spring Security underwent some major refactoring for version 3, moving a lot of the packages around and also making some minor improvements to parts of the API (UserDetails.getAuthentication and some changes to voters). Adjusting to that was mostly just a matter of organizing imports in Eclipse, and thankfully most of our access to the relevant classes was wrapped at a few key points.
But it turned out that Webflow hadn't been upgraded to support either Spring Security 3 or Tiles 2.1. Their JIRA has tickets for both and I was able to hack something together from those. I'm curious whether there is a better way to handle these versioning issues. Explicitly requiring external dependencies (instead of marking them optional) is too inflexible if the upgrade is minor, yet it would be nice to be alerted to incompatible changes. Maybe one could put meaning into major and minor version numbers? Oh well, versioning is always tough.
In the case of Tiles, actually removing methods from their API instead of just deprecating them and letting them return null would have brought this particular issue to light more quickly, since Webflow 2.0.9 simply wouldn't have compiled. Why even bother with separate API and core/impl packages?
Other maven changes were limited to removing spring-security-core-tiger (yay for fewer JDK-specific packages) in favor of spring-security-config. I also had to add commons-codec, because it seems to have previously been pulled in implicitly from somewhere else and only became needed at some point after I last ran check-dependencies.
In the end, all of this turned out to be less problematic than I had feared. The knowledge gained from the last attempt (i.e. the spring-test update also necessitating a JUnit update) and the good test coverage helped to sort out most issues before even trying to run the application.
Tuesday, April 6, 2010
Working around type erasure in Java
Due to how generics are implemented in Java, there is no way to determine at runtime, for example, the type of the objects contained in a collection. This can be a bit of an issue if you want to use that type information to look up a specific converter or repository for that type.
What I didn't know is that while the type information is lost on the actual instance, it is still possible to get it from the surrounding class declaration. This does require that the instance is declared as a field and that the field declaration contains the generic type parameters. Similarly, for classes extending or implementing generic classes or interfaces, you can also access the type parameters via reflection.
Here's a messy example that hopefully should illustrate both points:
import static org.junit.Assert.assertArrayEquals;

import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class LongToStringList extends AbstractList<String> implements List<String> {

    private List<Long> someList = new ArrayList<Long>();

    @Override
    public String get(int index) {
        return Long.toString(someList.get(index));
    }

    @Override
    public int size() {
        return someList.size();
    }

    @Test
    public void test() throws Exception {
        // the type parameter of the superclass declaration survives erasure
        // (alternatively: this.getClass().getGenericInterfaces()[0])
        ParameterizedType superclass = (ParameterizedType) this.getClass().getGenericSuperclass();
        assertArrayEquals(new Type[]{String.class}, superclass.getActualTypeArguments());

        // and so does the type parameter of the field declaration
        Field field = this.getClass().getDeclaredField("someList");
        ParameterizedType fieldType = (ParameterizedType) field.getGenericType();
        assertArrayEquals(new Type[]{Long.class}, fieldType.getActualTypeArguments());
    }
}
I'm always a bit scared of reflection, so I only learned about this while browsing through my colleague Markus' code and then again while stumbling through some of the code in Spring-Binding (which led here). The latter can use this to determine which converter to use to map form values from an array into a collection on a bound model. And that is pretty nifty, if not without its problems.
Tuesday, March 16, 2010
QCon 2010
Our company was nice enough to send my colleague Sebastian and me to London to visit QCon. It was my first conference of this kind and it was an overwhelmingly positive experience.
What kept me away all those years probably was the fear that these kind of events just act as a sort of trade fair showing off some products or new technologies. Thankfully, that wasn't the case and the majority of talks weren't about what to work with but about how we work. It was truly inspiring (a word I'm sure will crop up again in this post) in that it reminded and reaffirmed me about the things I like about writing software.
Hugo Rodger-Brown sums it up nicely:
I’m not sure I learned anything totally new – it was more an affirmation of things I’d thought / heard / read about – and a chance to see some of these things out in the wild. The speakers ran from big conference names to academics through some front-line experts, so a real range. I don’t think I attended a single sales pitch, and although a few named products slipped through the net, they were all OSS projects, not commercial products. All-in-all it seemed to stay true to the “for programmers by programmers” promise.