Saturday, August 8, 2009

Grails and Groovy - too dynamic for the real world

After using Groovy and Grails on some bigger projects for months, I've come to reconsider some of the advantages of this platform - in particular the fact that it's a dynamically typed language.

If you come from a statically typed Java background like I did, it might not even be obvious in the beginning what dynamic typing implies here, in particular because Groovy is mostly a superset of Java. In Groovy you can still type "Map map=new HashMap();" and it will work. A bit later you will get lazy and start typing "def map=[:]", which means the same, works the same and is a bit shorter to type. At that point I simply thought "score one for Groovy against Java" and continued.

The downside is that there is almost no compile-time checking in Groovy. This follows directly from some of its features:
  • Methods and fields can be added to objects at run time
  • The type of variables can change at run time
  • Groovy is very forgiving of null values and exceptions, in many cases continuing silently instead of aborting
This means that many of the compile-time checks of statically typed languages are impossible, but also that IDEs like Eclipse cannot provide many of the context-sensitive completion features that we're used to, and that many errors are flagged only at run-time when that particular line of code is reached.
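To make this concrete, here is a minimal Java sketch (my own illustration, not from the original post) of what those compile-time checks buy you: the commented-out lines would be rejected by the compiler before the program ever runs, whereas the equivalent Groovy code written with "def" would fail only at run time, when that line is reached.

```java
import java.util.HashMap;
import java.util.Map;

public class TypingDemo {

    // Statically typed: the compiler verifies every method name and type.
    static int lookupAnswer() {
        Map<String, Integer> map = new HashMap<>();
        map.put("answer", 42);

        // map.putt("answer", 42);         // typo: caught at compile time
        // map.put("answer", "forty-two"); // wrong value type: also caught
        //
        // In Groovy, "def map = [:]; map.putt('answer', 42)" compiles fine
        // and only throws MissingMethodException when that line executes.

        return map.get("answer");
    }

    public static void main(String[] args) {
        System.out.println(lookupAnswer()); // prints 42
    }
}
```

The same compiler knowledge is what lets the IDE offer completion and safe renaming on the Map, which is exactly what disappears when everything is a "def".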

For small single-developer projects with a prototyping/scripting nature this is likely not an issue, in particular when the developer has a lot of experience with Groovy and/or other scripting languages. For larger multi-developer projects the picture changes, even more so when developer experience level varies. In longer-running multi-developer projects maintainability can become a huge issue. Unit tests can help, but are notoriously difficult and time-consuming to create for web applications and applications that interact with many external systems.

When unit tests are lacking and compile-time checks are absent, the result can be trial-and-error programming and a very fragile code base. One example is refactoring, an important part of agile programming. In Java it's normal to refactor constantly, e.g. moving and renaming classes. The IDE will do all the grunt work of renaming corresponding references, and the compiler will help catch any resulting bugs. In Groovy this is not possible; the IDE cannot do the grunt work, so refactoring will be a manual process. Any missed references will not be found at that time, but only when that particular line of code is reached at run-time. I recently started realizing that Groovy offers a very bad trade-off for larger projects.

That you can type "def map=[:]" instead of "Map map=new HashMap();" is nice, and saves maybe 10 seconds at code-writing time. But if that means that hours are lost hunting bugs that would simply not occur in statically typed languages, hours lost searching for what are basically syntactic errors, then the downside becomes obvious in a very painful way.

Luckily this sad tale has a happy ending. For the particular projects that we're running we will simply continue using Grails - its advantages still stand. We will also still use Groovy for simple code snippets and scripting. For anything more complicated we will create Java classes and call them from Groovy. This is fully supported by Grails, and offers a best-of-both worlds escape.
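As a sketch of that best-of-both-worlds setup (class and method names here are hypothetical, chosen only for illustration): the complicated logic goes into a plain Java class under the project's Java source folder, where the compiler and IDE can fully check and refactor it, and Groovy code calls it like any other class.

```java
// Hypothetical example of a statically typed "core logic" class that
// Grails/Groovy code can call directly, keeping compile-time safety
// for the parts that matter most.
public class PriceCalculator {

    // Every signature is checked by the Java compiler; refactoring tools
    // can safely rename this method and update all callers.
    public static long totalCents(long unitCents, int quantity, double vatRate) {
        long net = unitCents * quantity;
        return Math.round(net * (1.0 + vatRate));
    }

    public static void main(String[] args) {
        System.out.println(totalCents(1999, 3, 0.19)); // 5997 * 1.19 -> 7136
    }
}
```

From a Groovy script or Grails controller the call would simply be PriceCalculator.totalCents(1999, 3, 0.19); no bridging code is needed, because Groovy classes and Java classes share the same JVM and class path.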

Friday, June 5, 2009

Grails script runner featuring dependency on other projects

One of my favorite uses for Grails is scripting, as I've stated before. However, I stumbled over re-using code from other Grails projects in Eclipse.

With Java projects I'm used to interdependencies - for example larger projects are split up into sub-projects. All external libraries are put into the 'Lib' sub-project and can be used from the others, which makes the other projects look cleaner in Eclipse. To do this you simply use the Eclipse build path for the Eclipse compiler, and build.xml for Ant-driven build and deployment.

With Grails this isn't so straightforward, in particular because running scripts does not go through build.xml at all. To influence the Eclipse compiler you can still use the build path, and include other Eclipse projects there. To influence what happens when you run a script from the command line you have to use a custom script runner. This isn't so bad because for some other features (enabling GORM) you have to use a custom script runner anyway.

An example is available here: ScriptRunner.groovy. Put it in your project scripts folder together with the scripts you want to run, and invoke it as grails [env] script-runner OtherScript [parameters..]

The magic for this particular purpose is in overrideCompilerPaths, which is mostly copied from _GrailsCompile.groovy in the Grails distribution but has the added line:

src(path: "${basedir}/../OtherProject/src/groovy") // added path

You'll have to add a line for each folder of each project you want to use code from. It isn't very elegant, so please let me know if you've found a better way to do this.

NB If you also use the Grails web app in this project, you'll also have to create a script like this: RunXApp.groovy, and instead of grails run-app use grails run-x-app.

It's a bit of a hassle, but once it's smoothed out you have a scripting environment where:
  • You can map domain classes to the database using only a few lines of code (using GORM and the data source provided by Grails you only have to add domain classes yourself).
  • You can use all the power of Grails, Groovy, Hibernate and Java from your scripts with zero extra setup.
  • You have a dynamically generated web app to view and edit your domain classes at virtually zero extra cost (you only have to run create-controller for each domain class and replace the controller body with def scaffold = default).

Monday, May 11, 2009

Groovy/Grails versus Perl continued

I've had some critical feedback about my previous post, where I compared a lot of programming languages with the Groovy/Grails combination and claimed G/G was the clear winner.

It struck a particular nerve with Perl people, who claimed I had omitted some important developments in their language in the past years. (I hope the PHP people won't find out what I wrote about their language or I'll have to go find my asbestos suit).

Before I go into the details of that discussion, first a brief remark about C#.Net. One of the commenters pointed out that I hadn't really demonstrated G/G's superiority there, merely used some tricks of logical reasoning. While I wouldn't want to discount logic, I have to admit that carey was right. .Net is a very broad platform, and there are some situations where Grails and Groovy cannot replace it. One example is that .Net can be used to make Windows DLLs. Of course no sensible G/G developer would want to make Windows DLLs anyway, but still.

Then Perl. Some people claimed I really needed to take a look at Moose and Mouse, two OO extensions for Perl 5 that can complement/replace the default Perl 5 OO abilities. I said previously that Perl 'lacks elegant OO' and I still stand by that remark. Three different OO subsystems? In one language? It's certainly an accomplishment, but in my view not a positive one. Think of all the subtle errors and maintenance issues that can be created when those subsystems are mixed in a single program.

Others claimed I should have said something about Catalyst. Catalyst is a web framework for Perl that is vaguely Ruby on Rails-like. At first glance it looks good - with a solid set of features on offer. An immense set of features. In fact the developers have managed to support practically every single Perl technology known to man. All kinds of persistence libraries. A broad selection of view/templating modules. No wonder that users complain of its complicated syntax. Like with the three OO frameworks, it shows that developers couldn't agree, couldn't make up their minds about what they really wanted. So instead they added everything in and hoped for the best. In Perl it's even an official language philosophy (There's more than one way to do it, aka Tim Toady). Java also suffers from this 'feature creep', with frameworks such as Spring and AppFuse that try to support so many different ways of doing things that they lose sight of what developers want: simplicity and productivity.

So - how do I say this tactfully to Perl fans? You're sitting on a cooling body guys. A quickly cooling body. And instead of simply running away you're trying to cover up the evidence, waving Perl's arms and saying: 'see, it still moves!'.

To make real progress sometimes you have to select only the best parts of what you have, and jettison the rest. Groovy and Grails have done that for Java and its jungle of web frameworks. I hope Perl 6 will do the same for Perl, soon.

Update: please see here for Chromatic's well-written response. It's obvious that we don't agree 100% but I hope the combination of viewpoints makes for an interesting read.

Friday, May 8, 2009

Groovy is the new Perl / Java / ...

The past weeks I've been doing some programming with Groovy and Grails, a relatively new programming language/web application framework, and I'm starting to like it a lot. It's not perfect, but I think it can beat its competitors with one hand tied behind its back.

In the past I've done a lot of Java. It's a very good language, but it has some downsides:
  • It's not suitable for scripting. Its syntax is simply too verbose, it's strictly Object-Oriented, it has to be compiled, etc.
  • It doesn't have a single best web application framework. There are so many of those in the Java space that it is practically impossible to keep up with them, and all have their pros and cons. If you pick one now you can be sure that it will be hopelessly outdated in 3 years. It's getting to be a big mess.
  • It doesn't have a single best Rapid Application Development environment. This is related to the previous point - in my view programming is all about productivity and maintainability. Maintainability of Java programs is mostly good, but productivity is mostly not so hot. The JSF framework is very promising in this area, but because of all the fragmentation and FUD it's gaining traction slowly.
On the other hand Java has one big plus: it's powerful. With all its standard libraries, myriad Open Source frameworks and support from all the big vendors (except Microsoft), it's everywhere.

So now we have Groovy/Grails. Let me show you how it beats all its competitors and predecessors:
  • Java. This one's easy. Instead of fighting Java it simply stands on its shoulders. Groovy can do everything Java can do because it's Java-based. You can use all those libraries and, if you want, all the frameworks too (even though Grails is the way to go). And Groovy is good for scripting. Matched with Grails you get a scripting language with an excellent ORM and dynamically generated web interface thrown in for free.
  • C#.Net. Groovy and Grails are Open Source. Besides C#.Net is mostly a Java wannabe, so if our new favorite can beat Java it can surely beat .Net.
  • Ruby on Rails. Conceptually and feature-wise Ruby/RoR and Groovy/Grails are closely matched. However the fact that Groovy is Java-based is such a strong advantage that it makes Ruby/RoR irrelevant. Grails also offers better run-time performance.
  • Delphi. Even though Delphi has fallen out of fashion it is probably still one of the best RAD environments ever. It is only recently that JSF and Swing-based tools have come close in terms of productivity. Grails takes this RAD ability one step further in RoR fashion: an Object-Relational Mapper that is 100% to the point, dynamically generated web interfaces and the ability to fall back to custom coding wherever you want, with full Groovy and Java power available.
  • PHP. PHP is all about hacking together a web interface quickly, without thinking about maintainability, re-using code etc. If requirements change a lot PHP code will quickly become spaghetti. Security is weak. Let's face it: 1995 called and wants its language back.
  • Perl. I've used Perl a lot in the past - it used to be the king of scripting languages. Unfortunately in 2000 the Perl developers lost touch with the real world and started to develop the perfect scripting language, Perl 6. They're still at it, apparently. Meanwhile Perl 5 has become hopelessly outdated. It lacks elegant OO and a good web framework, to name only a few downsides. Its syntax is so compact (there is some truth in the joke that every randomly typed string of characters is a valid Perl program) that maintainability suffers. Groovy has none of these downsides, and - as far as I can see - doesn't miss any of Perl's major features.
Nothing's perfect. So what are some of the things that could push this winner to even greater heights?
  • Documentation. Groovy/Grails have a lot of power, but some of it is very hard to find. There isn't a single place where it's all described in full. Often you have to make do with some basic examples and a lot of Googling.
  • IDE support. It's clear that one of the main concepts behind Grails has been to make an IDE unnecessary to a large extent. Don't Repeat Yourself and all that. You can get by with just a text editor. Also, Eclipse and Netbeans have basic support. Still, it would be nice to have better debugging and real-time syntax checking/refactoring. Here Java still has a clear advantage.
  • Spring-based. I was a bit disappointed when I found out that Grails is Spring-based, because it adds an extra layer of complexity that is (in my view) often not really needed. In practice Spring stays out of the way when you work with Grails, so this isn't really a major issue. It does add to the very, very long stack traces that you get when something goes wrong, though.
  • Grails views are not component-based. Some effective tag library magic is used to give a first impression that is JSF-like, but the full power of JSF with its drop-in visual component libraries, data binding and IDE drag&drop visual design is missing. It's likely that JSF support will be added to a future version of Grails. I'm not yet sure if that will be an improvement (worst-case it will take away a lot of Grails's simplicity and push it back into the shark-infested waters that make up Java's web application framework landscape).
Anyway. If you do a lot of web programming and/or scripting, then you really owe it to yourself to take Groovy and Grails for a spin.

Tuesday, May 5, 2009

Two oo's for success

The past few months I've been working on a web site that may eventually replace Wikipedia as the most popular encyclopedia on the net. Or it may not, in which case I will still have learned a lot. Initially I settled on the domain. At the time I thought it was a good name, descriptive and clear, certainly with the .info extension. It was brand new, so it took about 6 weeks before Google took it seriously and really started to index the pages on the site (this is called the Google sandbox effect). However, by the time the site was out of the sandbox the name had started to fade a bit in my view. In particular the .info extension, which I liked a lot but which turned out to have a bad reputation with a lot of people.

I also found out about Squidoo, a wiki-like site set up by Seth Godin, the marketing guru. I've read a number of his books and admire his outlook on things, so it's good that I didn't know about Squidoo before starting out. If I had I might not have started on my project.

Back to domain names - Seth Godin explains the name Squidoo by noting that many successful sites have two o's in the name (Google, Yahoo!, his own Yoyodyne). So he figured that any site with two oo's will be successful.

This was a few weeks ago, and at the same time, when I was browsing through the GoDaddy domain auctions that I've written about previously, I noticed a domain name that really struck a chord. I ordered it, and recently switched over FastFunFacts to the new domain name. It again places me in the Google sandbox, I hope that will not last as long as the previous time. I also hope that Yahoo! won't take offense, because our names have more than a passing resemblance. Of course my site is in quite a different field from Yahoo!, I hasten to add.

Anyway, I welcome you to the new Xayoo.

Tuesday, April 14, 2009

This domain name has already been used as an alias or domain

At least, that's what Google Apps says, so it must be true.

In my previous post I wrote about the domain I lusted after, and bought, or thought I had. Now the waiting period is over, and it's indeed fully mine. Nice. So I can recommend GoDaddy auctions after all. In particular the close-out sales. However, when I activated the domain and started adding it as a domain alias to one of my existing Google Apps domains, I found out that buying a used domain can have its downsides too.

The previous owner also thought that Google Apps was nice. And because Google Apps is free there's no trigger anywhere to delete a domain once the previous owner is no longer using it. (And no way to force ownership re-verification).

Luckily I use another free service that came to my rescue in this case: EditDNS.Net. I use it for some of my domains for its DNS hosting abilities, but it also has E-mail forwarding powers. Perfect. The interface for editing DNS records at GoDaddy is terrible anyway, so I wasn't sad to switch the domain to EditDNS. And I've suggested the ability to re-verify ownership to Google, so one of these days they may add it.

What if the previous owner had also liked EditDNS and registered his domain there? And had used a bunch of other useful free services? Then my domain would be less and less useful I suppose. Hopefully owners of free services will pick up on this and start adding ways to free abandoned domains!

Where's my domain?

I'm a domain buying novice. If you're a pro at buying domains the process has probably lost a bit of its excitement, but I'm still such a novice that every time I buy a domain I'm happy like a kid unwrapping a Christmas present. Or Easter egg, to remain in season.

I had been eyeing a domain at GoDaddy Auctions for a few days. It's nice, short, exactly what I needed, and it was for sale at a low price. So when I finally made up my mind and clicked 'Buy Now' (and went through the usual hassle of checking out the virtual cart, updating the credit card details in my Paypal account etc.) I was expecting to see that nice domain show up in my account, ready to be fiddled with. Not so.

So what does the 'Buy Now' option in GoDaddy domain auctions actually mean?

It's complicated. But if I understand correctly it means that the domain has entered its final 'locked' state before expiring, and will be yours in five to ten more days. In that time the only one who can take your shiny new toy away from you is the previous owner, who has at that point been neglecting to renew his registration for more than two months, so isn't very likely to intervene. And if he does you're supposed to get your money back from GoDaddy.

The best explanation I've found for the whole process is a blog post by Mike Davidson. Old, but as far as I can see still up to date.

Meanwhile I'm waiting, and waiting. I'm still learning that making Web 2.0 apps is all about now-now-now but also about having patience.

Wednesday, April 1, 2009

Privacy Policy for this Blog

If you require any more information or have any questions about this privacy policy, please feel free to contact me by email at

At this blog, the privacy of visitors is of extreme importance to us. This privacy policy document outlines the types of personal information that are received and collected by this blog and how they are used.

Log Files
Like many other Web sites, this blog makes use of log files. The information inside the log files includes internet protocol (IP) addresses, browser type, Internet Service Provider (ISP), date/time stamp, referring/exit pages, and number of clicks, used to analyze trends, administer the site, track users' movement around the site, and gather demographic information. IP addresses and other such information are not linked to any information that is personally identifiable.

Cookies and Web Beacons
This blog does use cookies to store information about visitor preferences, record user-specific information on which pages the user accesses or visits, and customize Web page content based on visitor browser type or other information that the visitor sends via their browser.

DoubleClick DART Cookie
  • Google, as a third party vendor, uses cookies to serve ads on your site.
  • Google's use of the DART cookie enables it to serve ads to your users based on their visit to your sites and other sites on the Internet.
  • Users may opt out of the use of the DART cookie by visiting the Google ad and content network privacy policy at the following URL -
Some of our advertising partners may use cookies and web beacons on our site. Our advertising partners include Google Adsense.

These third-party ad servers or ad networks use technology to create the advertisements and links that appear on this blog and send them directly to your browser. They automatically receive your IP address when this occurs. Other technologies (such as cookies, JavaScript, or Web Beacons) may also be used by the third-party ad networks to measure the effectiveness of their advertisements and/or to personalize the advertising content that you see.

This blog has no access to or control over these cookies that are used by third-party advertisers.

You should consult the respective privacy policies of these third-party ad servers for more detailed information on their practices as well as for instructions about how to opt-out of certain practices. This blog's privacy policy does not apply to, and we cannot control the activities of, such other advertisers or web sites.

If you wish to disable cookies, you may do so through your individual browser options. More detailed information about cookie management with specific web browsers can be found at the browsers' respective websites.

(Please note that the creation of this policy was prompted by the new Google Adsense privacy policy at I used the handy tool at to generate it, then fixed some grammatical errors by hand. Please feel free to copy this policy and adapt it on your own site).

Monday, March 30, 2009

Squid for Fedora Core 4

Today's post is just a short note - I wish the Fedora Legacy project was still alive. As it is, it's quite difficult to update older Linux servers. There must be millions of them, for example with earlier versions of Fedora Core installed. The hardware still runs fine, the software is stable, and uptimes can be measured in years. This is good; the TCO couldn't be lower. But if you want to update an rpm package to gain some new features you're out of luck.

Obviously one way to fix this would be to upgrade the whole server to a newer version of Linux, like CentOS 5. It's exactly this kind of forced technical migration project that is the bread and butter of IT revenue, without providing any benefit to users. The system will be off-line for a few days, after which the IT guys will announce that (hopefully) the migration went fine, and the system is now up to date again. Users won't see the difference, but have mostly come to accept this as a given.

Sometimes you're in luck though. Today we managed to take the sources for a newer version of the Squid proxy cache, intended for Fedora Core 8, and rebuild it for Fedora Core 4.

So here's my gift for today:

Here is the original source package that we used:

Let's see if we can postpone that full server upgrade for a few more years..

Friday, March 27, 2009

Best free workplace for freelancers in Amsterdam

I'm writing this from my new office, on the 6th floor of a very good-looking building in the center of Amsterdam, with great views, ample desk space, an excellent cafeteria and your choice of wireless Internet for your laptop or unlimited use of PC or Apple desktops. I'm not the only one. Around me hundreds of others, mostly young and hip, are enjoying the same facilities. Best of all, it's free (except the cafeteria, of course). No multi-year rent contract or any other obligations. You don't even need a library card. Yes, that's right, I'm at the Central Public Library. If the word library is strongly linked to 'old and stuffy' in your mind, like it was in mine, then you really need to see this.

I'm not sure if it leads to more library books being read, but it sure is a nice public service. Of course it has a few downsides. Gaining access to the wireless LAN is a bit more difficult than I'd expected (it's not anonymous, so you first need to register and then configure some unusual network settings on your laptop). Also, particularly in the afternoons, it can get quite crowded, with a corresponding effect on the speed of the Internet connection. For now it's just a welcome change of scenery from my own tiny suburban office, which I'm keeping because of my multi-year rental contract, and because there I don't have to share the Internet connection with anyone. But if the recession lasts a few more years I might move here permanently.

Some basics:
Centrale Bibliotheek OBA ODE
Oosterdokskade 143
1011 DL Amsterdam
Customer service 0900-bibliotheek (0900-2425468), 020-5230900 or
Open daily from 10:00 till 22:00

Network settings for Windows XP, in the authentication properties of the wireless connection:
  • Set the EAP type to PEAP, disable 'authenticate as computer', and enable 'authenticate as guest'.
  • In the PEAP properties, disable 'validate server certificate' and select EAP-MSCHAP v2 authentication.
  • In the EAP-MSCHAP v2 properties, disable 'automatically use my windows logon'.
Windows will then ask for your registered user name and password when making a connection.

Wednesday, March 25, 2009

Favorite Firefox plugins

I know that nobody asked me to, but I'm going to tell you anyway: my list of favorite Firefox plug-ins. This is incidentally why I haven't tried out Chrome for more than a minute yet. I can't take a browser seriously that doesn't have plug-ins, in particular something to synchronize bookmarks, and dictionaries in multiple languages. The other plug-ins that I use are more technical and less critical. Let me give you a more structured view:
  • Foxmarks Bookmark synchronizer. I switch computers a lot so I need something like this, and Foxmarks works great. Basic bookmark synchronization hasn't failed me yet, and it has a lot of other features that I haven't used yet.
  • United States English Dictionary and Woordenboek Nederlands (Dutch Dictionary). I use the in-browser spelling checker a lot, with wikis, with GMail and now also with Blogger. Indispensable.
  • Quirk SearchStatus. A simple way to show basic web page metrics.
  • Live HTTP Headers. Can display request and reply HTTP headers.
  • Web Developer. I needed a way to sign out of basic HTTP authentication (the kind where the web site does not have a log-in page but instead the browser shows a log-in dialog). Normally browsers don't provide the associated log-out functionality. In fact for Firefox this plug-in seems to be the only way. It can do an enormous amount of other stuff that is probably cool also, mainly related to CSS.
What is your favorite Firefox plug-in?

Monday, March 23, 2009

Stupid yet tempting business ideas

About 5 years ago I started the habit of recording every idea for a new business venture that I came across and liked. It takes only a single good one to become a millionaire, right? That hasn't happened for me yet, but I hope that with good execution I may turn some of them into successes. For obvious reasons I won't say which of the ideas still on the list I consider excellent.

Instead today I wanted to mention a few that are on my list because I like them, even though they are not very practical - could even be considered stupid.

First my definition of what constitutes an excellent idea, so that we can compare the stupid ones:
  • The idea should target on-line consumers and small businesses, to enable efficient scalable marketing. Ideas that require a lot of off-line sales effort are less suited to my impatient nature and my growing aversion to large upfront investments.
  • A first prototype should be easy to build (within 2 months), again to limit required upfront investment.
  • It should be clearly feasible to earn money from the idea, e.g. through subscriptions, micro payments or advertising.
  • It should be fun and educational to work on.
Obviously ideas like Twitter and Skype are good recent examples that more or less match my criteria. For me, and probably also for a lot of other experienced IT people, they are very much in the category of 'why didn't I think of that first, I could have hacked that together in a few weeks'.

Here are, without further ado, a few of my stupid ideas:
  • Free DNS hosting (e.g. EditDNS). It's tempting because it seems complicated to build at first sight, but in actual fact it would be pretty easy, and fun to do. DNS is a very critical piece of Internet infrastructure, so how cool would it be to have a popular service in this area? Also, without wanting to insult existing providers like EditDNS, it looks like the competition could use some challenging. Indeed, current providers look like they were thrown together in a few weeks. That's likely because it's also a stupid idea. Stupid because it's hard to make money off it. You compete with non-free DNS hosting that's already very cheap (most ISPs offer DNS hosting as a package with domain registration for about 10 USD per year). Additionally, advertising income is minimal because system administrators set it up only once through your site, and then you have to provide the service for years without a single chance to show an ad.
  • A discussion forum focused on the strange and remarkable one-liners of a popular politician. A fine example would be Geert Wilders, the controversial Dutch politician. It would be relatively easy to open a new discussion thread on the forum every day with one of his non-conformist statements. It would be satisfying to watch large numbers of people try to be the first to post in support of - or in 100% opposition to - that daily subject. It would also be pointless, because I predict that advertising income would be minimal - no advertiser in his right mind wants to be associated with heated political debate, in particular related to issues such as immigration policies.
Let me know in the comments whether you think either of these two could be profitable. Or perhaps you have some less-than-optimal business ideas of your own?

Saturday, March 21, 2009

Why do I ignore commercial software?

When I read back my last two posts about wiki software I noticed that I had ignored commercial offerings in this area without really evaluating them, without giving any reasons. Any software architect can tell you that that's not how you should do things. You should evaluate all the options, and be objective about it. When I do consulting for clients, like my last gig at KLM Air France, I am careful to do just that. For various reasons it's often Open Source that gets ignored in places like that. But when my own money is on the line, for personal use or for my own companies, then I automatically focus on Open Source only. For wikis - TWiki, for CRM - SugarCRM, for word/spreadsheet processing - OpenOffice. Why is that?

Part of the reason to ignore commercial software is likely the cost saving. I suspect that for many users of Free/Open Source Software that's an important aspect. It saves money, but perhaps more importantly it also lowers the threshold to try something, and removes a lot of hassle. Instead of having to think about payment and delivery you can simply download it and start using it right away. If you find out you don't like it you can select an alternative and try that, without having to feel guilty about investing money in a software package that will now go unused.

The other major aspect of FOSS - that you can inspect, change and redistribute the source code yourself - is less important to me. It's certainly nice, and a cornerstone of what makes FOSS unique, but in practice I rarely use that possibility.

Clearly it's not those two aspects alone. If the only alternative to MS Office was a FOSS package that crashed all the time, was completely unusable, and was designed by some blind nerds without any regard for normal human beings, then I'm sure I would buy MS Office. But the alternative is OpenOffice, a stable piece of software with good usability and some nice features (e.g. PDF export) that MS Office doesn't have by default.

So, after thinking about it some more, I'm sure it must be quality. I've come to ignore commercial software subconsciously because I've learned that for nearly every purpose there's Open Source software that's just as good, and often better. My experience is that most Open Source packages feel as if they were designed for smart people, by smart people with a passion for what they were doing. Commercial software often feels as if it was designed for stupid people, by people who are probably smart, but also wanted to go home at 5 o'clock to be with their families. And by definition they were guided by project managers and marketing departments that value the interests of their company above those of the user.

Commercial products nearly always have some built-in limitations, and some options that were deliberately removed, in order to create the opportunity for a more expensive variant with those options added in. Open Source packages typically have all the options that any normal user could be interested in, and then some.

I'm convinced that in the end many commercial packages will simply disappear, replaced by Open Source alternatives that are simply good enough. It's telling that while commercial vendors like Microsoft have more and more difficulty thinking of significant new features to justify a next release, many of the major Open Source packages have matured so much that they seem to have frozen. The Linux kernel, for example, has been at version 2.6 for ages already, with regular minor updates but without a version 3.0 or even 2.8 on the horizon. And that's good. After all, another important architecture principle is: if it works, don't break it.

TWiki vs MediaWiki part 2 - the battle

Ah TWiki. It is great. (See part 1 of this post for some background). Some of the things that I really like about TWiki are very subjective, but I'm going to list them anyway:
  • Its default interface is nice-looking and not too cluttered (big score against TikiWiki).
  • Its file-based storage is easy to understand, and gives me a confident feeling - an admin user with a text editor can access the page data directly to fix things if needed, run scripts, etc. I haven't needed to do that yet, but it's obviously a big plus compared to XWiki and most other wikis. (In case you are wondering - the only major downside as far as I can see is slow search when you have tens of thousands of topics. That bridge is so easy to cross that I don't consider it a downside at all).
  • It's very flexible, with a powerful template-based user interface and a solid collection of plugins.
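To illustrate the appeal of file-based storage mentioned above: TWiki keeps each topic as a plain text file under its data directory, so an admin can search or script against the raw pages directly. A minimal sketch of that kind of direct access (the data path in the example comment is hypothetical, and this is my own illustration, not a TWiki tool):

```python
import os

def find_topics(data_dir, needle):
    """Search a file-based wiki store (e.g. TWiki's data/<Web>/*.txt)
    for topic files whose raw text contains the given string."""
    matches = []
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if not name.endswith(".txt"):
                continue  # skip RCS history files and other non-topic files
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                if needle in f.read():
                    matches.append(path)
    return sorted(matches)

# Example: find_topics("/var/www/twiki/data/Main", "ProjectPlan")
```

In practice a one-line grep over the data directory does the same job; the point is simply that the storage is transparent enough for this kind of ad-hoc scripting.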
Now I've started some consulting for a new web site that obviously needed a CMS. It's a bit like a Wikipedia, so I started looking at MediaWiki as an obvious implementation candidate. (It also turned out to be the only serious new candidate. There are various wiki feature comparison matrices on the net, and if you skip commercial candidates like SharePoint and Confluence, and skip Google Sites because it's limited in many ways, then it seems MediaWiki, TWiki and TikiWiki are more or less the main contenders). MediaWiki has some nice features, in particular its internationalization support. For many purposes it may be the best wiki around. It certainly is a serious contender. (I suspect that TWiki is more fun to use and look at, but didn't try to prove it).

Two things killed it for our purpose. It doesn't have sub-wikis, which surprised me. And it doesn't have page-level authorization! I know that authorization in general is a bit against the wiki spirit. The TWiki documentation doesn't hesitate to remind the administrator of that at every possible opportunity, but at least TWiki has the feature. This gives the admin the option to use it or not. But the MediaWiki guys thought this wasn't a feature we should use, so they didn't build it in! To me this seems like two species of dinosaurs involved in a Darwinian battle for survival, with one of the dinosaurs saying 'oh, I'll pull out my own claws, I really shouldn't use them because they're not nice'. Should be an easy victory for the other guy.
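For readers who haven't seen TWiki's access control: page-level authorization is set with ordinary preference settings written inside the topic itself. A sketch of what this looks like (the user and group names here are made up for illustration):

```
   * Set ALLOWTOPICVIEW = Main.ProjectTeamGroup
   * Set ALLOWTOPICCHANGE = Main.JohnDoe, Main.ProjectTeamGroup
```

Similar settings exist at the web (sub-wiki) level, which is exactly the granularity MediaWiki lacks.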

So far TWiki hasn't let us down in this new setting. It takes a bit of work to turn it into a public CMS (instead of the free-for-all edit fest that a wiki is by default), but the result is quite satisfying from a technical standpoint. No ugly hacks needed. Yay.

Friday, March 20, 2009

TWiki vs MediaWiki part 1 - wiki prehistory

Over the past few years I've had the opportunity to test out various wikis, for teams within my previous company, Devinition. I've learned that wikis are great, but that even some of the well-known wiki engines are pretty awful. I'll use a few blog posts to write down my likes and dislikes, so you don't have to go down the road that we did - trying out everything under the sun and ending up with a whole bunch of incompatible page graveyards.

We started out with TikiWiki, back in 2005. TikiWiki certainly has tons of features built-in, but I disliked it from the start because it looked like a typical Open Source 0.1 product. (I assume - and hope for its users - that recent versions have better looks).

Then we implemented an XWiki for the whole company. Primarily because it seemed to be the Java-based wiki with the best features at the time, and I optimistically thought that with all the Java skills in our company we could do some great integration magic with it. Instead it quickly instilled fear with its horrible user permissions interface. We've had to resort to direct access to the underlying SQL database only once, when some colleagues messed up the permissions beyond normal repair with only a few easy keystrokes. Interesting though it was - trying to get the correct settings back again while puzzling through a meta-meta model - it put XWiki on my avoid-in-the-future-at-all-cost list. However, at that point our teams had already put quite a number of pages into it, so it was really too late to migrate away.

As a kind of in-between snack I used Google Docs with some smaller teams. It works great, but it's not really a wiki replacement of course. Instead it has an excellent spreadsheet feature, and it lets you easily share a few documents with random people. My golden tip of the day is for Google: simply add a 'wiki page' document type to Docs with concurrent editing and support for AdSense display, and it will run circles around Google Sites and 99% of the existing web hosting.

At that point I also started using a personal wiki. It coincided with reading David Allen's 'Getting Things Done'. In my experience they are a great match. A personal wiki is easily flexible enough to act as a store for all your GTD administration (project lists, actions, reference data, etc.) Looking at it from the other side - a wiki can quickly become a mess of random pages, and the GTD method provides a useful structure. I'll leave the details of my GTD approach for a next post, but just finish by saying that I used a TWiki, perhaps because it was the next well-known Open Source wiki that I hadn't tried yet.

In part 2 I describe why TWiki is better than all the other wikis, and how it beat MediaWiki in a fierce dinosaur-style battle.

Thursday, March 19, 2009

Selecting a topic for your blog

There are a lot of blogs about blogging, so I've been reading a bit about selecting a goal and a subject for your blog. Conventional wisdom is that you should select a niche topic to stand out from the crowd. If you spend a lot of time at home making beautiful cakes anyway, why not blog about it? Add a nice photo of each cake to your blog post and you can be sure that a select group of cake aficionados will read your every post and drool over it.

Personally I think that niche topics are overrated, very 2007. A blog like that will probably look great, but require a corresponding time investment. You'd better enjoy all the attention those 17 fanatical followers are giving your blog, because that's all you will get with those mile-long posts like '100 tips to get a great glaze'. Needless to say I don't make cakes at home, or any of those easy-to-visualize things, which also prevents me from going this route. But that doesn't matter. I firmly believe that we live in the Age of Search, and that even if you blog about totally random subjects your posts will be found by readers, in particular if those posts are practical and to the point. So, I will use this reasoning as an excuse for a more or less random(1) selection of topics, and I plan to be amazed by the large numbers of readers flocking to each post. I'll let you know how that works out in a few months.

(1) At least some posts will be about web application development, agile project management and investment.

Wednesday, March 18, 2009

First Post

With the economic downturn it's taking a bit longer to find a new contract than I had anticipated. There's a bright side also - I have some time to fool around, to invest. So I've decided to start blogging. It will be interesting to see if I keep it up when I have less time. Perhaps I will document some of the stuff that I've done in the past, a bit like a programmer adding comments to his code after the product has long been finished and is out the door.

An easy prediction: if the downturn continues then blogs will be springing up left and right at an even greater rate than in the past few years. It's good for your visibility, right? My own short term plans, besides checking my inbox for e-mail from recruitment agencies every 5 minutes: I'm working on investment software, and on a Wiki-based public site. More later. First I'm going to fiddle with some Blogger settings.