Wednesday, December 30, 2009

[Tech] Distributed SCM: Playing with Repos

As some may have noticed, I migrated nearly all my projects from Subversion to Mercurial (and Git) over the last year. Step by step, as I am rather conservative about changing to new technologies, particularly when they are at the heart of a project. And changing the SCM is sort of open-heart surgery.

However, after nearly a year of experience I must say, SCM was (for me) never easier and more enjoyable than with distributed SCMs, particularly with Mercurial: excellent documentation, easy and straightforward to use. Yet these days I was asking myself: if I had to name one outstanding feature that would convince me to change from a centralised system like Subversion to a DSCM, what would it be?

The answer might be surprising, but for me it is clearly this: no more headache and fear when working with the repository. What do I mean by that? Well: I was never a Subversion guru, and every time I needed to do an operation I did not do very often (branching, merging) I was sweating. Should I press the button, am I making a mistake? What exactly do these options mean in the SVN client? Did Eclipse just mess up the local copy? Should I commit? After all, you are always working with the repository. If you mess up, you have a problem, and all team members with you. Not a nice procedure.

But with a DSCM there is no master repository, hence in case of doubt I make a clone and play around with the clone. Should I mess it up, I delete the clone and nothing has happened. If everything is fine I push the results. This is for me personally the most essential feature of systems like Mercurial. I can play around even with esoteric plugins and features without the fear of destroying anything. This also makes learning way easier for new users.

What is your opinion?

Tuesday, December 29, 2009

[Pub] Best-Practice Book and the New Year

Some of you might already have noticed that we were not very active in blogging over the last months. The reason is that most of us were heavily involved in finishing our "Best Practice Software Engineering" book, which will be available Feb/March 2010. The publisher is Spektrum Akademischer Verlag (Springer); the book is in German.

It was a lot of work and required most of our publishing energy. I believe the result is good and I hope that it will be useful for some of you.

A detailed description of the book can be found at the publisher's website.

If you are as enthusiastic as we are, you can even pre-order it via Amazon ;-)

So, for now, I want to thank all readers of our blog, hope you had a successful 2009 and wish you all the best for 2010. I am looking forward to your comments about our book and to upcoming blog posts.

Sunday, December 27, 2009

[Tech] Simple Java Template Engine

Template engines are widely used in web frameworks such as Struts, JSF and many other technologies. Apart from classical web frameworks, template engines can be very useful in integration projects. In a current integration project that deals with a lot of XML data exchange, I discovered the Java template engine library FreeMarker. This Open Source library is a generic template engine that generates any output - HTML, XML or any other user-defined format - based on a given template.
"[...]FreeMarker is designed to be practical for the generation of HTML Web pages, particularly by servlet-based applications following the MVC (Model View Controller) pattern. The idea behind using the MVC pattern for dynamic Web pages is that you separate the designers (HTML authors) from the programmers. Everybody works on what they are good at. Designers can change the appearance of a page without programmers having to change or recompile code, because the application logic (Java programs) and page design (FreeMarker templates) are separated. Templates do not become polluted with complex program fragments. This separation is useful even for projects where the programmer and the HTML page author is the same person, since it helps to keep the application clear and easily maintainable[...]"
HTML is just one application area of FreeMarker, though. Consider 3rd-party systems providing APIs that consume XML data or their own data structures. Constructing their data format directly in the code is a grubby approach, and the code becomes unmaintainable. Using such a library you can manage your data exchange templates outside your code and produce the final data with the template engine. I see such template engines as classical transformers, as in an Enterprise Service Bus:

In the above example you can see that you can use placeholders in your template files, which will be replaced by the real data when the transformation takes place. FreeMarker provides more advanced constructs such as if statements, loops and other features which can be used in your template files.
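To give a rough idea of how this looks in code, here is a minimal sketch using the FreeMarker API; the template file name, the placeholder names and the data model are of course invented for illustration:

// templates/order.ftl might contain something like:
//   <order><customer>${customerName}</customer><total>${total}</total></order>

import java.io.File;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class OrderExporter {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration();
        cfg.setDirectoryForTemplateLoading(new File("templates"));

        // the data model that fills the placeholders
        Map<String, Object> model = new HashMap<String, Object>();
        model.put("customerName", "Alice");
        model.put("total", 42);

        Template template = cfg.getTemplate("order.ftl");
        StringWriter out = new StringWriter();
        template.process(model, out);   // placeholders are replaced with the model data
        System.out.println(out);
    }
}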

Template engines are often used in web frameworks, but they are also very useful when you must produce specific output for other systems.

Monday, November 09, 2009

[Tech] Integrating Tests as a Language Feature?

The blog of Cedric Beust (author of TestNG, now at Google) is always an interesting read.

His latest post discusses the question of whether generic test features should be included in the language:

http://beust.com/weblog/archives/000522.html

He mentions an interesting feature of the D language. Personally I think a tighter language integration is useful for small projects. Nevertheless, it should be easy to switch to the best testing tools (such as TestNG) in bigger projects.

What do you think?

Sunday, November 08, 2009

[Misc] Subversion turns into an Apache Project: so what?

It is official now: the Subversion project has applied to become an Apache project, and it seems that the incubation phase will start soon. Now my question: Subversion is conceptually dead, so what difference does that make? Ok, let's discuss this in a little more detail:

The thing is: most developers (myself included) by now understand the concept of DSCM systems, and all available projects are stable, fast, have good communities and are reasonably documented. Even tool support (IDEs, ...) is decent. Having understood DSCM, I wonder why I would want to go back to a centralised system like SVN. There is no benefit in it for me. If I want to work server-based, so be it: I can do this with Mercurial for example on Bitbucket, with Git on GitHub, or by simply installing the DSCM on an arbitrary server with ssh access.

I had a discussion with an Apache committer recently about this and about the future of Subversion. He believed that Subversion could (or rather should) go through a complete redesign to embrace features provided by distributed source code management (DSCM) systems like Bazaar, Mercurial or Git. I personally question this future of Subversion. We already have three pretty good systems, and a very competitive game has been played here over the last two to three years. Subversion would start with a delay of probably three years. Until a stable version of a (partly) distributed SVN is out, all other systems will be settled and far ahead.

There is, however, one major feature DSCM systems cannot provide by definition: collaboration via locking. This is an important feature for collaboration on (large) binary files like Photoshop documents, multimedia files, vector graphics and the like. Merging such documents is practically impossible, but many (software) projects partly rely on a significant number of documents of that sort. Keeping those in a DSCM system is not the best idea. My future scenario for distributed (software) development hence consists of two repository types: a distributed one for sharing text-based files (like source code) and a centralised one that provides versioning and very good locking (check-in/out) support for (large) binary documents.

Now, as I understand it, Subversion is neither really good at locking nor at managing binary data either. Also, tool support for non-programmers (who often work with such binary documents) is not so great. So what is the future of Subversion? I believe that already today it is pretty much legacy, like CVS. Of course, there are so many projects using Subversion that we will probably have to deal with it for the next decade (not that I would like to). However, the migration wave has started and most new projects will use one of the mentioned DSCM systems. How could a re-write of Subversion help? Well, the principles are so different that a Subversion with distributed characteristics would either be a new project (and I doubt that we need a fourth DSCM system, as mentioned above) or would keep a lot of the disadvantages of the old system.

But maybe Subversion could focus on the locking-based approach: this is a much-needed feature for many projects and I also do not see much competition (in the Open Source environment) here. A good repository for binary data could be a reason to stick with Subversion for parts of the development effort.

Your ideas?

Friday, October 30, 2009

[Tech] 7 Languages in 7 weeks

Dear Readers,

as you are all interested in programming languages, I would like to point you to this link, which I was pointed at myself (many thanks to the source!):

http://rapidred.com/blog/seven_languages

It is from the blog of Bruce Tate, whom we all know as one of the Java experts with his stunning books.

As far as I know he started an interesting project because he is also interested in the polyglot language area. As I heard, he held a vote about the topic and the languages for his next book! The vote, together with his own opinion, led to the following languages, which I would like to comment on:
  • Ruby -> my personal favourite. Perhaps not the coolest any more, but for me its expressiveness and DSL-ability are outstanding.
  • Io -> possibly the newest and coolest, because the VM / object approach looks interesting.
  • Scala -> good to have Java's crown prince in here, because we all have to learn it.
  • Erlang -> my multiprocessor king (even if it struggles with strings, argh). Especially hot in the #nosql database scene.
  • Clojure -> I already posted about the great Clojure. I really love it, although it's really hard to learn.
  • Haskell -> good that he included the right educational functional concept.
  • Prolog -> this surprised me a little. But Bruce writes that he wants to stretch the readers, and I never thought this could be done with a nearly 40-year-old language.

So have a look at this book at http://pragprog.com. It's a definite buy for me, even if it hasn't been written yet.

Tuesday, October 27, 2009

[Arch] Resource for Software Architecture

I've found a really good resource about software architecture for a German audience, hosted on MSDN. On the MSDN Architecture Center you'll find:
  • Current news and trends about software architecture
  • Basic information about software architecture (concepts, styles, etc.)
  • Podcasts
  • Tool Previews
  • A free English architecture journal
  • Forum and Knowledge Base
  • Tips and Tricks
Advance your architecture skills by looking at this resource; it is well worth a visit.

Tuesday, September 08, 2009

[Conf] Zurich Open Source Jam

On August 13th, more than 50 people interested in open source software attended the 8th Google Open Source Jam in Zurich, an informal (BarCamp-like) meet-up at the Zurich Google office (also held in other parts of the world) and a perfect opportunity to meet other open source developers as well as Google engineers in a relaxed atmosphere. As it is open to everyone, people held several lightning talks on a great variety of topics:
  • "G-WAN", Pierre Gauthier
  • "Dynamics of Open Source code", Markus Geipel
  • "Involving students in Open Source", Lukas Lang
  • "Open Source in Africa", Michel Pauli
  • "BTstack", Matthias Ringwald
  • "Free Software & basic income", Thomas Koch
  • "NxOS, an OS platform for Lego", David Anderson
  • "Open Source in the Humanities", Tara Andrews
My talk was related to open source student projects, accomplished within the scope of the course "Advanced Software Engineering" held at QSE. Four projects were completed successfully in the last two years and got integrated into the respective codebases.
Similar to the Summer of Code, these students have been mentored by experienced open source committers from the Apache Software Foundation and Codehaus. Developers and students who participate in open source projects themselves commented a lot on this topic: "I wish I had had something similar when I was studying", said a Google engineer.

Afterwards, we continued to have interesting discussions. After some time I found myself in an exciting discussion about software engineering at Google. First off, I'd like to mention that employees never make clear statements concerning their work, as they are bound to confidentiality. Even though no specific software development process was confirmed, one could identify tendencies:

Don't repeat yourself (DRY). Code and software reuse as a basic principle. The Google Code repository was created as a collaborative platform to manage, document and review free/libre open source software (FLOSS) projects. Indeed, employees spend up to 20% of their time contributing to open source projects.

Don't reinvent the wheel. "At Google we don't reinvent the wheel, we vaporize our own rubber", one of the engineers told me (they use heaps of metaphors like this), meaning that the vast majority of the software in production use is built on top of partial or complete open source libraries. Aside from releasing software like the Web Toolkit, Android, Chromium, etc. back into open source, Google contributes to a diversity of FLOSS projects (e.g. Linux kernel, Apache projects, MySQL, Mozilla Firefox) [1]. However, they keep implementations of key technologies secret, claiming that, for instance, their webserver, apparently a Tomcat re-write, was "too specific to benefit from", or they just don't publish them for competitive reasons [1]. The same goes for the Google File System (GFS), BigTable and MapReduce. In a nutshell, the scientific publications [2] on these core technologies at least led to great open source implementations (e.g. Apache Hadoop) which are open to everyone.

[1] A look inside Google's open source kitchen, http://www.builderau.com.au/strategy/architecture/soa/A-look-inside-Google-s-open-source-kitchen/0,339028264,339272690,00.htm
[2] Google Publications, http://research.google.com/pubs/papers.html

Thursday, September 03, 2009

[Process] Distributed Source Code Management and Branching

I have been using Mercurial a lot recently (and love it); I really do wonder why I struggled so long with Subversion. When I first saw the Git presentation by Linus Torvalds (which is, hm, very entertaining) the whole distributed SCM thing sounded very esoteric to me. However, I decided to give it a try, also motivated by the great Chaosradio Express 130 podcast (German). Yet I decided to go with Mercurial and not Git, although this created some flame-wars within our group, because one of my colleagues is a big Git fan. So be it ;-)

For me Mercurial is a great, easy to install and pretty easy to understand system. The command line is really straightforward and the help texts are well written (and, interestingly, internationalised). Maybe I will follow up with another blog post with more details on Mercurial some other time.

As easy branching and merging are among the main advantages of the new distributed SCMs, I want to recommend for now the very nice blog post by Steve Losh, "A Guide to Branching in Mercurial". This article provides a good and comprehensive introduction to the different methods for creating branches with Mercurial and also explains differences to Git and some of its shortcomings (*g*).

p.s.: For my taste, just one thing is missing: some details on merging.
p.p.s.: Please no comments on my "Git shortcomings" statement, they will be censored out anyway ;-)

Tuesday, September 01, 2009

[Misc] Clojure & Clojure Book Review

It looks like we are living in a fantastic time concerning programming languages. Creating a new language has never been easier than before. With the two great platforms Java and .NET it's not extremely difficult any more to generate intermediate code from the language you are dreaming of. And even the Pragmatic Bookshelf has a book in the works on "language patterns" to cover this topic.

Thus I assume that nearly every developer is looking to bet on A) the best horse or B) the language from which you can learn the most. Unfortunately, of the whole sea of hot languages, everyone seems to bet on Scala. And indeed Scala is quite cool and might win the race with good reason. Nevertheless, I never got really warm with Scala, and it's hard for me to describe why. It's something in the syntax I really cannot explain. My dream language must have code that simply looks right. Years ago I had that feeling coding Ruby (and I love Ruby) because the code simply looked great (although I don't like other things in Ruby, e.g. the clumsy class definitions - there was a project on RubyForge to fix it; does someone know the name?).

So I am still constantly looking out for new languages and of course the pragmatic programmers have brought this book into my view:

Stuart Halloway, "Programming Clojure", 2009

So I might share my thoughts on this book and on the language.

For me, Clojure is still more attractive than Scala because:

1. It's the toughest edge for your brain.
Perhaps you belong to the same generation as me: I never really used Lisp, Scheme or similar dialects. But as I am a Java, D and Ruby coder, Lisp - and thus Clojure ("Lisp reloaded") - is the hardest challenge to learn. And according to the last two book reviews you should always go the hard way to learn the most. This might even be true for you?! For me it turns out that the number of round brackets is not the problem.

2. Clojure enforces the use of immutable data structures instead of variables (as the successful Erlang does!). The code / the structures themselves are mostly the container for the variables you would normally define and use. This has two really strong advantages:

1. No variables mean fewer errors and less to debug
2. No variables and immutable data mean no side effects
Hence it's not so easy to write Clojure code with many side effects.

3. Clojure has strong concurrency concepts - if you really need mutable data - using STM (Software Transactional Memory) / MVCC, agents and atoms, giving you ACID transactions without the "D" (which only makes sense for databases anyway).

4. The Java integration is really smart. Have a look:

(new java.util.Random)


simply creates an object you can easily work with.

Java and Clojure can easily call each other. This might be a really good argument for you to protect existing investments. Clojure doesn't really try to rebuild all the Java libs from scratch; it reuses them in a clever way.

5. Clojure is indeed fast because it compiles to pure Java bytecode.

And of course all the other stuff that a hot language must have:
  • Closures (I still can not believe that they are still discussing this hot feature for Java... it's a shame)
  • List comprehensions
  • a workaround for tail recursion, currying and other weird stuff such as
  • very lazy sequences, trampolining, etc.
What also challenged my mind is that Clojure has only three constructs for flow control (as we learned from Oz, a simple and nevertheless powerful language doesn't really need much):
  • An "if". See an example: (if (> num 100) "yes" "no")
  • A "do", which is a sequence of statements - introducing side effects - and thus discouraged in Clojure
  • And a "loop/recur" that can be used to build everything you need in flow control (Stuart Halloway calls it the Swiss army knife).
So before I discuss the downsides of the language and the book, let me bring up the two points that I loved most in Clojure (even if it takes a lifetime to understand them 100%):

1. It has metadata incorporated in the language right from the start! So you can tie pre- and post-conditions, tests, docs and arbitrary macros to any kind of data or function. Wow! Do you remember how long it took Java to introduce annotations? And to me they are still not 100% part of the language but an add-on.

2. Clojure has powerful macros and multimethods. This means DSLs are built in. So if you loved the way Ruby can build DSLs (and the Pragmatic Programmers will bring out a nice book on this Ruby DSL topic!), you will get a step further with Clojure.

The book itself uses all the hot Clojure features to create a build system called Lancet (every language since Ruby's Rake seems to do this a little bit cooler than Ant does).

What I disliked about the book is that it's still difficult for beginners, although it is very well written with beginners in mind. One example: I still haven't found the page where I can read a string from the console. And this is a key feature for beginners to test something; the interactive REPL is not enough here. What I mean is that the book has a thousand brilliant examples like Fibonacci. But fib is a double-edged example. It just calculates. It creates no emotions in the reader. A better example is the snake game the book creates a few pages later (here we get input from a KeyListener...).

The language itself has two downsides for me:

1. If you are used to the huge Java collections library, you are astonished that Ruby boils down all data structures to just a super powerful Array and Hash (and the others are rarely used). That's cool. So you get the impression that Clojure goes one step further in stating that everything is a set (like Lisp stated that everything is a list). But when working with Clojure you are suddenly confronted with not only sets but vectors, lists, maps and trees. Now the book tells you to always use the set abstraction to work with these. To me this doesn't really help if e.g. the vector notation differs significantly from a set or list definition. I still haven't got used to this, but that is surely my personal inability.

2. The most important drawback is the key feature at the same time: Clojure has an extremely steep learning curve! To become a true expert in Clojure, i.e. to think and dream in Clojure, you need to stress every neuron in your brain. So it all boils down to the question: is it really worth investing half a year in a hot language like Clojure to be able to produce code that is three times more accurate than imperative code?

What do you think?

Regards
Stefan Edlich

Links:

And finally have a look at this nice language comparison: Java.next
Make a list. For every topic give points from 0 to 3. What is the most elegant for you?

Monday, August 31, 2009

[Misc] Two Pragprog Books Reviewed

Book Review: The passionate programmer and Pragmatic Thinking & Learning

Recently I have been getting more and more attracted by the books from "The Pragmatic Programmers / Bookshelf" (link). So I am sharing my thoughts with a review of two of their books. Here I review:
  1. Chad Fowler, "The Passionate Programmer", 2009
  2. Andy Hunt, "Pragmatic Thinking and Learning", 2008
So let's start:

1) Chad Fowler is well known in the Ruby and Rails scene. He shares his visions in 53 chapters, each with a message for you, explained in detail. These messages look a little like the XP programming messages. And sometimes they really read like XP rules:
  • "29. Learn how to fail" (testing)
  • "18. Automate yourself into into a Job" (daily integration builds)
  • "28. Eight hour burn" (no overtime)
  • ...
But indeed his stories are nice to read and they go far beyond XP rules. They bind personal experience together with passion and a possible new perspective for you. So his main point is to step out of the daily routine, step back, get better and build up new goals for yourself.

Chad is quite strong at selling his point and most of the points are really fun to read (for example "20. Mind Reader" or "45. You've already lost your job"). So this book is not for experts who have already found their mission in doing independent consulting work for an Apache product of which they are a top committer. It's a book for the employee wishing to get motivated and possibly build up new perspectives in their career. And of course for beginners who might be reading an XP book at the same time. Chad includes nice actions for each point so that each point can be validated for yourself.

What distracted me a little is the analogy to other jobs. Many writers today mention that they have played (jazz) music in a band, and that the challenges in a jazz band are quite the same as for a software developer. Chad elaborates a lot on this topic. Even Andy Hunt (see the next review) draws this analogy, and many other books (e.g. Presentation Zen by Garr Reynolds) cannot leave this point - e.g. be the worst guy in your team - untouched.

Nevertheless it's a fun read if you want to break out, and the book should also be recommended in software engineering / programming lectures.

2) Andy Hunt also wrote a remarkable book, combining cognitive science with software development. And indeed this is neither a neuroscience book nor a software development book. It is a wonderful walk through topics like:
  • Journey from Novice to Expert
  • This is your Brain
  • Get in your right mind
  • Debug your mind (how your mind works)
  • Learn deliberately
  • Gain Experience
  • Manage Focus
  • Beyond Expertise
This book could also be named "Your brain - the missing manual for software developers". It's a wonderful guide to understanding your brain and how to improve. The book is full of nice graphics, anecdotes and actions the reader should take. Throughout the book Andy collects tips, which are grouped together at the end of the book in a nice reference card.

If you read this book you will find some topics that are not new to you, such as mind maps or wikis. But Andy puts them in context, gives a lot of advice and touches on many points which will be new to you. For example, the intense description of the L- and R-mode helps a lot with how to reflect on and use these modes in daily life. And there are a lot of other great new topics you can experience (such as morning notes, SQ3R, etc.).

There were really just a few pages that were uninteresting to me, such as his elaboration on "expect the unexpected" or his way of categorizing generations.

Nevertheless it's a very practical book covering a wide range of topics, from drawing (there are some drawing exercises for you inside) up to yoga techniques. And everything can be applied to your daily life, your job or your software development. So this book is a clear buy recommendation, or even better, a good present for your hacking friend or partner.

Thursday, August 20, 2009

[Arch] UML Tools for Mac OS X

Following up on a question I received via Twitter, and the fact that a significant part of the developer community is using Macs, I thought this might be a good opportunity to discuss some "UML options" for the Mac. Now, this article is not meant as a definitive answer; I would hope for some follow-ups by readers in the comments.

Ok, let's start: first there is the heavyweight stuff, most notably Visual Paradigm. A warning: this is a fat tool. However, among the fat tools it is the one I liked most. I am not using it any more, but it is generally rather easy to use and very feature-rich. However, it is a pretty expensive commercial tool. Yes, they have a "community edition", because it is cool to have a community edition these days. But this one was (when I used it last year) rather a joke. See it as a test preview.

There are other commercial tools as well, e.g. Omondo. I do not know much about this one though. Anyone?

On the other end of the spectrum are tools like UMLet (or Violet), which are also Java-based and work more or less well on the Mac. These tools are very basic and one should not expect much. They are definitely not suited for "real" projects or commercial applications, but can be a nice option e.g. for educational purposes. Sometimes one just needs to create some simple UML diagrams for a presentation, paper or book. For such purposes these tools might be useful. Plus, both are Open Source tools.

Probably the best free (but not Open Source) UML tool, and the one I would recommend, is BOUML, which is surely worth a try. The main issue I have with nearly all free/OS UML tools is that they are often driven by a single person or just very few developers. Hence the future of the particular tool is always a little unclear. To make things worse, there is no accepted open file format for UML diagrams that would allow easy exchangeability between tools. Hence selecting a UML tool is always sort of a lock-in situation.

ArgoUML could also be a consideration; it is an Open Source tool as well and maybe the oldest one around. It has some issues, as all OS tools do, but apparently has a functioning community.

Finally, there are some more or less general-purpose drawing programs that can be used for technical diagrams like EER or UML models as well (with some limitations), such as OmniGraffle or ConceptDraw; and finally, OpenOffice Draw can be used for general-purpose vector-oriented diagrams.

Would be happy about comments, experiences and further suggestions!

Thursday, July 02, 2009

[Tech] Monitor your WS calls

If you develop applications which consume web services from other applications or integration platforms, debugging can often be very frustrating. If you don't use the correct debugging tools, you don't see the generated SOAP messages which are exchanged between the parties.

A very useful tool is the Open Source SOAP monitoring tool from predic8. The tool does the same as the TCP monitor from Axis, but provides a more user-friendly UI and more settings and features:
  • Monitoring of SOAP and HTTP messages
  • Rule based SOAP routing
  • XML formatting and syntax highlighting for SOAP messages
  • Interception and modification of messages
  • HTTP chunking
  • HTTP 1.1
  • Loading and saving of configurations
  • Rich graphical User Interface
  • Resending of messages
The monitor acts as a proxy. Therefore your client application must send the SOAP/HTTP messages to the proxy monitor, which forwards them to the real endpoint. A Quick Starter Guide is also available.
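If your client happens to use JAX-WS, redirecting it to the proxy can be as simple as overriding the endpoint address on the port. The following is only a rough sketch: the service and port classes are assumed to be generated from your WSDL, and the URLs and port numbers are invented and depend on how the monitor is configured:

import java.util.Map;
import javax.xml.ws.BindingProvider;

public class ProxyClient {
    public static void main(String[] args) {
        // FlightService / FlightPort are hypothetical stubs generated from a WSDL
        FlightService service = new FlightService();
        FlightPort port = service.getFlightPort();

        // point the client at the monitoring proxy instead of the real endpoint
        Map<String, Object> ctx = ((BindingProvider) port).getRequestContext();
        ctx.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
                "http://localhost:2000/FlightService");

        // the SOAP request/response now passes through the monitor
        port.bookFlight("VIE", "ZRH");
    }
}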

[Pub] Mule Tutorial

In the current issue of the Java Magazin I published a tutorial on developing loosely coupled systems with Mule. The tutorial illustrates the usage of an Enterprise Service Bus in an airport domain, where different airport systems communicate with each other over the ESB. In the example I use a set of important Enterprise Integration Patterns and show how these patterns are implemented in Mule. Some of the patterns I used are:
  • Event Driven Consumer
  • Content Based Router
  • Filter
  • Transformation
  • Message Splitter
The transports and connectors I used from Mule are:
  • JMS (Active MQ as message broker)
  • Quartz Transport
  • File Transport
  • XMPP transport for instant messaging
The source code of the tutorial can be downloaded here.

Have Fun!

Monday, June 29, 2009

[Misc] Hot deployment with Mule 3 M1

Some interesting news from the Open Source ESB Mule: the first milestone of the third version of Mule is out and comes with a major new feature: hot deployment.

What is the meaning of hot deployment?

Hot deployment is the process of deploying/redeploying service components without having to restart your application container. This is very useful in a production environment where multiple applications are connected over the enterprise service bus, since components can be redeployed without impacting the users of the other applications.

Check out the example on the mule homepage.

Thursday, June 18, 2009

[Misc] Resilient Services & Software Engineering

I recently read the interesting paper by Brad Allenby and Jonathan Fink, "Toward Inherently Secure and Resilient Societies", published in Science, August 2005, Vol. 309, and, surprisingly enough, free to download. This paper was apparently "inspired" by the attack on the World Trade Center, but discusses the resilience of important systems our modern societies depend on in a more general way. The authors' definition of resilience is:
"Resiliency is defined as the capability of a system to maintain its functions and structure in the face of internal and external change and to degrade gracefully when it must."
They further state that:
"[...] the critical infrastructure for many firms is shifting to a substantial degree from their physical assets, such as manufacturing facilities, to knowledge systems and networks and the underlying information and communications technology systems and infrastructure.

[...] the increased reliance on ICT systems and the Internet implied by this process can actually produce vulnerabilities, unless greater emphasis is placed on protecting information infrastructures, especially from deliberate physical or software attack to which they might be most vulnerable given their current structure."
The authors apparently have more physical infrastructure in mind (like physical network backbones and the like); however, I am a little bit more worried about the pace at which certain types of pretty fragile IT services become a foundation for our communication and even our business models.

I wrote in a recent blog post about my thoughts on Twitter, which became even more relevant considering the latest political issues in Iran and the use of this communication infrastructure in the conflict. Twitter is (as we know from the past) not only a rather fragile system, it is additionally proprietary and has no fallback solution in place in case of failure.

But Twitter is not the only example: many of the new "social networks" are proprietary and grow at a very fast pace, and one wonders how stable the underlying software, hardware and data-management strategy is. Resilience is apparently no consideration in a fast-changing and highly competitive market. At least not until now.

But not only market forces are troubling these days; there are also political activities that can affect large numbers of systems. Consider the new "green dam" initiative, where Chinese authorities demand that each Windows PC have a piece of filter software pre-installed that should keep "pornography" away from children. This is of course the next level of Internet censorship, but that is not my point here. My point is that this software will be installed on probably millions of computers and poses a significant threat to the security of the Internet in case of security holes.

Analyses of the green dam system already reveal a number of serious issues. For example, Technology Review writes about potential zombie networks, and Wolchok et al. described a series of vulnerabilities. Now this is not the only attempt in that direction. Germany, for example, is discussing "official" computer worms that are installed by the authorities on computers of suspects to analyse their activities. France and Germany want to implement Internet censorship by blocking lists of websites. The lists of blocked websites are not to be revealed, and it is questionable who controls the infrastructure. Similar issues can be raised here.

I believe that software engineering should also start dealing with the resilience of ICT services and describe best practices and test strategies that help engineers develop resilient systems, but also allow assessing the risks involved in deployed systems. I am afraid we are more and more building important systems on top of very fragile infrastructure, and this poses significant risks for our future society. This infrastructure might be fragile on many levels:
  • Usage of proprietary protocols and software that makes migration or graceful degradation very difficult
  • Deployment of proprietary systems to a large number of computers that cannot be properly assessed in terms of security vulnerabilities or other potential misuses, instead of providing the option to deploy systems from different vendors for a specific purpose
  • Single points of failure: many of the new startups operate only very few datacenters, probably even on one single location
  • Inter-dependence of services (e.g. one service uses one or multiple potentially fragile services)
  • Systems that can easily be influenced by pressure groups (e.g. centralised infrastructure vs. p2p systems) e.g. to implement censorship
  • Weak architecture (e.g. systems are not scaling)
  • Missing fallback-scenarios, graceful degradation.
Comments?

Saturday, June 13, 2009

[Misc] Technical Debt

Recently I stumbled upon a smart blog entry about 'technical debt' (link).

The idea is quite nice: imagine everyone had a 'perfect' software system in mind to be built. Well, in fact we live in the 'real world', and a 100% perfect project is always a goal but never the current status. But of course we all strive for 100%, just as we strive for 100% test coverage.

But the fact is that some companies / developers build better code and some build slightly worse code. Now imagine if we could measure this 'worseness'. Of course a 100% accurate and correct measurement is not possible and remains subjective. But Sonar from Codehaus tries to go that way.

Their technical debt is shown:
  • in $ (!!! ouch this hurts)
  • in a spider figure
  • in the form of numbers you can drill down
What they do is they measure at least:
  • The Code coverage
  • The Complexity
  • The Code Duplication
  • The Violations
  • The Comments
There might be more measurements integrated soon. And you will surely agree that code comments should have a different weight than code complexity. Or should they?! But what I suggested in my comment is that it would be great if this measurable debt became a standard for all projects.

Software development companies could use a low debt as a marketing instrument. And they would likely sell more! Buyers would check the technical debt of the software they buy, as a standard procedure. If the debt is low, the product might be a good and adaptable investment that can grow.

If the debt is high, the vendor has a problem. Vendors might think they can lock buyers in because buyers don't check the technical debt. But I am sure times will change, and tools like this will be standard in IDEs in 5 to 10 years - even for checking projects in multiple languages.

So for me it's time to confront the boss with the hard dollars he has to pay back. Sooner or later - the later, the more expensive. Let's fight for technical debt / good metrics analysis as a common procedure!

Stefan Edlich

Tuesday, June 09, 2009

[Tech] Cloud Computing

As I think I already mentioned here, I believe that Cloud Computing (and Software as a Service, but that is a slightly different topic) is a true game changer in our understanding of software infrastructure and development/deployment. Currently things are still quite rough around the edges, but I believe that in some 3-5 years the default option for application deployment will be one cloud service or another. Putting iron into the cellar or storage room will be what it should be, in my opinion: mostly a stupid idea ;-)

In the current episode of IT Conversations George Reese talks about practical aspects of and experiences with current cloud services like Amazon S3, SimpleDB, virtualisation... Recommended!

Tuesday, May 26, 2009

[Tech] Kent Beck's JUnit Max

JUnit is a testing framework well known to every Java developer (corresponding ports to other languages exist). Kent Beck and Erich Gamma were the core developers of JUnit, which was published around 2000 as an Open Source framework. It is fair to say that JUnit and its ports have had a huge influence on quality assurance and can be found in nearly every modern software project.

Now Kent Beck has announced a new project: JUnit Max. The core concept is "continuous testing". JUnit Max is an Eclipse plugin that starts the unit test execution every time a class is saved and controls the test-execution ordering according to the classes that are being worked on and the tests that have failed recently.
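For readers who have never used JUnit, a minimal JUnit 4 test looks like the following sketch (the method under test is made up and inlined to keep the example self-contained); tests of this kind are what a continuous-testing plugin would re-run automatically on every save:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class StringUtilTest {

    // trivial method under test, inlined here only for illustration
    static String reverse(String s) {
        return new StringBuilder(s).reverse().toString();
    }

    @Test
    public void reverseShouldInvertCharacterOrder() {
        assertEquals("cba", reverse("abc"));
    }
}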

In my opinion this seems to be an interesting and logical next step in unit testing frameworks. JUnit Max, however, is not Open Source and follows (in my opinion) a rather strange license model (more can be found on the website). I wonder whether the additional benefit justifies the license fees and this particular model and, more importantly, how long it will take until this functionality is provided by an Open Source solution...

Tuesday, April 21, 2009

[Tech] What about Maven 3

At the last Maven Meetup Jason van Zyl talked about the future of Maven and important milestones of Maven 3, including:
  • Support for incremental builds
  • Changes to the Plugin API
  • Better multi-language support
  • and more
The video and slides of the presentation are available here.

[Misc] Sun and Oracle

Now it has finally happened: Oracle bought Sun for 7.4 billion dollars. It sure is a little bit surprising, as the deal with IBM seemed to be settled already. From a developer's point of view, the Oracle deal might be better for the community, although it also carries certain risks.

For IBM, Java is strategically very important, so Java would have been "safe" with IBM. Additionally, IBM has developed (similar to Sun) a solid Open Source strategy over the last decade, which would also have fit Sun. However, a significant amount of their product lines would have overlapped: both have middleware products like WebSphere and the Sun Glassfish project portfolio. Both have a series of database products: MySQL on Sun's side and of course the DB2 line on IBM's side, and it is a similar story on the OS front: the probably superior Solaris versus IBM AIX. Finally, Sun has the NetBeans IDE as its central development tool whereas IBM has Eclipse. I doubt that IBM would have had a lot of interest in duplicating all these product lines. Not to mention the Sun hardware.

Now, on paper Oracle looks much more "compatible" with Sun. True, there are some overlaps in the middleware section. Most "afraid" might be the MySQL folks, as Oracle has already shown some hostility towards MySQL in the past. Then again, once they own the product, they can probably sell it in their database portfolio for the "low-end" market. Java is also important for Oracle, and probably even more important are the Solaris operating system and the Sun hardware with a tight integration to e.g. the Oracle database. With these assets Oracle can offer "end-to-end" solutions, from hardware, operating system, storage and database to middleware, web frameworks and an integrated development environment.

What worries me a little bit about Oracle is the lack of experience with the Open Source community. Oracle is in my opinion a rather closed shop compared to IBM and Sun. Maybe Oracle can learn a little bit from Sun's experience here. However, my conclusion is that there is significant potential in the combination of Sun and Oracle (probably more than with Sun/IBM), but also some significant risks in terms of openness and for certain parts of the Sun product line. I am particularly curious about the consequences for the Open Source middleware portfolio, Java and MySQL.

Update: Larry Dignan from the ZDNet blog writes about MySQL:
"Oracle gets to kill MySQL. There’s no way Ellison will let that open source database mess with the margins of his database. MySQL at best will wither from neglect. In any case, MySQL is MyToast."
Well, I would not bet on that (but I probably would not start a new project with MySQL either...), but it is for sure an option.

Tuesday, April 14, 2009

[Arch] Maven Best Practices

In this Sonatype blog there is a useful list of Maven best practices, including:

  • Why putting repositories in your pom is a bad idea
  • Best Practices for releasing with 3rd party snapshot dependencies
  • Maven Continuous Integration Best Practices
  • How to detect if you have a snapshot version
  • Optimal Maven Plugin configuration
  • Adding additional source folders to your maven build
  • Misused maven terms defined
  • How to override a plugins dependency
  • How to share resources across projects
  • How plugin versions are determined
Before you search the mailing lists, have a look at this list; it will help you.

Saturday, April 11, 2009

[Misc] Open Protocol vs. Twitter: 1:0 ?

In a current ZDNet blog posting, Sam Diaz analyses the technical issues Twitter is having (again). Twitter has been growing dramatically over the last months, and apparently the Twitter backbone is increasingly in trouble. The same thing already happened about a year ago.

Sam Diaz's analysis is of course correct, but in my opinion he still completely misses the point by discussing technical reasons why Twitter might or might not catch up with the growing demand for the service. The actual point is that the communication concept of Twitter is appealing to many people, which is good, but in the history of the Internet it was never a good idea to rely on a proprietary protocol for any important communication channel.

So the real question is a much more generic one and actually should be: how can we get rid of Twitter as fast as possible and replace it with an open protocol and a scalable distributed architecture, comparable to email, XMPP chat and the like? There are good reasons why proprietary protocols largely failed in global communication systems like the Internet; those that are still around are a continuous pain in the ...

I confess, I am using Twitter as well, but it is of course a lock-in situation. If you want to follow the interesting stuff, you currently have to use Twitter. However, now we still have time to replace Twitter with something like Laconica (identi.ca) or anything similar down the road. Even better, Twitter might open up its system and try scaling it that way. However, now is the time to act: Twitter is still a toy, but it is on the way to becoming a serious communication system we might depend upon in some years. And I believe no one wants to depend on a communication system that is proprietary and unreliable at the same time.

Friday, April 10, 2009

[Process] Successful Distributed Agile Development

Andy Singleton from Assembla wrote a nice blog entry about success factors in distributed agile development. He focuses on six factors:
  1. Fixed schedules for releases
  2. Continuous Build
  3. Ticketing
  4. Daily Report and Chat
  5. Team-Activity Streams
  6. Recruitment
I am missing automated testing and QA though, which I believe is a cornerstone of distributed development, particularly in combination with a continuous build/integration setup. Read the full article here!

Thursday, April 09, 2009

[Tech] Mavenizing AppEngine!

As I nagged yesterday about the fact that AppEngine has no proper Maven build support, the guys at Sonatype have already reacted today ;-)

They describe preliminary attempts at how to "Mavenize" AppEngine projects; I hope they will be able to fix the remaining issues as well!

Wednesday, April 08, 2009

[Tech] Google AppEngine (and Java)

AppEngine is a rather recent service from Google. It is probably Google's answer to Amazon's cloud-computing platform, yet it targets a very different market. Where Amazon offers a broad range of services and high flexibility (with the disadvantage of higher administration effort), Google targets web developers who want to publish web applications. AppEngine started with a Python environment; since a few days the long-anticipated Java version (Java 5 and 6) is now online as well. So what are the benefits of using AppEngine?

Java

First of all, it is possible to deploy applications without having to install, administer and maintain your own server (instance). Google provides a runtime environment (sandbox) into which Python or Java applications can be deployed. Access to these applications is (for clients) only possible via http(s). So this is a feasible approach for web applications or RESTful services.

An additional advantage is that Google deals with scaling issues, i.e. it scales the applications dynamically with demand. This is a significant advantage for startups that have no clear idea about the number of customers they are going to have or how fast this number will grow. For the scaling to work, though, some restrictions have to be considered. Most notably this concerns the persistence strategy: applications (and libraries!) are allowed to read files from the filesystem, but are not allowed to write. For all persistence issues, the Google datastore has to be used. However, what is nice about the new Java sandbox is the fact that Google apparently tries to follow established standards. For persistence, Java developers can use JDO, JPA or a low-level interface for the distributed datastore.
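For the low-level interface, storing and loading an entity looks roughly like the following sketch; the kind and property names are invented for illustration:

import java.util.Date;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;

public class GuestbookDao {

    public void storeAndLoad() throws EntityNotFoundException {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // entities are schema-less; "Greeting" is just an invented kind name
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("author", "Alice");
        greeting.setProperty("date", new Date());

        Key key = datastore.put(greeting);   // persisted in the distributed datastore
        Entity loaded = datastore.get(key);  // read it back by key
        System.out.println(loaded.getProperty("author"));
    }
}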

I wonder, however, how logging can be handled in that environment. Logging is usually done to a file or to a JDBC datasource. I have not seen a JDO logging target before; ideas, anyone?

Generally speaking, arbitrary Java libraries can be deployed and used in the AppEngine as long as they do not violate the AppEngine sandbox. Also, due to the scaling approach, not all libs/frameworks will run unchanged. As yet it seems not quite clear, for example, which Java web frameworks will run seamlessly in the AppEngine. Google's Web Toolkit (GWT) should work; other framework communities are currently testing their frameworks for compatibility, e.g. for Apache Tapestry and the JSF framework Apache MyFaces discussions are running on the mailing lists.

Build Automation and Development Process

The development process is, from my point of view, as with other Google environments like GWT, a mixed blessing. Everything is Eclipse-centered, which is not really a good thing: Google provides an Eclipse plugin for the AppEngine including a runtime environment for testing applications offline. This is great for daily development activity, but not for a stable build and testing environment. Unfortunately, Maven support (like archetypes) is completely missing at the moment. Google is apparently pretty hostile towards Maven and focuses mostly on IDE integration, which is definitely not a sound way towards modern build automation. IDE "wizard-based" SE approaches usually turn out to be unstable and problematic, particularly in team projects. This might be nice for a fast hack, but is no basis for a larger project. It seems that some support is given for Apache Ant though.

Hopefully other developers will provide a Maven integration for the Java AppEngine. With the current approach not even an IDE-less deployment is possible.

Conclusion

So, despite the build issues, I believe that the AppEngine is a great option for deploying web applications in Java or Python. For small applications (small in the sense of "low web traffic") the AppEngine is free; after exceeding certain thresholds (CPU, storage, bandwidth...) one pays according to the resources needed. Google provides a web interface to set daily financial limits for individual resources, e.g. if one wants to spend a maximum of 5 $ a day on CPU time, and so on.

Looking forward to the first experience reports, particularly with web-frameworks like Wicket, Tapestry or Cocoon.

Wednesday, April 01, 2009

[Misc] Operating Systems for Netbooks

Netbooks are different from notebooks and desktops; they are used in other contexts, have smaller screens, less powerful hardware, less disk space and so on. Windows is hardly the best system for regular desktops and notebooks (in my opinion) and definitely only a temporary solution for the netbook market. Microsoft even had to reanimate Windows XP for that purpose. In effect, Windows is too fat, too insecure and not optimised for small screens, and additionally hardware producers have to pay license fees to Microsoft in a very tight market, where a few Euros can make a difference.

For some months now, companies like HP have apparently been evaluating Google's Android for netbooks. Now, I am all for Android; but on mobile phones. I am doubtful that Android is a good solution for netbooks, though. Android was designed for typical smaller-scale mobile applications, e.g. with only "one application" active on the screen, and with resource management that focuses on one active application. Now, a netbook might have a small screen as well, but from the usability point of view it is probably closer to the notebook than to the mobile phone.

Hence customers expect applications in the style they know from their desktop, like office or Internet applications (mail, browser, ...). Using Android reduces the number of suitable applications dramatically, and applications written for Android cell phones will most likely not work well on netbook hardware.

Now, Linux has already been used on several netbooks, like the EEE series. Why not stick with Linux? Probably adapt an existing distribution like Ubuntu to better fit that specific environment. Then the whole range of desktop applications is immediately available, including applications like OpenOffice.

From the software engineering perspective, I would believe that it is better not to mix up mobile platforms with netbooks; in the end applications will have a bad user experience on the specific platform they were not developed for (although both use the same OS).

But maybe I am proven wrong?

Wednesday, March 25, 2009

[Tech] HSQLDB Version 1.9 alpha is out

Finally hsqldb 1.9 (alpha, though) has been released. This release was announced, I believe, nearly one year ago. It already seemed to me that hsqldb was a rather dead project. I am glad they made the next round, because in a way I still like that system a lot. Sure, Apache Derby is most likely the superior system, and H2 looks very promising too (but is still, as I understand it, a "one man show" without a community); however, hsqldb has some tiny details that make it very nice: first, it always had a really tiny footprint and was extremely easy to understand and use.

And I particularly liked the feature to fine-tune the memory management, i.e. whether the data should be stored on disk or purely in memory... and this on a per-table basis. Plus, with one simple command it is possible to write the whole database as SQL statements into a file, from which it is also loaded again. A feature that is missing in Derby, for example. This often turned out to be handy during the development phase.
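As a small illustration of both features - the per-table storage mode and the SQL dump - here is a rough JDBC sketch; the database path, table names and dump file name are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqldbDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        Connection con = DriverManager.getConnection("jdbc:hsqldb:file:data/testdb", "sa", "");
        Statement st = con.createStatement();

        // MEMORY tables live entirely in RAM, CACHED tables keep their rows on disk
        st.execute("CREATE MEMORY TABLE config (k VARCHAR(50), v VARCHAR(200))");
        st.execute("CREATE CACHED TABLE orders (id INTEGER, total DECIMAL(10,2))");

        // dump the whole database (schema and data) as plain SQL statements
        st.execute("SCRIPT 'backup.sql'");

        st.execute("SHUTDOWN");
        con.close();
    }
}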

For version 1.9 they seem to have rewritten significant parts of the software and added an impressive list of new features. What I have to figure out is whether they have finally implemented proper transaction isolation. In my opinion this was (besides the single-threaded kernel) the biggest issue in the previous versions, where dirty reads could not be avoided. I am a little bit confused by the announcement(s) now, because they wrote that they have rewritten the core; however, in a forum posting the developers announced that transaction isolation is not handled in the new release 1.9 but is planned for 2.0. The news announcements on SourceForge are a little bit confusing to me. Does anyone have a better idea about this issue?

However, good luck with the stabilisation phase of the new release!

Monday, March 23, 2009

[Misc] Hello World

"Hello World" programs are well known since at least 1978 and often a starting point to get to understand the most basic issues of a new computer language.

Wolfram Roesler collected "Hello World" programs in 421 different computer languages!

Particularly interesting for me was "Hello World" in Chef :-)

Friday, February 27, 2009

[Arch] Cherish your Architecture

For a few months now I have been following the world of architecture tools much more closely, and there are interesting things on the way:
  • hello2morrow has changed the evaluation mode for one of its products: SonarJ (Community Edition) is now free to try if you have fewer than 500 classes. I strongly recommend using a tool like this, at least for every new project start. Quality metrics and architecture definitions are now just a few clicks away and really do suppress 'the big ball of mud' every software engineer knows.
  • Another nice product in the field of software quality measurement is Sonar from Codehaus. It really produces wonderful quality views of projects and I highly recommend trying it out. What disappointed me a little was that Java projects have to be Maven-based, and the two-minute tutorial will not get Sonar set up in two minutes (installing, loading and producing metrics in SonarJ is much faster). But nevertheless the output is full of innovative ideas.
It was also interesting to learn that Sonar plans to integrate something like the architecture rule checking framework Macker. You can try this out for yourself and integrate it into your Ant buildfile today if you have a vision of your architecture. Do you have one?

Here is an example of an architecture definition given on their website:

<macker>
  <ruleset name="Simple example">
    <access-rule>
      <deny>
        <from class="**Print*" />
        <to class="java.**" />
      </deny>
    </access-rule>
  </ruleset>
</macker>


Do you have an idea of the intention?

Obviously a helpful feature, but keep in mind that other tools (like the ones from hello2morrow) let you define this much faster because you simply draw your architecture.

So check everything out and be aware of your architecture!

Monday, February 09, 2009

[Misc] Managing Commercial Software Projects

At IT Conversations there is a recent interview in which Jon Udell talks to Andy Singleton about "Managing Commercial Software Projects". This interview is highly recommended. Actually, I figured that Singleton is pretty much on the same track as I am. However, he makes some bold statements: in his projects he uses no phone conferences ("the more phone conferences in a project, the more problems the project is in") or VoIP/video, no time estimations are done unless explicitly demanded, and he follows Open Source practices of distributed development using mostly asynchronous tool chains.

He apparently provides a set of development-tool "best practices" at assembla.com. One of the core concepts there is to assemble people around event streams (of activities). Probably the main idea that I totally subscribe to is this: it is actually much more important to be aware of what others are doing than to invest a lot of time in planning efforts.

Anyway, listen to the full interview; recommended!

Tuesday, February 03, 2009

[Misc] Is it getting quiet around BPEL?

BPEL stands for Business Process Execution Language and is used to execute business processes. But there are other standards which can also be used to execute your business processes. What about XPDL, the XML Process Definition Language? Nevertheless, many BPM vendors adapt their workflow engines to BPEL in order to survive in the BPM market. That is a mistake. Most of the products provide a BPEL engine only as an additional module, because most BPM/workflow engine products work successfully without using BPEL. At the same time as BPEL was pushed, BPMN, the Business Process Modeling Notation, was hotly discussed as a notation for modelling business processes. When you look at the BPMN homepage you will find a BPMN-to-BPEL transformer, describing the mapping of BPMN elements to suitable BPEL elements.

I found an article titled "BPEL: Who needs it anyway?" written by Keith Swenson, discussing the usage of BPEL in the industry.
"There are a few vendors who promote BPEL as as the one-and-only-true-way to support BPM. In fact, it is good for some things, but fairly bad at a large number of other things. It is my experience that BPEL is promoted primarily by vendors who specialize in products we might rightly call “Enterprise Application Integration” (EAI). These companies have recently taking to calling their products “Business Process Management”. Potential users should be asking the question “Is BPEL appropriate for what I want to do.” In that aim, there should be a large number of articles discussing what BPEL is good for, and what it is not, but there are very few articles of this nature."
He also mentions that BPEL supporters make the following assumptions:
  • The people making the processes are programmers
  • The activities in a process only need to send, receive or transform XML data
  • Any standard will be better than no standard
What about human integration in BPEL? Human tasks are not yet supported in the standard. You can use the WS-BPEL Extension for People, but each vendor implements it in its own way.

In the article he illustrates how to execute a BPMN diagram directly using XPDL. The diagram is interpreted directly, without conversion to another model.

To summarize, I see a lot of successful SOA projects which do not use BPEL; it also works without it. There are scenarios where BPEL is very useful and makes sense, but I want to stress that BPEL is not a one-size-fits-all standard. I am curious to see how the future of BPEL unfolds.

Friday, January 23, 2009

[Tech] An easy to use XML Serializer

XML processing is an important part of today's software systems, especially when communicating with other software components in the IT infrastructure. Pretty often you must provide your object data as XML. The Open Source market offers a wide range of XML tools, above all XML mapping tools like Castor, JAXB and others. A very interesting and compact tool, around since 2004, is XStream, hosted on Codehaus. XStream is a proven XML serialization library and provides the following key features:
  • Easy to use API (see example)
  • No explicit mapping files needed, unlike many other serialization tools
  • Good performance
  • Full object graph support
  • You can modify the XML output
Let us consider a simple business object, Person, implemented as a POJO (taken from the XStream homepage):

public class Person {
    private String firstname;
    private String lastname;
    private PhoneNumber phone;
    private PhoneNumber fax;
    // ... constructors and methods
}

public class PhoneNumber {
    private int code;
    private String number;
    // ... constructors and methods
}
In order to get an XML representation of the Person object we simply use the XStream API. We also set alias names which are used in the output XML.
XStream xstream = new XStream();
xstream.alias("person", Person.class);
xstream.alias("phonenumber", PhoneNumber.class);
String resultXml = xstream.toXML(myPerson);
When we create a new instance of the Person object and serialize it via XStream (toXML), we get the following XML result. As we can see, our alias names are used.

<person>
  <firstname>Joe</firstname>
  <lastname>Walnes</lastname>
  <phone>
    <code>123</code>
    <number>1234-456</number>
  </phone>
  <fax>
    <code>123</code>
    <number>9999-999</number>
  </fax>
</person>

The example illustrates that the framework is very compact and easy to use. Have a look at the Two Minute Tutorial on the XStream homepage for more examples. You can also implement custom converters and transformation strategies to adapt XStream to your requirements.
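Going the other direction is just as simple; a one-line sketch (the variable names are made up):

// Turn the XML string back into an object graph
Person restored = (Person) xstream.fromXML(resultXml);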

Have fun with XStream.

Tuesday, January 20, 2009

[Pub] Data transformation in an SOA

I published a German article on JAXCenter about data transformation in Service Oriented Architectures. When different applications talk to each other, you must find a suitable data format which all applications can interpret. In most cases XML is the first choice, because there is a wide range of tool support and additional standards, like schema editors, XPath and the like.

In this article I give an overview of the Open Source framework Smooks, which can be used for data transformation in SOAs. Smooks provides some interesting features:

  • Data Transformation (XML, CSV, EDI, Java, JSON,..) and custom transformers
  • Java Binding from any data source (CSV, EDI, XML,..)
  • Processing of huge messages using concepts like splitting, transforming or routing message fragments. You can also route fragments to different destinations, like JMS, files or databases
  • Message Enrichment
The features mentioned above are also ideal candidates where an Enterprise Service Bus can help. Existing Open Source ESBs, like Mule or JBoss ESB, can profit from a technology like Smooks. In the last part of the article I describe the Smooks extension for Mule, which provides:
  • Smooks Transformer for Mule. The transformation logic is done in Smooks
  • Smooks Router for Mule. The routing logic can be configured in Smooks
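For readers who have not used Smooks outside of Mule: a standalone transformation is, roughly, just a few lines of Java. The following is only a sketch based on the Smooks 1.x API; the configuration file name and the input/output files are invented, and the exact method names (e.g. filterSource) may differ between Smooks versions:

import java.io.File;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.milyn.Smooks;

public class SmooksStandaloneSketch {
    public static void main(String[] args) throws Exception {
        // Load the transformation rules from the Smooks configuration (name is an example)
        Smooks smooks = new Smooks("smooks-config.xml");
        // Read the input message, apply the configured transformation, write the result
        smooks.filterSource(new StreamSource(new File("order-in.xml")),
                new StreamResult(new File("order-out.xml")));
    }
}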

Sunday, January 04, 2009

[Misc] Clean Code Developer

A few days ago the .NET expert Ralf Westphal (together with Stefan Lieser) announced on his blog (which is always worth reading) a new software engineering website called Clean Code Developer (written in German).

The website's name is somewhat related to the book Clean Code by Robert Martin, which I recently reviewed here.

To me this website is impressive in many ways. Its approach is to bring professional development into the minds of all developers. It lists and explains the most important development principles, and it also lists the tools that help to achieve the goal of good development.

But the website doesn't stop there: a related forum for "clean code" discussions has been set up, and they have introduced grades for developers. The principles have been rated and grouped into six grades that are associated with colors (from red to white), so every developer can learn and practice these grades and the clean coding principles they include.

For those who enjoy this, you can buy a colored bangle (starting with red and buying the others later) for 5 €. The idea is to remind yourself to always write clean code. Whether you like that last idea is up to you. But this website really bridges an educational gap: at university you normally learn programming and something like design patterns in an advanced software engineering course (besides tons of UML), but you almost never learn good coding principles, although many of us will code in our later careers.

So, as my New Year's recommendation, I would be happy if you checked out the website mentioned above and joined the idea of clean code awareness.

Let this be one of our answers to the financial crisis.