Latest Publications

Getting Things Done

I’ve tried a handful of GTD utilities over the course of the last 12 months; the three that stayed with me the longest were Tracks, PHP GTD and, for the past six months now, Thinking Rock. I’ve switched back from a web-based solution to a standalone application simply because it is faster and snappier to work with.

Furthermore, it has all the same features as its web-based counterparts, is cross-platform and also allows for capturing items on the go, for example by setting up an email account which is polled for inbox items.
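
If you’re curious what that email-based capture boils down to, here is a minimal sketch of the idea – this is not how Thinking Rock actually implements it, just a little Python illustration; the IMAP server, account and password are made-up placeholders:

    import imaplib
    import email

    # Hypothetical mailbox used purely as a GTD capture address -- adjust to your own setup.
    IMAP_SERVER = "imap.example.com"
    ACCOUNT = "gtd-inbox@example.com"
    PASSWORD = "app-specific-password"

    def poll_inbox_items():
        """Fetch unread messages and return their subjects as GTD inbox items."""
        items = []
        conn = imaplib.IMAP4_SSL(IMAP_SERVER)
        conn.login(ACCOUNT, PASSWORD)
        conn.select("INBOX")
        _, data = conn.search(None, "UNSEEN")        # only messages not collected yet
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            items.append(msg.get("Subject", "(no subject)"))
        conn.logout()
        return items

    for item in poll_inbox_items():
        print("New inbox item:", item)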

Here are my thoughts on the three utilities:

Tracks 2

To be fair, a new version of Tracks has been released since I last used it, so some of the issues I had with Tracks might have been resolved. However, since Tracks is a Ruby on Rails application, I doubt that its performance problems in my hosted environment have gone away – though that really is more of a local problem for me than a product problem. So, if you have a well-working Ruby on Rails environment, I can certainly recommend Tracks as a well-designed GTD solution.

Why I like Tracks 2:

  • web-based
  • very nice interface, very intuitive to use
  • platform independent
  • an Android-based mobile application to complement the web-based solution

What I disliked about Tracks 2:

  • quite difficult to set up on a common web host like mine (Bluehost)
  • very, very, veeeeery slow – at least on Bluehost; particularly on first access
  • sometimes lacking in extensibility

PHP GTD

This product felt a little like being back in the 90s. For some reason, the UI feels unpolished, and that can easily distract from the fact that it’s a pretty powerful platform. There’s quite a bit more to it than meets the eye.

What I like about PHP GTD:

  • Allows highly customized structuring of actions
  • web-based
  • performs well, quite easy to set up
  • Actions usually go through the typical GTD stages

What I dislike about PHP GTD:

  • User interface is rather complicated
  • Interactivity (e.g. Ajax) feels glued on rather than integrated the way it is in Tracks
  • Actions can get lost in the elaborate structure of projects and sub-projects

Thinking Rock

Why I finally ended up working with Thinking Rock:

  • Stand-alone application, cross-platform
  • Very snappy when entering and organizing actions
  • Good reporting capabilities
  • Data entry is quick and easy, goes through the typical GTD stages
  • Separation of
    • someday items: not right now but some time later (including ticklers to remind you),
    • scheduled items: need to be done on a particular date,
    • and ASAP to-dos: as the name suggests, do them as soon as possible
  • If you feel fancy, you can even enter required energy levels and required time for actions and then work them off according to your mood and available time (see the little sketch after this list)
  • Separation of your own actions and the ones that were delegated to someone else: great for tracking the outcomes and resulting action items of a meeting, for example
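
A toy illustration of that energy/time idea from the list above – this has nothing to do with Thinking Rock’s internals, it is just a sketch of how such a filter could look, with made-up actions:

    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        energy: str      # "low", "medium" or "high"
        minutes: int     # rough time estimate

    # A hypothetical ASAP list
    actions = [
        Action("Reply to meeting minutes", "low", 10),
        Action("Draft project charter", "high", 90),
        Action("File expense report", "medium", 20),
    ]

    def doable_now(actions, energy_available, minutes_available):
        """Return the actions that fit the current mood and time window."""
        levels = {"low": 1, "medium": 2, "high": 3}
        return [a for a in actions
                if levels[a.energy] <= levels[energy_available]
                and a.minutes <= minutes_available]

    # Tired, with 30 minutes before the next meeting: only the small stuff remains.
    for action in doable_now(actions, "low", 30):
        print(action.description)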

Thinking Rock stays quite true to the original GTD definition; this can be a good and a bad thing, I guess.

There’s also a commercial release of Thinking Rock, which basically is the further improved version 3 of the product. So far, I’ve stayed with version 2 and haven’t had the feeling that I’m missing anything. An in-depth comparison of the two releases can be found on the Thinking Rock website.

Social project management

Staying in touch with current technology and everything that’s happening around us is becoming more and more of a multi-tasking thing: while back in the day there used to be a batch of magazines and websites to read, information suddenly comes from all sorts of new sources. I’m not talking blogs here – though strangely enough, blogs seem to be making a comeback (including mine, I suppose). Rather, I’m pointing towards social media and its capabilities for transporting information.

I’ve spent the last few minutes on Google Plus and immediately stumbled upon an article on the PMI blog. Yeah, nice read – however, I’m wondering if this is really helping me gather information or rather giving me some online form of ADD.

Events these days are intrinsically woven into our social fabric – for example, attending this year’s confare Mobile Summit and CIO & IT Manager Summit was as much about being social online as it was about networking and meeting people in person. One was even encouraged to blog, tweet and link/like away. Confare ran a contest for the best online posts, with the winner getting a tablet if I remember correctly.

I’m a strong proponent of GTD – doing my best to get things off my chest by writing them down immediately and dealing with them one by one. I’ve got to admit I’m still searching for the optimal way to integrate online social media into this workflow, particularly because this kind of connection happens very ad hoc and requires immediate attention to stay fresh. Then again, I suppose this will keep evolving just like project management.

Speaking of GTD: I hope to get around to writing up my review of a couple of tools I’ve tried out over the course of the past 12 months. For the time being I’ve settled on Thinking Rock after checking out Tracks, Mantis, iGTD, …

A happy new year and a secure system to you

Lately I’ve been busy doing some security-related work. As it turns out, computer attacks have become a lot more sophisticated and underhanded. It’s no longer just about having all the latest updates and security fixes installed but even more about having some sort of intrusion detection mechanism that helps you spot any misbehavior or unusual communication reaching out to the internet.

Some lessons learned from detecting and cleaning up after malware/viruses/trojans/etc.:

  • have detection software at hand: there are very good packages like Malwarebytes that can help you sort out a situation when you are unsure whether a computer is infected. A “nothing found” message shouldn’t necessarily be taken as a definitive stamp of approval, but if detection tools do find something, you at least have some confirmation of your suspicions. Other helpful software includes Sysinternals’ Process Explorer and RootkitRevealer.
  • do not – I repeat – do not let your computer systems communicate directly with internet services; have a firewall/router/proxy in between that filters and/or logs what is going on. In most instances this is the first place to look for any indication of an attack or an already compromised system. It will also be very helpful in understanding where an attack came from and how it started and operated. After all, if you don’t learn from the incident and just reset everything to the state before the problem occurred, nothing is going to stop an intruder from repeating his ploy as long as the previously exploited security holes have not been closed. (A small log-parsing sketch follows after this list.)
  • have backups of everything – not just server data but also snapshot images of your client systems in a pristine/after-install state. The best thing to do is probably to take a couple of snapshots over the lifetime of the computer: one fresh install with only the operating system, one after installing all the typical software tools and applications, and to top it off you might even consider doing a backup snapshot every quarter or every year so everything is covered. I can highly recommend partimage or Ghost for Linux for this task, but even a simple dd or a commercial Symantec Ghost will do. If you want to be on the safe side, make your file backup a 3-2-1 backup: keep three copies of your data, on at least two different media types, with at least one of them stored off-site.
  • if you are running computers in an exposed setting (e.g. in retail, where strangers might try to gain access to an unattended computer), have a screen saver activate after a minute of inactivity that requires a re-login when dismissed; furthermore, make sure passwords are strong. To ease the inconvenience of constantly having to log back in, you could install fingerprint/RF/card readers.
  • last but not least, if at all possible tweak your systems to be more secure: disable Autorun/Autoplay and have antivirus software installed and running. Microsoft’s Security Essentials package has proven its worth a thousandfold: it’s fast, pretty reliable and so far has been the least intrusive antivirus experience I’ve had the pleasure to deal with.
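
As promised above, here is a minimal sketch of the log-parsing idea. It assumes an iptables LOG rule that writes lines with a prefix like “OUT: ” into /var/log/syslog – both the prefix and the path are assumptions, so adapt them to whatever your firewall or router actually produces:

    import re
    from collections import Counter

    LOG_FILE = "/var/log/syslog"   # hypothetical location of the firewall log
    LOG_PREFIX = "OUT: "           # e.g. from an iptables rule: -j LOG --log-prefix "OUT: "

    dst_pattern = re.compile(r"DST=(\d+\.\d+\.\d+\.\d+)")

    destinations = Counter()
    with open(LOG_FILE) as log:
        for line in log:
            if LOG_PREFIX not in line:
                continue
            match = dst_pattern.search(line)
            if match:
                destinations[match.group(1)] += 1

    # Print the most frequently contacted hosts; anything you don't recognize
    # deserves a closer look.
    for ip, count in destinations.most_common(20):
        print(f"{count:6d}  {ip}")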

This is a very brief and quick summary of recommendations, but following it will save you a ton of time when disaster strikes – not only with regard to security but also in case a hard drive dies.

The science of hiring

A couple of weeks ago I had a conversation with a friend of mine about how working in the States alters your perception of work – particularly if you’re working at a management level and need to pick your own team. Picking the right people – people who get the job done on time and on budget – is no easy task. Not just because it might be rather unjust to judge people by one, two, three or X interviews lasting Y minutes over the course of Z days, but also because the environment, a team member’s personal life, the project schedule and so many other factors determine whether someone performs well beyond expectations, just so-so, or doesn’t seem to get anything done at all. Of course, what matters most is whether the chosen ones perform well on average over the given work; after all, it’s no good to work on a project at 150% for 3 weeks and then be out for 6 months because of burn-out.

I’ve seen a good number of different interviewing styles over the years, and I’ve applied my fair share myself. So when I happened to read Joel Spolsky’s “Guerilla Guide to Interviewing”, which is now in its third version, I had to smile. I know, it’s a long but really worthwhile read. There are a couple of really interesting points he makes – some of which I’ve seen applied in the wild myself. The way I cited it, the title might be a little misleading though – the outlined techniques are limited to IT and are probably most applicable to hiring people in software development; after all, there’s certainly a reason why Joel’s blog is called “Joel on Software”.

I remember once having a discussion with a friend of mine about whether quizzing someone in an interview on technical topics and letting the interviewee program something on paper is detrimental to the interview process. The point my friend tried to make was that the interviewer would not learn anything about him as a person by letting him write a “stupid” little program. However, this particular conversation focused on comparing technical versus soft skills. While I’ve got to agree that writing program code will not tell me very much about a person’s social competence, there’s a valid point in the necessity of checking someone’s technical skills – especially if you plan to hire that person to program and extend your team. Still, there are concepts and techniques in computer science that require a very special skill set. One of those may very well be C pointers, as mentioned in Joel’s article; others might be the social skills needed to defuse tough meetings. Depending on the job profile, both might be worth checking, but that takes time – particularly if you want to assess both the technical and the social competence of a candidate.
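
To make the “stupid little program on paper” concrete: the kind of exercise I have in mind is something small like reversing a singly linked list – that particular task is my own example, not one taken from Joel’s article. In Python it might look like this:

    class Node:
        """One element of a singly linked list."""
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse(head):
        """Reverse the list in place and return the new head."""
        prev = None
        current = head
        while current is not None:
            nxt = current.next      # remember the rest of the list
            current.next = prev     # flip the pointer
            prev = current
            current = nxt
        return prev

    # Build 1 -> 2 -> 3, reverse it, and print 3, 2, 1.
    node = reverse(Node(1, Node(2, Node(3))))
    while node is not None:
        print(node.value)
        node = node.next

Even on paper, you can tell a lot from whether a candidate keeps the three pointers straight or ties themselves in knots.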

The aforementioned article outlines three concepts which can probably bridge the social and the technical:

  • Take a point where the interviewed person is right and pretend you think she is wrong. Argue and see if the person is capable of convincing you in a polite manner that you are in the wrong.
  • Let the interviewed person design something and see if she just starts painting some UML or diagrams or if she actually asks for more information before jumping into the unknown.
  • Drag some point along; just sit on it, act as if you cannot decide and see if the interviewed person is willing to pick up the ball and go along the lines of “you know what? Let’s just take A now; we can still come back later, reconsider and take route B if we realize it’s the better option.”

I imagine that these might create some very awkward situations if applied a little too forcefully – so keep in mind: you’re the interviewer; if you push too hard you might intimidate the candidate and get a skewed view of them.

Merry X-mas

Merry Christmas to all of you!

After a long break I figured it’s about time to revive this online estate; the holidays are also a good opportunity to sit back, collect your thoughts and document some of the amazing experiences of the past year.

I’m thinking about writing an article series about one great tool I’ve been using heavily over the past year: Mantis. Dubbed a “bug tracking software”, it really far exceeds what one would expect of a typical bug tracker.

I created my own personal installation of it and tinkered around with it. In the process I found so many new features that make it so much more useful as a productivity tool. So stay tuned for the first installment. Until then – merry Christmas to all of you!

Onslaught of functionality

At the beginning of September I attended SafeCOMP 2009 in Hamburg to give a talk on the challenges of object-oriented programming languages in safety-critical applications. One major point that was driven home again and again was that object-oriented programming can certainly be used in safety-critical areas, but it needs strict development rules which have to be followed rigorously. The topic by itself – software development for safety-relevant applications – is incredibly complex. Just look at the DO-178B certification standard: its successor, DO-178C, has been under review and development for quite a while now. One of the biggest obstacles to overcome is that changes and additions to the standard need to be agreed upon unanimously. Since there are so many interested parties involved in the standard’s development (anyone who wants to participate may do so; no particular prerequisites are required), it takes a lot of time to resolve discussions.

Any kind of software development team can learn a good deal from safety-related coding guidelines and recommendations, simply because they also have an impact on stability, security, reliability and, last but not least, maintainability. One of the primary safety issues with object-oriented technology is dead and deactivated (unused by “design”) code. For instance, when you think about complex software packages like JBoss 5, not many applications can claim that they truly use 100 percent of the functionality offered by their middleware framework. The thing is, the more additional features there are – no matter whether they are used or not – the more unexpected errors and problems they can produce.
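
To make the distinction tangible, here is a toy Python example of my own – it is not taken from any standard or from JBoss. “Deactivated” code ships but is switched off by design or configuration, while dead code cannot be reached at all; in a safety context both still have to be reviewed, tested or explicitly justified:

    ENABLE_REMOTE_ADMIN = False   # deactivated "by design" in this product variant

    def handle_request(path):
        if ENABLE_REMOTE_ADMIN and path == "/admin":
            # Deactivated code: never runs in this configuration,
            # yet it still ships and still widens the attack surface.
            return open_admin_console()
        return serve_page(path)

    def legacy_import():
        # Dead code: nothing calls this function any more,
        # but it still has to be accounted for during review and certification.
        pass

    def open_admin_console():
        return "admin console"

    def serve_page(path):
        return "page at " + path

    print(handle_request("/admin"))   # prints "page at /admin" -- the admin branch stays dark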

A very good example is outlined in TWiT’s Security Now podcast episode 211. Developing “single-use” appliances with a Windows machine or a general-purpose PC underneath opens up so many potential attack routes that they are hard to manage and can barely be called secure or trustworthy to begin with. Every additional program or piece of functionality you heap upon that pile of preexisting “conditions” aggravates the problem by one or more degrees. For instance, just today I read about security attacks made possible by a lack of maintenance and security awareness. There are so many knobs you can turn in a Linux system that it can be overwhelming for some people. The result is insecure systems, because people weren’t even aware of the potential risks and problems.

However, it’s not just about the frameworks we are using, it’s also very much about the programming language we choose. While C++ will treat the programmer like a consenting adult, Pascal will treat her like a child and Ada like a criminal. C++ is a complex and powerful language, but this power can be misused. We all know there are people who live by the “it was hard to code, it should be hard to read” approach, but even if you try to write legible C++ code it happens quite often that a few lines of code have an ambiguous meaning. A misplaced bracket can literally wreak havoc.

I’ve worked on Flash- and Flex-based interactive kiosk applications myself, which were then deployed on Windows and Mac OS. Did it work? Sure it did. Were there safer alternatives? For sure; however, sometimes time and budget constraints require you to choose a technology which already has much of the required functionality built in. Even though this comes with additional, unnecessary functionality, it lets you speed up your development cycle and produce good, stable applications if done properly. I wouldn’t recommend it for building an airplane navigation system, but for a simple kiosk app it is certainly a good choice. This trade-off between using preexisting packages, frameworks and libraries and avoiding dead and deactivated code by building your own specialized niche product will always loom over IT.

I guess the point of this post is that when doing software development – very much like when running your own servers and applications – it is important to keep safety, security, reliability and maintainability in mind. To exaggerate the point a little: no one wants their car to crash because they changed the radio station…

IPMA and PMI certification differences

There are various standards and methods related to project management. Trying to grasp them all at the same time can make one’s head spin, particularly because their different vocabularies cover basically the same things.

For instance, there are the two well-known project management certification paths and associated methodologies by PMI and by IPMA. Their principles are very similar – at least from what I’ve seen so far – but they differ in wording.

PMI’s definition of project management is separated into different knowledge areas:

Project Integration Management: defining the charter and scope statement plus a viable project plan; monitoring and controlling project work

Project Scope Management: creation and planning, protection and fulfillment of project scope, includes creation of work breakdown structure (WBS)

Project Time Management: definition of activities, their properties and how they fit into the project schedule; prioritizing and ordering them, defining how long they take and deciding how long the project will take

Project Cost Management: planning, estimating, budgeting and control of costs; remember the iron triangle: bound to time, quality and scope

Project Quality Management: planning and controlling quality in a project

Project Human Resource Management: staffing or more general staff acquisition and team development (this alone could fill a book)

Project Communication Management: details about how to communicate in the project; defining repetitive communication and outline escalation channels, etc.

Project Risk Management: assessment of risk; planning, analysis, monitoring and control of risk

Project Procurement Management: make or buy decisions; procurements and contract management

It’s a beautiful structure which gives guidance and is easy to understand. IPMA’s competence baseline document (download here or a German version here) in comparison splits into different competence areas:

Technical competence: the foundation of project management procedures like requirements management, risk management, controlling and so on

Behavioral competence: soft skills required for project management; attitudes and inter-personal relationships between project members

Contextual competence: anything relating to the context of projects – the relationship between the project manager and the overall organization and management

From what I’ve read so far, the ICB looks a lot like a glossary outlining all the important project management terms – from start-up and writing a charter to project closure (or close-out, as it is called in the ICB). It’s a good read, but it will not give you any practical advice, since that is one of the things the ICB is explicitly defined not to be: it is not meant to be a cookbook full of recipes but rather to lay a foundation that any organization may pick up and develop into its actual project management implementation. A whole chapter is dedicated to the certification and re-certification procedure. I highly recommend it to anyone considering joining IPMA or PMI.

A particularly nice feature of the ICB is that IPMA explicitly permits national chapters to add up to 10 percent of content and methods for nation-specific details when building a national competence baseline (NCB).

Overall I think that both programs – IPMA’s and PMI’s – work well, even when combined. IPMA’s ICB is certainly more theoretical, while PMI’s PMBOK leans more towards the practical toolset side – even if just a tiny bit (e.g. by taking into account procedures like EVA (earned value analysis) and more practical applications with regard to risk management and scheduling).
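
For anyone who hasn’t run into EVA before, the basic arithmetic is straightforward; the numbers below are made up purely for illustration:

    # Hypothetical status after month 3 of a project:
    PV = 50000.0   # planned value: work scheduled to be done by now
    EV = 40000.0   # earned value: value of the work actually completed
    AC = 45000.0   # actual cost: what that work actually cost

    cost_variance = EV - AC        # negative -> over budget
    schedule_variance = EV - PV    # negative -> behind schedule
    CPI = EV / AC                  # cost performance index (< 1 is bad)
    SPI = EV / PV                  # schedule performance index (< 1 is bad)

    print(f"CV = {cost_variance:,.0f}, SV = {schedule_variance:,.0f}")
    print(f"CPI = {CPI:.2f}, SPI = {SPI:.2f}")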

Time is flying (by), or sometimes it takes the train

I’m currently working on a feature comparison between IPMA’s and PMI’s methodologies and certification processes. While this itself takes up a good amount of time, I also attended SafeCOMP 2009 in Hamburg last Tuesday, where I gave one of the talks, on object-oriented programming in safety-critical applications. The preparation filled whatever was left in my schedule. Still, SafeCOMP was a very exciting experience. Usually one does not really appreciate how much software development effort goes into riding a train or taking an airplane. I understand the papers and presentations will be published in the next few days, so I’ll hopefully be able to post a link for anyone interested.

A new type of first-person game?

IBM has built a virtual replica of China’s Forbidden City in collaboration with the palace museum. The budget for the whole venture is said to have been around 2 million dollars. They named it “The Forbidden City: Beyond Space and Time,” and it’s built using a service-oriented architecture, according to IBM’s director of citizenship and technology, John Tolva. It’s free to download and use and gives you a chance to walk around in the Forbidden City as if you were really there. You can even see other users roaming the premises and interact with them. Really amazing – especially because of the unusual SOA approach.

Stand-up meetings

Over the last couple of years, agile development methods have been picked up by more and more companies. Many of the ideas and concepts that grew out of the principles outlined in the agile manifesto have proven useful and can be applied not only in agile development but also in day-to-day practice, independent of the business’s field of work.

One of those methods, adopted by many organizations worldwide, is the “stand-up meeting”. The basic idea: the meeting is held with all participants standing, not sitting down. As long as a few constraints are met, such meetings become more effective – they take less time and get to results faster. There’s even special furniture which can be used to hold such meetings. (more…)