Why so general?

November 14th, 2007 8 comments

I’ve been thinking a lot lately about the sad state of our business. A majority of software development projects fail in terms of money, time, or quality. Why is this so? Why is developing software so expensive and so unpredictable?

Paul W. Homer touches on an interesting subject in his post named Repeated Expressions. His point is that use cases, test cases, and ultimately the source code are all different ways of specifying the same system. If we could cut down on the redundancy, he argues, we would save time.

Others blame the methods we use for developing software. While this is true to some extent, it isn’t the whole picture. The C3 project, for example, the birthplace of Extreme Programming, was eventually canceled. So even the most agile approach may fail.

No, the main problem is most likely the complex nature of today’s software technology. But complexity is usually a sign of poor abstraction, and this fact has occupied my mind of late. The problem I see is that we are using general purpose languages to create domain-specific applications.

In a GPL we must make all kinds of low-level decisions: how to communicate, how to store the data, how to handle security, etc. We make our decisions based on technology, and once we make our choice we’re usually stuck with it. Wouldn’t it be great if we had a language in which we could focus on the business logic instead? Wouldn’t it be great if technology were abstract? That would not only save us a lot of time, it would enable us to exchange one solution for another by running our code through a different compiler.

Is this an unrealistic approach? I don’t think so. The language would have to be domain-specific, and you’d have to trade some creative freedom for the increased productivity. But many applications produced today don’t need that freedom anyway. In fact, many applications would benefit from being restricted and having good choices made for them.

Take websites as an example. Most of them have a similar structure, similar ways of communicating, similar ways of storing data, etc. Creating a DSL that supports the basic features of a web page is certainly possible, although you’d probably need pretty advanced layout and graphical features in order not to impose too many restrictions on the visual design.

The real problem would be to implement the different compilers to support the language. Communication protocols, data storage, formats, security, platform independence – all those things that make web development difficult would have to be addressed by the compiler. A worthy task indeed, but – as with all complex things – better dealt with in one place. And think of the possibilities it would bring: one could build a compiler that generates an AMP site, one that generates the combination of ASP.NET and SQL Server, JSF, Ruby on Rails, you name it. All from the same specification.

Wouldn’t that be great?

Cheers!

Categories: software development Tags:

The Extinction of Programmers

November 9th, 2007 31 comments

I read an article named The Future of Software Development by Alex Iskold. He predicts a future where only a few high-quality software engineers will be able to serve the world’s need for computer systems.

With a bit of discipline and a ton of passion, high quality engineers are able to put together systems of great complexity on their own.

The idea is that fewer but more specialized people will be able to do more in less time.

Equipped with a modern programming language, great libraries, and agile methods, a couple of smart guys in the garage can get things done much better and faster than an army of mediocre developers.

I’d like to take this prediction a bit further. Well, quite a bit further actually. I see the extinction of software engineers altogether.

Programming as we know it is a tedious, highly repetitive and error-prone business. It’s the task of telling the computer how to do things, rather than what to do. In other words: we are still dealing with computers at a very low level. To me, programming sounds like a task suitable for computers.

I see a future where we tell the computer, a kind of super-compiler, what we want to achieve. The input is our specification, and the output is a complete and tested system. All we have to do is specify, verify and then push the deploy button.

That would, by my definition, make us Developers rather than Programmers. There’ll be, as Alex pointed out, room for fewer and fewer programmers until they finally face extinction. The last programmer will probably live to see the super-compiler’s AI get clever enough to perform self-debugging and self-improvement. (I didn’t say this was in the near future.)

When I mentally put myself into this version of the future, a part of me protests: Where’s the fun in that? Well, I guess that’s just a sign of wrong focus. That part of me still embraces technology rather than business value.

Cheers!

Categories: software development Tags:

D Gotchas

November 6th, 2007 21 comments

I have been playing around with the D Programming Language lately, and I love it. D combines the low-level control of C with modern productivity features like garbage collection, a built-in unit-testing framework and – the most recent addition – real closures.

But D is still a young language, and as such a little rough around the edges. It jumps out and bites me every now and then, forcing me to change some of my most common coding habits. Here are a couple of gotchas that made me trip more than once.

Premature Conversion

What’s the value of the variable after the assignment below?

real a = 5/2;

In D, the answer is 2. The reason for this unintuitive behavior is D’s arithmetic conversion rules, which only take the operand types into consideration. A division between two integers results in an integer. The desired type, the type of the variable being assigned, is disregarded.

To get the desired result we need to convert at least one of the operands to a floating-point number. This can be done in two ways. Either literally:

real a = 5.0/2; // a=>2.5

Or with an explicit cast:

real a = cast(real)5/2; // a=>2.5

Note that you must convert the operand, not the result. So this won’t work:

real a = cast(real)(5/2); // a=>2

The Premature Conversion gotcha is a particularly nasty one. It compiles and runs, which means only testing can reveal this bug.
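
Since D has unittest blocks built in, the cheapest safety net is to state the expectation explicitly. A minimal sketch (the values are my own):

unittest {
  real a = 5/2;      // integer division happens first, then conversion to real
  real b = 5.0/2;    // one floating-point operand gives the intended result
  assert(a == 2.0);  // the gotcha: a is 2, not 2.5
  assert(b == 2.5);
}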

Testing for Identity

I’m a defensive programmer. I like to put in assertions whenever I make assumptions in my code. By far the most common assumption I make is that an object is assigned. Here’s how I normally do it:

assert(o != null, "o should be assigned!");

In D, this is a big gotcha. The code above works as long as o is not null. If o is unassigned, we’ll get a nasty Access Violation Error. Here’s another example:

SomeObject o = null;
if (o == null) // <= Access Violation
  o = new SomeObject;

The reason is that D supports operator overloading, in this case of the equality operators (== and !=). Unlike Java, D converts the equality operator into a method call without checking for null references. So, internally, the code above gets converted to the following:

SomeObject o = null;
if (o.opEquals(null))
  o = new SomeObject;

Since o is null, the call to opEquals results in an Access Violation. Instead, you should use the is operator to check for identity.

if (o is null) ...

Or

assert(o !is null, ...)
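
Putting it together, here is a minimal self-contained sketch of the pattern I use now (SomeObject is just a placeholder class):

class SomeObject {}

void useIt(SomeObject o) {
  assert(o !is null, "o should be assigned!"); // identity check, no opEquals call
  // safe to dereference o from here on
}

unittest {
  SomeObject o = null;
  if (o is null)       // no Access Violation, unlike o == null
    o = new SomeObject;
  useIt(o);
}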

Despite the tripping, I actually like the idea of a separate identity operator. After all, “is a equivalent to b?” is a different question from “are a and b the same object?”. But, as we say in Sweden, it’s difficult to teach old dogs to sit.

Cheers!

Categories: D Programming Language Tags:

My Article On D Published

November 4th, 2007 No comments

An article I wrote for Sweden’s biggest computer magazine, Datormagazin, was published in the November issue. It’s an introduction to D, and I guess it’s the first time D has been mentioned in the Swedish computer press.

Categories: D Programming Language Tags:

D got real closures

November 3rd, 2007 No comments

Thank you, Vladimir, for bringing this to my attention.

I have reported on this blog that D doesn’t have real closures. My opinion was that it didn’t matter all that much, but many people thought otherwise. Now it seems D has them after all. The latest release of the experimental 2.0 version announces full closure support.
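
To show what the change means in practice, here is a minimal sketch of my own, written against D 2.0 as I understand the announcement: a delegate that captures a local variable and keeps it alive after the enclosing function has returned.

int delegate() makeCounter() {
  int count = 0;                             // heap-allocated when captured by the closure
  return delegate int() { return ++count; };
}

unittest {
  auto next = makeCounter();
  assert(next() == 1);
  assert(next() == 2);                       // count survived makeCounter's return
}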

It would be interesting to know if the implementation is based on the solution that Julio César outlined in his blog reaction. Not that it matters.

Cheers!

Categories: D Programming Language Tags:

Abstraction is The Name of The Game

October 30th, 2007 6 comments

I just read a post by Uncle Bob that discusses the optimal length of a function. He quite correctly claims that the old functions-should-fit-on-a-screen rule has lost its validity. He further states that “A well written function might be 4, 5, perhaps 8 lines long; but no longer.”

I’m not saying Uncle Bob is wrong, most well written functions are indeed short, but specific guidelines like that always make me hesitant. The problem I see is that these rules tend to become an end in themselves, while the real purpose – the reason the rule was once formulated – ends up in the background.

I mean, you shouldn’t make a function short for the sake of making it short. A short function is not necessarily well-written, and a well-written function is not necessarily short. So, what makes a function well-written? Well, here’s my definition:

A piece of code is well-written if you can figure out, with little effort, what it does.

Keeping a function short certainly helps, but what really matters is how well it reflects intention and how well it hides details. So, don’t think lines of code, think abstraction.
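
Here is a small D sketch of the distinction (the example is mine): both versions are short, but only the second lets you figure out what it does with little effort.

// Short, but the reader has to reverse-engineer the rule.
bool check(int y) {
  return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

// A few lines longer, but intention and details are separated.
bool isDivisibleBy(int year, int divisor) {
  return year % divisor == 0;
}

bool isLeapYear(int year) {
  return isDivisibleBy(year, 4)
    && (!isDivisibleBy(year, 100) || isDivisibleBy(year, 400));
}

unittest {
  assert(isLeapYear(2000) && isLeapYear(1996));
  assert(!isLeapYear(1900) && !isLeapYear(2007));
}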

Cheers!

Categories: programming Tags:

Tools of The Effective Developer: Make It Work – First!

October 29th, 2007 15 comments

I’ve come across many different types of developers during my nearly two decades in the business. In my experience there are two extremes of developer character: those who always seek and settle for the simplest solution, and those who seek the perfect solution, perfect in terms of efficiency, readability, or code elegance.

Developers from the first group constantly create messes and agony among their fellow developers. The second group contains developers who never produce anything of value, since they care more for the code than they do for the result. The optimal balance is somewhere in between, but regardless of what type of developer you are, you should always start by making it work, meaning implement the simplest solution first.

Why spend time on an implementation that isn’t likely to be the final one, you might ask. Here’s why:

  1. The simple solution helps evolve the unit-testing safety net (see the sketch after this list).
  2. The simple solution provides rapid feedback, and may prevent extensive coding of the wrong feature. It is like a prototype at the code level.
  3. The simple solution is often good enough, and – with a working solution ready – you are less inclined to proceed and implement a more complex solution unless you really have to. Thus you avoid premature optimization and premature design, which tempt you to add features on the grounds that they might be needed in the future.
  4. With the simple solution in place, most of the integration and interface work is already done. That makes it easier to implement a more complex solution.
  5. While implementing the simple solution, your knowledge of the system increases. This helps you make good decisions later on.
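
As an illustration of the first two points, here is a minimal D sketch (the function is hypothetical): the naive version is trivial to get right, its unittest becomes the safety net, and a smarter implementation can later replace the body behind the same signature.

// Simplest thing that works: a linear search.
// A faster algorithm can replace this later without touching the callers,
// and the unittest below will tell us if we break anything.
bool contains(int[] haystack, int needle) {
  foreach (value; haystack) {
    if (value == needle)
      return true;
  }
  return false;
}

unittest {
  int[] data = [3, 1, 4, 1, 5];
  int[] empty;
  assert(contains(data, 4));
  assert(!contains(data, 9));
  assert(!contains(empty, 1));
}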

This may all sound simple enough to you. After all, the habit of Making It Work First comes naturally to many developers. Unfortunately, I’m not one of them. I still let more or less insignificant design issues consume an unnecessary amount of time. The thing is, it is hard to find the perfect design on the first try. The perfect design may not even exist, or it may cost too much to be worth the effort.

That is why I struggle to attain the habit of Making It Work First.

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!

I’m back!

October 28th, 2007 No comments

I’m back from my three week vacation!

I had a great time, but as suspected I wasn’t able to stay away from computers. In the warm evenings, just for fun, I started to implement a ray tracer in the D Programming Language.

I have been looking for a suitable project that would give me a chance to dig deep into D, and a ray tracer seems to be the perfect fit. D is supposed to be great at floating-point programming, and now I have the chance to find out for myself.

To make it a little more interesting I have used a more top-down, breadth-first kind of approach than I normally do. I want to see how that affects a test-driven development technique. As part of the experiment I keep a detailed development log, which I plan to share with you when I reach a certain point. That could be within a week, or it could take several months, depending on workload and inspiration level.

So stay tuned. I’ll be back with ray tracing, or whatever other topics cross my sphere of interest.

Cheers!

Vacation – at last!

October 7th, 2007 No comments

Tomorrow I’m off on a well-needed vacation. We’re going to Turkey for three weeks and I plan to stay as far away from computers as possible. That almost never works, so don’t be surprised if posts keep coming.

Well, off I go, so

Cheers!

Categories: blogging Tags:

How to automate acceptance tests

October 5th, 2007 No comments

In a comment on my previous post, AC wonders how I automate acceptance testing. He considers that to be a job for real testers. Well, he’s absolutely right. I expressed myself a bit sloppily, so let me use this post to explain what I meant to say.

Acceptance testing is done by the customer to make sure she got what she ordered, to make sure you delivered the right system. This process cannot and should not be automated. What you can do – and this was what I meant to say – is automate the acceptance tests and use them during construction.

The customer defines the acceptance tests. These are valuable to the developer since they can be used to validate the system as it develops. The sooner you get hold of these test cases the better, so make sure you press the customer to produce them early. Better yet, help the customer in the process. That way you can help make the test cases automatable.

So, how do you design a test so that it can be automated? First of all, you need to stop thinking in terms of the user interface. Instead, describe the test in plain text, in terms of action, function, and result. Look for possible test data, edge cases, and exceptional uses, and describe those as well.

The second thing you need to do is build the system in such a way that the tests can be automated. Separating the GUI from the business logic is often all it takes. You then automate the acceptance tests the same way you automate integration tests. In fact, they could even become part of your integration-testing harness, the only difference being that they are defined by the customer.
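
To give a feel for the shape such a test takes, here is a rough D sketch (the domain, names, and API are all made up; the point is that the test exercises the business logic directly, never the GUI):

class Order {
  int id;
  double total;
  bool approved;
}

// Hypothetical business-logic layer, reachable without the GUI.
class OrderService {
  Order[int] orders;                  // in-memory storage acting as the fixture

  void place(int id, double total) {
    auto o = new Order;
    o.id = id;
    o.total = total;
    orders[id] = o;
  }

  void approve(int id) { orders[id].approved = true; }
  bool isApproved(int id) { return orders[id].approved; }
}

// The acceptance test, phrased in terms of action, function and result.
unittest {
  auto service = new OrderService;    // set up the fixture
  service.place(42, 99.50);           // action: a customer places an order
  service.approve(42);                // function: an operator approves it
  assert(service.isApproved(42));     // result: the order is marked approved
}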

I’m not saying this is easy. (Here’s a nice post that discusses when to automate.) I’m not saying it can be done for all of your acceptance tests. What I am saying is that it can be done for far more tests than you might think. For example, we are customizing a GIS system for an internal customer’s needs. The application is user-driven and involves plenty of complex actions that modify graphical data and properties. Still, we were able to automate most of the acceptance tests.

We spent a lot of time writing code to set up fixtures, initiate actions, and check the results. It was worth every second, though. You see, manually running one of our acceptance test cases usually takes several minutes. Having the computer do all the work cuts that down dramatically and frees up developer time. We use the automated test cases individually, almost like an extended compiler, to verify features as we implement them. And at night we run all of our acceptance tests to get feedback on how far we’ve come and to spot unexpected problems.

But the real acceptance testing is still going to be done manually, by the customer, at the end of the project. Just like AC pointed out.

Cheers!

Categories: software development, test-driven Tags: