Archive for the ‘software development’ Category

Tools of the Effective Developer: Regular Expressions

May 12th, 2010 8 comments

Whenever I suggest using regular expressions to solve a string parsing problem, more often than not I’m met with skepticism and frowning faces. Regular expressions have a bad reputation among many of my fellow developers.

(Yes, they are mostly Windows developers; *nix users don't seem to have this problem.)

But can you blame them? I mean, have a look at this regular expression for validating email addresses.

[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?

That’s enough to send any normally functioning individual out the door screaming. Fortunately, regular expressions aren’t always this messy. In fact, the simpler the problem, the more effective and beautiful they get.

Let’s look at an example that shows the power and density of regular expressions.
Suppose we have a chunk of text. We know that somewhere in it there’s a social security number. Our job is to extract it.

This is one of those tasks that involve a decent amount of code unless your language of choice has support for regular expressions. In Ruby, on the other hand, it’s a single line of code.


text = "Test data that 123-45-6789 contains a social security number."

if text =~ /\d\d\d-\d\d-\d\d\d\d/
  puts $~
else
  puts "No match"
end

If the match operator (=~) and the magical match result variable ($~) put you off, here's how it's done in C#, which doesn't have special notation for regular expressions but supports them through the .NET framework.


String text = "Test data that 123-45-6789 contains a social security number.";

Regex ssnReg = new Regex(@"\d\d\d-\d\d-\d\d\d\d");
Match match = ssnReg.Match(text);

if ( match.Success ) {
  Console.WriteLine(match);
} else {
  Console.WriteLine("No match");
}

A beautiful thing about regular expressions is that it's really simple to extract parts of a match. For instance, if we need to extract the area number, all we need to do is put parentheses around the part we're interested in. Then we can easily extract that information, in C# by using the match.Groups property.


String text = "Test data that 123-45-6789 contains a social security number.";

Regex ssnReg = new Regex(@"(\d\d\d)-\d\d-\d\d\d\d");
Match match = ssnReg.Match(text);

if ( match.Success ) {
  Console.WriteLine(match);
  Console.WriteLine("Area Number: " + match.Groups[1]);
} else {
  Console.WriteLine("No match");
}

As intimidating as they may seem, the payoff using regular expressions is huge. And, since efficiency is what we strive for, they have a natural place in our bag of tools. So, learn the basics of regular expressions. You’ll be happy you did.

Finally, some advice sprung from my own experience with regular expressions.

  • Get your regular expressions right first. Use Rubular, or an equivalent tool, for pain-free experimentation before you implement them in your code.
  • Document your regular expressions. They are notoriously difficult to read, so a short description with example match data is almost always a good idea. One way to do this inline is shown in the sketch after this list.
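
For the documentation part, one option in .NET is the RegexOptions.IgnorePatternWhitespace flag, which lets you spread a pattern over several lines and annotate it with inline comments. Here's a rough sketch of the social security number regex documented that way (the group names are made up for illustration):

// Requires "using System.Text.RegularExpressions;"
Regex ssnReg = new Regex(@"
    (?<area>   \d{3}) -   # area number, e.g. 123
    (?<group>  \d{2}) -   # group number, e.g. 45
    (?<serial> \d{4})     # serial number, e.g. 6789
    ", RegexOptions.IgnorePatternWhitespace);

Match match = ssnReg.Match("Test data that 123-45-6789 contains a social security number.");
if ( match.Success ) {
  Console.WriteLine("Area Number: " + match.Groups["area"]);
}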

Cheers!

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!
  6. Tools of The Effective Developer: Make It Work – First!
  7. Tools of The Effective Developer: Whetstones
  8. Tools of The Effective Developer: Rule of Three
  9. Tools of The Effective Developer: Touch Typing
  10. Tools of The Effective Developer: Error Handling Infrastructure
Categories: C#, ruby, software development

Analog or Digital Task Board? That’s the Question.

April 8th, 2010 No comments

As a project manager, the best tool I've come across to help me monitor and control an iteration is the Sprint Task Board. It strikes a perfect balance between expressiveness and visual feedback. One quick look is all you need to diagnose the state of the ongoing iteration. Yet it contains most of the information needed to make informed decisions.

[Image: a drawn example of a task board, containing for instance a burndown chart, the sprint goal, and tasks.]

The task board is absolutely invaluable and I’ll probably never do team related work without it again. The question, though, is whether to use a computerized or a real world one.

Henrik Kniberg recommends the latter. The following quote is from Henrik’s excellent book Scrum and XP from the Trenches.

We’ve experimented with different formats for the sprint backlog,
including Jira, Excel, and a physical taskboard on the wall. In the
beginning we used Excel mostly, there are many publicly available Excel
templates for sprint backlogs, including auto generated burn-down charts
and stuff like that. […]

Instead […] what we have found to be the most
effective format for the sprint backlog [is] a wall-based taskboard!

And there are good reasons for picking an analog task board. Maybe the most significant being:

  • Visibility. A wall-based dashboard is always up, always visible. If positioned near the team it’ll work as a constant reminder of the shape of the iteration.
  • Simplicity. Planning and re-planning is simply a matter of moving physical cards around, tearing them up or writing new ones.

In my current project we’re using an application (Scrum Dashboard on top of TFS). The reasons are:

  • Availability. Actually, this is the only reason for us to use a digital task board. We have team members working off-site from time to time (did I say we're ScrumButs?), but since our dashboard is a web application they are able to access it remotely. It would be a lot more painful to relay the information otherwise.
  • Automatic burndown updates. No need for me (or anyone else) to sum up the remaining work and update the burndown chart each and every day.

What are your reasons? I’d be most interested in hearing your point of view on this subject.

Cheers!

Categories: software, software development, tools

I am a ScrumBut

March 26th, 2010 No comments

Many organizations that are going agile experience difficulties in applying the whole concept. Most of them yield and make some kind of adaptation to better suit their current situation. Ken Schwaber, the father of Scrum, thinks this is just a way to avoid dealing with the true problems in the organization. He calls them "ScrumButs".

In my organization we only fulfill two thirds of the Scrum Checklist, so I guess that makes us ScrumButs. I'm not going to feel bad about it though; change takes time, and we're pretty happy we've gotten as far as we have. But Ken's right, we really should do something about our most difficult problems and go the whole nine yards. Maybe later. 🙂

Cheers!

Software Development Principles Worth Knowing

March 19th, 2010 No comments

I've just finished writing an article on common software development principles for a Swedish computer magazine (Datormagazin; the article is scheduled for the May issue). The research work for the article was great fun and gave me the opportunity to freshen up on stuff that had sunk into some kind of subconscious knowledge state years ago.

Anyway, I compiled a list of the principles I think are worth knowing. You can find it in the Principles section of this website.

Cheers!

Categories: software development

APM Delivers Nothing

January 22nd, 2010 No comments

I’m reading Jim Highsmith’s book Agile Project Management. It’s a good book, as the frequent use of my marker pen shows, except for maybe this one sentence:

“APM reliably delivers innovative results to customers within cost and schedule constraints.”

Let me take that sentence out of its context and make a point I believe needs to be stressed: software development methodologies (Agile Project Management in this case) deliver nothing! People, on the other hand, do.

Some years ago I developed a special interest in software development methodologies. Since then I’ve spent much time reading literature on Scrum, eXtreme Programming, etc, aiming to optimize the processes we use in our projects. During that time I’ve learned a lot, but also come to realize that I’ve been focusing too much on the wrong things.

In the past I thought the problems we were experiencing were problems with the methodology, an easy conclusion to draw from the propaganda-like information out there. However, I’ve found that the methodology matters little in comparison to the quality of my project team. Today I focus more on the team, the product, and the customer, and less on the latest within agile.

Thus, my formula for a successful project is this:

  • Spend less energy on the methodology; i.e. pick a simple one and adapt it when (and only when) needed, or stick to the one you’re currently using
  • Do more to get the right people
  • Make sure they are motivated and connected
  • If you think you’ll fail, do it fast

Summary

If we focus too much on the methodology, and give it too much importance, we risk losing sight of the real goal, namely producing the right system. And for that you need the right people.

Cheers!

The Bad Practices of Exception Handling

October 31st, 2009 12 comments

Exception handling has truly been a blessing to us software developers. Without it, dealing with special conditions and writing robust programs was a lot more painful. But, like any powerful tool, badly used it can cause more harm than good. This article names the top three on my exception handling bad practices list, all of which I've practiced in the past but now stay away from.

Swallowing Exceptions

Have you ever come across code like this?

try
{
  DoSomeNonCriticalStuff();
}
catch (Exception e)
{
  // Ignore errors
}

DoStuffThatMustBeDoneDespiteAnyErrorsAbove();

Of all the bad exception handling practices, this is the worst since its effect is the complete opposite of the programmer’s intention. The reasoning goes something like this: Catching exceptions where they don’t hurt makes my program more robust since it’ll continue working even when conditions aren’t perfect.

The reasoning could have been valid if it weren't for fatal exceptions, here described by Eric Lippert:

Fatal exceptions are not your fault, you cannot prevent them, and you cannot sensibly clean up from them. They almost always happen because the process is deeply diseased and is about to be put out of its misery. Out of memory, thread aborted, and so on. There is absolutely no point in catching these because nothing your puny user code can do will fix the problem. Just let your “finally” blocks run and hope for the best.

Catching and ignoring these fatal exceptions makes your program less robust since it will try to carry on as if nothing happened in the worst of conditions, most likely making things worse. Not very Fail fastish.

So, am I saying that ignoring exceptions is bad and should always be avoided? No, the bad practice is catching and ignoring general exceptions. Catching specific exceptions, on the other hand, is quite OK.

try
{
  DoSomeNonCriticalStuff();
}
catch (FileNotFoundException e)
{
  // So we couldn't find the settings file, big deal!
}

Bad example, I know, but you get the point.

Throwing Exception

Here’s another bad practice I come across every now and then.

throw new Exception("No connection!");


The problem is that in order to handle Exception we have to catch Exception, and if we catch Exception we have to be prepared to handle every other type of Exception, including the Fatal exceptions that we discussed in the previous section.

So, if you feel the need to throw an exception, make sure it’s specific.

throw new NoConnectionException();
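
Defining such a specific exception is trivial. Here's a minimal sketch; the constructors simply follow the conventional .NET pattern:

// Requires "using System;"
public class NoConnectionException : Exception
{
  // Standard constructors so the type behaves like any other exception.
  public NoConnectionException() : base("No connection!") { }
  public NoConnectionException(string message) : base(message) { }
  public NoConnectionException(string message, Exception innerException)
    : base(message, innerException) { }
}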


If the idea of defining lots of specific exceptions puts you off, then the very least you should do is define your own application exception to use instead of the basic Exception. This is not something I recommend though, since general exceptions, like ApplicationException, violate the Be specific rule of the Three rules for Effective Exception Handling. It'll make you depend heavily on the message property to tell different errors apart, so don't go there.

Overusing exceptions

Exceptions are a language construct for handling exceptional circumstances: situations where a piece of code discovers an error but doesn't have the context necessary to handle it. They should not be used to handle valid program states. The reasons are:

  1. Performance. Throwing an exception, with all that's involved, like capturing a stack trace, is far more expensive than ordinary control flow.
  2. Annoyance. Debugging code where exceptions are a part of the normal execution flow can be frustrating.

Eric Lippert calls those exceptions Vexing exceptions, which I consider a great name given the second argument. Make sure to check out his article (link at the beginning of this article).
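
To illustrate the difference, here's a small sketch comparing an exception used as control flow with the TryParse pattern, which handles the expected case without throwing:

// Requires "using System;"
static class ParsingExamples
{
  // Vexing: a non-numeric string is an expected state, not an exceptional one.
  static int ReadNumberWithExceptions(string input)
  {
    try
    {
      return int.Parse(input);
    }
    catch (FormatException)
    {
      return 0;
    }
  }

  // Better: int.TryParse reports the expected failure without an exception.
  static int ReadNumber(string input)
  {
    int value;
    return int.TryParse(input, out value) ? value : 0;
  }
}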

Those were the three misuses of exception handling I consider worst. What's on your list?

Cheers!

Tools of the Effective Developer: Error Handling Infrastructure

September 3rd, 2009 2 comments

I often come across code with little or no infrastructure for error handling. This is a big mistake, one that'll make you pay increasingly as the code base grows. Why? Because your code will end up with loads of this:

try
{
  ParseText(SomeText);
}
catch (Exception e)
{
  MessageBox.Show("Error while parsing.", "My Application",
         MessageBoxButtons.OK, MessageBoxIcon.Error);

  // Skipping the rest due to the error
  return;
}

It may look OK but there are several problems with the error handling code above, some more severe than others.

  • Code duplication
    A message box for displaying error messages to the user should almost always have an error icon, the application name as the title, and an OK button. Repeating this all over the code base is not very DRY and should be avoided.
  • Inconsistency
    A related problem is that when you leave programmers with only the general error handling routines of the platform, chances are they'll end up doing things differently, resulting in an inconsistent user experience; for example showing some error dialogs with an error icon and some without.
  • Information loss
    A more serious problem is the fact that the code in the example, by ignoring the exception object, drops information that could be important for tracking down a bug. We could of course mitigate that problem somewhat by showing the error message of the exception to the user (by concatenating e.Message), but we'd still miss an important piece of information: the stack trace.
    Since dumping a pile of function calls in the face of the users is a no-no, what we'd really need is a way to put the details in a non-intrusive place, for example a log file. If the Error Handling Infrastructure doesn't make this easy, we're likely to leave that kind of information out completely. For shame.
  • Not automation-friendly
    But the most severe problem with spread-out usage of UI-displaying methods like MessageBox is that it makes your code impossible to automate. If someone has to monitor a scheduled run of, for example, unit tests, and every now and then click an OK button, it kind of defeats the purpose of automation.

So here’s my advice: Implement a strategy for handling errors at the earliest possible time. That is, right after setting up continuous integration for the Hello world application.

Properties of an Error Handling Infrastructure

So what should an error handling infrastructure be like? There's only one rule: it has to be easy!
If it isn't simple to use, it won't be used, and programmers will end up using the general purpose message-showing methods again, or worse, not doing any error handling at all.

I usually build my error handling interfaces around two use cases.

  1. Displaying error messages
  2. Logging error messages

These are often used in combination, for example

try
{
  ParseText(SomeText);
}
catch (Exception e)
{
  ApplicationEnvironment.ShowErrorMessage("Error while parsing: " + e.Message);
  ApplicationEnvironment.LogError(e);
  return;
}

Or, to avoid code duplication, combined into a single convenience method.

try
{
  ParseText(SomeText);
}
catch (Exception e)
{
  ApplicationEnvironment.HandleException("Error while parsing", e);
  return;
}

When you design your error handling interfaces, don't be afraid to add lots of convenience methods. Remember, it has to be easy to use, and that's what convenience methods are all about, as opposed to utility methods, which generally need to be given more arguments. In this case I prefer the Humane Interface design style over a minimal interface approach.

Needs to be configurable

Furthermore, the Error Handling Infrastructure needs to be configurable, or support some other kind of dependency-breaking technique like dependency injection. For instance, you should be able to mute all error messages intended for the user and instead write them to the error log. This way your code will be able to run in a scripted environment, like during unit testing.
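
To make this concrete, here's a minimal sketch of what such a facade could look like, assuming Windows Forms and a plain text log file. The method names mirror the examples above; the Silent flag and the log file path are just illustrations, not a prescribed design.

// Requires "using System;", "using System.IO;" and "using System.Windows.Forms;"
public static class ApplicationEnvironment
{
  // When true, messages intended for the user go to the log only.
  // Useful for scripted runs such as automated tests.
  public static bool Silent;

  public static string LogFile = "error.log";

  public static void ShowErrorMessage(string message)
  {
    if (Silent)
    {
      LogError(message);
      return;
    }
    MessageBox.Show(message, Application.ProductName,
           MessageBoxButtons.OK, MessageBoxIcon.Error);
  }

  public static void LogError(Exception e)
  {
    LogError(e.Message + Environment.NewLine + e.StackTrace);
  }

  public static void LogError(string message)
  {
    File.AppendAllText(LogFile, DateTime.Now + ": " + message + Environment.NewLine);
  }

  // Convenience method combining the two use cases.
  public static void HandleException(string context, Exception e)
  {
    ShowErrorMessage(context + ": " + e.Message);
    LogError(e);
  }
}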

There are many ways to design an Error Handling Infrastructure. You could create your own message dialogs allowing the user to send a report right away, you could use the built-in application log of the operating system or just plain text files, and so on.

Whatever you choose to do, don’t forget to do it early and make it easy.

Cheers!

Previous posts in the Tools of The Effective Developer series:

  1. Tools of The Effective Developer: Personal Logs
  2. Tools of The Effective Developer: Personal Planning
  3. Tools of The Effective Developer: Programming By Intention
  4. Tools of The Effective Developer: Customer View
  5. Tools of The Effective Developer: Fail Fast!
  6. Tools of The Effective Developer: Make It Work – First!
  7. Tools of The Effective Developer: Whetstones
  8. Tools of The Effective Developer: Rule of Three
  9. Tools of The Effective Developer: Touch Typing

CodeGear, Please Fix the Anonymous Method Asymmetry

June 3rd, 2009 4 comments

As I noted in my previous post, anonymous methods are a big new feature in Delphi 2009 for the Win32 platform. While closures are natural and much appreciated in other languages, most notably Ruby, the Delphi community is still a bit reluctant and hesitant. Segunda Feira put it into words in his post on the subject.

I am still not convinced that this convenience is worth getting all that excited about – It has not enabled anything that was previously impossible, nor even especially difficult either to implement or to understand

[…]

anonymous methods should be used only when you absolutely have to, and so far I have not yet seen an example where anonymous methods are the only way to achieve anything

I agree that more often than not a problem can be solved with equal elegance using a pure object-oriented approach, but there are situations where anonymous methods may actually be the better alternative. One situation that comes to mind is setting up test fixtures.
Say, for instance, that we want to test the event firing mechanism of a component. An easy way to set this up could be like the following.

procedure TestSomeComponent.TestOnChange;
var
  SUT: TSomeComponent;
  OnChangeFired: Boolean;
begin
  OnChangeFired := False;
  // Set up the fixture
  SUT := CreateTheComponentToTest;
  SUT.OnChange :=
    procedure(ASender: TObject)
    begin
      OnChangeFired := True;
      CheckSame(SUT, ASender, 'The Sender event argument should be set');
    end;
  // Perform operation
  SUT.Text := 'Some text';
  // Check the result
  CheckTrue(OnChangeFired, 'Setting the Text property should fire the OnChange event');
end;

The above code checks that the component's OnChange event is fired when the Text property is set, and that the component is passed as the Sender argument to the event handler. Apart from being more compact, the biggest advantage of using anonymous methods in this scenario is that we avoid the Obscure Test smell and don't have to clutter our test class with an instance variable (like FOnChangeFired) to track whether the event was really fired.

The only problem is: it doesn't work. Why? Well, because the OnChange property is most likely of an instance method reference type (i.e. TNotifyEvent), meaning it doesn't accept a reference to an anonymous method even if it has the same signature.

type TNotifyEvent = procedure(ASender: TObject) of object;

For our code to compile we need to redeclare TNotifyEvent, remove the "of object" keywords, and instead use the method reference type.

type TNotifyEvent = reference to procedure(ASender: TObject);

But of course, that’s not an option. It would mean that the traditional way of setting up an event handler (by using an instance method) would not work.

I see a definite problem with how the Delphi language explicitly forces you to distinguish between instance methods (and class methods, for that matter) and anonymous method references, even though they share the same signature.
This is most unfortunate since I feel that the situations where we'd like to support both kinds of event handlers are quite common. And with the current semantics we have to resort to code duplication in order to achieve that, like the two Synchronize methods of the built-in TThread class.

In Delphi 2009 an overloaded variant of the TThread.Synchronize method was introduced, one that makes use of anonymous methods. Here are snippets from that code:

type
  TThreadMethod = procedure of object;
  TThreadProcedure = reference to procedure;
...
  TThread = class
    ...
    procedure Synchronize(AMethod: TThreadMethod); overload;
    procedure Synchronize(AThreadProc: TThreadProcedure); overload;
    ...
  end;

The two methods have almost identical implementations. The only real difference is the type of the argument.

procedure TThread.Synchronize(AMethod: TThreadMethod);
begin
  FSynchronize.FThread := Self;
  FSynchronize.FSynchronizeException := nil;
  FSynchronize.FMethod := AMethod;
  FSynchronize.FProcedure := nil;
  Synchronize(@FSynchronize);
end;

procedure TThread.Synchronize(AThreadProc: TThreadProcedure);
begin
  FSynchronize.FThread := Self;
  FSynchronize.FSynchronizeException := nil;
  FSynchronize.FMethod := nil;
  FSynchronize.FProcedure := AThreadProc;
  Synchronize(@FSynchronize);
end;

I may be overly sensitive, but code like that really disturbs me. Unfortunately it gets worse. If we follow the execution chain down into the Synchronize class method that is invoked by the two, we find this.

class procedure TThread.Synchronize(ASyncRec: PSynchronizeRecord; QueueEvent: Boolean = False);
var
  SyncProc: TSyncProc;
  SyncProcPtr: PSyncProc;
begin
  if GetCurrentThreadID = MainThreadID then
  begin
    if Assigned(ASyncRec.FMethod) then
      ASyncRec.FMethod()
    else if Assigned(ASyncRec.FProcedure) then
      ASyncRec.FProcedure();
    end else
      …
end;

It would be a lot nicer if the two reference types were joined under a common reference type. And I can't see why it couldn't be done. When I look at the "of object" keywords I get a feeling that the language is leaking compiler implementation details through the language interface; information that is of no concern to the developer. What matters from a caller's perspective is the method signature, not whether the method has a self pointer or not.
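
For comparison, here's a minimal C# sketch of the kind of symmetry I'm after: a delegate type like EventHandler accepts both an instance method and an anonymous method with the matching signature, and the caller never needs to know which one it got.

// Requires "using System;"
class Publisher
{
  public event EventHandler Changed;

  public void DoChange()
  {
    if (Changed != null)
      Changed(this, EventArgs.Empty);
  }
}

class Subscriber
{
  private void OnChanged(object sender, EventArgs e)
  {
    Console.WriteLine("instance method handler");
  }

  public void Run()
  {
    Publisher publisher = new Publisher();

    // An instance method and an anonymous method hook up the same way.
    publisher.Changed += OnChanged;
    publisher.Changed += delegate(object sender, EventArgs e)
    {
      Console.WriteLine("anonymous method handler");
    };

    publisher.DoChange();
  }
}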

I hope CodeGear recognizes this problem and finds a way to remove this asymmetry from the language. Anonymous methods would be so much more useful then.

Cheers!

Hacker or Developer?

July 29th, 2008 3 comments

Jay Fields has a post up today where he makes a distinction between Developers and Hackers.

Time after time I see requirements gathered and presented in a way that is totally disconnected from the business problem that’s being addressed. There are two ways to handle the situation.

  1. Write the best code you can.
  2. Talk to the business.

Hackers generally go for the first choice, which doesn’t guarantee failure. In fact, a good hacker can still deliver code quickly enough that the business will be happy. Often, he gets it right on the 2nd or 3rd try.

But, a developer can deliver what the business is looking for the first time. A(n often) quick conversation with the business ensures that the developer knows what to work on and how it will benefit the business. This dialog often leads to a superior solution implementation where the business gets what it wants and the developer is able to do it in the most efficient way (e.g. I don’t need a beautiful website, I need a command line application).

It’s essentially the same distinction I tried to make a while ago, although I used the term Programmer instead of Hacker. Maybe if I’d used the term Hacker as well, I might have avoided some of the heat I took for those definitions 🙂

Interestingly Jay too suggests — although he doesn’t say it outright — a definition where human interaction and taking the Customer’s view is what distinguishes a developer, rather than coding skills.

Cheers!

Is agile only for elites?

March 28th, 2008 18 comments

I’m back from the ESRI Developer Summit. While suffering from severe jet lag I’ve spent the last couple of days in slow reflection. The biggest impact the conference had on me was a keynote held by Alan Cooper.

Early in his talk he put me in defensive mode by stating that agile processes are bad for developing quality software. Alan argues that the idea of little or no upfront design is ridiculous and will result in either expensive development or crappy software.
Instead he believes in having all outstanding uncertainties removed with a thorough and detailed design, thus producing a "blueprint" for a production team to follow. Additionally, in opposition to the agile manifesto, change is not something he seems to embrace.

Most business executives believe that writing production code is a good thing. They assume that getting to production coding early is better than getting to it later. This is not correct. Writing production code is extremely expensive and extremely permanent. Once you’ve written it, it tends to stay written. Any changes you might make to production code are 1) harmful to the conceptual integrity of the code, and 2) distracting and annoying to the programmers. Annoying your programmers is more self-destructive to a company than is annoying the Board of Directors. The annoyed programmers hold your company’s operations, products, and morale in the palms of their hands.

So he wants us to go back to Waterfall. Doesn’t that give me the right to discard his thoughts without further reflection? No, I don’t think so.

It’s easy to forget that the “traditional” development processes were not created to make our lives as developers miserable. They emerged from common knowledge of that time, and they were formulated to address real problems. We would be foolish to disregard that experience from the past.

Let me be clear about one thing. I don't agree with Alan. I do believe we can produce high quality software with agile methods, where design evolves along with the production code. But I did, after my initial defensive reflex, find his perspective refreshing.

Alan’s talk is not published anywhere, but the general ideas are documented on his company’s website.

Software construction is slow, costly, and unpredictable.
[…]
Unpredictable is by far the nastiest of these three horsemen of the software apocalypse. Unpredictable means 1) you don’t know what you are going to get; and 2) you won’t get what you want. In badness, slow and costly pale in comparison to unpredictable.
[…]
The key, it seems, is vanquishing unpredictability, which means determining in advance what the right product design is, determining the resources necessary to build it, and doing so. As the airline pilots say, “Plan your flight, and fly your plan.”

Alan’s solution to the development problems of today is to divide work into three separate fields of responsibilities, something he calls “The Triad”.

Interaction design is design for humans, design engineering is design for computers, and production engineering is implementation. Recognizing these three separate divisions and organizing the work accordingly is something I call “The Triad.” While it cannot exist without interaction designers, it depends utterly on teasing apart the two kinds of engineering which today, in most organizations, are almost inextricably linked. It will take some heroic efforts to segregate them.

Collaboration with the customer (or users), as the agile methodologies suggest, is out of the question according to Alan. Why let the least qualified make the most important decisions, he reasons. Instead, Alan Cooper advocates the use of interaction designers (HCI experts). Thus, he identifies three key roles: design engineers, production engineers and interaction designers.

Production engineers are good programmers who are personally most highly motivated by seeing their work completed and used for practical purposes by other people. Design engineers are good programmers who are personally most highly motivated by assuring that their solutions are the most elegant and efficient possible.
Interaction designers’ motivations are very similar to those of design engineers, but interaction designers are not programmers. Although most programmers imagine that they are also excellent interaction designers, all you have to do to dissuade them of this mistaken belief is to explain that interaction designers spend much of their time interviewing users.

Alan doesn’t rule out agile methods completely. He thinks they have a place, but only as a part of the design process.

Currently there is a pitched battle raging in the programmer world between conventional engineering methods and Agile methods. What neither side sees is a path of reconciliation; where Agile and conventional methods can effectively coexist. Providentially, the Triad reconciles them very well. The lean, iterative, problem-solving work of the software design engineer is the archetype of Agile programming. The purposeful, methodical construction work of the production engineer is the quintessence of conventional software engineering, particularly the type espoused by disciples of Grady Booch’s Rational Unified Process, or RUP. Both methods are correct, but only when used at the correct time and with the correct medium.

Despite Alan's thought-provoking keynote, I'm still a believer in agile methods for the whole development process. I think it's possible to build solid software with little upfront design, a readiness for change, rapid feedback, and customer collaboration. The problem I see is that it demands a lot more from us developers. Knowing the language and how to program the platform is no longer enough. We need system and interface design skills, as well as social skills. We also need to master important but difficult techniques like unit testing and code refactoring.

Maybe agile is only for teams of elites?

Categories: software development