Common agile pitfalls

I had the chance to conduct a project launch agile workshop with Alan Jackson from Aptivate back in June. We were getting ready to kick off a new project and wanted every team member to be on the same page about the process we were following. We did a two-day developers' workshop before engaging users in a three-day backlog identification workshop. It was quite a success, both in terms of the buy-in we got from the client and the number of stories we gathered by the end of the workshop. One of the most interesting conversations the development team had was around the common concerns and pitfalls most teams face in any agile project. This post is a compilation of the items we discussed during our session.

Client upset at the end

Making sure that the client understands the underlying principles of Scrum is a very important step to take at the beginning of every project. It is very easy to misunderstand the value propositions agile brings to the table. The whole concept of prioritization, and the fact that not everything in the product backlog gets done by the end of the project, should be communicated effectively early on. This is especially true when it comes to fixed-budget, fixed-schedule projects. In true agile fashion, the team needs to work with the product owner on the release plan while doing prioritization.

Specification in contract

This causes quite a bit of friction between the client and the developer when they sit down to sign off and hand over the product. If the contract signed at the start of the project includes product specifications, those items might be deprioritized during the development effort, causing conflict. In my opinion it is very tricky for both sides to articulate specifications in a contract. If it is important to put specifications in a contract, then it is better to put very broad, high-level product features instead of elaborated requirements. Having stories and detailed specs in a contract makes prioritization and requirement changes very difficult as the project progresses.

Product owner hassles the team

Communication between the team and the product owner should work on the basis that team members can call the product owner anytime, while the product owner can only reach the team through the scrum master. It is always good to have a clear code of conduct when it comes to communication between the team and the product owner.

No communication with product owner

An inactive product owner is a major risk in agile projects. In cases where the engagement of the product owner is in question, it is always good to put clauses about the duties and obligations of the designated product owner in the contract. This should be supported with good agile training for anyone being considered as product owner.

No time for estimation

Most teams consider estimation as some sort of magic, and with good reason. For most teams estimation efforts are considered a waste of time since their value to the development effort is not directly visible. Every team should invest a portion of its time in improving its estimation skills, be it story points or otherwise, to help identify its velocity. A team's velocity is probably the differentiating factor between over- and under-committing in a given iteration. So putting effort into learning the different methods of estimation will definitely help during release/sprint planning.
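To make the idea concrete, here is a tiny sketch (all the numbers are made up) of what tracking velocity amounts to: the average number of story points completed over recent sprints, which you can then hold up against what the team is about to commit to.

```csharp
using System;
using System.Linq;

class VelocityExample
{
    // Velocity as used here is simply the average number of story points
    // the team completed over its recent sprints.
    static double Velocity(int[] completedPoints)
    {
        return completedPoints.Average();
    }

    static void Main()
    {
        // Hypothetical history: points completed in the last three sprints.
        int[] lastThreeSprints = { 21, 18, 24 };

        double velocity = Velocity(lastThreeSprints);
        Console.WriteLine(velocity); // prints 21

        // Committing to, say, 30 points next sprint would be over-committing
        // relative to this history.
    }
}
```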

Having non-stories in a backlog

This might not seem an immediate issue when starting new projects, since most of the items in a backlog can be considered user stories. Fast-forward a couple of releases, after users start playing with the product, and bugs and improvements start coming in. Having a separate board for stories and bugs comes in handy in separating regular development from support efforts. So it is always a good idea to have a separate backlog for bugs. In my current project we have a scrum board representing our product backlog, while another Kanban board is used to represent and prioritize our bugs and feedback.

Scrum board

Kanban board

Demo blows up

There is a common term for this: 'demo death'! It is a fact of life in software demonstrations that something will always go wrong, but at least something can be done about it. In our team we introduced the concept of a 'code freeze' a couple of days before our demonstration. The team continues to work on the project while we tag a stable version and make it ready for the demo. Of course, nothing beats proper preparation: prepare a scenario to walk participants through during the demo.

Technology dreams

This is probably an issue with a technically obsessed team wanting to come up with a technically excelling solution. The team should always focus its efforts on what matters the most from the point of view of the client. Yes, it never hurts to use cutting-edge technologies, but the team needs to embrace the 'just enough' principle whenever possible. And it never hurts to check the burndown chart every so often to make sure that the team is progressing as planned. If there is time left at the end of the sprint, then the team and product owner can decide whether to take on those nifty technical items or engage in something else.

Product owner loses ownership of backlog

The product owner should be empowered to make any type of decision related to the backlog. In fact, he/she should be responsible for managing and maintaining it, and therefore should be involved in all activities that touch the backlog, especially grooming sessions. A related pitfall is a product owner who has no authority or vision; in such a scenario the product owner must be empowered to take decisions.

Stealth-holders

This usually happens during demonstrations where senior management is present and comments on a feature of the product without a clear understanding of the reasons behind the decisions that shaped it. This is a very serious problem because it undermines the authority of the product owner and the confidence of users in guiding the product. Yes, comments from management are what guide the vision of the product, but anyone from management who wants to give comments and feedback should attend most if not all demo sessions; otherwise they will cause more problems than productive comments. Having ground rules during demonstrations and ensuring that decision makers attend demo sessions regularly can ease the friction.

Changing requirements mid sprint

One of the principles of scrum is the concept of the time-boxed iteration, in which the team engages in delivering a portion of the backlog. There might be situations that force the product owner to change a significant proportion of stories mid-sprint. If this happens frequently for more than a couple of sprints in a project, then it might be a good time to consider using Kanban rather than scrum, as Kanban puts less focus on time-boxing an iteration.

No retrospective

If I have to pick the single most important ceremony from scrum it has to be the retrospective meeting. It allows a team to continue learning about the agile process in a true ‘inspect and adapt’ approach. No retrospective means learning freezes!

Restoring NuGet packages for streamlined team workflow

I think NuGet is one of the best things to come out of the .NET open source community. NuGet has seen quite significant growth in the past couple of years as more and more projects are added to the repository. Add to this the ease of the VS Package Manager and you have a streamlined developer experience inside VS. As an indie developer one might not face many issues with NuGet packages in a solution, since he/she is the only one sharing the code and all packages reside inside the 'packages' directory of the solution. Even when using source control, there is no need to add binary packages to the repository.

This changes when you are working as part of a developer team. I was recently involved in a project with a reasonable team size but the potential to grow as the project progresses. The project is being built using ASP.NET MVC 4 with a handful of NuGet packages. Since the project is open source, there is a high possibility that more contributors will end up cloning and/or forking the project. The other important thing is that the team decided binary files should not be included inside the git repo, to minimize its size and because for the most part no one is interested in versioning binary files. So the problem becomes: how do we ensure that developers find it easy to clone the project and start hacking away immediately without worrying about dependencies? One option would be to add the entire packages folder of the solution to the repo so that everything is included, which would bloat its size considerably. The second approach would be to make sure that every developer installs all of the dependencies when he/she clones the project for the first time. This sounds great, but wouldn't it be nice if we could automate this process? Well, NuGet 2.0 has support for package restore, and that is exactly what I'll walk you through in this post.

The first thing to do is set up your solution structure and add all the necessary packages to your projects. You can do this either through the Package Manager or the Add Package Reference dialog box (I'll not go into how to use NuGet packages in this post, but if you are interested you'll find a lot of information on how to use NuGet packages and the VS package manager at the official NuGet website http://docs.nuget.org/). Next you'll need to enable missing package downloads in Visual Studio. This option needs to be set on each developer machine for package restore to work. To enable this setting:

  1. Within Visual Studio go to Tools –> Options –> Package Manager
  2. Check the “Allow NuGet to download missing packages during build” checkbox

restore missing packages
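For reference, that switch ends up in your per-user NuGet configuration file as well, so it can also be flipped outside Visual Studio. A sketch of the relevant section is below; the file location and key name here are from memory, so treat them as assumptions rather than gospel:

```xml
<!-- %AppData%\NuGet\NuGet.config (location and key name assumed) -->
<configuration>
  <packageRestore>
    <add key="enabled" value="True" />
  </packageRestore>
</configuration>
```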

Next, enable package restore for the solution you are working on. Right-click the solution inside Solution Explorer in VS and select “Enable Package Restore”.

EnablePackage Restore

This will create a .nuget folder as part of your solution which includes NuGet.config, NuGet.exe and NuGet.targets.

enable package restore for the solution

Depending on your source control settings, the binary (NuGet.exe) might not be added to your commit automatically. If you want, you can add it manually and commit it as part of your changes, but in our case we wanted to leave it out and instead download the binary as part of the build process after cloning.

If you add the binary to your repo then you are done! Commit your changes, and anyone who clones your code and builds it for the first time will notice that VS downloads the packages and configures the solution. If you are wondering how VS knows which packages to download, have a look at the packages.config file that is part of every project which uses at least one package.
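For illustration, a packages.config for a project with a couple of packages looks roughly like this; the package ids and versions below are made up, and yours will list whatever you actually installed:

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- One entry per installed package; ids and versions here are illustrative. -->
  <package id="EntityFramework" version="5.0.0" targetFramework="net40" />
  <package id="Newtonsoft.Json" version="4.5.11" targetFramework="net40" />
</packages>
```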

But if, on the other hand, you are like me and don't want binaries inside your repo, then there is one last little step you have to follow to make it work. One of the files inside the .nuget folder in your solution is called NuGet.targets, which is actually an MSBuild file you can modify. There is a line which reads:

<DownloadNuGetExe Condition=" '$(DownloadNuGetExe)' == '' ">false</DownloadNuGetExe>

Change the value to true and you are good to go. You might need to restart Visual Studio to reload the solution after that. There you go! Now clone the source to a new location and build it; you will find that it downloads NuGet.exe, then downloads and installs all the packages in your solution.

Custom authentication in ASP.NET MVC 4

In any application, security (authentication and authorization) is probably one of the most infamous non-functional requirements. Especially if the application is targeted at a segment of your user base, you must cater for a proper way of authenticating users.

The security feature of ASP.NET has been one of the most complete stacks offered out of the box in the framework. ASP.NET MVC, being built on the core ASP.NET framework, leans on the same security infrastructure. Right out of the box you have Windows and forms authentication, and if you want to go with the Membership provider you have all sorts of user, profile and role management baked into the Membership API.

Recently I was creating a web front end for an existing business application that had been around for a while. The desktop app has its own user management with basic features. The authorization part of the application uses the open source .NetSqlAzMan authorization framework, which in my opinion is the most feature-complete authorization management tool for .NET applications. The existing user object has application-specific attributes such as location, registered date, status, employee name and employee number. In addition, there are custom implementations of IIdentity and IPrincipal from the System.Security.Principal namespace. The immediate option was to utilize the existing implementation inside the new web interface.

The first thing I did was configure the web application to use forms authentication inside the web.config file. Next I made some changes to my Login action method to use the existing user authentication library, conveniently named SecurityHelper.Authenticate. What this method does is check if we have a user with the provided user name and validate the supplied password. In addition, it sets the principal of the current thread to the custom one, i.e. UserPrincipal, which in turn wraps the UserIdentity class with all the additional attributes not found in the built-in identity classes.
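The custom classes themselves are not shown in this post, so here is a minimal sketch of what implementations of IIdentity and IPrincipal along these lines might look like; the application-specific attributes (location, status, employee name/number) and the .NetSqlAzMan-backed role check are stubbed out, so treat this as an illustration rather than the real thing:

```csharp
using System.Security.Principal;

// Minimal sketch of a custom identity; the real class carries extra
// application-specific attributes which are omitted here.
public class UserIdentity : IIdentity
{
    public UserIdentity(string name)
    {
        Name = name;
    }

    public string Name { get; private set; }
    public string AuthenticationType { get { return "Custom"; } }
    public bool IsAuthenticated { get { return !string.IsNullOrEmpty(Name); } }
    // Application-specific properties (Location, RegisteredDate, ...) would go here.
}

// Minimal sketch of the matching principal.
public class UserPrincipal : IPrincipal
{
    private readonly UserIdentity identity;

    public UserPrincipal(UserIdentity identity)
    {
        this.identity = identity;
    }

    public IIdentity Identity { get { return identity; } }

    // A real implementation would delegate to .NetSqlAzMan; always false in this sketch.
    public bool IsInRole(string role) { return false; }
}
```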

Code Snippet
public static bool Authenticate(string userName, string password)
{
    var context = new SecurityContext();
    User user = null;
    // Check if the provided user name is found in the database. If not, tell the
    // user that the account does not exist.
    try
    {
        user = context.Users.First(u => u.UserName == userName);
    }
    catch (Exception ex)
    {
        throw new ApplicationException("The requested user could not be found.", ex);
    }
    // If the user account is disabled then we don't allow login; instead we throw
    // an exception stating that the account is disabled.
    if (null != user.Disabled)
    {
        if (user.Disabled)
            throw new ApplicationException(
                "The user account is currently disabled. Please contact your administrator.");
    }
    // Finally, check if the passwords match.
    if (null != user)
    {
        if (user.Password == HashPassword(password))
        {
            // Add the custom identity and principal to the current thread.
            var identity = new UserIdentity(userName);
            var principal = new UserPrincipal(identity);
            Thread.CurrentPrincipal = principal;
            return true;
        }
        else
        {
            throw new ApplicationException("The supplied user name and password do not match.");
        }
    }
    return false;
}

The Login action method is as follows:

Code Snippet
[HttpPost]
[AllowAnonymous]
[ValidateAntiForgeryToken]
public ActionResult Login(LoginModel model, string returnUrl)
{
    // Check if the supplied credentials are correct.
    try
    {
        if (SecurityHelper.Authenticate(model.UserName, model.Password))
        {
            FormsAuthentication.SetAuthCookie(model.UserName, model.RememberMe);
            return RedirectToLocal(returnUrl);
        }
    }
    catch (Exception exception)
    {
        ViewBag.HasError = true;
        ModelState.AddModelError("", exception.Message);
    }
    // If we got this far, something failed; redisplay the form.
    return View(model);
}

So far everything is working as expected. It successfully authenticates the user and sets Thread.CurrentPrincipal to the custom implementation – or so it seems.

When I debugged the application it indeed authenticated the user, but when I tried to cast HttpContext.Current.User.Identity to my custom UserIdentity class I got a cast error like the following:

error

The reason is that the HttpContext Identity and Principal properties still reference the built-in forms authentication types, not the ones I implemented. So I googled and found out that I have to use the authentication ticket issued by forms authentication instead of directly using the user name provided by the user. In order to do that I added an Application_AuthenticateRequest method to the Global.asax.cs file, which decrypts the forms authentication ticket and creates the UserIdentity based on the name property of the ticket, as found below:

Code Snippet
protected void Application_AuthenticateRequest(object sender, EventArgs e)
{
    HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null)
    {
        FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
        UserIdentity identity = new UserIdentity(ticket.Name);
        UserPrincipal principal = new UserPrincipal(identity);
        HttpContext.Current.User = principal;
    }
}

The last thing I did was to modify the web.config file as follows:

Code Snippet
<authentication mode="Forms">
  <forms loginUrl="~/Account/Login" timeout="2880" name="./AUTHTICKET" />
</authentication>

Now I have access to the HttpContext.Current.User.Identity property and cast it to UserIdentity to get all of my custom properties and everything works as expected.


Multiple search criteria and EF code first

I ran into the infamous advanced user search requirement that every database application user requests. There is an entity/model that the user needs to search based on a combination of several criteria. In my case it is a single view in the database, exposed using a read-only entity object with all the relevant fields needed to display in a list.

list

The user is able to search the records using any of the three criteria shown above. The user interface part of the app is a no-brainer, since all that is required is a form post when the user clicks search.

The really annoying part is handling the postback action. The first thing that comes to mind is to have a series of if… else blocks to handle the different cases. That really sucks! The good thing is there is a really neat hack that you can apply if you are using EF code first as your data access layer.

The trick is a simple LINQ expression extension that handles the different cases. Here is how it works.

First create a static class in your project and add this function to it:

Code Snippet
public static class SearchExtension
{
    public static IQueryable<TSource> WhereIf<TSource>(this IQueryable<TSource> source, bool condition, Expression<Func<TSource, bool>> predicate)
    {
        if (condition)
        {
            return source.Where(predicate);
        }
        return source;
    }
}

Next, use the above method inside your controllers by passing each predicate together with a condition that says whether it should be included in the criteria, like so:

Code Snippet
public ActionResult Index(ChildrenFilter filter)
{
    var query = context.ChildInfoes.WhereIf(filter.FullName != null, c => c.FullName.Contains(filter.FullName))
        .WhereIf(filter.SexId != 0, c => c.SexId == filter.SexId)
        .WhereIf(filter.ProjectSiteId != 0, c => c.ProjectSiteId == filter.ProjectSiteId);

    ViewBag.Children = query.ToList();

    return View();
}

I added other features to display the search criteria by passing them in the ViewBag, but the real genius lies in the WhereIf extension method: it applies the predicate only if the condition evaluates to true. This will work for any number of filter criteria thanks to method chaining.

The result will be:

result
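Since WhereIf only builds on IQueryable&lt;T&gt;, you can see its behavior without a database by running it against an in-memory queryable. This sketch reuses the extension exactly as defined above; the sample names are made up:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class SearchExtension
{
    // Same extension as in the post: apply the predicate only when the
    // condition is true, otherwise pass the query through untouched.
    public static IQueryable<TSource> WhereIf<TSource>(
        this IQueryable<TSource> source, bool condition,
        Expression<Func<TSource, bool>> predicate)
    {
        return condition ? source.Where(predicate) : source;
    }
}

class Demo
{
    static void Main()
    {
        var names = new[] { "Abebe", "Alan", "Marta" }.AsQueryable();

        // Condition false: filter skipped, all three rows come back.
        Console.WriteLine(names.WhereIf(false, n => n.StartsWith("A")).Count()); // 3

        // Condition true: filter applied, only the two names starting with "A".
        Console.WriteLine(names.WhereIf(true, n => n.StartsWith("A")).Count()); // 2
    }
}
```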

ASP.NET MVC 4 WebAPI

I have spent a good proportion of the past year working on ASP.NET MVC 2/3 and I must say it is one hell of a framework. With v4 the MVC team has added what used to be a very cumbersome task in the WCF Web API world. I have downloaded the beta bits and have been playing with them, especially the WebAPI stuff. While I followed the documentation on the asp.net site to get started with the new features like WebAPI, single page applications and mobile sites, I found this article by Shawn Wildermuth explains the WebAPI feature in a nice and concise manner. I will be writing about my experience with ASP.NET MVC 4 in the days to come; meanwhile I highly recommend you check out the article by Shawn Wildermuth here.

The Lean Startup

I came across this book accidentally when I was looking at New Relic's website. They have a promotion in which they will give you an ebook version of the book if you test drive their website monitoring application. It somehow got me interested and I started reading right away. I must say it is a must-read if you are anywhere near the business of technological innovation.

lean-startup-cover

Check out the website at http://theleanstartup.com/

Cheers.

My first foray into the agile world!

My first encounter with agile software development methods was around late 2007, when our team was struggling to deliver the projects we were undertaking. Back then we were mostly engaged in consulting work, as is the case for many if not all development shops based in Ethiopia. The day I came across Scrum my first reaction was: this thing really won't work in our context! Later on I started to learn more about it through blogs and articles, familiarized myself with its practices and principles, and things started to make sense. While I was trying to master the skills needed to become a competent Scrum Master, I was presented with a great opportunity to put my learning to work when I got the green light to practice it in one of the projects I was involved in. It was a government contract for the development of a public sector business process automation application. By any standard it was one of the largest projects I had been involved in. It was staffed with no fewer than 30 developers organized into five teams, each one assigned a couple of business processes to implement. Two of my colleagues and I were tasked with the Scrum implementation throughout each team and in the organization as a whole. The advantage we had was that at the time the project was dragging considerably and the organization was willing to try out anything, even something as radically in conflict with traditional methods as Scrum.

Team organization

The first thing we did was reorganize each team so that we had at least one team member who knew a good deal about the problem domain. We made this decision based on the fact that most of the staff had been involved in an upfront analysis process which took more than half of the planned project schedule. We were lucky that the team had a good understanding of the domain, even though they didn't have any clue how to implement it. At the end of the day we had around five teams, each one staffed with an average of five members.

Advocating Scrum

We had an easy run with management on selling Scrum because they were in dire need of interventions to move the project forward. The biggest challenge was to sell the idea to the actual development team. Most of the team members were junior developers with no more than a year of development experience, except for a handful of them. In my experience I find it very easy to sell Scrum to developers, even to die-hard traditional process junkies. I guess it has to do with its extreme simplicity and how it fits right into our logical thinking process. We did a couple of workshops presenting the concept and the things that we planned to introduce as part of the process change. Together with the methodology we also introduced additional concepts like testing, refactoring and software patterns. The transition to agile demands that a team adopt quite a number of practices in both process and technical aspects, and our team needed both.

One of the biggest challenges we faced early on was how to come up with a consistent development practice so that we could measure and control the development effort. We were also worried that we would end up changing so many things that the team would get frustrated and productivity would go down the drain. In order to ease the transition while still keeping full control of the development process, we used Visual Studio project templates and a handful of CodeSmith templates to generate most of the repetitive and error-prone code. In fact, to our surprise, the team overwhelmingly accepted the custom project templates and code generation. One of the things we missed during this stage was involving and informing the customer about Scrum. We totally left the customer out of our processes, which led to a handful of conflicts at later stages.

Backlog Identification

The way we went about product backlog identification was to look at the analysis documents we had about the business processes. We had quite a considerable amount of documentation about the domain, so we never had any problem finding features and user stories. The only thing we did was convince the team that this list was a starting point, not an exhaustive one: by the time we were through with the first sprint we might end up adding or removing items from it, and this cycle would continue until we were done with the project.

Estimation and Planning

If I have to point out the single biggest challenge in implementing agile in general and scrum in particular, it has to be task estimation and planning, especially in a less experienced team. Estimation by itself is a very tricky business unless one possesses both an understanding of the problem domain and experience in technically similar projects; otherwise it ends up being a never-ending ping-pong session. Just as happens in planning poker, one team member will estimate a story as one point and another as a hundred, without any sufficient explanation. Initially we had to intervene in the discussion and make decisions on behalf of the team, which is not a recommended practice in the most purist form of Scrum. But after a couple of sprints the team gained more understanding of the underlying technology and architecture, and members became considerably more precise in their estimation.

Development practices

The development process was fairly stable and consistent thanks to the frameworks and tools we were using. We were using ASP.NET as our framework and MVP as our pattern. This helped us use software factories to automate Visual Studio into creating a consistent code base, making code review easy. Most of the business and data layer code was generated using code generators due to its repetitive nature, adding a huge productivity boost to the team. If we had to make changes to either layer, all we had to do was modify the code generator templates and run them on the solution.

Our definition of DONE

As is customary for any Scrum team, we did have our own definition of done, though it was very rudimentary. Most of the emphasis went into answering the question “are we addressing the user's problem?” while overlooking validity tests, code smells and adherence to coding standards.

Scrum ceremonies

During the first two sprints the team showed mild resistance to the Scrum meetings, especially the daily scrum, and everyone thought it was a waste of time. The main reason for this reaction was wrong estimation of tasks: developers were not able to finish their tasks. Not having a good understanding of the problem and how to solve it contributed a great deal to this, and the issue was raised consistently during our retrospective meetings. As the team developed good estimation skills, these and related issues gradually faded away.

Lessons

If I had the chance to do the above project all over again, here are the things I would improve.

1. Involve the customer throughout the project instead of relying on upfront analysis. There were times when the client didn't show up for more than four sprints to view what had been done, leaving the team in doubt about whether what had been done was correct or not. It would have made a whole lot of difference had the client given feedback at the completion of each sprint.

2. Give more focus to overlooked practices like unit testing and daily builds. The team only ran tests and builds whenever there was a pilot deployment or demonstration. Making a manual build of more than twenty web applications is quite painful for anyone, after all.

3. A more elaborate and strict definition of done. Not only should the application build successfully and pass more than a handful of trivial tests; the definition should include a high percentage of code coverage and adherence to coding standards and conventions, with policies and CI tools in place to make sure that code commits do not break the build.

Conclusion

Even though this project was my first large-scale Scrum application, I saw first-hand that the practice is so good it even works in this kind of abnormal situation. It can be argued that we followed an approach that is a classic example of “Scrum but” in many aspects, but it allowed us to deliver the project against all odds.

What was your first experience with Scrum like?

Taking the pain out of source control.

I remember the first time I was introduced to source code version control systems. Back then our team was using Visual Basic 5 and I was just a rookie developer, probably the most junior one. Everyone was against it, so I was given the task of selecting a tool and introducing it to the team. At the time our team had only heard of Visual SourceSafe 5, which came with Visual Studio 97 (VB 5). At first nobody in our team liked the idea, since it was horrible to migrate from no source control to the worst source control system, but as we went along we developed ways to work around the limitations of SourceSafe. Our whole development team was located at the same office, we only worked during work hours, and we all used our own workstations for all development work. We were happy for the time being.

Part-timers/contractors

Fast-forward five years to the time TFS 2005 was released together with VS 2005. I was excited to try out the product that was replacing SourceSafe and see how it would change our way of working. Honestly, it was a huge leap forward compared to SourceSafe, but it couldn't handle the new challenges related to distributed application development. This time we had a couple of team members who were not always connected to our intranet and would be working while disconnected. Suddenly we started facing new challenges related to merging and shelved changesets. In order to work disconnected I had to detach the solution from its source control binding and rebind it when I had a connection to our network, and it quickly became apparent to most of us that this had a huge productivity impact on the whole team. Some team members even suggested that we dump source control altogether and go with the 'stone age' technique of communicating changes through shared folders on a server. To be honest, this came from a guy who never checked in his code, even for a week, saying that he was still working on his tasks and not done yet. In fact he didn't stay on the project very long, since this was not an issue of source control but rather of personality.

When VS 2008 was released I was hoping that TFS would add these capabilities, but it again fell short of addressing distributed source control management issues. It was at the peak of my frustration that I heard about DVCSs (distributed version control systems) like Git and Mercurial. The principle behind a DVCS differs a great deal from that of a centralized, always-connected system; I found the concept very difficult to understand at first, but as time passed I fell in love with them.

Mindset change

Understanding DVCS was only the tip of the iceberg of the challenges I was about to face, the prominent one being advocating for them in a team where no one had heard of them. I have been responsible for administering and managing source control systems in almost every team I have been part of, but this time it was a little more challenging. The first thing I did was migrate our team from TFS to Subversion, since it is still a centralized system yet allows making changes while disconnected from the network. Then another set of problems surfaced, namely 'merge hell'. Until then nobody on the team had merge problems, because I deliberately locked sensitive files like .sln and .dbml so that only one user could edit them at a time, avoiding conflicts. Now resolving conflicts became an everyday event. I still like how svn handles merges for the most part, but the team's main complaint was that they kept losing their changes while trying to resolve conflicts. This became less and less of a problem as the team learned how the whole conflict resolution process works.

EDMX – Source Control Kryptonite

If I had to pick the most notorious file to merge, Entity Framework's designer (.edmx) file has to be it. Because .edmx files are XML, you would think source control systems would have no trouble merging them; you would be wrong. EDMX files have a very strict structure describing the conceptual model, the storage model, and the mapping between entity objects and database objects, so you can't treat them as simple text files the way you would C# or VB code. Depending on how many database objects you placed on your diagram, you might be forced to skim through thousands of lines of XML markup. Yes, it is horrible. And God forbid you mess up a single element or attribute: everything that depends on the diagram will blow up right in your face.

EDMX Hate!

Now that I have successfully scared the hell out of you regarding the use of .edmx files, you might be asking, 'Is there a solution?' Well, there is, but it might or might not be applicable to your situation. In fact it is not only EF designer files that pose this kind of problem to source control systems; there are a number of files within Visual Studio that are composed of XML markup (I mainly work in Visual Studio, but this issue applies to other IDEs and platforms too), and you will face this problem quite often. The solutions I have followed to get around this issue are the following:

  1. Adopt a DVCS.
  2. Restructure the solution.

The biggest advantage of a DVCS is that everyone has a full copy of the source code repository on their machine, letting them make changes as they wish and commit those changes to their own local repository. This solved the problem we faced when resolving conflicts in svn, where developers were losing their changes while merging. On top of the easier merge process, each commit is treated as a single changeset, so the changes you make across different files form one atomic unit. I have to admit that DVCSs have their own learning curve, but it is definitely worth it.
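As a minimal sketch of that local workflow (shown with Git since its commands are widely known; Mercurial's hg init / hg add / hg commit behave analogously, and the file names here are made up for illustration):

```shell
# Work happens against a full local clone, so commits need no server.
mkdir -p /tmp/dvcs-demo && cd /tmp/dvcs-demo
git init -q app && cd app
git config user.email "dev@example.com"
git config user.name "Dev"

# Edit two files, then record them together as ONE changeset:
echo "class Shell {}" > Shell.cs
echo "<Config/>"      > App.config
git add Shell.cs App.config
git commit -q -m "Wire up shell configuration"   # atomic, local, instant

git log --oneline    # history is available even while fully disconnected
```

Because the commit is local, nothing forces a merge until you choose to exchange changes with a teammate, which is what makes disconnected work practical.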

The other approach is only applicable if you are working with EF and using the supplied designer to define your model and mappings. As I said earlier, our team heavily utilizes EF as our data access method, but its file format (.edmx) proved to be a huge pain whenever it was modified on different developer workstations at the same time. The solution we found was to migrate to a more recent version of EF (4.1 and above) and embrace the Code First approach. This way we define our models, and their mappings to the corresponding tables/views, using simple C# classes with no XML-based .edmx files at all. You do have to analyze your solution carefully before making the switch, namely from the ObjectContext of EF 4 and earlier to the newer DbContext. Nonetheless we were able to solve this problem by introducing DVCS and a little bit of EF Code First magic into our practices.
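As a rough sketch of what the switch looks like (the entity, table name, and context here are hypothetical, not taken from our actual solution), an EF 4.1 Code First model is just classes:

```csharp
using System.Data.Entity;

// Hypothetical entity: an ordinary C# class, no designer involved.
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class HrmContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Mapping lives in plain C# instead of thousands of lines of XML,
        // so diffs are small and merges are readable.
        modelBuilder.Entity<Employee>().ToTable("Employees");
    }
}
```

A conflict in a file like this is resolved the same way as any other C# merge, which is exactly the property the .edmx file lacks.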

One might argue that there is still value in centralized source control systems, and I completely agree. If you are practicing CI (continuous integration) to automate your build process, you will face a challenge, since your code is dispersed across your team and neither Mercurial nor Git has a built-in concept of a central repo. To get around this problem you can use one of the many code hosting sites that support these systems, such as github.com or bitbucket.org, to mention a couple. In a follow-up post I'll try to cover some of the lessons and best practices that made the transition easier; meanwhile, if you are interested in learning about DVCS, you can start with Hg Init, an introduction to Mercurial (my choice of DVCS).
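The conventional fix can be sketched like this: since no repository is technically special in Git or Mercurial, you simply designate one (a project on github.com/bitbucket.org, or a bare repository on a build server) as the integration point your CI job watches. A hypothetical local layout, using Git commands:

```shell
# A bare repository acts as the agreed-upon "central" repo for CI.
mkdir -p /tmp/ci-demo && cd /tmp/ci-demo
git init -q --bare central.git

# A developer clones it, works locally, then publishes:
git clone -q central.git dev
cd dev
git config user.email "dev@example.com"
git config user.name "Dev"
echo "build script" > build.txt
git add build.txt
git commit -q -m "Local work, now ready to share"
git push -q origin HEAD            # publish to the central repo

# The CI server pulls from the same central repo to run its build:
cd .. && git clone -q central.git buildserver
ls buildserver                     # the pushed build.txt is now here
```

The central repo is pure convention, but it gives the build server a single, stable place to poll, which is all CI needs.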

Cheers!

Deploying SCSF app using WiX

I am a firm believer that an application's setup experience is the user's first introduction to it, and hence it should be smooth and engaging. Packaging a Windows app has come a long way from VB's Package and Deployment Wizard to Windows Installer (and of course there are high-end installation packages such as Wise/Altiris and InstallShield).

Visual Studio has supported authoring MSI packages since the 2003 release, with a slow trickle of new features in each version since. But around 2005 WiX (Windows Installer XML) entered the picture, causing a major leap in setup authoring for Windows. WiX gives developers the ability to write setup packages in XML and compile them into MSI packages. I am not going to go into the details of authoring a setup package in WiX; instead I'll describe a problem I faced while authoring a package for an application developed with the Smart Client Software Factory (SCSF).

I created the app using SCSF for VS 2008; it has a couple of modules, and the shell layout lives in a separate module. I authored the package using the Votive toolset and it compiled without a problem, producing the MSI. After deploying the app on test machines I noticed that none of my shortcuts worked as expected. Actually it was not a problem with the MSI package but rather with how SCSF apps work. The shortcut points to the Shell.exe bootstrapper, which loads the application and all related modules by reading the ProfileCatalog.xml file. As far as my experience goes this is the normal process for any SCSF app, but to my surprise, when the app was launched from the shortcut created by the installer, only Shell.exe loaded: no modules were loaded, not even the shell layout, leaving the user with a blank form. I traced the application and confirmed that no modules were loaded even though they were listed in ProfileCatalog.xml. When I launched the app directly from the installation folder it worked perfectly, so the problem had to be in the shortcut. I created a shortcut manually from Windows and found a slight difference between the two: the one created by the setup package had the 'Start in' field blank, while the one I created directly from the exe had it set to the same folder as Shell.exe.


Later I tweaked the .wxs file to add the WorkingDirectory attribute to the shortcut, so it looked like this:

<Component Id="sufupscid_0001" Guid="E57262EE-B7AE-4442-95FB-A179B9029847">
  <Shortcut Id="shortcutHRM" Name="HRM" Target="[Folder_0001]Hrm.exe" WorkingDirectory="Folder_0001" Directory="StartMenuAppFolder" Show="maximized"/>
  <RemoveFolder Id="removeStartMenuAppFolder" Directory="StartMenuAppFolder" On="uninstall"/>
  <RegistryValue Root="HKCU" Key="Software\SetupFactory\InstalledComponents\F994AC65-111F-4E2B-A452-BE39B926F650\E57262EE-B7AE-4442-95FB-A179B9029847" KeyPath="yes" Type="string" Value=""/>
</Component>

I rebuilt the setup package, installed it on my test machine, and everything worked as expected: the shell loaded all required modules, including the layout.

Ethiopic Calendar Resources

If you have developed even the simplest app that supports the Ethiopic script/calendar, you will definitely have run into the problem of handling and manipulating date types. In this post I will point out resources I have used in the past to solve this problem.

  1. Calendar section on ethiopic.org
  2. Simple implementation of the JDN approach using Java
  3. Date conversion algorithm
  4. Good article about Julian Day Numbers
  5. jQuery plugin that provides support for many of the world's calendars, including the Ethiopic calendar

Cheers!